Supporting safe and responsible use of artificial intelligence
The potential for artificial intelligence (AI) to improve social and economic wellbeing is immense. AI development and deployment are accelerating, and AI already permeates institutions, infrastructure, products and services, often without the awareness of those engaging with it.
Internationally, governments are introducing new regulations to address the risks of AI. The focus is on creating preventative, risk-based guardrails that apply across the AI supply chain, and throughout the AI lifecycle.
The Australian Government’s consultations on safe and responsible AI have shown that Australia’s current regulatory system is not fit for purpose to respond to the distinct risks that AI poses.
On 5 September 2024, the government opened consultation on proposed mandatory guardrails to shape the use of AI in high-risk settings. These guardrails focus on ensuring that AI systems being developed and used by organisations are tested, transparent and supported by clear accountability if things go wrong.
The Australian Government has proposed a principles-based definition of high risk. This involves considering, for example, whether a use of AI poses a risk to people's physical or mental safety or to their human rights, and how severe those impacts might be.
The government is advocating a risk-based approach to AI regulation, acknowledging that the vast majority of AI uses are low risk and should be enabled to flourish unimpeded.
To support the overall objective of developing community trust and promoting AI adoption, the government is taking a coordinated approach with 5 pillars of action:
- delivering regulatory clarity and certainty
- supporting and promoting best practice
- supporting AI capability
- positioning government as an exemplar
- engaging internationally.
From July 2023 to June 2024, the AI in Government Taskforce facilitated guidance and engagement on AI across government, and delivered a range of initiatives to help government harness the opportunities of AI technologies in a safe and responsible way. Building on this work, the Digital Transformation Agency (DTA) is continuing to develop and implement policies to position the government as an exemplar in the use of AI.
On 15 August 2024, the DTA released the Policy for the responsible use of AI in government. The DTA will pilot a draft Commonwealth AI Assurance Framework to support a more consistent approach by agencies to assessing and mitigating the risks of AI use.
AI is a shared challenge, and nations are responding collectively. Because the technology is often developed overseas and crosses borders, the Australian Government is engaging internationally to shape global and regional approaches to safe and responsible AI, and is supporting a domestic framework that is interoperable with the approaches of other countries.
Find out more
Department of Industry, Science and Resources (n.d.) Supporting responsible AI: discussion paper, DISR website, accessed 26 July 2024.
Australian Government (n.d.) Policy for the responsible use of AI in government, digital.gov.au website, accessed 26 September 2024.