Is AI regulated in Australia? What lawyers should know

Australia currently lacks an AI-specific regulatory framework, even though AI is advancing rapidly. Existing laws already govern aspects of AI use, and new safeguards may be introduced in 2024 to address expanding risks. Using AI in a trustworthy manner requires knowledge of the current regulatory landscape and foresight about possible changes. If you are creating a framework for your organisation, it is crucial to stay one step ahead. Here is what lawyers should know:

  • AI is regulated in Australia, but there is no AI-specific legislation. Instead, existing legislation (such as consumer, data protection, competition, copyright, and anti-discrimination laws) governs its use.
  • The government encourages using voluntary ethical frameworks, such as Australia’s AI Ethics Framework, to guide the responsible design, development, and implementation of AI.
  • There is increasing momentum for Australia to enhance existing regulations. The government is considering implementing backstops against AI’s potential risks, including introducing targeted regulation and governance.
  • Potential signs of forthcoming AI regulation include amendments to the Privacy Act in September 2023, the National AI Centre’s AI Month initiative launched in November 2023, and business groups uniting in November 2023 to urge the government to manage the benefits and risks of AI. Additionally, the Department of Industry, Science and Resources released a discussion paper on “Safe and responsible AI in Australia” in June 2023, which explored potential regulatory approaches and invited industry feedback on Australia’s current regulatory frameworks.
  • Proposed legislation and initiatives seek to introduce guardrails for AI-powered automation and the use of large data sets. Governments and organisations will regulate AI in different ways, including upcoming targeted and amending legislation, governance frameworks, international and national standards, and policy implementation within organisations for the use and development of AI.
  • AI governance is evolving through various means, such as the proposed EU AI Act (the world’s first comprehensive AI law), global treaties like “The Bletchley Declaration,” adoption of ethical frameworks like the OECD’s AI Principles, and the rise of ethical AI, which seeks to ensure that AI is being developed, implemented, and used responsibly.
  • Ethical AI is a growing field that aims to protect human rights and dignity. UNESCO released its Recommendation on the Ethics of Artificial Intelligence in 2021, which is based on fundamental principles such as transparency and fairness and emphasises the importance of human oversight of AI systems. The Recommendation includes guidance for applying ethical recommendations to policy action areas like data governance, social well-being, and the environment.
  • Much of the current focus of ethical AI is on generative AI, a powerful technology that uses large language models trained on vast amounts of data to generate near-instantaneous responses to queries. Although generative AI frees up time for professionals, allowing them to focus on value-adding activities, it also poses risks that users must adequately address.

Source: Thomson Reuters
