Europe’s AI Act takes effect: What Australian businesses need to know for their use of AI

The European Union’s Artificial Intelligence Act (AI Act), formally Regulation (EU) 2024/1689, entered into force on 1 August 2024, with its obligations phased in over the following two to three years. The AI Act has extraterritorial reach in certain situations, which means Australian businesses should prepare for its potential impact on their use of AI technologies.

Applicability of the AI Act

The AI Act applies to businesses across the AI supply chain operating in the European Union (EU). It categorises regulated entities as “providers” (those who develop AI systems), “deployers” (those who use AI systems in commercial or professional activities), “importers” (those based in the EU who place on the EU market AI systems from non-EU entities), and “distributors” (those, other than providers or importers, who make AI systems available on the EU market).

Impact on Non-EU Entities

The AI Act also applies to providers outside the EU who develop and offer AI systems for sale or use within the EU. This includes non-EU providers and deployers whose AI systems produce outputs used within the EU. Therefore, Australian companies that offer AI systems for use in the EU, sell or offer to sell AI systems into the EU, or use AI systems to generate outputs used by EU-based businesses or individuals may be subject to the AI Act, even if they do not have a physical presence in the EU. Additionally, Australian manufacturers exporting products to the EU that incorporate AI systems may need to comply with the AI Act.

Scope of the AI Act

The AI Act regulates “Artificial Intelligence Systems” (AI Systems) and “General-purpose AI Models” (AI Models). An AI System is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The AI Act categorises AI systems based on risk levels: “unacceptable risk,” “high risk,” “limited risk,” and “minimal risk.”

Unacceptable Risk AI Systems

AI systems that pose an “unacceptable risk” are banned outright, with limited exceptions. These include AI systems that manipulatively distort behaviour, score individuals based on their social behaviour or personal characteristics (social scoring), create or expand facial recognition databases through untargeted scraping of facial images, infer emotions in workplaces or educational institutions, and assess a person’s likelihood of committing a criminal offence based solely on profiling.

High Risk AI Systems

The “high risk” category attracts the bulk of the Act’s detailed compliance obligations. AI systems in this category, such as those used for remote biometric identification, education, recruitment, and access to essential public services, are subject to strict requirements. Providers of high-risk AI systems must comply with obligations relating to risk management, data governance, technical documentation, transparency, accuracy, and security. Compliance also requires a conformity assessment before the system is placed on the market, as well as ongoing post-market monitoring.

Limited Risk AI Systems

“Limited risk” AI systems are subject to transparency obligations to ensure users are informed when AI is used. These include chatbots, AI systems that generate media content (e.g., deepfakes), and those deploying emotion recognition or biometric categorisation.

Minimal Risk AI Systems

“Minimal risk” AI systems, such as AI-enabled video games or spam filters, are not regulated under the AI Act.

Regulation of General Purpose AI Models

In addition to specific AI systems, the AI Act regulates “General Purpose AI Models,” including generative AI models such as GPT-4 or Google’s Gemini. Providers of these models must disclose information about the model to downstream providers and be transparent about the content used to train it. If a model poses a “systemic risk,” it is subject to additional obligations.

Penalties for Non-Compliance

Entities that fail to comply with the AI Act may face significant fines: up to €35 million or 7% of global annual turnover (whichever is higher) for breaches involving “unacceptable risk” AI systems, and up to €15 million or 3% for most other violations.

Implementation Timeline

Although the AI Act took effect on 1 August 2024, most of its obligations apply from 2 August 2026, following a staged implementation process. Key dates include:

– 2 February 2025: Ban on prohibited AI systems takes effect.

– 2 August 2025: Obligations for General Purpose AI Models apply.

– 2 August 2026: Most remaining obligations apply, with certain high-risk AI systems regulated from 2 August 2027.

Conclusion

Australian companies intending to offer AI systems in the EU, sell or offer AI systems into the EU, or export products incorporating AI systems into the EU should begin preparing to comply with the AI Act. Recommended steps include conducting an audit of AI systems, reviewing design and development processes for compliance, assessing supply chain impacts, monitoring EU Commission communications, and seeking legal advice.

Key Takeaways

– The EU’s AI Act applies to all entities involved in developing, deploying, importing, or distributing AI systems in the EU, regardless of location.

– Australian companies affected by the AI Act should start planning for compliance.

Source: Lexology
