
Time to comply – AI Act is published in Europe, in force by August

The Artificial Intelligence (AI) Act was published today (July 12) in the Official Journal of the European Union, the gazette of record for the European Union (EU). It confirms a three-phase schedule for the implementation of its various provisions, according to the risk associated with the aspects of AI systems they seek to regulate. The ultimate deadline is August 2 2026, but the first deadline is in less than three weeks, on August 2, when its rules on ‘general provisions’ and ‘prohibited practices’ and systems become binding for EU member states.

These are covered in chapters one (pp. 48-51) and two (pp. 51-53) of the 150-page document. Article 113 further sets a deadline of August 2 2025, a year later, for regulatory provisions around ‘notifying authorities’ for ‘high-risk AI systems’ (section four, chapter three, pp. 70-76), the establishment of an AI Office and AI Board (chapter seven, pp. 95-100), and the enforcement of ‘penalties’ for non-compliance (chapter 12, pp. 115-118). A year later again, from August 2 2026, the full scope of the AI Act applies, with a few exceptions, such as its retrospective application to AI products already on the market. The final provisions take effect from August 2 2027.

The new AI Act, set to inform AI policy across the globe, establishes a common regulatory and legal framework for the development and application of AI in the EU. It was proposed by the European Commission (EC) in April 2021 and approved in May 2024. Other countries are pursuing their own versions, but the EU model is expected to set the template for these, too. “That’s the Brussels effect,” said Dan Nechita, Head of Cabinet for Dragoş Tudorache, Member of European Parliament, and the person in charge of shepherding the AI Act through “so many” rounds of votes, speaking at Digital Enterprise Show last month.

He said: “Like with the GDPR, where we decided, okay, this is how to protect personal data. GDPR is not perfect, but it has had a global influence. The AI Act will be the same.” The legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules. It is presented as a corporate tool with a democratic purpose, which does not confer rights on individuals, but instead regulates original providers and professional users. Its most controversial measure is its treatment of facial recognition technology in public places, which is categorised as high-risk but not banned; Amnesty International has said general use of facial recognition should be banned.

The AI Act sets out four levels of risk – unacceptable risk, high risk, limited risk, and minimal risk – plus an additional category for general-purpose AI. Of the four official risk categorisations, applications in the first group (“unacceptable risk”) are banned, and applications in the second (“high-risk”) must comply with security and transparency obligations, as well as go through conformity testing. Limited-risk applications have only transparency obligations, and minimal-risk applications are not regulated. “The bulk of the regulation applies to AI systems that have a very, very significant impact on the fundamental rights of humans,” explained Nechita last month.

In particular, this relates to the use of AI in employment decisions, law enforcement, and migration – in places where “the use of the AI can discriminate and, ultimately, put you in jail or deny you a job or social benefits”. He said: “And all of those are high risk cases [at the top of the pyramid]… Medium risk cases, going down the pyramid, would be those AI systems that can manipulate or influence people – like chatbots and deep fakes, for example… The act obliges [in those cases] some transparency – so the AI says, ‘Hey, look, I’m an AI; I’m not actually your psychologist’. And then everything else, about 80 percent of the AI systems out there, [are categorised as low-risk].”

The AI Act stipulates the creation of various new institutions to promote cooperation between member states, and to ensure bloc-wide compliance with the regulation. These include a new AI Office and European Artificial Intelligence Board (EAIB). The AI Office is in charge of “supervising the very big players who build very powerful systems at the very frontier of AI”. The EAIB is to be composed of one representative from each member state, and tasked with the Act’s consistent and effective application across the union. These two bodies will be complemented by supervisory authorities at national level, as well as a new Advisory Forum and Scientific Panel, offering guidance variously from the enterprise and academic sectors, plus from civil society.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.