AI Act approved by the European Parliament – but what comes next?
MEPs in Strasbourg have now approved the EU’s AI Act by an overwhelming margin (523 votes to 46). It heralds the long-awaited arrival of the first globally significant attempt at a standalone regulatory framework for artificial intelligence systems.
It’s tempting to assume that the vote represents the endgame for the legislation and that we now move straight to implementation, but it’s worth taking a step back to consider what’s still to come, and what challenges – legislative and political – the Act is likely to face once in place.
Timetable to implementation
To take the most obvious point first, the AI Act isn’t yet in force. It will now proceed to legal and linguistic finalisation; it then needs approval by the Council (i.e. Member States) and will only enter into force once published in the Official Journal of the EU, expected in May or June 2024. After that, the Act’s obligations take effect on a phased basis, including:
- those on prohibited AI systems after 6 months;
- those on general purpose AI models (GPAI) after 12 months;
- the requirements on ‘high risk’ AI systems (at least those identified in Annex III of the Act) after 24 months; and
- the limits on remaining high risk systems outside Annex III (affecting the likes of cars, aircraft and medical devices) after 36 months.
Further guidance to follow
In addition to this phased introduction, the EU Commission will have a tight set of deadlines to meet in providing businesses with additional legal certainty through the publication of further guidance:
- on GPAI within 9 months; and
- on high risk classifications (under Articles 6 and 82) after 18 months.
Whether the Commission can, in that time, develop standards and guidance that will not require continued and potentially material revision remains to be seen. We need only look at the stalled progress of the AI Act itself, as it was reworked to accommodate generative AI before being passed, to understand how the evolution of AI technologies in the intervening period could present another moving target.
Room for improvement?
Putting the strict legal obligations of the Act to one side, it’s impossible to ignore the broader ‘noise’ around its introduction. Even some MEPs closely involved in its development have already expressed doubts as to whether, in its current form, it is truly fit for purpose. Concerns centre on:
- the need for much more detailed guidance and standards to underpin the broad framework of the Act;
- clarification of the relationship between the Act and overlapping, parallel legal obligations – notably the GDPR and wider IP laws;
- duplication of regulatory and governance oversight between the newly established EU AI Office and existing EU regulatory bodies;
- the need for a more focussed approach to supporting continuing innovation in AI – allowing for appropriate sandboxes and ensuring the Act is not applied disproportionately to SMEs;
- much like the debate around regulator capacity and capability in the UK, ensuring the AI Office is sufficiently resourced and supported to discharge its obligations fully; and
- ensuring that the hefty rights to fine companies under Article 71 are effectively suspended unless and until these requirements have been fully achieved.

The Commission’s (and Member State AI bodies’) appetite for enforcement has of course been a question hanging over the Act since it started to take shape, and that question only becomes more pressing now that the Act is entering into force. The Commission is in theory tasked with ensuring clarity and consistency across Member States, but whether that certainty can be established in the early days of the Act we don’t yet know.
The international context
Reception of the Act beyond the EU’s borders has been mixed, and despite the legislation’s extra-territorial reach it’s unclear whether the EU’s potential ‘first mover advantage’ will be borne out. Only this week, multilateral talks in the Council of Europe on a separate treaty for the protection of human rights in AI applications looked close to breaking point, as the US in particular sought to exempt its companies from the treaty’s requirements. That signposts a wider disparity in global views on AI regulation.

In the UK, meanwhile, there appears to be little appetite (from either party likely to form the next government) to pursue a similar ‘once in a generation’ regulation for AI; the focus instead is on developing sector regulators’ expertise to manage AI risks. The US and China will also inevitably play a key role in setting the tone for wider global regulation. The ideal outcome is that any disparity in regulatory approach between countries is smoothed out by consensus at the likes of the G7, OECD and UN around well-understood, consistent definitions, guidance and standards. Whether that consensus can be reached between key power blocs who still see AI as a regulatory ‘race’ is unclear.
Conclusion
As the complexities of the environment into which the AI Act is being launched become clearer, our key takeaway is that businesses developing or intending to deploy AI systems shouldn’t treat its passing as a single line in the sand. There’s much still to be done for the EU to fill the gaps in the framework laid out by the legislation and instil confidence in how it will be applied. There is also still plenty of scope for things to change, as pressures from within and beyond the EU influence how the law is ultimately implemented and enforced.