techUK responds to regulators' strategic approaches to artificial intelligence

On 30 April, 12 of the UK’s regulators published their strategic action plans for AI, at the request of Michelle Donelan, Secretary of State for Science, Innovation and Technology, as part of the UK’s pro-innovation approach to AI regulation.

Regulators asked to publish a plan by the end of April included the ICO, CMA, FCA, Ofcom, Bank of England, Equality and Human Rights Commission and Ofgem, amongst others. In time, we could see further plans published by other regulators.

The publication of these plans marks the first significant milestone since the Government’s response to its AI White Paper, and serves as a litmus test of the extent to which the Government can deliver the regulatory coordination and capabilities that industry has called for.


Regulators’ plans at a glance

Regulators were advised by the Government on what their strategic action plans for AI should contain, including how they are interpreting and applying the AI principles of the White Paper, an analysis of risks within their remit and of their existing capabilities and activities to address them, as well as a forward-looking plan of activities for the coming 12 months.

In line with expectations, the member regulators of the Digital Regulation Cooperation Forum (DRCF) – Ofcom, the FCA, the ICO and the CMA – have presented the most comprehensive plans, backed by the resources, expertise and knowledge needed to deliver on the Government’s agenda. Their plans give industry clear signals of where their priorities lie – from the CMA focussing on consumers and its ongoing work on foundation model markets, to Ofcom prioritising its new online safety responsibilities, and the ICO and FCA presenting a “business as usual” programme of work, demonstrating that AI is not new terrain for them.

The DRCF regulators have also shown a united front, all acknowledging and showcasing the collaborative efforts of the forum, as well as plans for joint work through initiatives such as the AI and Digital Hub and joint statements, with the ICO and Ofcom sharing one only last week on online safety and data protection.

Of note, the ICO made clear in several areas of its plan that it believes existing data protection laws and initiatives suitably address the risks of AI technologies, particularly highlighting its work on facial recognition technology and children’s privacy, both areas receiving considerable public and parliamentary scrutiny. Similarly, the ICO has warned against the use of emotion analysis technology, and has identified biometrics as a priority area for the year ahead.

This regulatory approach to risk sits in contrast to that of the EU, where legislation (the EU AI Act) categorises AI use cases by risk, including a list of banned applications ranging from emotion recognition in the workplace to predictive policing based solely on profiling. With this in mind, the ICO’s action plan begins to sketch out the benefits of the UK’s approach to AI regulation, which can be flexible and dynamic compared to the prescriptive nature of the EU’s.


What about the other regulators?

However, the success of the UK’s approach to AI regulation cannot rest solely on the existing strengths of our digital regulators. While the action plans demonstrate the benefits of the UK’s approach, they also demonstrate, in equal part, the areas where more targeted support and intervention from the Government is required.

Plans from the Equality and Human Rights Commission (EHRC) and the Medicines and Healthcare products Regulatory Agency (MHRA) demonstrate a desperate need for additional resources to support the day-to-day work of these regulators, such as increasing their in-house expertise and capabilities. Currently the only additional funding available is the Government’s £10m Regulators’ AI Capability Fund, which will be helpful for discrete projects but is not an uplift in overall funding across the UK’s regulatory system.

For example, the MHRA’s strategic approach to AI notes that there are “approximately three full-time equivalent employees” working on workstreams relating to the use of AI as a medical device, projected to rise to 7.5 full-time staff over the next 12 months. With the number of companies seeking to expand or newly deploy AI-based tools for health and care applications only set to grow rapidly, this level of regulatory resourcing is highly unlikely to meet demand.

This limited resourcing is also concerning for the EHRC, which has acknowledged its important role in addressing the potential risks and harms of AI technologies and has made clear its lack of capacity to meet its new regulatory demands. Much like the ICO, the EHRC references projects related to facial recognition technology and biometrics, another signal as to where regulatory efforts may be focused in the near future.

In comparison, Ofgem presented a light update, likely to be supplemented by the outcome of its open consultation on AI in the energy sector. This may begin to hint at some of the risks of a lack of regulatory coordination – a responsibility that sits with the Government’s new “Central Function”. Similarly, the Office for Product Safety and Standards is yet to publish its plan.

With most plans now out, the Government has committed to reviewing responses as part of its wider work on AI regulation. This will be key as businesses grapple with making sense of these plans to inform how they innovate responsibly.


techUK’s overall assessment

Overall, the publication of the regulators’ strategic plans is a welcome first step in the Government’s approach to AI regulation, providing the space for highly capable regulators to flex their strengths and chart the course for how AI technologies develop in their domains. This is particularly important given the responsibility of the UK’s digital regulators to give due consideration to the UK’s economic growth and innovation.

It also demonstrates the Government’s iterative approach in action, with regulators already committing to future updates on their programmes of work, and some already full steam ahead with delivery – such as the ICO’s consultations on generative AI and the publication of the CMA’s update report on foundation models.

However, the plans also show what is at risk if the Government does not deliver on its promises to support regulators with sufficient resources, expertise and leadership through its coordinating role – namely the “Central Function” – and through the inter-ministerial group intended to ensure effective coordination across government.

If there is to be no immediate increase in funding for regulators, who will struggle to develop full and comprehensive plans for AI on top of their existing duties, then the Government will need to bring forward new ideas to support a more efficient use of resources and better coordination.

techUK has argued for increases in funding to support the regulatory system as it seeks to implement the AI White Paper. In addition, the Government could seek to encourage and facilitate greater sharing and pooling of resources and expertise. This could include pooled experts and shared compute capacity, as well as greater strategic direction from Government. Further ideas on how to improve our approach to regulation are included in techUK’s Seven Tech Priorities for the next Government.

With the Government set to publish an update on its steering committee to support and guide the activities of the regulators, techUK will be looking for action to address this resource shortfall across parts of the regulatory system, and for a stronger sense of strategic direction now that the UK’s AI regulatory model is up and running.

Dani Dhiman

Policy Manager, Artificial Intelligence and Digital Regulation, techUK