On 6 February 2024, the government published its response to the AI White Paper of March 2023, which we covered in our earlier articles, Artificial intelligence will be challenged and the UK approach to AI regulation. The response followed a 12-week public consultation that gathered views from individuals and organisations through written submissions, roundtables, and workshops, alongside insight from the regulators.
The consultation asked 33 questions to gather respondents' views on the AI White Paper and on AI more generally. The questions covered issues such as transparency, legal responsibility, how broad or narrow guidelines or laws should be, who should regulate, the support tools that should be available, and the overall approach that should be taken.
A united front
Few questions produced a wholly united stance, with respondents offering a range of views and ideas on each. The areas showing the most agreement were:
- Cross-sectoral principles: There was strong support for revised cross-sectoral principles that would cover the broader risks posed by AI technologies and add specific standards to the principles as currently drafted.
- Legal framework: Respondents widely felt that the current legal framework for remedying AI-related harms is inadequate, both within the UK and across borders. They also felt that organisations need a legally responsible person for AI, similar to the Data Protection Officer role that provides GDPR oversight in companies.
- Transparency: The responses made it clear that transparency regarding when organisations are using AI is important for building public trust, creating accountability, and making it easier to seek redress.
- Resources: There was a wide acknowledgement that regulators would require increased resources to monitor and enforce legislation effectively.
- Delivery: Many respondents believed the proposed framework would benefit from central delivery, and most felt the government was best placed to deliver and oversee the central functions, while regulators are best placed to implement the principles themselves in their own sectors. Respondents also noted that the framework set out in the White Paper needs further clarification on liability across the AI life cycle.
Overall, the responses acknowledged that legislation may ultimately be necessary, but the preferred option for now is an agile, more flexible approach to AI regulation.
What’s next?
Since the publication of the White Paper, several regulators, such as the CMA and the ICO, have published their reviews of and guidance on AI systems in their sectors. Others, such as the Office of Gas and Electricity Markets and the Civil Aviation Authority, are preparing their strategies for publication. The government has asked several regulators to publish an update outlining their strategic approach to AI by 30 April 2024.
The government has established a team to deal specifically with cross-sectoral risk monitoring. It plans further targeted consultations and intends to publish an "Introduction to AI assurance" in spring 2024. The government also plans to establish a steering committee with government representatives and key regulators, and has said it is investing in regulators to enable them to work together and to improve practical tools for addressing AI risks and opportunities. It will also review regulators' current powers to identify any gaps.
There is much more work to do, so watch this space.
Contact Holly Anderson today for more information on AI regulation.
Note: This article is not legal advice; it provides information of general interest about current legal issues.