
#EU: Facing the Complexities of #AI Regulation



Regulating artificial intelligence (AI) raises a host of challenges, from deepfakes and facial recognition to existential threats. Historically, tech regulation has lagged behind the industry it aims to govern: the UK's online safety bill and the EU's Digital Services Act arrived nearly two decades after the founding of Facebook. Meanwhile, AI continues to advance rapidly, exemplified by ChatGPT, which has amassed more than 100 million users. As concern grows about an uncontrolled AI race, the pressure on authorities to act promptly is mounting.


As is often the case with technology, the European Union (EU) is moving first, with its AI Act. In the United States, Senate Majority Leader Chuck Schumer has released a framework for developing AI regulations, prioritizing goals such as security, accountability, and, above all, innovation. In the United Kingdom, Rishi Sunak has organized a global summit on AI safety for the autumn. The EU's AI Act, however, two years in the making, marks the first serious attempt to regulate this transformative technology comprehensively.


The act explicitly prohibits systems that pose an "unacceptable risk," including those that manipulate individuals. It cites alarming scenarios such as voice-activated toys that encourage dangerous behavior in children, as well as "social scoring" and predictive policing systems that rely on profiling and biometric identification. High-risk AI systems, those that jeopardize safety or fundamental rights, will undergo thorough assessment before market entry and continuous monitoring during use. This category covers systems used in critical areas such as education, law enforcement, and border control, as well as products falling under the EU's product safety legislation. Critics argue, however, that the associated costs and compliance burden could prove daunting, particularly for startups.


Systems with limited risk will have to meet minimal transparency requirements, and users must be made aware when they are interacting with AI. This covers systems that generate image, audio, or video content, such as deepfakes. The European Parliament has put forward specific proposals for generative AI, requiring platforms such as Google and Facebook to promptly flag AI-generated content. AI companies will also be obliged to publish summaries of the copyrighted data used to train their systems, an area where transparency remains largely inadequate.


On the other hand, AI systems with minimal or no risk, such as those used in video games or spam filters, will face no additional obligations under the AI Act. According to the European Commission, the "vast majority" of AI systems used in the EU fall into this category. Breaches of the act could result in fines of up to €30 million or 6% of global turnover, whichever is higher, indicating the seriousness with which the EU approaches AI regulation.
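
To make the penalty structure concrete, here is a minimal sketch in Python of how the fine ceiling would be computed, assuming the "whichever is higher" rule in the Commission's draft; the function name and the example company's turnover are hypothetical, chosen purely for illustration.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Illustrative fine ceiling for the most serious breaches of the
    draft AI Act: EUR 30 million or 6% of worldwide annual turnover,
    whichever is higher. (Hypothetical helper, for illustration only.)"""
    flat_cap = 30_000_000
    turnover_cap = 0.06 * global_turnover_eur
    return max(flat_cap, turnover_cap)

# Example: for a hypothetical company with EUR 10 billion in global
# turnover, 6% (EUR 600 million) exceeds the EUR 30 million floor,
# so the turnover-based figure sets the ceiling.
print(f"EUR {max_fine_eur(10_000_000_000):,.0f}")  # EUR 600,000,000
```

In practice this means the €30 million figure only binds for smaller firms; for the large platforms the act targets, the turnover-based cap dominates.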


Foundation models, which underpin generative AI tools like ChatGPT, are trained on extensive datasets. The European Parliament's draft mandates that services such as ChatGPT disclose the data used to train their models. To address the high risk of copyright infringement, developers of AI chatbots will need to publish detailed summaries of the works of scientists, musicians, illustrators, photographers, and journalists used in training, and demonstrate compliance with relevant laws throughout the training process. The legislation also emphasizes the need for human oversight and redress procedures when AI systems are deployed, requiring a thorough assessment of their impact on fundamental rights before implementation.


The EU aims to finalize the draft by the end of the year, following MEPs' mid-June vote to push through an amended version of the European Commission's original draft. Trilateral discussions between the Commission, the Parliament's AI committee chairs, and the Council of the European Union will refine and finalize the legislation. Lisa O'Carroll, the Guardian's Brussels correspondent, closely follows the AI Act and highlights the contentious issue of real-time facial recognition, which is banned under the MEPs' proposals. Law enforcement agencies view the technology as a crucial crime-fighting tool. Real-time facial recognition is already in use in parts of China, where it monitors drivers for speeding, mobile phone use, or drowsiness behind the wheel.


Additionally, the French government plans to use real-time AI facial recognition at the upcoming Olympics to mitigate threats such as crowd surges. If the AI Act were in place, however, that practice would have to be abandoned. The EU hopes its regulation will set the "gold standard," prompting major players like Google and Facebook to adopt the new laws as their global operational framework, a phenomenon known as the "Brussels effect."


The influence of the AI Act extends beyond the EU's borders. Charlotte Walker-Osborn, a technology lawyer specializing in AI, acknowledges the EU's global influence in tech regulation, as demonstrated by laws like the General Data Protection Regulation (GDPR). While the AI Act sets a significant benchmark, other countries such as the United States, United Kingdom, and China are also formulating their own measures. Consequently, companies and other entities operating within the scope of these regulations will face the challenge of complying with divergent requirements.


Critics of the AI Act propose alternative approaches. Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, favors a pro-innovation stance similar to the UK government's white paper released in March. She believes that while regulation should ensure AI is developed responsibly and safely, it is too early in the AI development cycle to determine definitively what the right regulatory framework looks like.


Key industry players have also weighed in. Sam Altman, CEO of OpenAI, the company behind ChatGPT, has said that OpenAI will cease operating in the EU if it cannot comply with the AI Act, though he supports audits and safety tests for high-capability AI models. Microsoft, a major backer of OpenAI, recognizes the need for legislative guardrails and international alignment efforts, and welcomes the AI Act. Google DeepMind, Google's UK-based AI division, emphasizes the importance of the act supporting AI innovation within the EU.


Nevertheless, researchers from Stanford University caution that major AI players, including Google, OpenAI, and Meta (formerly Facebook), exhibit uneven compliance with the draft EU AI Act's requirements, particularly concerning the summarization of copyrighted data in their models.


In conclusion, regulating AI presents complex challenges for politicians, watchdogs, and the public alike. The EU's AI Act is a significant step toward comprehensive regulation, focused on eliminating unacceptable risks and protecting fundamental rights. While the EU aims to establish itself as the global standard-bearer, other countries are formulating their own measures, and companies will have to navigate varying regulatory frameworks. Critics advocate approaches that better balance innovation and regulation, and industry players such as OpenAI, Microsoft, and Google DeepMind differ in their views on the act. As the EU fine-tunes the final draft, its implementation will be watched closely for its impact on the future of AI regulation worldwide.
