Empires of Exceptionalism: Lessons from the EU AI Act and Attempts at AI Legislation in California
Published by: Hosuk Lee-Makiyama and Claudia Lozano
Subjects: Digital Economy, European Union, North America
Summary
- The EU AI Act, which passed after three years of negotiations and much ambivalence, sets a comprehensive regulatory framework that reflects the European Union’s unique needs that are not translatable to other jurisdictions.
- The proposed California bill (SB 1047) is a case in point. The bill could have misled the public into thinking that AI is already under effective regulatory control.
- Lawmakers must recognize that the EU differs from other systems. The EU AI Act is not only shaped by the desire to slow down global AI development so that Europe can catch up with rivals like the US and China: the EU also lacks an enforceable constitution, and the Act prohibits its governments from imposing social scoring and discriminatory measures.
- SB 1047 addressed only humanity-ending disasters and had unclear practical value. The EU AI Act, by contrast, was passed into law to prevent internal regulatory competition and to guard against a relapse into Europe’s history of authoritarianism.
- Existing laws in California and the EU already regulate most AI use cases. In addition, the EU AI Act reviews existing regulations to ensure they are still fit for purpose, something California failed to do in SB 1047. A critical takeaway is to examine whether existing obligations or executive powers (including enforcement agencies and public funding) can achieve the desired policy objectives.
- The diverging outcomes of the two laws show the importance of avoiding mismatches between policy objectives and legislative scope. AI regulation cannot substitute for other policy objectives, like privacy and antitrust.
- Finally, lawmakers must weigh the cost of failed regulation against the risks of non-regulation. California saw no immediate cost of inaction, whereas delayed action by the EU might have led to the fragmentation of its internal market. Moreover, given its collective decision-making, the EU can escape accountability for policy failures caused by premature laws in a way that leaders in other political systems cannot.
Authors gratefully acknowledge the support and inputs from Fredrik Erixon and Dyuti Pandya in the initial drafting of the paper.
1. Background: AI Regulations in California and the EU
On September 29, 2024, California Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would have been the first law regulating artificial intelligence in the United States.[1] The controversial bill passed both the State Senate and the Assembly and would have established new requirements for large AI model developers, echoing the new regulations in the European Union (EU).
State Senator Scott Wiener – the architect of SB 1047 – devised a precautionary measure against ‘extreme’ risks, with safeguards to prevent catastrophic harm resulting in mass casualties. Proponents of the bill included two of the most cited AI researchers, the “Godfathers of AI” Geoffrey Hinton and Professor Yoshua Bengio,[2] who viewed SB 1047 as a “crucial, light touch and measured first step” and a bare minimum for effective regulation.
The media has done the debate a disservice by simplifying it to a schism between academics on one side and the giant tech lobby on the other: the former sounding the alarm on how the federal government is falling behind technology and Europe, the latter seeing AI regulation as a freeze on innovation.[3] However, SB 1047 also faced strong opposition from top Democrats and from the private sector outside of Silicon Valley –[4] like a Shibuya crossing of multiple fault lines. Ultimately, Governor Newsom vetoed the bill, arguing that it focused solely on the most expensive large-scale models, which would have caused the public to believe, mistakenly, that this rapidly evolving technology is under control.[5]
The White House has responded to SB 1047 with the “Memorandum on Advancing the United States’ Leadership in AI”, staking out a government-wide approach to AI heavily slanted towards national and economic security objectives.[6] It is now inevitable that the AI debate will move to the federal level. Ahead of such discussion, the legislative success in the EU and failure in California raise some pertinent questions.
The two laws sought to address distinctly different types of societal failure and made different arguments for why AI technology should become subject to regulation. Proponents of legislation in the State of California and in the EU pursued very different objectives, reflecting very different deficiencies within their respective legal systems.
At the outset, it bears reminding that Californian and European laws already regulate most AI use cases. Practically all identified risks are covered by existing criminal codes or citizens’ rights. For instance, an insurer that discriminates against customers based on race violates state and federal anti-discrimination laws, whether AI was used or not. Similarly, building a nuclear bomb is illegal regardless of whether the terrorist consults ChatGPT. Since the specifications required to make bombs are practically public knowledge, it is access to fissile materials that prevents the proliferation of weapons of mass destruction.
Given such circumstances, most jurisdictions take a “wait and see” approach, as they have existing laws that prevent undesirable outcomes. In rapidly evolving areas like technology or finance, technology-neutral legislation is often superior to a specific product law, lex specialis, which is quickly outdated. Most other governments avoid second-guessing future market failures and refrain from regulating the technology until it has reached maturity,[7] relying on soft law and guidelines to safeguard security, civil rights, and privacy.
To this day, the EU’s Artificial Intelligence Act is the only hard law regulating AI. The EU AI Act was a source of inspiration – or even some form of peer pressure – for Californian legislators as the public discourse on generative AI and the role of Big Tech intensified.
However, three years of conflictual negotiations among anxious or ambivalent EU institutions and Member States preceded the ratification of the EU AI Act. The result is a negotiated compromise highly tailored to Europe’s unique needs. Besides the well-known motive of EU industrial planners to use the bloc’s regulatory influence to slow down AI development and buy time to catch up with commercial rivals like the US and China, the EU had to choose hard law due to its complicated supranational structure.
The EU is an inverse of the US constitutional framework: fundamental rights are exercised at the state level – with varying scope, protection, and means of redress in each Member State – rather than at the EU (i.e., federal) level. EU Member States compete more fiercely for influence and competitiveness than US states do, and European countries have a more recent history of mass surveillance and genocide. Europe’s rationale, formed by its history and politics, does not easily translate to other countries.
At its core, SB 1047 was limited to a safeguard against the creation of a hypothetical (and vaguely sci-fi-inspired) omnipotent AI that unleashes a disaster, while the EU rules bind nearly all use cases, including oppressive use by governments – at a time when some EU members could be veering towards authoritarianism. Of course, the EU AI Act comes with its fair share of mercantilist or populist accessories – but the EU is also a more dynamic and evolving legal system (sui generis, in its own vocabulary), and the EU co-executives can sweep a failed law under the rug through selective enforcement or revisions in technocratic committees.
Taken together, the EU offers very few lessons to other jurisdictions on how they should govern AI and its use. The failed legislative attempt in the State of California shows how the states and the federal government must agree on the scenario they seek to preempt and whether the existing legal basis and powers suffice to achieve that goal before they regulate AI through binding law.
[1] California Legislative Information, SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, March 2024 https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
[2] Bengio, Yoshua, California’s AI safety bill will protect consumers and innovation. 15 August 2024. Fortune.com. Accessed at: https://fortune.com/2024/08/15/yoshua-bengio-californias-ai-safety-bill-will-protect-consumers-innovation-tech/
[3] Lessig, Big Tech Is Very Afraid of a Very Modest AI Safety Bill. 30 August 2024. The Nation. Accessed at: https://www.thenation.com/article/society/sb-1047-ai-big-tech-fight/
[4] Criddle and Hammond, California governor vetoes bill to regulate artificial intelligence. 29 September 2024. Financial Times. Accessed at: https://www.ft.com/content/b3b92693-a960-4b6c-a503-f2792c77b04d
[5] Office of the Governor Gavin Newsom, SB 1047 Veto Message, 29 September 2024. Accessed at: https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf
[6] The White House, Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, 24 October 2024. Accessed at: https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
[7] India: NITI Aayog, AI For All – National Strategy For Artificial Intelligence, June 2018. https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf
Australia: Government of Australia, AI Ethics Principles, https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
United States: The White House, Blueprint for an AI Bill of Rights, a non-binding set of principles meant to ensure that AI systems protect people’s rights, safety, and privacy. Accessed at: https://www.whitehouse.gov/ostp/ai-bill-of-rights/
2. SB 1047 Failure in California and Its Causes
California may be home to the most influential technology cluster in the world: host to 32 of the world’s 50 leading AI companies, it constitutes an unparalleled ecosystem of innovation, investment, influence, and market power. However, merely comparing the economic structure of California with Europe’s gives a misleading picture of why the legislation failed in California but passed in the EU.
AI has captured widespread attention amid an election cycle in Europe and the US. Deepfakes and ChatGPT sparked intense debate in California and Europe on controlling a technology that seems novel and transformative. And neither the US nor the EU is a stranger to real threats to their democratic processes, cyberattacks, misinformation, or deepfakes used by foreign geopolitical adversaries (or homegrown teens) at a relatively low cost.
The drafter of SB 1047, State Senator Scott Wiener, argued that the industry itself acknowledges that advanced AI systems carry inherent risks and that industry self-regulation falls short of protecting the public.[1] His bill sought to provide “oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”[2]
Rightly or wrongly, the current debate accuses Silicon Valley of fostering a mentality of “move fast and break things,” focusing on rapid innovation at the expense of ethical considerations.[3] Senator Wiener urged lawmakers to learn from past failures on social media and data privacy before it is too late: “We have a history with the technology of waiting for harms to happen, and then wringing our hands.”[4] Much significance was therefore also attached to SB 1047 as an action against Silicon Valley’s power and influence over our social interactions.[5]
The Mismatch Between Material Risks and Objectives
From its beginnings as an advocacy project to protect the public from unnecessary and dangerous AI uses, SB 1047 started a nationwide conversation on AI safety:[6] a shift from the hands-off regulation of the tech industry and its exponential profit margins, bringing accountability to companies in the absence of federal-level policymaking.[7]
An agenda that sets out to curb the economic influence of technology companies would normally comprise a broad set of policies against technology producers. However, if passed, SB 1047 would have established new requirements for large AI models[8] solely to avert the most extreme and unlikely forms of risk,[9] or critical harms.[10]
This approach differs distinctly from the EU AI Act, which applies to almost every situation – for developers and users alike. Users of covered models (i.e., extremely large models) under SB 1047 must commit to responsible use by complying with the safety protocols set up by the developers, taking reasonable steps, and preventing misuse by malicious actors.
The scope of SB 1047 is consequently much narrower than the EU legislation. A “covered model” under SB 1047 is limited to an AI model that either uses over 10²⁶ operations and costs more than $100 million to train, or is fine-tuned or “derived” from such a model with at least 3×10²⁵ operations at a cost of over $10 million. At the moment, no models meet these thresholds, which is why some critics argue the bill would de facto have no legal effect.[11]
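To make these thresholds concrete, the sketch below encodes the covered-model test as described in this paragraph. The numerical thresholds follow the bill text as summarized above; the data-structure fields and the function name are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of SB 1047's "covered model" test, as summarized above.
# Thresholds follow the bill text; all names here are illustrative.

from dataclasses import dataclass

@dataclass
class TrainingRun:
    operations: float     # total compute used in training (integer/floating-point ops)
    cost_usd: float       # training cost in US dollars
    derived_from_covered: bool = False  # fine-tuned/"derived" from a covered model?

def is_covered_model(run: TrainingRun) -> bool:
    """Return True if the run would produce a 'covered model' under SB 1047."""
    if run.derived_from_covered:
        # Derived models: at least 3x10^25 operations and over $10 million.
        return run.operations >= 3e25 and run.cost_usd > 10_000_000
    # Base models: over 10^26 operations and over $100 million to train.
    return run.operations > 1e26 and run.cost_usd > 100_000_000

# A hypothetical frontier-scale run that would be covered...
print(is_covered_model(TrainingRun(operations=2e26, cost_usd=150_000_000)))  # True
# ...and a large present-day run that would not be. As the text notes,
# no existing model meets both thresholds.
print(is_covered_model(TrainingRun(operations=5e25, cost_usd=60_000_000)))   # False
```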
Developers of such models must conduct safety assessments and implement safeguards – including the capability to shut the model down entirely – to ensure it does not cause “critical harm”, i.e., extreme outcomes such as the creation or use of weapons of mass destruction (WMDs), cyberattacks causing mass casualties or at least $500 million in damages, and autonomous AI conduct resulting in “significant harm” that would be a criminal act if done by a human.[12]
The proposal addressed a legislative harm more akin to War Games or Terminator than The Social Network. SB 1047 leaves room for companies to innovate in areas with minimal risks.[13] Developers must maintain safety protocols and conduct annual reviews detailing the model’s risks, mitigation, and compliance, verified through third-party audits. SB 1047 also creates a new oversight entity – the Frontier Model Division – within the Department of Technology, with powers to impose civil penalties for violations.[14]
Criticism and Cause of Failure
Despite recognizing these risks as “urgent,” Governor Newsom grounded the rationale for his veto in the need to address them with caution: establishing an AI regulatory framework focused solely on the largest models could give the public a false sense of security about controlling the technology.[15] The opposition included other leading Democrats and fellow Californians in Congress, led by former Speaker Nancy Pelosi.
The Democratic consensus is that SB 1047 “is well-intentioned but ill-informed” – current frameworks for scientific risk assessment of AI are underdeveloped, which, in turn, undermines SB 1047’s evaluations, benchmarks, and standards.[16] In particular, Congresswoman Zoe Lofgren – Ranking Member of the House Committee on Science, Space and Technology, who proposed the first federal AI legislation in Congress –[17] points at the bill’s scope, noting there is “little scientific evidence of harm of mass casualties or harmful weapons created from advanced AI models”.[18] Other Democrats – Representatives Anna Eshoo and Ro Khanna[19] – suggest that SB 1047 would “stifle technical innovation in Silicon Valley” and was “purposely designed to end open source AI development”.[20]
Such internal critics point to some important causes behind the stalled legislation in California. First, there is the public perception of a lofty debate between scholars in tweed jackets and CEOs in hoodies; however, academic and business credentials in ethics or software engineering rarely translate into expertise in corporate liability law or applied microeconomics. Second, SB 1047 drew its inspiration from the EU AI Act but failed to recognize that the risks facing the EU are unique to Europe, ignoring California’s own legal and economic context. Assuming that Europe and California face identical legislative harms and failures is a far more critical lapse than the question of whether SB 1047 “strikes the right balance.”
Objectives for a binding law in California – i.e., why we regulate – included defending constitutional rights and privacy; protecting critical infrastructure and democracy against cybersecurity threats, disinformation, or even WMDs; and addressing the lack of transparency and abuse of dominant position by companies. Whether Governor Newsom said yea or nay to SB 1047 does not change the fact that the bill failed to tick many of these boxes.
[1] Scott Wiener, Senator Wiener Responds to Governor Newsom Vetoing Landmark AI Bill. 29 September 2024. Accessed at: https://sd11.senate.ca.gov/news/senator-wiener-responds-governor-newsom-vetoing-landmark-ai-bill
[2] Ibid.
[3] London School of Economics, AI makes Silicon Valley’s philosophy of ‘move fast and break things’ untenable, November 2023, The London School of Economics, Accessed at: https://blogs.lse.ac.uk/medialse/2023/11/22/ai-makes-silicon-valleys-philosophy-of-move-fast-and-break-things-untenable/
[4] Zeff, California’s legislature just passed AI bill SB 1047; here’s why some hope the governor won’t sign it, 30 August 2024. TechCrunch. Accessed at: https://techcrunch.com/2024/08/30/california-ai-bill-sb-1047-aims-to-prevent-ai-disasters-but-silicon-valley-warns-it-will-cause-one/
[5] Senate Judiciary Committee, 03/29/24- Senate Judiciary. References to Big Tech: On third party auditing ‘Big Tech companies should not be grading their own homework with respect to reasonable safety protocols’, ‘Big tech companies now dominate breakthroughs in the field. In 2022, the tech industry created 32 significant machine learning models’, ‘we caution that dependence on private funds, which will likely come from the very Big Tech companies developing risky AI systems subject to the bill’s requirements, can implicitly facilitate regulatory capture’. https://leginfo.legislature.ca.gov/faces/billAnalysisClient.xhtml?bill_id=202320240SB1047#
[6] Olle, Veto of Popular AI Safety Legislation Ignores Overwhelming Public Support for Accountability of Big Tech. Press Release. Economic Security Project Action. 29 September 2024. Economic Security.us. Accessed at: https://economicsecurity.us/news/veto-of-popular-ai-safety-legislation-ignores-overwhelming-public-support-for-accountability-of-big-tech/
[7] Ibid.
[8] Criddle and Hammond, California governor vetoes bill to regulate artificial intelligence. 29 September 2024. Financial Times. Accessed at: https://www.ft.com/content/b3b92693-a960-4b6c-a503-f2792c77b04d
[9] SB 1047, Public Safety and Law Enforcement, Section 11546.41 (b)(3), Healthcare Section 11546.41 (b)(2), Education and Employment, Section 11546.41 (b)(1), Transportation, Section 11546.41 (b)(2), Consumer protection Section 11546.41 (b)(4)
[10] SB-1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, Chapter 22.6 Definition of Critical harm: (g)(1). Accessed at: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
[11] 80,000 Hours, Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy, 2024. Accessed at: 80000hours.org
[12] Section 22602f on the SB 1047. Accessed at: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
[13] California Legislative Information, Senate Floor Analyses, August 2024 https://leginfo.legislature.ca.gov/faces/billAnalysisClient.xhtml?bill_id=202320240SB1047#
[14] SB 1047 Section 22609. https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
[15] Criddle and Hammond, California governor vetoes bill to regulate artificial intelligence. 29 September 2024. Financial Times. Accessed at: https://www.ft.com/content/b3b92693-a960-4b6c-a503-f2792c77b04d
[16] Democrats: Zoe Lofgren, Ro Khanna, Anna Eshoo, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and J. Luis Correa. Lofgren, Z. et al., In Letter to Governor Newsom from the Congress of the United States. 15 August 2024. Accessed at: https://democrats-science.house.gov/imo/media/doc/2024-08-15%20to%20Gov%20Newsom_SB1047.pdf
[17] The National Artificial Intelligence Initiative Act – Zoe Lofgren, Letter to Scott Weiner. Congress of the United States House of Representatives Committee on Science, Space and Technology. 7 August 2024. Accessed at: https://lofgren.house.gov/sites/evo-subsites/lofgren.house.gov/files/evo-media-document/8.7.24%20to%20Senator%20Wiener.pdf
[18] SB 1047 Section 22609. https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
[19] Ibid.
[20] Bengio, Yoshua Bengio, California’s AI safety bill will protect consumers and innovation. 15 August 2024. Fortune.com. Accessed at: https://fortune.com/2024/08/15/yoshua-bengio-californias-ai-safety-bill-will-protect-consumers-innovation-tech/
3. The EU AI Act – A Tailored Design for Its Needs
The EU AI Act was passed into law in April 2024 after more than three years of preparation and deliberations among the European Commission (the EU executive branch), the European Parliament (the pan-European legislative chamber), and the representatives of the Member State governments in the Council.
While the trajectory of the EU AI Act runs opposite to the Californian legislative journey, early drafts reveal that the AI Act was initially an unthinking and ungainly response to data collection rather than to algorithmic functioning. Although early concerns around facial recognition, autonomous driving, and targeted online advertising had a significant impact on the drafting of the Act, it was the public release of generative AI (gen-AI) tools like OpenAI’s ChatGPT – a veritable Sputnik moment for European capitals – that significantly sped up a legislative process that might otherwise have failed too.
The Act has also faced criticism for introducing binding laws before the impact of AI technologies is fully understood. Unlike other economies waiting for potential negative spillovers to emerge, the Act was driven by a desire to be the guinea pig for the world, prioritizing speed over precision. The EU had already imposed general restrictions on data collection through the General Data Protection Regulation (GDPR) in a similar fashion. Later, it also targeted US technology firms specifically through the Digital Services Act, the Digital Markets Act, and its antitrust probes, which resulted in substantial fines for companies like X, Google, and Facebook.
A Real Political Impetus for Legislation
So, to paraphrase, Europe is no stranger to “regulate fast and break things.” The EU’s policy objective of halting the platform economy to help its retail, telecom, and media industries catch up is a well-discussed and rehearsed topic. Thanks to EU advocacy, policy concepts like the Brussels effect, data sovereignty, digital autonomy, or “fair share” have entered the global parlance of digital policy. For the EU, slowing down the global race benefits its multinationals, which rely on exports rather than productivity improvements to grow and are therefore slower to invest in new technologies.
The public release of ChatGPT, Midjourney, Dall-E, and other Gen AI tools caused minor hysteria in Europe during the French and EU parliamentary election campaigns in the spring of 2024 when the candidates from the mainstream parties were fighting for their livelihoods. Completing the AI Act became one of the items that the candidates wanted to show off in their re-election campaigns, feeding into the European public fears of US multinationals, job displacement, misinformation, and loss of control over creative and decision-making processes.
Such commercial and political impetus also affected the material scope of the law. Unlike the minimal scope of SB 1047, the AI Act covers all use cases involving AI and applies to both producers and users.[1] The EU’s regulatory approach does not solely target technology companies but also US and Chinese firms in traditional sectors that have become more competitive thanks to AI. Foreign manufacturers and service providers have harnessed AI to optimize operations, reduce costs, and enhance customer experience, making them more agile and market-responsive than their European counterparts.
In effect, the EU AI Act establishes normative and binding rules on both developers and users that take precedence over national laws. The final provisions resemble a product liability regulation typical of the Civil Law tradition – i.e., filled with detailed ex-ante obligations tailored to create a specific market outcome rather than to avoid disasters. Here is where the EU AI Act significantly diverges from a risk-based approach: rather than case-by-case decisions subject to proportionality and cost-benefit analysis to avoid overregulation, the EU AI Act designates entire sectors and use cases as high-risk by default, even if they present no substantial harm.
The catalog of high-risk systems includes biometric identification, critical infrastructure, education, HR and recruitment processes, credit scoring, social services, and law enforcement.[2] These sectors are subjected to similar mitigation and transparency obligations as SB 1047 for models that could cause critical harm or end-of-humanity scenarios. In conclusion, the AI Act is much broader than a product regulation as it covers all instances where business processes involve a cluster of AI technologies (that are only vaguely defined) rather than just a product or a sector.
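The following sketch illustrates this design choice schematically: the risk tier attaches to the use case by default, with no case-by-case proportionality or cost-benefit test. It is an illustrative simplification, not the Act’s legal text – the category names are paraphrased from the catalog above, and the Act’s separate transparency tier for limited-risk systems is omitted.

```python
# A schematic sketch of the EU AI Act's tier-by-use-case design, as described
# above. Illustrative only: names are paraphrased, not the Act's legal text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "ex-ante obligations: risk management, transparency, human oversight"
    MINIMAL = "no specific obligations"

# The tier is assigned to the use case itself, by default.
DEFAULT_TIER_BY_USE_CASE = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "biometric identification (other)": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "HR and recruitment": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "social services": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
}

def classify(use_case: str) -> RiskTier:
    # Note what is absent: no proportionality test, no cost-benefit analysis,
    # no assessment of whether the concrete system causes substantial harm.
    return DEFAULT_TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)

print(classify("credit scoring"))   # RiskTier.HIGH, regardless of actual risk
print(classify("spam filtering"))   # RiskTier.MINIMAL
```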
EU Structural Flaws Necessitate Binding Rules
The EU system is an inverse mirror of the US: the supranational functions (equivalent to the federal level) are primarily concerned with commercial regulations that often apply with direct effect in all EU Member States, similar to how international law applies in monistic systems. Conversely, matters relating to national security, the justice system, and citizens’ rights are the prerogatives of the Member States. Hence, the scope and enforcement of fundamental rights and discrimination laws still vary depending on local laws and customs.
While the Charter on Fundamental Rights of the EU became a binding part of the EU law in 2009, the European Court of Justice – the highest legal instance of EU law – can only rule on whether a particular EU law or institution conforms with the Charter but lacks the jurisdiction to rule on purely national laws that lack links to EU legislation. Previous digital laws, such as GDPR (or derogated acts concerning the transfer of personal data to the US), are critically important to the EU integration since they create such a nexus between fundamental rights and EU law.
As the EU is merely at the halfway point to federalism, the absence of enforceable constitutional rights at the federal level leads to situations that may seem perverse to non-Europeans. The EU needs binding laws to establish a nexus that binds the national governments in the Member States, including judiciaries or border and law enforcement agencies, into its scope.
Many often forget that Europe has a more recent history of mass surveillance, authoritarianism, and ethnic cleansing than the US. Twelve Eastern European countries came from behind the Iron Curtain, while Spain, Portugal, and Greece were under martial rule. The EU AI Act, understandably, prohibits certain practices – such as social scoring by public authorities or real-time biometric identification in public spaces – as “unacceptable risks”. Furthermore, Annex III of the AI Act complements these prohibitions by designating a number of public authority use cases – including education, essential public services, law enforcement, migration, and systems that assist judges in trials or elections – as “high-risk” AI systems with specific requirements on risk management and human oversight. Since each EU Member State is sovereign, the EU AI Act must take the shape of hard law to bind its Member State governments – especially when some European governments could be veering toward ethnocentric populism and authoritarianism.
Furthermore, the hard law approach is deeply rooted in the European political economy since its largest economies – e.g., France and Germany – must retain their competitiveness against the other EU countries. Without hard laws that “lock in” the small and progressive countries in the periphery, Sweden, Ireland, or Estonia would continue to offer flexible regulations or lax enforcement to attract foreign direct investments away from the countries at the core.
In the past, many US internet and banking subsidiaries chose to incorporate in Ireland thanks to its flexible implementation of EU data privacy directives and corporate income tax exemptions, given that the EU Single Market (unlike the US) operates without any inter-state barriers on payments and financial services. Even the European Commission (the EU executive) expresses this rationale in its justification for the law: “to facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.”[3]
[1] Almada, M., & Petit, N., The EU AI Act: a medley of product safety and fundamental rights? 2023, Robert Schuman Centre for Advanced Studies Research Paper, (2023/59).
[2] Water: Article 6 and Annex III(2). High-risk if their failure could lead to a substantial impact on the environment or human life.
Healthcare: Article 6 and Annex III (5) AI Act
Education: Article 6 and Annex III (3) AI Act
Employment: Article 6 and Annex III (4) AI Act
[3] OJEU, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). June 2024. Accessed at: http://data.europa.eu/eli/reg/2024/1689/oj
4. Comparing California and Europe: Lessons for Other Jurisdictions
Although California’s SB 1047 and the EU AI Act were both immediate responses to public concerns, and their legislative objectives significantly overlapped, their scope, material provisions, and exceptions aim to create very different outcomes. Lawmakers of other jurisdictions must not overlook the structural differences between state-level regulation in California and supranational governance in the EU when looking for legislative benchmarks for their countries. Looking at the legislative process through a comparative lens offers clear takeaways for future attempts to regulate AI in the US, on federal or state levels, or for any other jurisdiction considering legislation.
Do Not Conflate the Negative Impact of AI with Other Issues
To begin, both the EU’s and California’s legislative pushes sought stricter accountability for companies as a response to the perceived excesses and ethical lapses within the tech industry, i.e., developers who face criticism for their market power or for failing to adequately address user issues like data privacy, algorithmic bias, and the broader societal impacts of their technologies. However, SB 1047 does little towards those stated objectives, aside from incentivizing companies to cap investments in California to escape its administrative obligations. In contrast, the EU successfully passed its law thanks to internal competition and the potential risk of state-level abuses.
The lesson here is unmistakable: AI regulation cannot substitute for, or even supplement, data privacy or antitrust enforcement against Silicon Valley. While some cultures may share European Luddite tendencies, only a few countries share Europe’s history as surveillance states and would willingly tie their law enforcement agencies, government agencies, banks, and telecom providers to the mast in such a manner. Germany and France are also uniquely positioned to impose common regulatory restrictions upon themselves and their more nimble neighbors whenever the net economic result is positive.
Review Existing Laws Before Drafting New Ones
SB 1047 would have also introduced a dual-liability regime, where developers face overlapping claims – one under SB 1047 for large-scale harms and another under existing tort and product liability laws for defective products or negligent practices. EU law often creates overlapping liabilities under national and EU law, but such conflicts of laws are often intentional, paving the way for more supranational harmonization. In other words, such duplication of responsibilities is a feature in Europe. But it is a bug elsewhere, including in US state-level legislation.
Ironically, the EU AI Act would have also provided a workable solution for California to avoid duplication of responsibilities and dual liability. Annex II of the EU AI Act lists all existing laws that the EU must review to ensure they are “fit for purpose” in light of the new AI developments and determine whether they should be revised to align with the AI Act’s objectives, ensuring legal consistency and avoiding conflicts. A wholesale review of existing regulations would avoid creating new and potentially redundant obligations, ensuring legal stability and coherence and enabling sector-specific adaptations while avoiding conflicts with regulations at both state and federal levels.
Kill-switch and human-in-the-loop principles of the kind outlined under SB 1047 are already mandatory under existing cybersecurity and software standards, such as the government-developed NIST SP 800-53 on Security and Privacy Controls for Information Systems and Organizations, ANSI/RIA R15.06 and ISO 10218 on safety requirements for industrial robots and robot systems, or ISO 26262 for road vehicles, to take a few examples. There are also efforts to impose an international ban on lethal autonomous weapons and to prohibit AI from controlling nuclear weapon systems within the Non-Proliferation Treaty – bans that are clearly outside California’s ability to enforce.
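To illustrate that these two controls are well-established engineering patterns rather than novel legal inventions, here is a minimal sketch of a full-shutdown “kill switch” and a human-in-the-loop gate. All class and function names are hypothetical; this shows the generic pattern, not the normative text of any of the standards cited above.

```python
# A minimal sketch of two controls discussed above: a "kill switch" that
# halts all inference, and a human-in-the-loop gate for consequential
# outputs. Names are hypothetical and purely illustrative.

import threading
from typing import Callable

class GuardedModel:
    def __init__(self) -> None:
        self._shutdown = threading.Event()  # the kill switch

    def shutdown(self) -> None:
        """Full shutdown: once triggered, no further inference is possible."""
        self._shutdown.set()

    def predict(self, prompt: str, approve: Callable[[str], bool]) -> str:
        if self._shutdown.is_set():
            raise RuntimeError("model has been shut down")
        draft = f"model output for: {prompt}"  # stand-in for real inference
        # Human-in-the-loop: a reviewer must approve the output before release.
        if not approve(draft):
            return "[output withheld pending human review]"
        return draft

model = GuardedModel()
# A human reviewer (simulated here) approves or withholds each output.
print(model.predict("adjust grid load", approve=lambda draft: True))
model.shutdown()
# Any further call to model.predict() now fails fast instead of answering.
```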
In conclusion, an immediate first step for California would have been to review its existing state laws. This lesson also applies to other jurisdictions, as all legal systems have a catalog of business regulations that inevitably conflict with a horizontal AI regulation – and it is not always desirable that the latter take precedence as lex specialis or lex posterior.
Do Not Overlook Executive Powers
For some, Governor Newsom’s veto seems like a win for Big Tech. For others, adopting preemptive regulation like SB 1047 would have been contrary to appropriate evidence-based policymaking. Lobbyists and advocates often speak in absolutes: non-regulation or “industry self-regulation” amounts to anarchy – or any new law will “stifle innovation.”
However, policymaking is not exclusively about drafting and passing laws. The most powerful branch in market governance is the executive – not the legislature. As evident from the recent memorandum on AI and national security,[1] a government may issue executive orders and exercise market enforcement powers against AI users or developers through agencies like the FTC, OSTP, FCC, SEC, and DOE (which already runs an Office for Artificial Intelligence and Technology).
More importantly, the executive branch is always in control of public funding, with the means to fiscally incentivize good market behavior or tax undesirable outcomes. The Biden Administration has issued an AI Bill of Rights, focusing on protecting individual rights in the face of growing AI technologies. Other soft law tools, such as administrative guidelines, technical standards, and direct funding, allow flexibility and adaptability and provide a framework to set incentives for ethical development and governance.
Weighing the Cost of a Flawed Regulation Against Non-Regulation
This does not suggest that hard law should never be used – but lawmaking is for situations where legislative harm is known and unavoidable. Here is where the cases of California and Europe diverged: SB 1047 grappled with unclear costs of inaction, whereas large EU members like Germany and France were well aware of the cost of legal fragmentation within the Single Market from previous experiences of imperfect harmonization.
In addition, the EU can disregard regulatory failures like few political systems can. The EU is still an evolving legal system that is unique and blends aspects of both intergovernmental and supranational systems. It retains the features of a technocratic international organization where the EU co-executives can sweep a failed law under the rug through selective enforcement or withdraw it in technocratic committees. Despite huge tolls on productivity and competitiveness, political accountability for digital regulations is almost nonexistent in the EU. The EU collective decision-making process builds on compromises among multiple supranational institutions, 27 governments, and pan-European political groups that are largely unknown to most citizens.
There is a real risk that companies may decide to incorporate in other jurisdictions or simply not release models in Europe because of its regulations.[2] Earlier this year, Meta chose not to roll out advanced multimodal AI systems in Europe because of the regulator’s “unpredictable” behavior,[3] as well as complications with GDPR.[4] Apple made a similar decision regarding Apple Intelligence over compliance issues with the Digital Markets Act.[5] There is much concern among analysts and policy advisors about Europe’s declining productivity, which is directly linked to poor uptake of digital technologies. Yet no personal or political accountability is ever assigned – even for some of the EU’s most controversial policies, regulations, or funding decisions that have significantly impacted business operations over the past decade.
The self-confidence to ignore the costs of regulatory failure, productivity declines, and a vote of no confidence from the market is a privilege rarely bestowed upon political officials and party groups in democracies.
[1] The White House, Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, 24 October 2024. Accessed at: https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
[2] SB 1047 Section 22609. https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
[3] Milmo, Meta pulls plug on release of advanced AI model in EU, 18 July 2024, The Guardian. Accessed at: https://www.theguardian.com/technology/article/2024/jul/18/meta-release-advanced-ai-multimodal-llama-model-eu-facebook-owner
[4] Weatherbed, Meta won’t release its multimodal Llama model in the EU, 18 July 2024, The Verge. Accessed at: https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations
[5] Montgomery, Apple delays launch of AI-powered features in Europe, blaming EU rules, 21 June 2024, The Guardian. Accessed at: https://www.theguardian.com/technology/article/2024/jun/21/apple-ai-europe-regulation
OJEU, Art 2(5), Recital (26), Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). June 2024. Accessed at: http://data.europa.eu/eli/reg/2024/1689/oj
Bibliography
Almada, M., & Petit, N. (2023). The EU AI Act: a medley of product safety and fundamental rights? Robert Schuman Centre for Advanced Studies Research Paper, (2023/59).
Bengio, Y. (15 August 2024). California’s AI safety bill will protect consumers and innovation. Fortune.com. https://fortune.com/2024/08/15/yoshua-bengio-californias-ai-safety-bill-will-protect-consumers-innovation-tech/
Montgomery, B. (21 June 2024). Apple delays launch of AI-powered features in Europe, blaming EU rules. The Guardian. https://www.theguardian.com/technology/article/2024/jun/21/apple-ai-europe-regulation
California Legislative Information. (March 2024). SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
California Legislative Information. (March 2024). Senate Judiciary Committee 03/29/24- Senate Judiciary. https://leginfo.legislature.ca.gov/faces/billAnalysisClient.xhtml?bill_id=202320240SB1047#
California Legislative Information. (September 2024). AB-2602 Contracts against public policy: personal or professional services: digital replicas. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2602
Copyright Law of the United States. (December 2022). https://www.copyright.gov/title17/title17.pdf
Criddle, C., & Hammond, G. (29 September 2024). California governor vetoes bill to regulate artificial intelligence. Financial Times. https://www.ft.com/content/b3b92693-a960-4b6c-a503-f2792c77b04d
Weatherbed, J. (18 July 2024). Meta won’t release its multimodal Llama model in the EU. The Verge. https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations
Ebers, M. (2024). Truly Risk Based Regulation of AI: How to implement the EU’s AI Act. SSRN http://dx.doi.org/10.2139/ssrn.4870387
Government of Australia. (2024). AI Ethics Principles. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
Lessig, L. (30 August 2024). Big Tech Is Very Afraid of a Very Modest AI Safety Bill. The Nation.com. https://www.thenation.com/article/society/sb-1047-ai-big-tech-fight/
Lofgren, Z et al. (15 August 2024). In Letter to Governor Newsom from the Congress of the United States. https://democrats-science.house.gov/imo/media/doc/2024-08-15%20to%20Gov%20Newsom_SB1047.pdf
London School of Economics. (November 2023). ‘AI makes Silicon Valley’s philosophy of ‘move fast and break things’ untenable’. https://blogs.lse.ac.uk/medialse/2023/11/22/ai-makes-silicon-valleys-philosophy-of-move-fast-and-break-things-untenable/
Milmo, D. (18 July 2024). Meta pulls plug on release of advanced AI model in EU. The Guardian. https://www.theguardian.com/technology/article/2024/jul/18/meta-release-advanced-ai-multimodal-llama-model-eu-facebook-owner
NITI Aayog. (June 2018). AI For All – National Strategy For Artificial Intelligence https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf
Office of the Governor Gavin Newsom. (29 September 2024). SB 1047 Veto Message. https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf
OJEU. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
Olle, T. (29 September 2024). Veto of Popular AI Safety Legislation Ignores Overwhelming Public Support for Accountability of Big Tech. Economic Security.us. https://economicsecurity.us/news/veto-of-popular-ai-safety-legislation-ignores-overwhelming-public-support-for-accountability-of-big-tech/
Wiener, S. (29 September 2024). Senator Wiener Responds to Governor Newsom Vetoing Landmark AI Bill. https://sd11.senate.ca.gov/news/senator-wiener-responds-governor-newsom-vetoing-landmark-ai-bill
The White House (2024) Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Zeff, M. (2024). California’s legislature just passed AI bill SB 1047; here’s why some hope the governor won’t sign it. TechCrunch. https://techcrunch.com/2024/08/30/california-ai-bill-sb-1047-aims-to-prevent-ai-disasters-but-silicon-valley-warns-it-will-cause-one/
80,000 Hours. (2024). Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy. 80000hours.org