Amended California AI Regulation Bill Passes Committee

Facts

  • SB 1047, a piece of California legislation seeking to regulate artificial intelligence (AI) models, passed the state's Appropriations Committee on Thursday after incorporating amendments proposed by AI start-up Anthropic and other opponents of the bill.[1]
  • The legislation would require AI systems to be tested for safety before their release and come with built-in safety guardrails. It would also allow the state attorney general to sue AI makers if their products cause serious harm — scaling back the original proposal, which would have permitted legal action before such harm occurred.[2][1]
  • The revisions have also withdrawn the creation of a new government agency, the Frontier Model Division, instead placing the Board of Frontier Models within the existing Government Operations Agency. The board — to be made up of nine advisors — would issue safety guidance and regulations, including compute thresholds.[1]
  • The 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act' was first introduced in February and passed the state Senate in a 31-1 vote in May. It now moves on for a vote in California's Assembly before heading to Gov. Gavin Newsom for final approval.[3][1]
  • Industry figures and researchers opposed to the bill include Stanford professors Fei-Fei Li and Andrew Ng, Meta AI chief Yann LeCun, and companies such as Google, Apple, and Amazon. Opponents have argued that the bill would harm the AI industry in California.[4]
  • Bill author Scott Wiener says the regulations are needed to 'get out ahead of' future harms that AI might cause. The bill has received support from several AI researchers, including Gary Marcus, Max Tegmark, Stuart Russell, Geoffrey Hinton, and Yoshua Bengio, and a poll from the Artificial Intelligence Policy Institute found that 65% of Californians support it.[5][4][6][7][8][9]

Sources: [1]TechCrunch (a), [2]The New York Times, [3]The Mercury News, [4]TechCrunch (b), [5]AI Policy Institute, [6]Gary Marcus, [7]Semafor, [8]Omny.fm, and [9]Time.

Narratives

  • Narrative A, as provided by The Nation. AI developers tout the need for safety regulations for good publicity, then turn around and stifle any attempt to bring such rules to fruition. A well-funded group of developers and investors has spread misinformation and fear about the bill to shield themselves from liability for the harms of their products and rake in cash without any oversight. This bill is full of common-sense provisions that are popular among researchers and the public and necessary to mitigate the long-term harms of AI.
  • Narrative B, as provided by FT. If SB 1047 passes, regulations will snuff out the AI industry in the US and allow countries like China to dominate the AI sphere. This bill is based almost entirely on hypothetical, worst-case-scenario harms that will never materialize, relying more on fear than sound reasoning. The bill's liability clauses could penalize anyone who does any work with AI and end development entirely. What's needed is rational regulation made at the national level in consultation with business leaders.

Predictions