California is a world leader in artificial intelligence — which means we’re expected to help figure out how to regulate it. The state is considering multiple bills to that end, none attracting more attention than Senate Bill 1047. The measure, introduced by Sen. Scott Wiener (D-San Francisco), would require companies producing the largest AI models to test and modify those models to avoid facilitating serious harm. Is this a necessary step to keep AI responsible, or an overreach? Simon Last, co-founder of an AI-powered company, and Paul Lekas, a public policy head at the Software & Information Industry Assn., gave their perspectives.
This bill will help keep the tech safe without hurting innovation
By Simon Last
As co-founder of an AI-powered company, I’ve witnessed the breathtaking advancement of artificial intelligence. Every day, I design products that use AI, and it’s clear these systems will become more powerful over the next few years. We will see major progress in creativity and productivity, alongside advancements in science and medicine.
However, as AI systems grow more sophisticated, we must reckon with their risks. Without reasonable precautions, AI could cause severe harms on an unprecedented scale — cyberattacks on critical infrastructure, the development of chemical, nuclear or biological weapons, automated crime and more.
California’s SB 1047 strikes a balance between protecting public safety from such harms and supporting innovation, focusing on common sense safety requirements for the few companies developing the most powerful AI systems. It includes whistleblower protections for employees who report safety concerns at AI companies, and importantly, the bill is designed to support California’s incredible startup ecosystem.
SB 1047 would only affect companies building the next generation of AI systems that cost more than $100 million to train. Based on industry best practices, the bill mandates safety testing and the mitigation of foreseen risks before the release of these systems, as well as the ability to turn them off in the event of an emergency. In instances where AI causes mass casualties or at least $500 million in damages, the state attorney general can sue to hold companies liable.
These safety standards would apply to the AI “foundation models” on which startups build specialized products. Through this approach, we can more effectively mitigate risks across the entire industry without burdening small-scale developers. As a startup founder, I am confident the bill will not impede our ability to build and grow.
Some critics argue regulation should focus solely on harmful uses of AI rather than the underlying technology. But this view is misguided because it’s already illegal to, for example, conduct cyberattacks or use bioweapons. SB 1047 supplies what’s missing: a way to prevent harm before it occurs. Product safety testing is standard for many industries, including the manufacturers of cars, airplanes and prescription drugs. The builders of the biggest AI systems should be held to a similar standard.
Others claim the legislation would drive businesses out of the state. That’s nonsensical. The supply of talent and capital in California is second to none, and SB 1047 won’t change those factors attracting companies to operate here. Also, the bill applies to foundation model developers doing business in California regardless of where they are headquartered.
Tech leaders including Meta’s Mark Zuckerberg and OpenAI’s Sam Altman have gone to Congress to discuss AI regulation, warn of the technology’s potentially catastrophic effects and even ask for regulation. But the expectations for action from Congress are low.
With 32 of the Forbes top 50 AI companies based in California, our state carries much of the responsibility to help the industry flourish. SB 1047 provides a framework for younger companies to thrive alongside larger players while prioritizing public safety. By making smart policy choices now, state lawmakers and Gov. Gavin Newsom could solidify California’s position as the global leader in responsible AI progress.
Simon Last is co-founder of Notion, based in San Francisco.
These near-impossible standards would make California lose its edge in AI
By Paul Lekas
California is the cradle of American innovation. Over the years, many information and tech businesses, including ones my association represents, have delivered for Californians by creating new products for consumers, improving public services and powering the economy. Unfortunately, legislation making its way through the California Legislature threatens to undermine the state’s brightest innovators by targeting frontier — or highly advanced — AI models.
The bill goes well beyond the stated focus of addressing real concerns about the safety of these models while ensuring that California reaps the benefits of this technology. Rather than targeting foreseeable harms, such as using AI for predictive policing based on biased historical data, or holding accountable those who use AI for nefarious purposes, SB 1047 would ultimately prohibit developers from releasing AI models that can be adapted to address needs of California consumers and businesses.
SB 1047 would do this by, in effect, forcing those at the forefront of new AI technologies to anticipate and mitigate every possible way that their models might be misused. This is simply not possible, particularly since there are no universally accepted technical standards for measuring and mitigating frontier model risk.
Were SB 1047 to become law, California consumers would lose access to AI tools they find useful. That’s like stopping production of a prescription medication because someone took it illegally or overdosed. Consumers would also lose access to AI tools designed to protect Californians from malicious activity enabled by other AI.
To be clear, concerns with SB 1047 do not reflect a belief that AI should proliferate without meaningful oversight. There is bipartisan consensus that we need guardrails around AI to reduce the risk of misuse and address foreseeable harms to public health and safety, civil rights and other areas. States have led the way in enacting laws to disincentivize the use of AI for ill. Indiana, Minnesota, Texas, Washington and California, for example, have enacted laws to prohibit the creation of deepfakes depicting intimate images of identifiable individuals and to restrict the use of AI in election advertising.
Congress is also considering guardrails to protect elections, privacy, national security and other concerns while maintaining America’s technological advantage. Indeed, oversight would be best handled in a coordinated manner at the federal level, as is being pursued through the AI Safety Institute launched at the National Institute of Standards and Technology, without the specter of civil and criminal liability. This approach recognizes that frontier model safety requires massive resources that no state, even California, can muster.
So although it is essential for elected leaders to take steps to protect consumers, SB 1047 goes too far. It would force emerging and established companies to weigh near-impossible standards for compliance against the value of doing business elsewhere. California could lose its edge in AI innovation. And AI developers outside the U.S. not subject to the same transparency and accountability principles would see their position strengthened, inevitably putting American consumers’ privacy and security at risk.
Paul Lekas is the head of global public policy and government affairs for the Software & Information Industry Assn. in Washington.