Amid controversy, California lawmakers pass an AI safety bill

California lawmakers have passed a groundbreaking bill to regulate artificial intelligence (AI), igniting a heated debate over the future of innovation and safety in the tech industry. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) passed the California Assembly on Wednesday and now heads to Governor Gavin Newsom for his signature.

The bill, championed by Democratic state senator Scott Wiener of San Francisco, aims to ensure that developers of advanced AI models, sometimes called “frontier” AI, follow strict safety rules. “Our AI safety bill, SB 1047, just passed the Assembly floor,” Wiener said after the vote. “I’m proud of the broad coalition that pushed this bill, a coalition that truly believes in both innovation and safety.”

But the path to passage wasn’t easy; the bill faced heavy resistance. Critics, including some Democratic lawmakers in Washington, argued that its potentially harsh penalties could stifle innovation in a field still in its infancy. Democratic Congresswoman Nancy Pelosi, for example, voiced her concern: “The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed.” Pelosi and other senior party figures have told Wiener that they disagree with the bill’s approach, which they consider too restrictive.

Despite the controversy, tech figures such as Elon Musk offered reluctant support. Musk has long been outspoken about the dangers of unregulated AI. Backing the bill on the social media platform X, he wrote, “This is a tough call and will make some people upset,” but stressed that AI’s risks warrant government oversight.

The proposed law would require developers to take a number of safety measures, including testing AI models before deployment, simulating hacker attacks, strengthening cybersecurity, and protecting whistleblowers who report wrongdoing. In its original form, the bill made violations punishable by jail time; to win passage, lawmakers softened the penalties to civil ones, such as fines.

Dan Hendrycks, director of the Center for AI Safety, praised the bill, calling it “a workable path forward” for reducing critical AI risks. He stressed the importance of strong safeguards to prevent the misuse or unintended consequences of powerful AI technologies.

As the bill heads to Governor Newsom’s desk, it remains unclear what he will do. He has until September 30 to sign it into law or veto it. If signed, California’s law could serve as a model for other states weighing how to balance innovation and oversight in a rapidly changing AI landscape.

The California debate is part of a broader national conversation about how to regulate AI. According to the National Conference of State Legislatures, at least 40 states have introduced AI bills this year, and six have already adopted laws or resolutions targeting the technology. The fate of California’s AI safety bill could shape both that effort and the future of AI regulation in the US.
