California has become the first state in the United States to pass a law regulating cutting-edge AI technologies, and experts are divided over its impact. They agree that the law, the Transparency in Frontier Artificial Intelligence Act, is a modest step forward, but it still falls far short of actual regulation.
The first such law in the US, it requires developers of the largest frontier AI models – highly advanced systems that surpass existing benchmarks and can significantly impact society – to publicly report how they incorporated national and international frameworks and best practices into their development processes.
It mandates the reporting of incidents such as large-scale cyber-attacks, deaths of 50 or more people, large monetary losses, and other safety-related events caused by AI models. It also puts in place whistleblower protections.
A research scientist at Northeastern University’s Institute for Experiential AI said: “It is focused on disclosures. But given that knowledge of frontier AI is limited in government and the public, there is no enforceability even if the frameworks disclosed are problematic.”
California is home to the world’s largest AI companies, so legislation there could affect global AI governance and users across the world.
A senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS) says that, in the absence of a national law regulating large AI models, California’s law amounts to “light-touch regulation”. She analyzed the differences between last year’s bill and the one signed into law in a forthcoming paper. She found that the law, which covers only the largest AI models, would affect just the top few tech companies. She also found that the law’s reporting requirements are similar to the voluntary agreements tech companies signed at the Seoul AI summit last year, softening its impact.