The recent passage of California’s SB 1047 AI safety bill has sparked significant discussion among industry experts, particularly within the technology sector.

Dr Pardis Shafafi, an anthropologist and Global Responsible Business Lead at the experience design company Designit, provided a detailed response to the bill, highlighting its implications and the concerns it raises within the AI community.

Dr Shafafi emphasised that the bill’s origin in California is significant, given the state’s role as the home of Silicon Valley, a global hub for technology and innovation. According to Shafafi, the bill could serve as a critical test case for broader legislation that might eventually be implemented across the United States and beyond, positioning it at the forefront of technological regulation and potentially setting a precedent for future laws and policies.

Divided Opinions on AI Innovation

While many see the bill as necessary, it has also created a divide among AI technology organisations. Some industry players are concerned that the regulations could stifle innovation. Others, such as Anthropic, which played a significant role in the redrafting process, are now more supportive of the bill. Dr Shafafi noted that this involvement raises questions about the influence that leading AI stakeholders have on the development of such regulations.

One of the most contentious aspects of the bill is the modification of its pre-harm enforcement provisions. The revised wording limits the Attorney General’s ability to pursue civil penalties unless actual harm has occurred or there is an imminent threat to public safety. Dr Shafafi warned that this change could undermine the bill’s effectiveness in holding companies accountable before harm occurs, potentially allowing certain risks to go unchecked.

Concerns Over Accountability and Enforcement

Dr Shafafi also raised concerns about the bill’s ability to enforce accountability in the AI industry. She argued that while the bill is intended to act as a deterrent, it does not impose a formal responsibility on companies to consider potential harms during the development process. This lack of proactive regulation could leave gaps where unforeseen, undetected, or unintended harms might arise, precisely because the frameworks and standards needed to predict such risks are not in place.

This concern points to a broader issue within the industry, where the rapid pace of AI development often outstrips the ability of regulations to keep up. Without a robust framework for anticipating and addressing potential harms, there is a risk that the bill’s enforcement mechanisms will only come into play after damage has been done, limiting its overall effectiveness.

The passage of SB 1047 in California is likely to influence the future landscape of AI regulation, both within the United States and internationally. As the tech industry continues to evolve, the need for effective and forward-thinking regulation becomes increasingly important. The concerns raised by experts like Dr Shafafi suggest that while SB 1047 is a step in the right direction, there is still much work to be done to ensure that AI technology develops safely and responsibly.