This journal entry provides an update on state efforts to regulate artificial intelligence. Sounds dense, but I will try to keep it straightforward. As always, there are plenty of fine details, but to keep things entertaining, I will stay high level. I welcome any more specific questions.
Overall, this entry attempts to distill these new laws in a way that gives a sense of how state governments view recent advances in the technology.
Since my last entry on artificial intelligence, several states have passed laws regulating aspects of the technology. Each of these laws has its own tilt, with its own distinct objectives.
Notably, the first state in the country to pass such a law was the State of Utah, which in my opinion took a common-sense consumer protection approach.
Others, like SB 1047 (pending California Governor Gavin Newsom’s signature), have taken a more ominous tone.
Starting with the technological advances, for those who haven’t been following along: generative artificial intelligence is “accelerating” (to use a buzzword common in the industry).
“Hyper-realistic” videos and images continue to blur the line between the authentic and the computer-generated.
Strides are being made in allowing developers to simply speak their desired code into existence.
Google’s DeepMind can now generate a real-time interactive video game (trained on the classic Doom), with the promise of greatly enhancing immersive VR experiences.
And some people are starting to perceive AI-generated images as more realistic than authentic ones.
Seems like a lot is happening fast, so how are our state governments going about addressing these advances?
In Utah, our legislators have started off with a fairly common-sense approach. Persons in certain “regulated occupations” (e.g., doctors, lawyers) who deploy generative AI “must prominently” disclose that a consumer is interacting with AI, or AI-created content, at the beginning of any communication.
(As someone in a “regulated occupation,” I’m letting you know now that I create my own content and images.)
Further, those in “unregulated occupations” who are subject to Utah’s consumer protection laws “must clearly and conspicuously” disclose the use of generative AI when asked by a consumer.
Seems like a good start; simply letting people know that they are not, in fact, interacting with an actual human might be the best way to train them to recognize increasingly sophisticated generated content. That should matter even more when consumers are seeking out specialized knowledge, like medical or legal advice.
In contrast, California’s SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” takes a more ominous tone. Perhaps this is a hint at what is to come with further “acceleration,” from a state with a high concentration of private computing power.
Overall, the bill is less concerned with consumer protection (in the traditional sense) and more focused on “artificial intelligence safety incidents,” which I guess is a different type of “consumer protection.”
But what exactly is an “artificial intelligence safety incident”?
According to the legislation, these are events that “demonstrably increase the risk of a critical harm,” whether that is due to a model “autonomously engaging in behavior other than at the request of a user,” the theft or “escape” of a model’s weights, “critical failures of technical or administrative controls,” or unauthorized use of a model.
(a non-exhaustive list)
So I guess the next question is “what is a critical harm?”
Unsurprisingly, the answer to that question isn’t very pleasant, and includes:
“The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties;
Mass casualties…resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure;
Mass casualties…resulting from an artificial intelligence model engaging in conduct that…acts with limited human oversight, intervention, or supervision… [and] results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence….”
Or “other grave harms to public safety and security that are of comparable severity to the harms” described above.
Seems like a pretty dark take on the technology, but given that some industry experts agree with the legislation, it might be a sober one.
So I suppose my third and final question for this entry is “how does California plan to prevent these ‘artificial intelligence safety incidents’?”
Apparently, through several mechanisms, including (but not limited to):
Requiring entities with large compute to “implement reasonable administrative, technical, and physical cybersecurity protections” to prevent misuse “in light of the risks associated with the model;”
“assess whether the covered model is reasonably capable of causing or materially enabling a critical harm;”
implement the capability to promptly enact “a full shutdown” of the model;
create and implement a written “safety and security protocol” and conduct annual reviews of that protocol;
submit to third party auditing of that safety and security protocol;
submit unredacted reports to the California Attorney General verifying compliance with the law (which appear to be exempt from California Public Records Act requests, that state’s analogue to the federal Freedom of Information Act);
And finally (again, not an exhaustive list), notify the California Attorney General when an “artificial intelligence safety incident” has occurred.
So what are we to take from all this legislation?
In terms of Utah’s law, it seems there is a concern about consumers being unable to distinguish authentic from artificially generated content. Perhaps some “conspicuous” labeling will help consumers learn to better spot the difference, especially as those boundaries blur.
For California, there seems to be concern about something far more nefarious, and a desire for the government to play a role in ensuring we avoid a disastrous outcome.
Either way, artificial intelligence will probably continue to accelerate, and in the meantime, Utah and California businesses will need to be aware of their increased obligations related to its use.