Assembly Member Alex Bores knows a thing or two about artificial intelligence. The 34-year-old state lawmaker, who represents Manhattan’s East Side, has a master’s degree in computer science and worked for four years at Palantir – the controversial software and data analytics company co-founded by conservative megadonor Peter Thiel – before quitting over concerns about the company’s work with U.S. Immigration and Customs Enforcement.
There are already some AI guardrails on the books, including Local Law 144, which mandates bias audits for employers using automated employment decision tools, and the LOADinG Act, which regulates automated decision-making systems within state agencies. But in an attempt to get ahead of a technology that is growing exponentially in complexity, Bores has introduced over a half dozen AI-related bills in the state Legislature, with plans to introduce even more.
City & State sat down with Bores to discuss some of his proposed AI regulations and the concerns and challenges that artificial intelligence brings along with it. This interview has been edited for length and clarity.
The New York Artificial Intelligence Consumer Protection Act is in committee right now. What would it do if passed?
This is a bill that's setting basic consumer protection standards for AI that is making decisions that will impact consumers. It is based on a bill that passed in Colorado last year and was the first in the nation. I don't think it's the be-all and end-all of AI consumer regulation. In fact, state Sen. Kristen Gonzalez has another consumer bill that will probably move quicker this session. But this is meant to, at a baseline, align New York's regulations with what's going on in every other state.
Is the legislation just targeting private entities or the public sector as well?
It's targeting uses of AI making consequential decisions that directly impact consumers. I have eight or so AI bills that are kind of my priority for this session, this being one of them. I tend to think of AI legislation along three binary categories. First is your outlook on AI – that can be either pessimistic or optimistic. Then I look at the time scale – are you dealing with near-term issues or long-term risks? Then you have the scope – is it use case-specific or general to AI?
Most bills in state legislatures throughout the country are pessimistic, near-term and either general or use case-specific. (This bill is) not really saying AI is bad, but it's saying we need some guardrails. It's dealing with things that are already here and affecting consumers, and it's general.
What are the other AI-related bills you’re working on?
There's a data transparency bill which requires basic disclosure of the types of data that go into any major AI system, including whether it uses copyrighted data or personally identifiable information. There's a bill to strengthen protections around people's name, image and likeness. That comes out of this past summer, when OpenAI released a voice that sounded remarkably similar to Scarlett Johansson's, and she published a big letter saying we need legislation to strengthen this.
I have two more that deal with data provenance. Instead of trying to detect what is false, you set up a system that can prove what is true. There is this open, free, non-proprietary standard called C2PA that can tell you whether a piece of content was real or AI-generated. It only works if you have a critical mass of people using that standard. You need to get to 90, 95% of people tagging their images or audio or video with C2PA, so that everyone gets used to it, and if I don't see this extra data on here, I instantly don't trust it.
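(To illustrate the trust rule Bores describes – checking content that carries provenance data and distrusting content that doesn't – here is a minimal, hypothetical Python sketch. The helper and field names are stand-ins for a real C2PA reader, such as the open-source c2patool, not an actual API.)

```python
# Minimal sketch of the trust rule described above: content with a C2PA provenance
# manifest can be checked; content without one is distrusted by default once tagging
# reaches critical mass. "load_c2pa_manifest" is a hypothetical placeholder for a
# real C2PA reader (e.g., c2patool or a C2PA SDK), not an actual library call.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceManifest:
    signer: str         # who signed the content: a camera, editing app or AI service
    ai_generated: bool  # whether the manifest declares the content as AI-generated


def load_c2pa_manifest(path: str) -> Optional[ProvenanceManifest]:
    """Hypothetical helper: return the embedded manifest, or None if the file has none."""
    return None  # placeholder; a real reader would parse the file's manifest store


def assess(path: str) -> str:
    manifest = load_c2pa_manifest(path)
    if manifest is None:
        # The critical-mass argument: once most content is tagged,
        # missing provenance data is itself a reason for distrust.
        return "untrusted: no provenance data"
    if manifest.ai_generated:
        return f"AI-generated, signed by {manifest.signer}"
    return f"captured or authored content, signed by {manifest.signer}"


if __name__ == "__main__":
    # With the placeholder reader this prints "untrusted: no provenance data".
    print(assess("photo.jpg"))
```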
With these last two, we're moving from near-term to long-term, and it's thinking about the risks from really advanced AI research, labs and algorithms. There’s one called the RAISE Act, which is setting standards that labs have to follow. If you are working on this extremely advanced AI, you need to have a safety plan. That safety plan needs to be audited by a third party. You need to disclose if you have a critical safety incident. If an employee raises concerns about it, you can't fire them. For the other, instead of the government telling these companies exactly what they can and can't do, let's just let the companies figure it out but set up a system whereby they are strictly liable for anything that goes wrong. We're just going to align the economic incentives so that if something goes wrong, it is assumed you were at fault.
In terms of outlook, do you think it's good to go about AI regulation in an optimistic way?
It's not a perfect analogy, but I think of it as a ball of uranium. It can be used to build a nuclear bomb or it can be used to make nuclear power that powers thousands, if not millions, of homes, right? When you’re dealing with things like nuclear power, you want there to be some basic safety standards so that we get the benefits and not the downsides of it.
The California bill that inspired the RAISE Act targeted large frontier models developed by companies like Google and OpenAI, and it was eventually vetoed by the governor after significant opposition from those companies. Are you worried the RAISE Act could suffer the same fate?
First of all, it's not targeting them. Most of them are already mostly in compliance with what is required in the bill. It's setting a floor of what you need to do in terms of safety that I think most of them are pretty close to clearing other than perhaps a few specific details. It's preventing there from being an economic incentive for any of them to say, ‘We want to rush ahead. Let's go quickly and let's not take safety into account.’ It's not meant to penalize or stop research by any means.
I've been engaging with all of those companies so everyone that you can think of has seen one or maybe two drafts of this bill as it's been developed, and I take their feedback into account as we go. I come from the tech industry. I know that sometimes things are written in a way that's impossible to comply with. That's not my goal. I want a bill that actually makes New Yorkers safer and that means a bill that works within the confines of what's possible.
You were a data scientist at Palantir, have a background in programming and are a young elected official. Talk to me about how that has helped you craft legislation regarding AI.
There are some young people with old ideas, and I think there are some more experienced legislators with new ideas. So I wouldn't necessarily say that that's correlated. But yes, I worked in tech for the vast majority of my career, so I know not only how many of these companies think and run their internal processes and what their economic incentives are, but I also understand the technology itself.
I've helped to develop early versions – not anything like what we're on the frontier of now – and to deploy it in places that needed it. I want to make sure this legislation can actually be implemented. So I approach it from how these systems actually work and what the incentives are. I'm trying to design the legislation holistically to have the best impacts.
Is it realistic to have federal regulation of AI, given the current presidential administration?
No. It would obviously be preferable for many of these issues if sane policy were passed at the federal level. That hasn't happened yet, and I don't have much hope for it happening in the next four years. The U.S. should have a privacy law, instead of it having to be state by state. The U.S. should have a law regulating frontier models. The U.S. should be encouraging C2PA in as many places as possible and requiring data transparency disclosures. But the states are the laboratories of democracy. If the federal government is not acting, it's our responsibility to do what we can to protect our citizens.
What do you think people need to know when it comes to AI regulation?
The thing that everyone needs to keep in mind with AI is that it is moving faster than any technology in human history, and the speed with which it's moving will only increase. What we talk about doing today will need different things in six years, in six months, maybe in six weeks, and so that's not always a natural fit for the legislative process.
I think it's a really exciting time to be legislating in this space. When I ran in 2022 and mentioned I’d be the first Democrat with a degree in computer science, a common response to that was, “So what?” ChatGPT came out in late 2022, and no one has said that to me since.