Manhattan Borough President Mark Levine has taken a keen interest in artificial intelligence since his college days as a computer programmer. After the release of ChatGPT, he assembled lawmakers, academics and advocates in a closed-door meeting to brainstorm what regulations should govern AI and the pros and cons of using the technology across different sectors of industry. In June, Levine’s office released a report detailing how the city could use AI while reducing threats to privacy and copyrighted works.
Why did you become interested in AI in the first place?
I majored in physics in college and way back in the day I used to program in an ancient programming language called Fortran, so I keep my eye on that world. And like a lot of people, I was astounded when ChatGPT came out last December. And in the last 10 months I have been on a major quest to understand that technology and what it means for the future of the city. I’ve probably given myself a master’s degree in machine learning at this point. I see both enormous promise and great peril, and I don’t feel like New York City is grappling with what’s at stake. There are decisions that need to be made today about the direction this goes for the city. So I’ve been using my voice and my platform, because I don’t think we can ignore this for five years and not face dire implications.
What concerns do you have about AI?
It’s the pace of disruption that is coming that most concerns me. I think we’re facing a level of technological change comparable to the past 25 years compressed into the next five years, with enormous destabilizing impacts on the job market, on education, on the way the government needs to do business, and on election integrity. The city has levers that it can pull to help mitigate some of that, but it’s going to take a real serious concerted effort.
You released a report about AI. What recommendations have you made regarding generative AI?
We looked first at city government itself and started with the basic idea that every city needs to have a strategy on AI. In the absence of guidelines, people are using ChatGPT in every office in New York City, in the public sector and the private sector. It’s already found its way into people’s workflow, and we say very clearly that we need guardrails so the biased information these tools can spread does not affect the work of city agencies.
There’s also an upside for how this can improve city services. It’s famously hard to enroll in food stamps in New York. There are probably 200,000 people who would qualify but are not enrolled. Perhaps AI tools could make it incredibly easy. There’s a huge backlog in closing affordable housing finance deals. The contracts and legal work are so time-consuming. That’s a perfect area where ChatGPT could make it quicker and easier, and that could yield a lot more apartments.
The city government probably spends $1 billion on technology, and we could use that to set new standards for the industry. We would only buy AI products that adhere to standards of safety and ethics. For example, we’d buy no products that create AI images without respecting creators, giving them the right to remove their creations from training models or be compensated for them. There are now products that adhere to those standards and some that don’t, and we’d only buy the AI products that do.
There needs to be major changes in what we teach and how we teach in public schools. We got a stat at a City Council hearing that 69% of kids graduate from high school without having a single computer science experience. That’s just unacceptable considering the world students are graduating into today.
This needs to be considered part of the core curriculum. In the Cold War, the country pivoted and every kid got chemistry. This is a similar level of urgency. Almost every job could be impacted by AI, so we need to teach young people to understand those tools. They’re going to be competing for jobs with people who might know those tools better than they do.
And the way we teach: AI makes it too easy to cheat on essays and math homework. We need to turn it on its head so students are learning at home with customized tools and doing practice work together in class. There is huge potential to give instruction tailored to a kid, in the language they’re most comfortable speaking. This could help teachers prepare individualized quizzes targeted to the ability of every student, and help teachers prepare reading lists. It’s incredible.
What reactions have you received from your recommendations? Has any progress been made in those areas?
Most people in the city don’t care about it. I won’t sugarcoat that. It’s seen as too abstract and too far in the future and we have urgent crises we need to address. And I get it.
I just think all of us have to push ourselves to change at a pace we’ve never done before. This is not like the 20 years it took for PCs to really take hold in offices. This is going to be a much faster pace of change, and all of us have to up our game.
What are some other ways that the city government is and could be utilizing AI in the coming years?
It can be really challenging for New Yorkers to access government services, including from the federal government, and the ability to access services in plain English or in any other language on Earth could transform the way we interact with government.
I think a lot about community boards. We could have instantaneous transcription in 100 languages for every community board hearing and subcommittee meeting, which would be really democratizing, giving regular New Yorkers easy access to what’s being discussed at their community board. So there’s almost limitless potential. There’s also downside and risk.
What city legislation should be passed or regulations implemented around the use of AI in government or in general?
There should be a rule that any AI chatbot needs to identify itself as not being human. There should be rules around elections: the campaign finance board should require that anyone participating in the program label AI-generated content, so that if someone uses AI to make a doctored video of one of their opponents or of themselves, it’s labeled clearly and people aren’t misled.
I’ve talked about ways to use our buying power to ensure that we’re only spending our money on ethical AI. One thing I had in the report is that we would create a center for AI safety, like the New York Genome Center, where we tap the incredible research community on AI safety issues. There’s so much the city should be doing.