Artificial intelligence can be found in many places. How safe is the technology? : NPR


NPR’s A Martínez speaks with Jack Clark, co-founder of artificial intelligence company Anthropic, about AI safety concerns.



MICHEL MARTIN, HOST:

So let’s talk about artificial intelligence now, or AI. It can be found in all sorts of places today, including phones, weapons and household smart devices. But there are already concerns about whether the capabilities of the technology have outstripped any guardrails to prevent misuse. We reached out to one company, Anthropic, that says it’s working to make AI safer. Our colleague, A Martínez, spoke earlier with the company’s co-founder, Jack Clark, and A asked him to describe his own concerns with AI.

JACK CLARK: It’s an amazing time in AI right now, where systems are getting better far more quickly than our ability to evaluate them. So is our AI system or any AI system a nice AI system or a bad AI system? It’s actually hard to tell. There’s room for greater government involvement, greater civil society involvement and greater academic involvement in the development of AI, because people are nervous that AI is being developed by a very small set of private sector actors.

A MARTÍNEZ, HOST:

What do you mean, though, government involvement? Isn’t one of the biggest dangers that our lawmakers barely have any understanding of what AI is?

CLARK: People are waking up, including lawmakers, to how AI has a role in national security. It has a role in geopolitics. We’ve seen AI in various forms being used in the war in Ukraine. So I think what you’re seeing among policymakers is a pretty rapid desire to get up to speed on where it is, and they’re much more engaged now than they’ve ever been.

MARTÍNEZ: So considering how fast things do move, especially with AI, right now in May of 2023, what would be your No. 1 concern?

CLARK: My No. 1 concern about AI right now is that AI systems can do more things than their creators know they can do. It’s kind of like if we were in the business of making cars and, after you released the car, someone discovered it could fly or go underwater, and you had no idea as the car manufacturer. That’s where AI is today. Systems get released. Then some 17-year-old with a laptop discovers that the system can do a completely wild thing that its creators did not anticipate.

MARTÍNEZ: So if that’s the case, what would be an easy way to try and tamp that down, or at least just figure out a way where it doesn’t move as quickly?

CLARK: So there is one exciting thing happening. In August this year in Las Vegas, there’s a hacking conference called DEF CON. And at that conference, Google, Microsoft, OpenAI, Anthropic – my company – and many others are going to have their systems red-teamed by thousands and thousands of hackers. We think a future thing that policymakers might want you to do is, before you release the system, have it attacked by people trying to misuse it and break it, and then learn from that. And you have this kind of build-it, break-it, fix-it dynamic.

MARTÍNEZ: Now, you used to work at OpenAI, which created ChatGPT, and then you left to found Anthropic and create what your company calls a safer ChatGPT. So what exactly does that look like?

CLARK: So one thing we’ve done is we’ve tried to put safety more at the core of our technology. So something that we’ve released this week is the so-called constitution behind our language model, Claude, which sets out how the AI system should behave. And we’ve done that because, otherwise, AI systems learn values by interacting with people, and it’s really hard to figure out what values they’ve learned.

MARTÍNEZ: Jack, the debate around artificial intelligence feels very much like the now. But I think sometimes we are so in the now that we don’t see the next. Are we in the right place right now in these discussions that we’re having?

CLARK: Something which most technologists say privately when you talk about AI policy is that in two or three years the systems are going to be far more powerful, and the problems are going to be far weirder, and we can’t really anticipate them today. So I think when people are regulating this technology, they’re treating it like a normal technology, which evolves relatively slowly and relatively predictably. This technology evolves very quickly and relatively unpredictably. So if anything, my main takeaway is the future is going to be a lot weirder than the present, and we should have our minds kind of pointed towards that, as well as dealing with these challenges we have today.

MARTÍNEZ: That’s Anthropic co-founder Jack Clark. Jack, thanks a lot.

CLARK: Thanks very much.

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
