Apple’s headset challenges, and what AI can learn from nuclear safety


The ‘one more thing’ announced by Apple at its Worldwide Developers Conference (WWDC) this year was the industry’s worst-kept secret. The Apple Vision Pro, the tech giant’s gamble on making mixed reality headsets a thing, has received a mixed reception. Most of the concern has centered on the eye-watering $3,499 cost.

But there’s a bigger problem: whether there will be enough apps available to make the device worth its price. Redesigning apps for an entirely new interface is a real challenge, and developers are concerned. Read the full story.

—Chris Stokel-Walker

To avoid AI doom, learn from nuclear safety

For the past few weeks, the AI discourse has been dominated by those who think we could develop an artificial-intelligence system that will one day become so powerful it will wipe out humanity.

So how do companies themselves propose we avoid AI ruin? One proposed solution comes from a new paper by researchers at DeepMind and elsewhere, which suggests that AI developers should evaluate a model’s potential to cause “extreme” risks before even starting any training.

The process could help developers decide whether it’s too risky to proceed. But it might be even more helpful for the AI sector to draw lessons from a field that knows a thing or two about very real existential threats: safety research and risk mitigation around nuclear weapons.

—Melissa Heikkilä

Melissa’s story is from The Algorithm, her weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
