If you’re worried that AI could go horribly wrong, you have company. Speaking at the World Governments Summit in Dubai via video call, Sam Altman, CEO of ChatGPT-developer OpenAI, said the dangers of the technology keep him awake at night. AP has more, including Altman’s reiterated call for an International Atomic Energy Agency-like body that would oversee AI. https://flip.it/84hyCU #Tech #AI #ArtificialIntelligence #SamAltman
OpenAI's Sam Altman voices another warning about the technology his company is responsible for. He says what worries him isn't killer robots roaming the streets but "very subtle societal misalignments."
In the Disconnect Roundup, I explain the threat isn’t superintelligent AI but the CEOs who believe in it and are feeding the world to computers. Plus, recommended reads, labor updates, and other tech news!
Our tech overlords tell us the AIs could enslave us, but they’re the ones serving up the world to computers on a silver platter, dreaming of becoming computers themselves, and then building them throughout the galaxy. They’re the real risk to humanity.
Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.
He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.
We’re over a year into this cycle of AI hype, but how does the real impact of the technology compare to what tech CEOs have spent all their time warning us about?
On #TechWontSaveUs, I spoke to @timnitGebru about how they distracted us from the real problems with AI to shape regulation and serve themselves.
Axios catches up with OpenAI's CEO at Davos today. Sam Altman warned that AI capabilities are evolving so rapidly that "uncomfortable" decisions will be required, such as allowing the tech to be customized and built around different individuals' values, which may not always align with his own.