Interview: Max Tegmark on Superintelligent AI, Cosmic Apocalypse, and Life 3.0
Ask Max Tegmark why people should read his new book and get involved in the discussion about artificial intelligence, and you get a weighty answer. Forget about book sales; this is about cosmic destiny. The fate of the universe may well be determined by the decisions made “here on our little planet during our lifetime,” he says.
In his book, Life 3.0: Being Human in the Age of Artificial Intelligence, Tegmark first explains how today’s AI research will likely lead to the creation of a superintelligent AI, then goes further to explore the possible futures that could result from this creation. It’s not all doom and gloom. But in his worst-case scenario, humanity goes extinct and is replaced by AI that has plenty of intelligence but no consciousness. If all the wonders of the cosmos carry on without a conscious mind to appreciate them, the universe will be rendered a meaningless “waste of space,” Tegmark argues.
Tegmark, an MIT physics professor, has emerged as a leading advocate for research on AI safety. His thoughtful book builds on the work of Nick Bostrom, who famously freaked out Elon Musk with his book Superintelligence, which described in meticulous detail how a supercharged AI could lead to humanity’s destruction.
Max Tegmark on . . .
- Why He Disagrees With Yann LeCun
- “I Really Don’t Like It When People Ask What Will Happen in the Future”
- What Types of AI Safety Research We Should Fund Now
- The Question of Consciousness
- Cosmic Optimism vs. Cosmic Pessimism
- AI as the “Child of All Humanity”