AI and the Apocalypse

Vernor Vinge, a mathematician and science fiction writer, is credited with coining the term “the singularity” to describe the inflection point when machines outsmart humans. He views the singularity as an inevitability, even if international rules emerge to control the development of artificial intelligence (AI). “The competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first,” he wrote in a 1993 essay. As for what happens when we hit the singularity? “The physical extinction of the human race is one possibility,” he writes.

Known for businesses on the cutting edge of tech, such as Tesla and SpaceX, Elon Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He has also tweeted that AI could be more dangerous than nuclear weapons, and he has called for national or international regulation of AI development.

Swedish philosopher Nick Bostrom is the director of the Future of Humanity Institute at the University of Oxford, where he has spent much of his time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly, using any number of strategies (deploying unseen pathogens, recruiting humans to their side, or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.”
