
Elon Musk: Superintelligent AI Is An Existential Risk To Humanity

Elon Musk thinks the advent of digital superintelligence is a far more dangerous threat to humanity than nuclear weapons, and that the field of AI research must be subject to government regulation. The dangers of advanced artificial intelligence were popularized in the late 2010s by Stephen Hawking, Bill Gates, and Elon Musk, but Musk is probably the most famous public figure to express concern about artificial superintelligence.

Existential risk from advanced AI is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe.

One of the many concerns regarding AI is that controlling a superintelligent machine, or instilling it with human-compatible values, may prove to be a much harder problem than previously thought.

Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals.

An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, endanger or even destroy modern civilization. Such risks can come in the form of natural disasters, such as supervolcanoes or asteroid impacts, but an existential risk can also be man-made, such as weapons of mass destruction, which most experts agree are by far the most dangerous threat to humanity. Elon Musk thinks otherwise: he believes superintelligent AI is a far greater threat to humanity than nuclear weapons.

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers lack sophisticated knowledge of the field and are prone to being swayed by “alarmist” messages, or that such messages will lead to cuts in AI funding.


One cannot help but wonder whether funding for AI research is truly more important than the possibility of strong AI wiping out humanity.

Hopefully, we will have the chance to collectively decide what our best move is, rather than leaving the matter in the hands of a small group of people who would unilaterally make that decision for us.

