My longtime colleague and one of the deepest thinkers on AI, Stuart Russell, has just answered this question in the form of a book.
It’s worth reading his thought-provoking discussion in its entirety, but I’ll try to summarize his primary arguments. He begins with a fictional scenario in which a group of literary critics are asked to imagine that one of them is actually judged to be right. The thought never occurs to them, since intellectual combat with one’s opponents is what drives much of what they do.
Similarly, longtime AI researchers like Stuart and me, who have diligently worked on AI for four decades, have largely been so preoccupied with the intellectual challenge of getting machines to be intelligent that we’ve never really asked the more important question: what if we succeed?
Just pause for a moment: put down your coffee cup, disconnect your headphones, unplug. And once again, ask yourself: what if AI is successful? Of course, as Stuart says, that would be the single biggest event in recent scientific and technological progress: the creation of a superintelligent species, smarter than us and vastly more powerful in every way.
As an analogy, Stuart asks you to imagine humanity receiving an email from an alien species saying in effect: WE ARE ARRIVING IN 30 YEARS! What would be the reaction? Pandemonium would be an understatement. Our worst fears may be realized. We are going to be invaded by an alien species whose true power and intent we cannot comprehend.
As a matter of fact, that is exactly the predicament we find ourselves in. As a concrete example, Ilya Sutskever, a co-founder of OpenAI and a respected deep learning scientist, recently predicted that AGI is likely to be possible within as short a time frame as 5 years!
https://link.medium.com/SmB7Rijfm1
Of course, not everyone is so confident. I’m not sure deep learning by itself will get us there. But that’s just my guess. Others believe AGI will happen, but over a longer time frame.
This is when AI’s top researchers think artificial general intelligence will be achieved
Now, the key point is that no one is saying AI will fail. It’s a question of when, not if. That should be, in some ways, a terrifying prospect. Why? Because we don’t know how truly intelligent machines will behave. Will we be able to control them? Will they work on our behalf? Will we become redundant? These questions should be as important as the control of nuclear weapons was in the 20th century. Nations went to great lengths to ensure nuclear technology was limited. The nonproliferation treaty was signed by most countries.
Nothing of the sort is happening in AI. No one is running around asking difficult questions. Stuart is perhaps the first leading AI researcher willing to stick his neck out and say: hey, wait a minute! We might be working on something that could end up causing our demise as humans. We should rethink this enterprise. Can we somehow guarantee that AI systems will act on our behalf? That they will be a force for good, not evil?
As an analogy, think of the current runaway train represented by large social networks. They are clearly easily manipulated, and in many countries have been perverted to abet ethnic cleansing, rig elections, incite the lynching of innocent women falsely accused of witchcraft, promote mass murder, and worse. None of this is by design. The creators of these platforms did not sanction these abuses. Their intentions were clearly honorable. The abuses happened nonetheless.
Similarly, AI researchers like Stuart and me hope that none of our research can be subverted to nefarious ends. But can we guarantee that? No. Stuart’s book is a long argument calling on the AI community to pivot before it’s too late, and it outlines an approach that tries to guarantee AI systems remain “human compatible” by design.
Is it already too late? Will anyone listen? To counter those who say such fears are greatly exaggerated, Stuart offers an analogy: imagine an AI researcher like him or me driving a bus filled with all of humanity toward a cliff, calming the passengers by assuring them the bus is sure to run out of gas before it reaches the edge. Would you trust us?
This is my biggest fear: that we are successful, that we create something we cannot control. Einstein wrote a famous letter to Roosevelt during the Second World War imploring him to set up a facility to accelerate the development of atomic weapons. He and other knowledgeable physicists understood the immense threat posed if Hitler were to possess such weapons. Ultimately the Manhattan Project, led by brilliant physicists, succeeded in building these weapons. Einstein came to bitterly regret his role in the effort, stating that he wished he could have burned his hands instead.
Stuart’s book is a timely reminder that the development of technology as powerful as AI is always a mixed blessing. Except we are working on creating an alien species, one which may have the power to rule over us.
Web search engines access more data than any human could read in a million lifetimes. Imagine if they could actually understand everything they read. What then? Would they be compassionate? The history of life on earth suggests otherwise. Every species, including our own, achieved supremacy by obliterating its competitors. Why would superintelligent AI systems behave any differently?
There’s much to think about here.