Some thoughts on the future of AI

Artificial Intelligence is coming, no matter what dire warnings Stephen Hawking may give us. He told the BBC in a recent interview: “The development of full artificial intelligence could spell the end of the human race.”

I would not argue with Professor Hawking; he is rather intelligent, after all. But he has been known to change his mind. In 2004, Nature, the international weekly journal of science, wrote:
Hawking has always stuck resolutely to the idea that once information goes into a black hole, there is no way out. Until now. When news@nature.com asked about his change of heart, Hawking smiled and wrote: “My views have evolved.”

There are so many countries working in the fields of robotics and computing that one day, perhaps even by accident, computing will overtake the human mind. How long will it be before a computer can be programmed to improve its own software? When robots take over the full design, production and assembly of computers, how long will it be before they are able to improve things? Advances in technology will then accelerate at rates far beyond our imagining. Today, a three-year-old computer seems slow and out of date. In three years’ time, perhaps it will be a three-month-old computer that feels old.
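As a toy illustration of why this kind of self-improvement compounds, here is a small sketch. The numbers are entirely hypothetical; the point is only that if each generation of machines designs its successor a little faster than it was itself designed, the gap between generations keeps shrinking:

```python
# Toy model of recursive self-improvement (hypothetical numbers).
# Assume each new generation of machines designs its successor
# 20% faster than the previous generation was designed.

design_time = 36.0  # months to produce generation 0
speedup = 0.8       # each generation cuts design time by 20%

elapsed = 0.0
for gen in range(10):
    elapsed += design_time
    print(f"Generation {gen}: ready after {elapsed:5.1f} months "
          f"(this step took {design_time:4.1f})")
    design_time *= speedup
```

By generation 9 each design cycle takes under five months, and the total elapsed time converges toward a fixed ceiling (a geometric series) rather than growing without bound: progress that once took years arrives in months.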

In business, there is always the thought that “if we don’t do it first, one of our competitors will, and we will be left behind in the race for sales”. It is this thought that will drive AI until it is the robots who are suggesting the improvements they need . . . and actioning those improvements themselves.

Boy, do we need Asimov’s Three Laws to be working by then!

Stephen Hawking is not alone. Elon Musk, CEO of Tesla and SpaceX, has also expressed his concerns about AI. CNET wrote about Musk on a recent web page headed ‘Elon Musk: “We are summoning the demon” with artificial intelligence’:
In June, Musk raised the specter of the “Terminator” franchise, saying that he invests in companies working on artificial intelligence just to be able to keep an eye on the technology. In August, he reiterated his concerns in a tweet, writing that AI is “potentially more dangerous than nukes.” Just a few weeks ago, Musk half-joked on a different stage that a future AI system tasked with eliminating spam might decide that the best way to accomplish this task is to eliminate humans.

On the opposing side, we have Eric Schmidt of Google (which is, of course, pouring money into robotics development). According to a report by Wired Business, Schmidt is telling us not to fear the artificially intelligent future.

Newsweek Tech & Science wrote:
Google chief executive Eric Schmidt says fears over artificial intelligence and robots replacing humans in jobs are “misguided”. In fact the CEO of one of the world’s most powerful tech companies says AI is likely going to make humanity better.

Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, is the author of Superintelligence: Paths, Dangers, Strategies (published July 2014, ISBN 978-0199678112). Jason Dorrier, writing on SingularityHUB under the heading “Can AI save us from AI?”, says that the book might just be “the most debated technology book of the year. Since its release, big names in tech and science, including Stephen Hawking and Elon Musk, have warned of the dangers of artificial intelligence.”

Here are a few of the reviews published on Amazon under their product description for Superintelligence:

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)

Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)

There is no doubting the force of [Bostrom’s] arguments . . . the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake (Financial Times)

Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)

To finish on a lighter note, this extract from the film Bicentennial Man demonstrates perfectly my own view of how Artificial Intelligence should be . . . I just hope it turns out like this.
