Artificial Intelligence – Humanity’s Most Momentous Invention In Six TED Talks

AI Took My Job!

Ken Jennings’ name is vaguely familiar to many people – but why? Because his profound knowledge of all things trivial made him the unbeatable champion of the TV game show Jeopardy! It also put him in the gunsights of IBM, which spent thousands of hours and invested millions of dollars to build a machine named Watson that could defeat him at that very game. See how Ken deals with the consequences of coming out on the losing end of a battle with a computer that took his job – a fate which, according to this Oxford study, could befall 47% of jobs over the next 20 years…

Working Together?

Synergy, says Garry Kasparov, is essential to the way humans and machines work together. As World Chess Champion for over a decade, he too became a target for IBM, which spent thousands of hours and millions of dollars to beat him at his own game.

Kasparov doesn’t fret about being beaten by a machine. What worries him is what comes after the loss: that human beings will choose to cower in their caves rather than work to create something greater than themselves. Machines can calculate, he says, and humans can understand; machines have instructions, and humans have purpose. When you add AI’s sheer speed to our desire to learn, the combination is almost unbeatable. We’re better together. The question, of course, is whether Kasparov is right that there is some special, rather mysterious faculty of human “understanding” that machines cannot outperform – a claim many experts have come to doubt...

In Control

Based on the real risk – indeed the virtual long-term certainty – that AI will surpass humans in all domains of understanding, Sam Harris recommends a much more cautious approach. It is inevitable that we will keep advancing our technology in every field, AI included. Try to imagine persuading all of humanity to give up research, or to stop improving some aspect of their current lives, and you quickly see that it is infeasible. Sustained technological advancement, Harris argues, must eventually lead to superintelligent systems.

He offers no definite solution to the momentous risk/opportunity question he poses, except that more of our best scientific resources should be allocated to it. We likely have only one chance to get it right…

Part of the Solution (Literally)?

Whereas Sam Harris hypothesizes that a safer way to create superintelligent AIs would be to incorporate them into our own biological brains, Ray Kurzweil suggests that we create a thinking-complex in the Cloud and insert nanobots into our brains capable of linking to it as needed.

We have 300 million “thinking modules” in our neocortex (the part of our brain most strongly associated with our abstract intelligence). Imagine if you could connect to a billion more in order to solve a problem.

You may not remember the names of the two lead actors in that 1989 movie you loved, “When Harry Met Sally.” Your memory isn’t perfect, so you pull out your smart device and add a few thousand more computers to the task via Google’s search engine. Suddenly you are reminded that it was Meg Ryan and Billy Crystal, which opens a mental floodgate: you recall that Carrie Fisher and Bruno Kirby were also in it, and that triggers even more memories.

Being able to connect instantly to a Synthetic Neocortex would make it possible to solve immensely complex problems with relative ease.

Seeing the Possibilities

Fei-Fei Li talks about how AI will advance once it has significant vision capability. She and her team built a massive picture database called ImageNet and trained an AI to recognize images through constant exposure to millions of labelled and described examples. Over the years the AI was taught to analyse pictures more deeply, learning to construct accurate sentences such as “This picture is of a large airplane sitting on a runway” when it “sees” that picture.
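To make that concrete, here is a minimal sketch of ImageNet-style recognition using a publicly available model pretrained on ImageNet via the torchvision library. This is an illustration under assumptions, not Li’s team’s actual system, and the input file name is hypothetical:

```python
# Minimal sketch: classify one image with a ResNet-50 pretrained on ImageNet.
# Assumes torchvision >= 0.13; "airplane.jpg" is a hypothetical input file.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT        # weights learned from ImageNet
model = resnet50(weights=weights).eval()  # inference mode, no training

preprocess = weights.transforms()         # resize, crop, normalize as the model expects
batch = preprocess(Image.open("airplane.jpg")).unsqueeze(0)  # shape [1, 3, H, W]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)   # confidence over 1,000 ImageNet classes

top_prob, top_class = probs.max(dim=1)
print(f"{weights.meta['categories'][top_class.item()]}: {top_prob.item():.1%}")
# e.g. "airliner: 97.3%" (illustrative output)
```

Producing a full sentence like Li’s example requires a separate captioning model on top of this, but the recognition step – mapping pixels to a label with a confidence score – works much as shown.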

Something a child can do automatically is often very difficult to teach to a computer. (Cf. Moravec’s Paradox: contrary to initial assumptions in robotics research, some high-level reasoning requires very little computation, whereas low-level sensorimotor skills require enormous computational resources.) You have probably used ImageNet-style technology with TinEye, Google Image, Root About, or Karma Decay, among many other services that let you upload an image and have its source identified, its contents described in text, or read aloud to help people with limited vision understand images.

Once AIs can accurately identify what they see, they can offer us a whole new set of insights. They can think about a million times faster than we can, and they find patterns that escape our eyes and minds alike.

Morality & the Machine: Aligning AI with Our Values

The area of AI learning that may ultimately do the most to determine our fate is human morality. Despite substantial disagreement, human values are fairly universal – clustered very close together in the huge space of all possible values. Machine intelligences will have none of them by default – consider the possibility of “paperclip-maximizing” superintelligences. It is up to us to ensure that, in time for the AI take-offs likely to happen this century, we find a reliable way of teaching machines to value and pursue the things we value – to understand us and to seek what we ourselves seek. For an AI’s decisions to be safe and beneficial, they have to pass (directly or indirectly) through the human filter.
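The paperclip worry can be made concrete with a toy calculation. The sketch below is purely hypothetical (all names and numbers are invented for illustration, not drawn from Bostrom’s talk): an agent told only to maximize paperclips will cheerfully consume everything we care about, because nothing in its objective says otherwise.

```python
# Toy illustration of a misspecified objective (hypothetical numbers throughout).
# The agent's goal counts paperclips only; human value never enters the objective.

resources = {"wire": 10, "forest": 5, "hospital_steel": 3}        # convertible material
value_to_humans = {"wire": 0, "forest": 9, "hospital_steel": 10}  # what we care about

def paperclips(plan):
    """Misspecified objective: one paperclip per unit converted, nothing else counts."""
    return sum(plan.values())

# The 'agent' greedily converts every available resource into paperclips.
plan = dict(resources)

print("Paperclips made:", paperclips(plan))  # 18 -> objective fully satisfied
print("Human value destroyed:",
      sum(value_to_humans[r] * amount for r, amount in plan.items()))  # 75
# Alignment means getting the human side of the ledger into the objective itself.
```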

Philosopher Nick Bostrom proposes making this problem a research priority for humanity. His TED Talk leaves us uneasy about the AI possibilities, but it also stresses that the alignment task, while momentous and enormously challenging, is not hopeless.