Many technical experts now predict something genuinely earth-shattering: within a few decades, artificially intelligent systems are quite likely to far outsmart the most capable humans in every cognitive domain, from social skills, humor and creativity to science and technological innovation.
One factor driving the recent surge in the power of artificial intelligence is the availability of big data: though many AI techniques have been around for decades, they can now take advantage of datasets large enough to enable impressive learning and results. The ability to access and process large volumes of data leads to rapid improvements in the performance of machine-learning applications. Big data also enables the iterated extraction of knowledge about how to access and produce even bigger data, leading to a “data explosion.”
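As a toy illustration of this data-driven scaling (not from the article itself), the following Python sketch uses scikit-learn’s learning_curve to show how a simple classifier’s validation accuracy typically improves as it is trained on more examples; the synthetic dataset and the choice of logistic regression are hypothetical stand-ins.

```python
# Minimal sketch (assumes scikit-learn is installed): fit the same model on
# increasingly large subsets of the data and report validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a "big" dataset.
X, y = make_classification(n_samples=20_000, n_features=50,
                           n_informative=10, random_state=0)

# Evaluate the classifier at 5%, ..., 100% of the available training data.
train_sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.05, 1.0, 5), cv=5)

for n, score in zip(train_sizes, val_scores.mean(axis=1)):
    print(f"{n:>6} training examples -> mean validation accuracy {score:.3f}")
```

On most runs the printed accuracies rise with the training-set size, which is the pattern the paragraph above points to: more accessible data generally means better-performing learned models.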
There is some controversy about whether artificial intelligence will come to surpass humans, but the main debate is about when. However, it’s questionable how action-guiding even the latter debate is: if there is even a small chance that superhuman AI will arrive within a few decades, it is imperative to prepare for this scenario to the best of our ability. Why? Because if it materializes, it is bound to turn the world upside down: in business, in science and technology, in politics and in our private lives. It would be as disruptive as the evolutionary transition from ape-like to human brains: the only reason our species has come to dominate this planet is its superior intelligence. The survival and wellbeing of chimps and other species have come to depend far more on us than on them. Analogously, superhuman artificial intelligences would have far greater power than we do to influence whether our goals are achieved or frustrated.
How can we best prepare for such a scenario? Depending on whether a superhuman AI’s goals are stably aligned with our own, the world after the transition may look like heaven or like hell. Our task is thus to figure out and implement the best strategies, both technical and societal, to maximize the probability that AI goals will be aligned with human goals. Experts disagree on how hard this task will be; some believe it is as futile for humans to try to control the goals of a superior AI as it would have been for chimps to attempt to control ours. But this view is open to two fundamental objections: first, in contrast to chimps, we humans understand that there is a control problem; and second, given the earth-shattering stakes, even a very small chance of success would be sufficient reason to try very hard to influence AI outcomes.
If you’re interested in using your career to this end, check out the work of the Foundational Research Institute and the Machine Intelligence Research Institute, as well as this AI career guide by 80,000 Hours.
Professor Nick Bostrom directs the Future of Humanity Institute at Oxford University and has pioneered strategic AI research. He spoke about his book “Superintelligence: Paths, Dangers, Strategies” at Google; the talk is a great introductory resource: