AI’s underlying technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will.

[Image caption: The artist Adam Ferriss created this image using Google Deep Dream, a program that adjusts an image to stimulate the pattern-recognition capabilities of a deep neural network.]

Deep learning represents a fundamentally different way to program computers, one whose inner workings even its creators cannot easily inspect. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

Read the original article, “The Dark Secret at the Heart of AI” by Will Knight, on MIT Technology Review.

Responses

  1. Tony Perkins

The third wave of the Internet is being driven by companies leveraging artificial intelligence, machine learning, and Big Data analytics to address, and even take advantage of, the “infobesity” epidemic. There is currently a boom in new services that use these advances to curate, analyze, and interpret information, giving people and businesses critical insights and capabilities never previously imagined.

    But just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does.

    “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”