Riedl calls his approach “rationalization,” which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. “If we can’t ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf,” Riedl says. But those explanations, however soothing, prompt another question, he adds: “How wrong can the rationalizations be before people lose trust?”

AI IN ACTION

Neural networks learn the art of chemical synthesis

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make and then think about how to assemble it. “You need the right ingredients and a recipe for how to combine them,” says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens. They hope AI can help them cope with the key challenge of molecule-making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them.

For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry “can be very subtle. It’s hard to write down all the rules in a binary way.” So Segler, along with computer scientist Mike Preuss at Münster and Segler’s adviser Mark Waller, turned to AI. Instead of programming in hard-and-fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. “The more data you feed it, the better it gets,” Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it’s too soon to know how well Segler’s approach will work. But Wender, who is also applying AI to synthesis, thinks it “could have a profound impact,” not just in building known molecules but in finding ways to make new ones. Segler adds that AI won’t replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can’t design and carry out a full synthesis by itself. Of course, AI developers have their eyes trained on those other tasks as well. —Robert F. Service

7 JULY 2017 • VOL 357 ISSUE 6346
Published by AAAS

BACK AT UBER, Yosinski has been kicked out of his glass box. Uber’s meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He’s out of Doha and off to find Montreal, Canada, unconscious pattern-recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.

“If we can’t ask … why they do something and get a reasonable response back, people will just put it back on the shelf.” Mark Riedl, Georgia Institute of Technology

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, “more volcano.” Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: That volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the “generator” learns rules about image-making and can create synthetic images. A second, “adversary” network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create “more volcano,” the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws, which would be clues to the classifier’s knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images. He reversed the network so that it can create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine “a bird sitting on a branch,” the network, using instructions translated by the GAN, generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and a branch without involving a bird. The AI had learned the wrong lessons about what makes a bird. “This hints at what will be an important direction in AI neuroscience,” Yosinski says.

It was a start, a bit of a blank map shaded in. The day was winding down, but Yosinski’s work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass box conference room, back into Uber’s maze of cities, computers, and humans. He didn’t get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He’d learn them all soon.
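The first of Yosinski's experiments, feeding a classifier static and sending a signal back through it to ask for "more volcano," is a technique often called activation maximization: gradient ascent on the input rather than on the network's weights. The sketch below is a minimal toy version of that idea, not Yosinski's actual code; the "classifier" is just a random linear layer with a softmax standing in for a deep image network, and the class index, sizes, and learning rate are all assumptions for illustration.

```python
# Toy activation maximization: start from random "static" and repeatedly
# nudge the input so the classifier's probability for one target class rises.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 64))  # stand-in classifier: 3 classes, 8x8 "images" flattened

def class_score(x, target):
    """Softmax probability the toy classifier assigns to class `target`."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p[target]

def maximize_activation(target, steps=200, lr=0.1):
    """Gradient ascent on log p(target | x), starting from colored static."""
    x = rng.normal(size=64)              # the initial noise image
    for _ in range(steps):
        logits = W @ x
        p = np.exp(logits - logits.max())
        p /= p.sum()
        x += lr * (W[target] - p @ W)    # gradient of log p[target] w.r.t. x
    return x

x1 = maximize_activation(target=0)       # "more volcano," in this toy setup
```

In this toy setup the optimized input ends up scoring near certainty for the target class, yet it is still just reshaped noise; that mirrors Yosinski's finding that what maximally excites the network can look like a gray, featureless mass to a person, which is why the team later added a GAN to decode the classifier's features into realistic-looking images.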

Robert F. Service, “AI in Action: Neural networks learn the art of chemical synthesis,” Science 357 (6346), 27 (7 July 2017). DOI: 10.1126/science.357.6346.27
http://science.sciencemag.org/content/357/6346/27


