IBM gets neural networks to think conceptually

@ 2019/01/14
MIT-IBM Watson AI Lab finds GANs a powerful tool


Boffins at the MIT-IBM Watson AI Lab have been playing around with GANs, or generative adversarial networks, because the models are providing clues about how neural networks learn and reason.

For those who came in late, GANs have been used for AI painting, superimposing celebrity faces on the bodies of porn stars and other cultural feats. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create new dogs; feed it lots of faces, and it can create new mug-shots.
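In code, that two-network contest boils down to an alternating training loop: the discriminator learns to spot fakes, the generator learns to fool it. The sketch below is a minimal, illustrative PyTorch version, not the lab's code; the layer sizes, data and hyperparameters are placeholders.

```python
# Minimal GAN training loop -- an illustrative sketch, not the lab's code.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # placeholder sizes

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: scores how "real" an image vector looks.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    d_loss = loss_fn(D(real_batch), real_labels) + loss_fn(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    z = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in data; with real dog photos here, G gradually learns to paint new dogs.
d_loss, g_loss = training_step(torch.randn(32, img_dim))
```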

Researchers from the MIT-IBM Watson AI Lab realised GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.

David Bau, an MIT Ph.D. student who worked on the project, told MIT Technology Review that the work gave the team a chance to learn what a network knows from its attempts to re-create the visual world.

So the researchers began probing a GAN’s learning mechanics by feeding it various photos of scenery—trees, grass, buildings, and sky. They wanted to see whether it would learn to organise the pixels into sensible groups without being explicitly told how.

Stunningly, over time, it did. By turning “on” and “off” various “neurons” and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed colour from photo to photo in the training set.
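That "switch units off and see what the GAN paints" probe can be sketched in a few lines. The code below is a hypothetical, stripped-down illustration only: the SceneGenerator class, the layer name and the unit IDs are made-up stand-ins for the much larger pre-trained scene GAN the researchers actually dissected.

```python
# Sketch of the probe: ablate a cluster of generator units, repaint the scene,
# and see what disappears. Illustrative stand-in, not the researchers' code.
import torch
import torch.nn as nn

class SceneGenerator(nn.Module):
    """Tiny placeholder for a pre-trained scene GAN generator."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.project = nn.Linear(latent_dim, 512 * 4 * 4)
        self.layer4 = nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1)
        self.to_rgb = nn.ConvTranspose2d(256, 3, 4, stride=2, padding=1)

    def forward(self, z):
        x = self.project(z).view(-1, 512, 4, 4)
        x = torch.relu(self.layer4(x))
        return torch.tanh(self.to_rgb(x))

def ablate_units(layer, unit_ids):
    """Zero the chosen feature-map channels whenever `layer` runs."""
    def hook(module, inputs, output):
        output[:, unit_ids] = 0.0
        return output
    return layer.register_forward_hook(hook)

G = SceneGenerator()
z = torch.randn(1, 128)            # one random scene "seed"
original = G(z)                    # scene painted with every unit active

tree_units = [12, 47, 101]         # placeholder IDs for a suspected "tree" cluster
handle = ablate_units(G.layer4, tree_units)
without_trees = G(z)               # same scene, those units switched off
handle.remove()

# Diffing `original` against `without_trees` shows which pixels those units
# painted: if the trees vanish, that cluster has learned to represent trees.
```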
