Peer Into The Mind’s Eye

Neuroscientists Peer Into The Mind's Eye. Science Friday. 2019-05-03. It sounds like a sci-fi plot: hook a real brain up to artificial intelligence and let the two talk to each other. That's the design of a new study in the journal Cell, in which an artificial intelligence network displayed images to monkeys while researchers recorded how the monkeys' neurons responded to each picture. The network could then use that information about the brain's responses to tweak the image, displaying a new picture that might resonate more with the monkey's visual processing system.

The generated images are surreal, like images from dreams.

Artists (and dreams) develop techniques (e.g. caricature) by basically figuring out how our brains work.

Dr. Carlos Ponce described the experiments. In the previous state of the art, prior to machine learning, humans selected images from the Internet to display to monkeys whose visual-cortex neurons were instrumented with electrodes to measure responses. It's hit-or-miss whether the human-chosen images suit the instrumented cells. Even so, these experiments identified specific cells that respond to faces, hands, or other objects in the world.

Machine learning changes this: generative adversarial networks create artificial images, free of human selection bias, driven by feedback from the instrumented cells. Surreal faces emerged from the feedback loop between the cell and the generative network.
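The feedback loop can be sketched as a simple evolutionary search: generate candidate images, score each by how strongly the recorded neuron fires, keep the winners, and mutate them. This is a toy sketch of the idea only; the `generator` and `neuron_response` functions below are stand-ins I invented, not the study's actual deep network or recording setup.

```python
import random

random.seed(0)  # deterministic for the demo

def generator(latent):
    """Stand-in for a deep generative network: latent code -> "image"."""
    return [x * 2.0 for x in latent]  # pretend image features

def neuron_response(image, preferred=(1.0, -0.5, 2.0, 0.3)):
    """Stand-in for a recorded neuron: fires more strongly for images
    resembling its preferred feature vector (unknown to the search)."""
    return -sum((a - b) ** 2 for a, b in zip(image, preferred))

def evolve(pop_size=20, dims=4, generations=50, sigma=0.1):
    """Evolve latent codes toward whatever the neuron responds to."""
    pop = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda z: neuron_response(generator(z)), reverse=True)
        parents = scored[: pop_size // 2]  # keep images the neuron "liked"
        pop = parents + [
            [x + random.gauss(0, sigma) for x in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=lambda z: neuron_response(generator(z)))

best = evolve()
print(neuron_response(generator(best)))  # closer to 0 = stronger response
```

Note the search never needs to know what the neuron prefers; the firing rate alone steers the images, which is why the results can look so alien.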

Dr. Margaret Livingstone is interested in art and the brain. Images evolved from the instrumented neurons' feedback looked gnome-like or leprechaun-like. We know from earlier experiments that extreme faces, caricatures, trigger the neurons more consistently. We think neurons encode "how things are distinct, how they differ from everything else."

It's a population code (which means you have to record from all the neurons to get a complete picture). We know face cells are interested in how far apart the eyes are. Instead of having a dozen cells each encoding a specific plausible distance between the eyes, you can have two cells, one encoding the narrow extreme and the other the wide extreme, and you get everything in between for free from the ratio of their responses. It's an efficient way to encode the population of faces.
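The two-cell ratio idea can be made concrete with a toy decoder. The spacing values and linear tuning curves here are hypothetical numbers for illustration, not measurements from the study.

```python
# Toy two-cell ratio code for eye spacing: one cell prefers the
# narrow extreme, one the wide extreme; their response ratio
# recovers any spacing in between.

NARROW, WIDE = 2.0, 8.0  # hypothetical spacing extremes (cm)

def responses(spacing):
    """Linear tuning: each cell fires in proportion to how close
    the spacing is to its preferred extreme."""
    t = (spacing - NARROW) / (WIDE - NARROW)
    return (1.0 - t, t)  # (narrow cell, wide cell)

def decode(narrow_r, wide_r):
    """Recover the spacing from the two responses alone."""
    t = wide_r / (narrow_r + wide_r)
    return NARROW + t * (WIDE - NARROW)

for s in (2.0, 4.7, 8.0):
    n, w = responses(s)
    print(s, decode(n, w))  # decoded value matches the input
```

Two cells, a continuum of faces: that is the efficiency the population code buys.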

(sounds like git internals)

Artists figured out it is easier for us to recognize a person from a caricature than from a line drawing. They don't express it in neurobiological terms, but they have figured out how our brains work.

Talking about color vision in primates: we have only three cone types in our eyes, yet we see millions of colors. We have one cell type that is excited by long wavelengths and inhibited by short ones, and another that is the reverse. From those two cell types we can derive everything in between.

We study vision because it is a model for the rest of the brain.

We are born with neurons that follow a few simple rules, e.g. neurons that fire together should stay connected. After birth these same rules let you learn to see whatever you encounter, because the brain is a statistical learning machine: whatever it sees, it gets good at recognizing and discriminating.
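The "fire together, stay connected" rule is classic Hebbian learning, which can be sketched in a few lines. This is a minimal illustration of the rule itself, not a model from the segment.

```python
# Minimal Hebbian rule: the weight between two units grows with the
# product of their activities, so correlated units strengthen their
# connection while uncorrelated units do not.

def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

correlated = [(1, 1), (0, 0), (1, 1), (1, 1)]    # often fire together
uncorrelated = [(1, 0), (0, 1), (1, 0), (0, 1)]  # never fire together

w_corr = 0.0
for pre, post in correlated:
    w_corr = hebbian_update(w_corr, pre, post)

w_uncorr = 0.0
for pre, post in uncorrelated:
    w_uncorr = hebbian_update(w_uncorr, pre, post)

print(w_corr, w_uncorr)  # the correlated pair ends up more strongly connected
```

A handful of local rules like this, applied to whatever statistics the world supplies, is the "statistical learning machine" picture.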


We have to build new knowledge on top of what we have already learned.