Shaastra 2018’s Spotlight lecture series kicked off on Thursday with a lecture by Dr. Mriganka Sur. He is currently the Newton Professor of Neuroscience and Director of the Simons Center for the Social Brain at MIT, and heads MIT’s Department of Brain and Cognitive Sciences. He has received multiple honours and awards, including the Charles Judson Herrick Award of the American Association of Anatomists, the A.P. Sloan Fellowship, and the McKnight Development Award.
The lecture, titled ‘Neuroscience and the future of AI’, opened to a packed CLT. Both neuroscience and artificial intelligence have made significant strides in recent times. Although originally intended to mimic the brain – the world’s most complex decision-making system – the ‘neurons’ in artificial neural networks have evolved into entities rather independent of neurons in biological systems. Despite AI being inspired by the nervous system, the two fields have made little progress together.
Still, AI has succeeded at challenging tasks that only humans were thought capable of. Convolutional Neural Networks (CNNs), a type of neural network architecture, have worked brilliantly for computer vision – imparting human-like visual recognition capabilities to computers. This progress has motivated applications well beyond automatic tagging on Facebook, most notably research in self-driving cars.
If progress in AI can develop more advanced neural network architectures capable of closely mimicking the functioning of real neurons (and consequently brains), there could be a phenomenal shift in how we study, model, and understand neuroscience. We could then understand why certain genetic mutations occur, decode the mysteries of various neurological diseases, and perhaps even develop cures for them.
The lecture started with a basic overview of neural networks and of neuroscience, and ended with what we can realistically expect from AI in the future.
Dr. Sur started off by posing a simple hypothetical conversation between two friends on a call:
-‘Let’s go for a walk’
-‘Isn’t it raining?’
A conversation as simple as this could be extremely challenging for AI to interpret. The two sentences have no direct correlation. As humans, we understand the exchange because it is supplemented by our knowledge of the world – knowledge formed independently of this conversation. If it’s raining outside, there could be puddles, we might have to wear a lot of rain gear to avoid getting wet, and a walk might be an overall unpleasant activity for leisure. To achieve this level of correlation, neural networks must be trained on enormous datasets of everyday conversation, and might still fail to understand such exchanges. Here lies one of AI’s significant shortcomings: unlike the human brain, it runs on massive amounts of data, and it still cannot compete when it comes to comprehension.
However, there is still reason to believe that neural networks are closer to the nervous system than ever before. Dr. Sur carried out a groundbreaking series of experiments on the specialization of cells, starting in the 1980s. Neurologists were of the impression that nerve cells in our brain were highly specialized to perform specific functions; for example, auditory nerves to hear and visual nerves to see. Nerves from our sensory organs grow into our thalamus and then proceed to different regions of the cortex, where these signals are processed and interpreted. In this experiment, scientists ‘rewired’ the brains of ferrets by destroying the pathway that brings auditory information to the thalamus. In response, the optic nerve made a double connection and wired itself to both the visual and auditory regions of the cortex. Nerves in the auditory region are usually organized in lines, whereas in the visual region they are organized in pinwheels. In these rewired ferrets, the nerves connected to the auditory region were also in pinwheels, confirming the double connection of the optic nerves.
These ferrets were trained to look one way in response to light (a visual cue) and the other way in response to sound (an auditory cue). The ferrets were rewired on only one side of the brain, so that the other side served as a control. As a consequence, a ferret would be deaf on one side but capable of vision on both sides. The results agreed with the hypothesis. The real discovery lay in what followed: the connection between the thalamus and the visual region of the cortex was snapped, so the auditory cortex was now getting all its signals from the retina. Would the ferrets still respond to light from the rewired side? They did. The ferrets were ‘seeing’ from their auditory cortex. This path-breaking work, published in the journal Nature in 2000, demonstrated what is known as ‘brain plasticity’. Neurologists are now of the opinion that cells are only pre-programmed with certain scaffolded structures, and depend heavily on their input to specialize in function. Electrical signals can ‘alter’ the brain. The newborn cortex interprets the world through spikes of electrical activity. Visual inputs teach nerves to ‘see’ and auditory inputs teach nerves to ‘hear’, but nerves take up specialized functions only because they are constantly fed the same type of input. Thus, these nerves organize themselves into more specialized and functional regions after being ‘stabilized’ by a bombardment of input from the external environment.
This ‘plasticity’ of the brain is quite akin to how neural networks ‘learn’. A CNN trained to identify cats in pictures starts out as a generic neural network that is fed pictures of cats as inputs. It adjusts its internal parameters to optimize its predictions with every cycle of input, and with a large enough dataset a ‘cat identification system’ is built. What if pictures of dogs are now fed into the same network? Through ‘transfer learning’, we can adjust this same network to identify dogs instead, by continuing to train it on pictures of dogs. With further tuning, the cat identification system becomes a dog identifier – albeit with more effort and, typically, lower accuracy than a network trained on dogs from the start.
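The transfer-learning idea described above can be sketched in miniature. The snippet below is a toy illustration, not a real CNN: a single-layer logistic classifier is first trained from scratch on a synthetic ‘cat’ task, and its learned weights are then reused as the starting point for a shorter round of training on a related ‘dog’ task. All function names, data, and parameters here are invented for illustration.

```python
import math
import random

random.seed(0)

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(data, w=None, lr=0.1, epochs=100):
    """Logistic-regression SGD; pass existing weights `w` to fine-tune them."""
    if w is None:
        w = [0.0] * len(data[0][0])  # fresh (untrained) parameters
    for _ in range(epochs):
        for x, y in data:
            z = max(-30.0, min(30.0, dot(w, x)))  # clip to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid prediction
            for i in range(len(w)):               # gradient step on log-loss
                w[i] -= lr * (p - y) * x[i]
    return w

def accuracy(data, w):
    return sum((dot(w, x) > 0) == (y == 1) for x, y in data) / len(data)

def make_task(shift, n=100, d=5):
    # Two classes of Gaussian feature vectors centred at +shift and -shift.
    pos = [([random.gauss(shift, 1) for _ in range(d)], 1) for _ in range(n)]
    neg = [([random.gauss(-shift, 1) for _ in range(d)], 0) for _ in range(n)]
    return pos + neg

cats = make_task(1.0)   # stand-in for "cat" images reduced to feature vectors
dogs = make_task(0.8)   # a related but statistically different "dog" task

w_cat = train(cats)                            # "cat identification system"
w_dog = train(dogs, w=list(w_cat), epochs=20)  # fine-tune cat weights on dogs

print(round(accuracy(cats, w_cat), 2), round(accuracy(dogs, w_dog), 2))
```

The key move is the second `train` call: rather than starting from zeroed parameters, it inherits the cat classifier’s weights and needs far fewer epochs to adapt, which mirrors how a pre-trained CNN is fine-tuned on a new image category.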
What then prevents us from believing that AI can revolutionize neuroscience? The answer is that neural networks are still not capable of rich internal states. We cannot yet design networks with a ‘working memory’ or incorporate the idea of ‘attention’ as the brain does. The experiment carried out with ferrets was also carried out with mice, whose daily water supply depended on responding correctly to the cues. Still, there were some false alarms and some misses. While the false alarms could be attributed to error, the misses are harder to explain. Why does a mouse not respond even when it knows that something as essential as its water supply depends on its performance? Maybe it isn’t paying attention – it just doesn’t want to. There is a long way to go before we can make neural networks learn these kinds of unspoken cues, and until then it’s safe to say we needn’t worry about an a(i)pocalypse.