Learning is no longer the prerogative of the biological brain. While working with a variety of neural networks—artificial intelligence modelled on the human brain—British start-up Graphcore set out to show the world in pictures how machines acquire knowledge.
(Images © Graphcore)
The resulting snapshots are an abstract visualisation of the processes that happen when systems decipher the structure and patterns of an image to classify its content. While our brains do this all the time, it’s much more of a challenge for those possessing a digital mind.
The clusters and shapes we can see here are the result of communication between the individual processes running on interlinked layers.
Sally Doherty, Graphcore
Much like the neurons and synapses in the human brain, this artificial digital replica contains nerve cells and conductive channels. These are organised in layers, and the more layers and neurons are available, the greater the AI's capacity for abstraction. The density and compactness of the clusters in the images directly reflect how many neurons are working on a problem. In this respect, the visualisations, a by-product of the Poplar software framework, resemble magnetoencephalography (MEG) scans of a biological brain.
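The layered organisation described above can be sketched in a few lines of code. The following NumPy snippet is a minimal illustration only; the layer sizes, random weights, and ReLU activation are arbitrary assumptions, not details of Graphcore's systems:

```python
import numpy as np

def relu(x):
    # A simple non-linearity applied by each artificial "neuron"
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, bias) pair; the output of one layer
    # feeds the next, mirroring the stacked layers described above.
    for w, b in layers:
        x = relu(w @ x + b)
    return x

rng = np.random.default_rng(0)
# Three layers mapping 4 inputs -> 8 -> 8 -> 2 outputs (sizes chosen arbitrarily)
sizes = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

out = forward(rng.normal(size=4), layers)
print(out.shape)  # (2,)
```

Adding more layers to the list deepens the network; this stacking of simple units is what gives such systems their capacity for abstraction.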
These images are visual interpretations of artificial neural networks at work. They are reminiscent of an MEG brain scan or a living cell nucleus. The cell-like clusters represent interlinked processes, and the fine lines dividing them show the links between the layers that hold the neurons.
These images were created with Graphcore’s Poplar graph programming framework, which is designed to make developing artificial intelligence faster and easier. One of the goals in creating neural networks is to enable both sequential and parallel processing of information.
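Poplar itself is a graph programming framework with its own API, which is not shown here. The idea of combining sequential and parallel processing can, however, be illustrated with a generic NumPy sketch (all names and sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=(3, 5))         # one layer's weights
batch = rng.normal(size=(10, 5))    # ten independent inputs

# Sequential: process one input at a time, in order
seq = np.stack([w @ x for x in batch])

# Parallel: a single matrix multiply handles the whole batch at once
par = batch @ w.T

print(np.allclose(seq, par))  # True
```

Both routes compute the same result; frameworks like Poplar aim to let developers express such computations once and have them scheduled efficiently across the hardware.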
Results and data collected from previous tasks help a neural network solve subsequent problems. Sally Doherty calls this the "digital equivalent of human experience": the network learns, for instance, that cats have pointy ears or that managers often wear ties. This means neural networks can be trained to recognise specific patterns using techniques such as deep learning.
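Training a network to recognise a pattern can be sketched at its smallest scale: a single logistic neuron learning to separate two clusters of points by gradient descent. The data, learning rate, and iteration count below are illustrative assumptions, not Graphcore's setup:

```python
import numpy as np

# Toy "pattern recognition": two well-separated point clusters
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-2, 0.5, (50, 2)),
               rng.normal(+2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(x @ w + b)))  # predicted probabilities
    grad_w = x.T @ (p - y) / len(y)     # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                   # gradient-descent update
    b -= 0.5 * grad_b

acc = np.mean((p > 0.5) == y)
print(acc)
```

With each pass, the weights absorb a little more of the pattern in the data, which is the mechanism behind the "experience" described above, scaled down to one neuron.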
The capability of these systems is developing at a rapid pace. According to technology visionaries like Google researcher Ray Kurzweil, within 20 years we will be permanently connected to intelligent computer systems, to help us navigate unfamiliar streets, for instance, or understand foreign languages.
This picture shows an artificial neural network built on Microsoft's ResNet architecture being trained in image recognition. Since 2010, the ImageNet Challenge has let AIs compete in this discipline.
Different AIs are built on different architectures, each of which enables a different set of specialisations. The AlexNet system, for instance, is particularly well suited to tasks such as facial recognition.
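The defining feature of the ResNet architecture mentioned above is the residual "skip connection", where each block adds its input back onto its output. A minimal NumPy sketch of the idea (dimensions and weight scales are arbitrary assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # The block learns a residual f(x) and adds the input back via a
    # skip connection, which helps gradients flow through very deep
    # stacks of layers.
    f = relu(w1 @ x)
    f = w2 @ f
    return relu(f + x)  # skip connection: output = f(x) + x

rng = np.random.default_rng(2)
d = 8
x = rng.normal(size=d)
w1 = rng.normal(size=(d, d)) * 0.1
w2 = rng.normal(size=(d, d)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (8,)
```

This design choice is what allowed ResNet-style networks to be trained at depths of over a hundred layers, far deeper than earlier architectures such as AlexNet.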
Although artificial intelligence currently sees only limited real-life application, in a few short years such systems may be able to diagnose psychological disorders and plan entire cities. Visionaries believe they may even become part of our everyday thought processes.
Bechtle update editorial team
Published on Nov 20, 2017.