Since they became part of our lives some 80 years ago, computers have become faster and smaller, but their basic architecture hasn’t changed.
There’s still one part that stores information – that’s the memory (e.g., RAM, hard drive), and another part that processes information – that’s the CPU or processor.
Now Associate Professor Shahar Kvatinsky presents an architectural alternative. By bringing the “thinking” and the “remembering” functions together into one unit, he has built a neural network directly into the hardware of a chip and, as a proof of concept, taught it to recognize handwritten letters. The results of his study were recently published in Nature Electronics.
“We like to describe a computer as a ‘brain’, but entirely separate hardware for storing information and for using it is not how an organic brain works,” explains Prof. Kvatinsky.
Kvatinsky develops neuromorphic hardware – electronic circuits inspired by neuro-biological architectures present in the nervous system.
The idea of such computers was first developed in the 1980s at the California Institute of Technology, but it is modern technological developments that enabled considerable advances in that field.
One might think modern computers are already surpassing the human brain – has not a computer already defeated the best human chess and Go players?
Although the answer is “yes,” AlphaGo, the program that defeated multiple Go masters, relied on 1500 processors, and accrued a $3000 electricity bill per game.
The human players’ energy consumption for the same game amounted to a sandwich, more or less, and that same player is also capable of talking, driving, and performing countless other functionalities. Computers still have a long way to go.
In collaboration with Tower Semiconductor, Prof. Kvatinsky and his team designed and built a computer chip that, like an organic brain, does everything: it stores the information and processes it. The chip is also hardware-only, meaning its programming isn’t separate; it is integrated into the chip itself. What this chip does is learn; specifically, it learns handwriting recognition, a feat achieved through deep-belief algorithms.
Unlike most neuromorphic chips under investigation today, which use emerging, unconventional technologies, this chip is based on commercial technology already available in Tower Semiconductor foundries.
Presented with multiple handwritten examples of each letter, the chip learnt which one is which, and achieved 97% accuracy in recognition with extremely low energy consumption.
Artificial neural networks learn in a way similar to living brains: they are presented with examples (examples of handwritten letters, in this particular study), and “figure out” on their own the elements that make one letter different from others, but similar to the same letter in different handwriting.
When the neural network is implemented as hardware, the learning process strengthens the conductivity of some nodes. This is very similar to how, when we learn, the connections between neurons in our brains are strengthened.
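The learning mechanism described above can be illustrated with a toy sketch. This is not the chip’s actual deep-belief algorithm, and the patterns, learning rate, and network size here are invented for illustration; it simply shows how training strengthens some connection weights (the software analogue of a hardware node’s conductivity) while weakening others, until two “handwritten” patterns can be told apart.

```python
import numpy as np

# Toy illustration (not the chip's actual deep-belief algorithm):
# a single-layer classifier distinguishing two 3x3 "handwritten" patterns.
# Each weight plays the role of a node's conductance; training strengthens
# the weights that matter for recognition, analogous to how learning
# strengthens connections between neurons.

rng = np.random.default_rng(0)

# Flattened 3x3 binary pixel patterns for a crude "T" and "L".
T = np.array([1, 1, 1,
              0, 1, 0,
              0, 1, 0], dtype=float)
L = np.array([1, 0, 0,
              1, 0, 0,
              1, 1, 1], dtype=float)

X = np.stack([T, L])
y = np.array([1.0, 0.0])        # target: 1 = "T", 0 = "L"

w = rng.normal(0, 0.1, size=9)  # initial "conductances"
b = 0.0
lr = 0.5                        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):            # training loop (gradient descent)
    p = sigmoid(X @ w + b)      # predicted probability of "T"
    grad = X.T @ (p - y)        # how much to adjust each connection
    w -= lr * grad              # strengthen/weaken the weights
    b -= lr * (p - y).sum()

pred = sigmoid(X @ w + b)       # after training: close to [1, 0]
```

After training, the weights on the pixels that distinguish “T” from “L” have grown strongest, mirroring the strengthened conductance the article describes in the hardware version.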
There are countless potential uses for these chips.
For example, Prof. Kvatinsky says, such a chip could be incorporated into the camera sensor of smartphones and similar devices, eliminating the conversion of analogue data into digital – a step that all such devices perform before any form of enhancement is applied to the image.
Instead, all processing could be performed directly on the raw image before it is stored in a compressed digital form.
“Commercial companies are in a constant race to improve their product,” Prof. Kvatinsky explains, “they cannot afford to go back to the drawing board and reimagine the product from scratch.
That’s an advantage academia has – we can develop a new concept that we believe could be better, and release it when it can compete with what’s already on the market.”