The startup Luminous Computing has developed a computer chip that uses light instead of electrons to move data for artificial intelligence, with the goal of doing so faster, more efficiently, and with less energy consumption. The advance promises to overcome the bottleneck effect that limits current AI hardware, which is built on classical electronic chips that are becoming ever more insufficient for these kinds of applications.
The What (and Why) of Luminous Computing
In 2019, the project of Marcus Gomez (a former researcher at Google, Tinder, the Mayo Clinic, Stanford, and Harvard Medical School), Michael Gao (a winner of the United States Math Olympiad and former CEO of AlphaSheets), and Mitchell Nahmias (a pioneer in the field of neuromorphic photonics) was backed by technology leaders such as Bill Gates, Dara Khosrowshahi (the CEO of Uber), Ali Partovi of NEO, and Luke Nosek and Steve Oskoui of Gigafund.
In total, the founders of Luminous Computing (which had no more than seven employees at the time) raised around 9 million dollars to advance a possible solution to the problem of the computational power of conventional processors, which does not permit sufficiently fast machine learning and therefore delays progress in robotics, autonomous vehicles, and other AI applications (from the large to the small, such as Siri-style voice assistants found in any smartphone).
Following the principles of laser communication, the scientists at Luminous Computing used light in a machine learning model based on neural networks. These networks process data across several computational layers of interconnected nodes that look for patterns in the data and return a signal or response, much as the neurons in our brains do. Data propagates forward through the network (and errors propagate backward, in backpropagation) so that the network “learns”, and the final layer generates a prediction based on the calculations performed in the preceding layers.
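The layered computation described above can be sketched in a few lines of plain Python. This is a toy network with hand-picked weights, purely for illustration; it has no relation to Luminous's actual hardware or models:

```python
# A toy two-layer neural network: each layer multiplies its input by a
# weight matrix and applies a nonlinearity; the final layer's output is
# the prediction. The weights here are fixed for illustration -- during
# training, backpropagation would adjust them to reduce error.

def dot(matrix, vector):
    """Multiply a layer's weight matrix by its input vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def relu(vector):
    """Nonlinearity: keep positive values, zero out negatives."""
    return [max(0.0, v) for v in vector]

def forward(x, layers):
    """Propagate data through each layer in turn (the forward pass)."""
    for weights in layers:
        x = relu(dot(weights, x))
    return x

layers = [
    [[1.0, -1.0], [0.0, 1.0]],  # hidden layer: 2 inputs -> 2 nodes
    [[1.0, 1.0]],               # output layer: 2 nodes -> 1 prediction
]
print(forward([3.0, 1.0], layers))  # → [3.0]
```

A photonic chip aims to carry out the matrix multiplications in `dot` (the bulk of the work in a real network) with light rather than electrons.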
Silicon microelectronics store data and instructions in a memory bank reached over a shared, multiplexed bus, which prevents data and instructions from being accessed simultaneously. This produces a bottleneck effect, explained by Nahmias and other authors in a recent study.
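A toy cycle count can make the shared-bus problem concrete. The timings below are invented for illustration and do not measure any real chip:

```python
# Toy model: cycles to execute n instructions when instruction fetches
# and data accesses must take turns on one shared bus, versus when they
# can overlap (as separate optical channels would allow). All per-step
# costs are arbitrary illustrative numbers.

def shared_bus_cycles(n, fetch=1, data=1, compute=1):
    # The shared bus serializes the fetch and the data access.
    return n * (fetch + data + compute)

def parallel_cycles(n, fetch=1, data=1, compute=1):
    # Fetch and data access overlap: each step pays only the slower one.
    return n * (max(fetch, data) + compute)

print(shared_bus_cycles(1000))  # 3000 cycles
print(parallel_cycles(1000))    # 2000 cycles
```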
In turn, this bottleneck keeps getting worse: according to a study by OpenAI, the computational power required to train the largest artificial intelligence models is estimated to double every three to four months (in contrast to the two-year cadence of the well-known Moore’s Law).
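A back-of-the-envelope calculation shows how far apart those two doubling cadences are over the same two-year window (the 3.5-month figure below is simply the midpoint of the three-to-four-month range cited above):

```python
# Growth factor after `months` months, given a doubling period.
def growth(months, doubling_period_months):
    return 2 ** (months / doubling_period_months)

# Over two years (24 months):
moore = growth(24, 24)       # Moore's Law pace: doubles every ~2 years -> 2x
ai_demand = growth(24, 3.5)  # OpenAI-estimated pace: well over 100x
print(moore, ai_demand)
```

Compute demand growing two orders of magnitude faster than transistor density is the gap photonic chips are meant to help close.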
New Uses for Something “Old”
It has long been believed that laser communication could resolve these bottleneck issues. Fiber optics, for instance, has been widely deployed for years, which may help explain the reasoning behind these new chips. Photonic waveguides (that is, the minuscule structures that guide light inside the chip) provide multiplexed channels, meaning they can combine two or more signals on the same transmission channel, with little fluctuation in latency, distortion, or interference.
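The multiplexing idea can be illustrated numerically. Below is a crude frequency-division analogue: two signals share one channel on different carriers, and each is recovered by correlating against its own carrier. The frequencies and amplitudes are invented, and real wavelength multiplexing in a waveguide is far more sophisticated:

```python
import math

# Two signals ride the same channel on different carrier frequencies,
# roughly as different wavelengths of light share a single waveguide.
F1, F2 = 5.0, 13.0  # carrier frequencies (arbitrary units)
A1, A2 = 2.0, 0.5   # amplitudes encoding each signal's value
N = 1000            # samples over one unit of time

def combined(t):
    """The multiplexed channel: both carriers summed together."""
    return (A1 * math.sin(2 * math.pi * F1 * t)
            + A2 * math.sin(2 * math.pi * F2 * t))

def demodulate(freq):
    """Recover one signal's amplitude by correlating with its carrier."""
    ts = [i / N for i in range(N)]
    return 2 / N * sum(combined(t) * math.sin(2 * math.pi * freq * t)
                       for t in ts)

print(round(demodulate(F1), 3))  # recovers ~2.0
print(round(demodulate(F2), 3))  # recovers ~0.5
```

Because the carriers are orthogonal, each signal comes back out cleanly even though both occupied the channel at once.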
The first photonic chips were developed for quantum computing in 2015; that same year, light was used to store up to 8 bits in a single location. But until now, no one had created a chip of this type to run neural networks and artificial intelligence models, with the goal of speeding up learning (the plan is to go from days to just minutes) through linear photonic operations, which offer substantial advantages in data-transfer density (bandwidth), latency, and energy consumption. The prototype performs up to three times better than other chips.
It is also the first time a chip has threatened to overtake (and even replace) Google’s Tensor Processing Unit (TPU), an integrated circuit designed for high-speed artificial intelligence and fully automated machine learning.
Towards the Future
To be sure, Luminous is not the only firm dedicated to photonic chips; it has had competition for some time now.
Lightmatter, for instance, is a Boston-based company that also has major investors, who have put approximately 33 million dollars into the development of new AI hardware focused on laser technology.
Dirk Englund, an MIT professor and consultant for Lightmatter, maintains that producing photonic chips at scale will be quite a challenge because of all the devices needed to manipulate and control light in the required way: fundamentally, lasers and electro-optic modulators.
Because of this, we will have to wait a while longer to see these chips come into regular use in artificial intelligence in a way that allows the technology to reach its true potential.