In the past few years, more than 50 companies with a mission to make artificial intelligence run faster have been founded.
The first AI chip was created in 1992 by Bell Labs researcher Yann LeCun to run deep neural networks.
However, the technology was ahead of its time and never reached the mass market.
By 2017, LeCun was heading Facebook's central AI research lab.
Below we will look at some of the companies manufacturing AI chips for a market predicted to be worth $129 billion by 2025.
Intel entered AI chip manufacturing in a serious way after acquiring deep-learning start-up Nervana Systems in 2016 for around $350 million.
In announcing the deal, Intel EVP Diane Bryant said that “Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”
Intel’s versatile and popular Xeon processors allowed the company to become the first AI chip manufacturer to cross the $1 billion sales mark in 2017.
IBM unveiled its TrueNorth AI chip in 2014. The chip, which delivered efficient deep-network inference and superior data interpretation, housed 1 million neurons, 256 million synapses, and 5.4 billion transistors.
In August 2021, the company unveiled details of its new IBM Telum Processor, which contains on-chip acceleration for AI inferencing while transactions take place.
Essentially, the processor allows clients such as banks to move from a fraud-detection to a fraud-prevention mindset.
Cerebras Systems was founded in California in 2015 by Andrew Feldman, Michael James, Sean Lie, Jean-Philippe Fricker, and Gary Lauterbach.
The company’s latest AI chip model is the Cerebras WSE-2 which contains 850,000 cores and 2.6 trillion transistors.
GlaxoSmithKline is a major client of Cerebras, with the pharmaceutical giant using its tech to train language models on biological data at scale to discover transformative medicines.
AstraZeneca is also using Cerebras AI to rapidly iterate and experiment by running queries on thousands of research papers.
Nvidia has transferred its expertise in GPUs to AI chips for various purposes. The Xavier chipset, for example, has been used in autonomous driving, while its Volta chipset tends to be more common in data centers.
A new chip named H100 “Hopper” was announced in April 2022, packing 80 billion transistors onto an 814 mm² die.
It is believed the H100 will reduce the time it takes for an AI model to learn tasks like translating live speech into different languages or generating captions.
Because the chip can help detect audio deepfakes, it can also be used to combat fraud and the spread of misinformation.
Groq is a machine learning systems start-up founded in 2016 by ex-Google employees Jonathan Ross and Douglas Wightman, together with investor Chamath Palihapitiya.
The company aims to make machine learning compute simpler and faster; to that end, Groq offers an entirely new processing architecture that caters to the demanding requirements of machine learning applications.
Groq’s AI chip reduces the complexity associated with hardware-focused development.
In other words, developers can spend more time on algorithms and less time adapting their solutions to fit the hardware.
- More than 50 companies have been founded in the last few years with a mission to make artificial intelligence run faster. Major players such as Facebook, Google, and Microsoft have also advanced the technology in their quest to incorporate neural networks into their products and services.
- Intel became a presence in the market after acquiring Nervana Systems in 2016, while IBM’s AI chips help companies shift their mindset from one of fraud detection to fraud prevention.
- AI chips from Cerebras Systems have been used by healthcare companies to perform research rapidly and discover new medicines. Nvidia, on the other hand, has transferred its expertise in GPUs to AI chips with important language and voice processing applications.