
AI Chips: The Chip Companies Leading The Way To The AI Revolution

In the past few years, more than 50 companies with a mission to make artificial intelligence run faster have been founded.

The first AI chip was created in 1992 by Bell Labs researcher Yann LeCun to run deep neural networks.

However, the technology was ahead of its time and never reached the mass market. 

Fast forward to 2017 and LeCun was heading Facebook’s central AI research lab.

It was around this time that deep learning suddenly exploded in relevance and popularity, with neural networks in particular transforming the likes of Google, Facebook, and Microsoft.

Below we will look at some of the companies manufacturing AI chips for a market predicted to be worth $129 billion by 2025.

How is a GPU made and who makes it?

  • Semiconductor Manufacturing:
    • GPU manufacturing begins with semiconductor foundries that produce silicon wafers.
    • Advanced manufacturing processes are used to create the GPU chips on these wafers.
    • Leading semiconductor manufacturers include TSMC (Taiwan Semiconductor Manufacturing Company), Samsung, and GlobalFoundries.
  • GPU Design and Development:
    • Tech companies like NVIDIA and AMD design GPUs, including architecture, specifications, and features.
    • These designs are translated into integrated circuits (ICs) that serve as the core GPUs.
  • Component Suppliers:
    • GPU manufacturers source various components, including memory chips (e.g., GDDR6 or HBM), PCBs (Printed Circuit Boards), and cooling solutions (e.g., fans or heat sinks).
    • Suppliers provide these components to GPU manufacturers to assemble the graphics cards.
  • Assembly and Testing:
    • GPUs are assembled on PCBs, and other components like VRAM are added.
    • Comprehensive testing, including functional and stress tests, ensures the GPUs meet quality and performance standards.
  • Distribution and Retail Partners:
    • GPUs are distributed to retailers, system integrators, and OEMs (Original Equipment Manufacturers).
    • Retailers include online and offline stores that sell GPUs to consumers and businesses.
  • OEM Integration:
    • Some GPUs are integrated into laptops, desktop computers, workstations, and servers by OEMs like Dell, HP, ASUS, and others.
    • These devices are then sold to end-users or enterprises.
  • Software Development:
    • GPU manufacturers develop drivers and software support for their GPUs, enabling compatibility with various operating systems and applications.
    • Independent software vendors (ISVs) may optimize their software for specific GPUs.
  • Data Centers and Cloud Services:
    • GPUs are used in data centers and cloud services for tasks like AI (Artificial Intelligence), deep learning, and high-performance computing (a minimal runtime sketch of this usage follows the list).
    • Companies like Amazon Web Services (AWS) and Microsoft Azure deploy GPUs in their data centers.
  • Gaming and Consumer Electronics:
    • GPUs are a critical component in gaming consoles, graphics cards for gaming PCs, and various consumer electronics like smart TVs.
    • These products are sold to consumers through retail channels.
  • Supply Chain Management:
    • Supply chain professionals manage the flow of materials, components, and finished products.
    • Inventory management, demand forecasting, and logistics are essential for efficient supply chain operations.
  • Global Distribution:
    • GPUs and related components are distributed globally to meet demand in various regions.
    • Logistics companies handle transportation and delivery.
  • End-User Adoption:
    • GPUs are utilized by end-users for various applications, including gaming, content creation, scientific research, and more.
    • End-users purchase GPUs through retail channels or as part of pre-built systems.
  • Support and Maintenance:
    • GPU manufacturers provide customer support, including warranty services and driver updates.
    • Users may also seek maintenance and repair services for faulty GPUs.
  • Recycling and Disposal:
    • End-of-life GPUs may be recycled to recover valuable materials and reduce environmental impact.
    • Proper disposal and recycling practices are essential for sustainability.
  • Market Dynamics:
    • The GPU supply chain is influenced by market demand, technological advancements, and competitive forces.
    • Supply shortages and surpluses can impact pricing and availability.
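
As flagged in the “Data Centers and Cloud Services” step above, the point where this supply chain meets code is the driver and framework stack. The snippet below is a minimal sketch, assuming an NVIDIA driver with CUDA support and the PyTorch library are installed, of how an AI workload detects and uses whatever GPU the chain ultimately delivered; on a machine without a GPU it simply falls back to the CPU.

```python
# Minimal sketch: how an application discovers and uses a GPU at runtime.
# Assumes the vendor driver/CUDA stack and PyTorch are installed.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    # Report the card the driver exposes (e.g., a data-center or gaming GPU).
    print("GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU driver/device found; falling back to CPU.")

# A toy workload: a small matrix multiply placed on the selected device.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b
print("Result shape:", tuple(c.shape))
```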

Intel

Intel entered AI chip manufacturing in a serious way after acquiring deep-learning start-up Nervana Systems in 2016 for around $350 million.

In announcing the deal, Intel EVP Diane Bryant said that “Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”

Intel’s versatile and popular Xeon processors allowed the company to become the first AI chip manufacturer to cross the $1 billion sales mark in 2017.

IBM

IBM released its TrueNorth AI chip in 2014. The AI chip, which delivered efficient deep network inference and superior data interpretation, housed 256 million synapses, 1 million neurons, and 5.4 billion transistors.

In August 2021, the company unveiled details of its new IBM Telum Processor which contains on-chip acceleration for AI inferencing while transactions take place.

After a three-year design period, it will provide business insights at scale for clients across the banking, finance, trading, insurance, and customer service industries.

Essentially, the processor allows such clients to move from a fraud detection to a fraud prevention mindset.
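
To make the detection-versus-prevention distinction concrete, here is a minimal sketch in plain Python. The risk model, thresholds, and field names are entirely hypothetical stand-ins, not IBM’s Telum software stack; the point is only that the score is computed inside the transaction path, so a suspicious payment can be declined before it settles rather than flagged afterwards.

```python
# Hypothetical illustration only: a stand-in risk model, not IBM's Telum stack.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

def risk_score(tx: Transaction) -> float:
    """Toy stand-in for an AI inference call (e.g., on-chip acceleration)."""
    score = 0.0
    if tx.amount > 10_000:
        score += 0.6
    if tx.country not in {"US", "GB", "DE"}:
        score += 0.3
    return min(score, 1.0)

def process_transaction(tx: Transaction, threshold: float = 0.8) -> str:
    # Fraud *prevention*: the score is computed while the transaction is
    # being processed, so high-risk payments are declined before settling.
    if risk_score(tx) >= threshold:
        return "declined"
    return "approved"

print(process_transaction(Transaction("acct-1", 15_000, "XX")))  # declined
print(process_transaction(Transaction("acct-2", 40.0, "US")))    # approved
```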

Cerebras Systems

Cerebras Systems was founded in California in 2015 by Andrew Feldman, Michael James, Sean Lie, Jean-Philippe Fricker, and Gary Lauterbach.

The company’s latest AI chip model is the Cerebras WSE-2 which contains 850,000 cores and 2.6 trillion transistors.

GlaxoSmithKline is a major client of Cerebras, with the healthcare conglomerate using its tech to train language models on biological data at scale to discover transformative medicines.

AstraZeneca is also using Cerebras AI to rapidly iterate and experiment by running queries on thousands of research papers.

Nvidia

NVIDIA is a GPU design company, which develops and sells enterprise chips for industries spanning gaming, data centers, professional visualization, and autonomous driving. NVIDIA serves large corporations as enterprise customers, and it uses a platform strategy where it combines its hardware with software tools to enhance its GPUs’ capabilities.

Nvidia has transferred its expertise in GPUs to AI chips for various purposes. The Xavier chipset, for example, has been used in autonomous driving, while its Volta chipset tends to be more common in data centers.

A new chip named H100 “Hopper” was announced in April 2022 with 80 billion transistors over 814 square millimeters.

It is believed the H100 will reduce the time it takes for an AI model to learn tasks like translating live speech into different languages or generating captions. 

The chip can also accelerate models that detect audio deepfakes, meaning it can be used to help prevent fraud or the spread of misinformation.

Groq

Groq is a machine learning systems start-up that was founded in 2016 by ex-Google employees Jonathan Ross and Douglas Wightman together with investor Chamath Palihapitiya. 

Groq’s mission is to make it easier for companies to adopt AI systems since only 20% of 2,000 companies in a McKinsey survey claimed they had successfully implemented AI in more than one process.

To that end, Groq offers an entirely new processing architecture that caters to the demanding requirements of machine learning applications. 

Groq’s AI chip reduces the complexity associated with hardware-focused development.

In other words, developers can spend more time on algorithms and less time adapting their solutions to fit the hardware.

Key takeaways:

  • More than 50 companies have been founded in the last few years with a mission to make artificial intelligence run faster. Major players such as Facebook, Google, and Microsoft have also advanced the technology in their quest to incorporate neural networks.
  • Intel became a presence in the market after acquiring Nervana Systems in 2016, while IBM’s AI chips help companies shift their mindset from one of fraud detection to fraud prevention.
  • AI chips from Cerebras Systems have been used by healthcare companies to perform research rapidly and discover new medicines. Nvidia, on the other hand, has transferred its expertise in GPUs to AI chips with important language and voice processing applications.

Key Highlights

  • History of AI Chips:
    • The first AI chip was created in 1992 by Yann LeCun at Bell Labs, although it didn’t gain mass market adoption at the time.
  • GPU Manufacturing Process:
    • The production of GPUs begins with semiconductor foundries manufacturing silicon wafers.
    • Leading semiconductor manufacturers in this space include TSMC, Samsung, and GlobalFoundries.
    • Tech companies like NVIDIA and AMD design GPUs and oversee their development.
  • Component Suppliers:
    • GPU manufacturers source components such as memory chips, PCBs, and cooling solutions from various suppliers.
  • Assembly and Testing:
    • GPUs are assembled and undergo comprehensive testing to ensure quality and performance.
  • Distribution and Retail Partners:
    • GPUs are distributed to retailers, system integrators, and OEMs, including online and offline stores.
  • OEM Integration:
    • Some GPUs are integrated into devices like laptops, desktop computers, and servers by OEMs before being sold to end-users or enterprises.
  • Software Development:
    • GPU manufacturers develop drivers and software support for their GPUs, enabling compatibility with different operating systems and applications.
    • Independent software vendors may optimize their software for specific GPUs.
  • AI Chip Manufacturers:
    • Several companies are manufacturing AI chips to meet the growing demand in the AI market.
    • Major players in this space include Intel, IBM, Cerebras Systems, NVIDIA, and Groq.
  • Intel in AI Chips:
    • Intel entered AI chip manufacturing after acquiring Nervana Systems in 2016.
    • Intel’s Xeon processors contributed to its success in this market.
  • IBM AI Chips:
    • IBM developed the TrueNorth AI chip in 2014, known for efficient deep network inference.
    • In 2021, IBM introduced the Telum Processor with on-chip AI acceleration for various industries.
  • Cerebras Systems:
    • Cerebras Systems, founded in 2015, manufactures AI chips like the Cerebras WSE-2 with 850,000 cores and 2.6 trillion transistors.
    • Clients like GlaxoSmithKline and AstraZeneca use Cerebras AI for research and development.
  • NVIDIA in AI Chips:
    • NVIDIA is known for GPU design and enterprise chips for various industries.
    • NVIDIA’s AI chips like Xavier and Volta are used in autonomous driving and data centers.
  • NVIDIA’s H100 Chip:
    • NVIDIA introduced the H100 “Hopper” chip with 80 billion transistors, aimed at accelerating AI model learning and detecting deepfakes.
  • Groq in AI Chips:
    • Groq, founded in 2016, offers a new processing architecture for machine learning applications.
    • Groq’s AI chip simplifies hardware-focused development, allowing more focus on algorithms.
  • AI Chip Market Growth:
    • The AI chip market is predicted to be worth $129 billion by 2025.
    • More than 50 companies have emerged recently to advance AI technology.

Read More:

Connected Business Model Analyses

AI Paradigm


Pre-Training


Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. They work by predicting the next word in a sequence, which lets them craft sentences that reflect how humans write and speak.
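
As a minimal sketch of that next-word prediction, the snippet below uses the open-source Hugging Face transformers library with the small GPT-2 model (chosen here purely for illustration; any causal language model would do) to continue a sentence one predicted token at a time.

```python
# Minimal sketch of next-word prediction with an open-source LLM.
# Assumes the `transformers` library is installed; GPT-2 is used only
# because it is small and freely available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("AI chips make neural networks", max_new_tokens=20)
print(out[0]["generated_text"])
```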

Generative Models


Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that utilizes prompts to classify images against candidate captions; it was trained on over 400 million image-caption pairs.
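
Below is a minimal sketch of that prompt-driven classification using the Hugging Face transformers implementation of CLIP. The image path and candidate captions are illustrative; the captions act as the prompts, and rewording them is exactly the kind of prompt engineering described above.

```python
# Minimal sketch: zero-shot image classification with CLIP, where the
# text labels act as prompts. Assumes `transformers` and `Pillow` are installed.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any local image
prompts = ["a photo of a GPU", "a photo of a cat", "a photo of a city"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]

for prompt, p in zip(prompts, probs.tolist()):
    print(f"{p:.2f}  {prompt}")
```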

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., which is a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, which is a capped, for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as a General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors like Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, while being able to plug these models into their products and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open-source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to serve, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem

