What Is Google Brain? History of Google Brain

  • Google Brain is an artificial intelligence research team that works at Google AI – a Google division dedicated exclusively to AI research and development. The division was started in 2011 by Andrew Ng, who named it the “Deep Learning Project at Google”.
  • Google Brain was initially conceived as a way to build deep learning processes on top of Google’s existing infrastructure. Much of the early work on the project was done in “20% time”, in which employees pursue side projects.
  • Google announced in November 2015 that it had created a new machine learning system known as TensorFlow. This system is present in many features across Google’s product suite and was made open source to advance the industry and, in the process, position the company as a leader.

Google Brain is an artificial intelligence research team that works at Google AI – a Google division dedicated exclusively to AI research and development.

The division was started in 2011 by Andrew Ng, who named it the “Deep Learning Project at Google”. Ng was soon joined by fellow Google engineer Jeff Dean and researcher Greg Corrado, with much of the initial work conducted part-time in the employees’ “20 percent time”.

Below, we’ll chart a brief history of Google Brain and highlight some of the division’s proudest achievements to date.

Early years

Google Brain was initially conceived as a way to build deep learning processes on top of Google’s existing infrastructure. This vision was clarified in a 2012 post in which Ng and Dean described a system (later known as DistBelief) that could distinguish between pictures of motorcycles and cars.

Instead of feeding the system labeled images, the pair showed it 10 million YouTube videos over a week, based on the belief that it would learn to identify unlabeled images. After the week-long period, the neural network, which ran across 16,000 computer processors, had acquired knowledge no one intended it to: it had learned to identify cats.

As Ng explained in The New York Times: “The remarkable thing was that [the system] had discovered the concept of a cat itself. No one had ever told it what a cat is. That was a milestone in machine learning.”

While neither the blog post nor the accompanying paper mentions Google Brain explicitly, the work undertaken was part of the Google Brain project.

DNNresearch Inc. acquisition

In March 2013, Google acquired Toronto-based neural networks start-up DNNresearch Inc. 

As part of the deal, DNNresearch founder Geoffrey Hinton, a world-renowned neural net researcher, joined Google along with two of his graduate students, Alex Krizhevsky and Ilya Sutskever (who would later co-found OpenAI).

The announcement also coincided with a $600,000 gift from Google to Hinton’s research team to support further research into neural nets. Hinton subsequently started a branch of Google Brain in Toronto.

Google Brain graduates

Afforded the space and time to prove the Google Brain technology, the team grew confident that it could be used in other applications.

Google Brain thus graduated from Google X (now known as X, the moonshot factory) in late 2012 and became part of Google AI. 

TensorFlow

Google announced in November 2015 that it had created a new machine learning system known as TensorFlow. The software was made open source to both benefit Google and advance the wider industry, and some saw it as the point at which Google started to pivot from a search company to an AI company.

TensorFlow was the successor to DistBelief and powers features such as Android’s speech recognition system, the search function in Google Photos, and the “smart reply” function in the Inbox app. It can also be found in YouTube video recommendations and many other contexts.
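For readers unfamiliar with TensorFlow, here is a minimal sketch (not from Google’s announcement, and assuming TensorFlow 2.x is installed) of the tensor-based dataflow style it generalized from DistBelief: tensors flow through operations that TensorFlow can distribute across CPUs, GPUs, or TPUs.

```python
# Minimal TensorFlow sketch, assuming TensorFlow 2.x (pip install tensorflow).
import tensorflow as tf

# Two constant tensors...
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
w = tf.constant([[1.0],
                 [1.0]])

# ...combined by a single dataflow op that TensorFlow can place on any device.
y = tf.matmul(x, w)

print(y.numpy().tolist())  # [[3.0], [7.0]]
```

Real workloads compose thousands of such ops into trainable models, but the unit of computation is the same.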

Google Brain is verified

On April 6, 2016, Hacker News shared the Google Brain team page on its website. This date potentially marked the first time the team name had been used in the public arena.

Two months later, Google Brain released Magenta, a machine learning project that could generate art and music. In September, Google introduced the Google Neural Machine Translation (GNMT) system to increase the fluency and accuracy of Google Translate.

The announcement post, which was written by Google research scientists, once more publicly acknowledged the Google Brain team’s existence by thanking its members for their contributions.

Key Highlights

  • Introduction to Google Brain:
    • Google Brain is an artificial intelligence research team within Google AI, which is dedicated to AI research and development.
    • It was founded in 2011 by Andrew Ng, initially named the “Deep Learning Project at Google.”
    • The early work on Google Brain was conducted part-time in employees’ “20 percent time.”
  • Early Achievements:
    • Google Brain aimed to integrate deep learning processes into Google’s existing infrastructure.
    • A significant achievement was training a neural network to recognize cats in images by exposing it to unlabeled YouTube videos.
    • This achievement demonstrated the system’s ability to learn features without explicit labeling.
  • DNNresearch Inc. Acquisition:
    • In 2013, Google acquired DNNresearch Inc., a neural networks startup based in Toronto.
    • Geoffrey Hinton, a prominent neural net researcher, and his graduate students joined Google as part of the acquisition.
    • This acquisition contributed to the advancement of Google Brain’s research and expertise.
  • Google Brain Graduates and TensorFlow:
    • Google Brain transitioned from Google X to Google AI, signifying its growth and relevance.
    • In 2015, Google introduced TensorFlow, an open-source machine learning system.
    • TensorFlow replaced DistBelief and powered various Google products, including speech recognition and Google Photos.
  • Public Recognition and Contributions:
    • In 2016, the Google Brain team’s name was publicly acknowledged on Hacker News and the team page.
    • Google Brain released Magenta, an AI project generating art and music.
    • The Google Neural Machine Translation (GNMT) system was introduced to enhance Google Translate’s accuracy.
  • Key Takeaways:
    • Google Brain is an AI research team under Google AI, founded by Andrew Ng.
    • It initially aimed to integrate deep learning into Google’s infrastructure, with significant early successes in image recognition.
    • TensorFlow, introduced in 2015, became a cornerstone of Google’s AI efforts and was open-sourced.
    • Google Brain’s contributions expanded to various AI projects and research areas over the years.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Related To Google

Who Owns Google

Google is primarily owned by its founders, Larry Page and Sergey Brin, who together hold more than 51% of voting power. Other individual shareholders include John Doerr (1.5%), a venture capitalist and early investor in Google, and CEO Sundar Pichai. Former Google CEO Eric Schmidt has 4.2% voting power. The most prominent institutional shareholders are mutual funds BlackRock and The Vanguard Group, with 2.7% and 3.1%, respectively.

How Does Google Make Money

Google (now Alphabet) primarily makes money through advertising. The Google search engine, while free, is monetized with paid advertising. In 2023, Alphabet generated over $175B from Google Search, $31.51B from Network members (AdSense and AdMob), $31.31B from YouTube ads, $33B from Google Cloud, $34.69B from other sources (Google Play, hardware devices, and other services), and $1.53B from its other bets.

Google Business Model

Google is an attention merchant that – in 2022 – generated over $224 billion (almost 80% of revenues) from ads (Google Search, YouTube Ads, and Network sites), followed by Google Play, Pixel phones, YouTube Premium (a $29 billion segment), and Google Cloud ($26.2 billion).

Google Other Bets

Of Google’s (Alphabet’s) over $307.39 billion in revenue for 2023, well over $1.5 billion came – for the first time – from its other bets, which Google considers potential moonshots (companies that might open up new industries). These bets also generated a loss of over $4 billion in the same year. In short, Google is using the money generated by search to bet on other innovative industries, which ramped up in 2023.

Google Cloud Business

In 2023, Alphabet’s (Google) Cloud Business generated over $33 billion within Alphabet’s Google overall business model, and it was also profitable, with over $1.7 billion in profits. Google Cloud is instrumental to Google’s AI strategy.

How Big Is Google?

Google is an attention merchant that – in 2023 – generated $237.85 billion (over 77% of its total revenues) from ads (Google Search, YouTube Ads, and Network sites), followed by Google Play, Pixel phones, YouTube Premium (a $31.5 billion segment), and Google Cloud (over $33 billion).

Google Traffic Acquisition Costs

The traffic acquisition cost represents the expenses incurred by an internet company, like Google, to bring qualified traffic to its pages for monetization. Over the years, Google has been able to reduce its traffic acquisition costs or, in any case, keep them stable. In 2023, Google spent 21.39% ($50.9 billion) of its total advertising revenues ($237.8 billion) to guarantee its traffic across several desktop and mobile devices on the web.

YouTube Business Model

YouTube was acquired by Google for almost $1.7 billion in 2006. It makes money through advertising and subscription revenues. YouTube’s advertising network is part of Google Ads, and it reported more than $31B in revenues in 2023. YouTube also makes money with its paid memberships and premium content.

Google vs. Bing

In 2023, Google’s search advertising machine generated over $175 billion, whereas Microsoft’s Bing generated $12.2 billion. Thus, as of 2023, Google’s search advertising machine was over 14x larger than Microsoft’s.

Google Profits

Google makes most of its money from advertising. Indeed, total advertising revenue represented nearly 78% of Google’s (Alphabet’s) overall revenues for 2023, with Google Search alone representing nearly 57% of total revenues. Google generated $307.39B in revenues in 2023 and $73.79B in net profits.

Google Revenue Breakdown

In 2023, Google generated $307.39 billion, comprising $175B from Google Search, $31.51B from YouTube ads, $31.31B from Google Network revenue, $34.69B from other revenue, $33B from Google Cloud, and $1.53B from other bets.

Google Advertising Revenue

In 2023, Google generated $237.85B in advertising revenue, which represented over 77% of its total revenues of $307.39B. In 2022, Google generated $224.47B from advertising, almost 80% of its $282.83B in total revenues. Therefore, most of the revenue of Alphabet, Google’s parent company, comes from advertising.

Google Employees Number

At the end of December 2022, Google had over 190,000 employees. On January 20, 2023, Google announced the layoff of 12,000 employees, bringing the total by December 2023 to 182,502 full-time employees.

Google Revenue Per Employee

Google generated $1,684,332 per employee in 2023, compared to $1,486,779 per employee in 2022. As of January 2023, when the company announced a mass layoff, revenue per employee rose back to $1,586,880, still behind the 2021 peak of $1,840,330.

YouTube Ad Revenue

In 2023, YouTube generated $31.51 billion in advertising revenue.

Connected Business Model Analyses

Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.

Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that utilizes prompts to classify images and captions from over 400 million image-caption pairs.

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, a capped for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors like Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plug these models into their products, and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, which developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. It monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.
