What Is Google Brain? History of Google Brain

  • Google Brain is an artificial intelligence research team that works at Google AI – a Google division dedicated exclusively to AI research and development. The division was started in 2011 by Andrew Ng, who named it the “Deep Learning Project at Google”.
  • Google Brain was initially conceived as a way to build deep learning processes over the top of Google’s existing infrastructure. Much of the early work on the project was done in the “20% time” where employees pursue side projects.
  • Google announced in November 2015 that it had created a new machine learning system known as TensorFlow. This system is present in many features across Google’s product suite and was made open source to advance the industry and, in the process, position the company as a leader.


Google Brain is an artificial intelligence research team that works at Google AI – a Google division dedicated exclusively to AI research and development.

The division was started in 2011 by Andrew Ng, who named it the “Deep Learning Project at Google”. Ng was soon joined by fellow Google engineer Jeff Dean and researcher Greg Corrado, with much of the initial work conducted part-time during the employees’ “20 percent time”.

Below, we’ll chart a brief history of Google Brain and highlight some of the division’s proudest achievements to date.

Early years

Google Brain was initially conceived as a way to build deep learning processes over the top of Google’s existing infrastructure. This vision was clarified in a 2012 post in which Ng and Dean described a system (later known as DistBelief) that could distinguish between pictures of motorcycles and cars.

Instead of feeding the system labeled images, the pair showed it 10 million YouTube videos over a week, based on the belief that it would learn to identify unlabeled images on its own. After the week-long period, the neural network, which ran on 16,000 computer processors, had learned something it was never explicitly taught: how to identify cats.

As Ng explained in The New York Times: “The remarkable thing was that [the system] had discovered the concept of a cat itself. No one had ever told it what a cat is. That was a milestone in machine learning.”

While neither the blog post nor the accompanying paper mentions Google Brain explicitly, the work undertaken was part of the Google Brain project.

DNNresearch Inc. acquisition

In March 2013, Google acquired Toronto-based neural networks start-up DNNresearch Inc. 

As part of the deal, DNNresearch founder Geoffrey Hinton, a world-renowned neural net researcher, joined Google along with two of his graduate students, Alex Krizhevsky and Ilya Sutskever (who would later co-found OpenAI).

The announcement also coincided with a $600,000 gift from Google to Hinton’s research team to support further research into neural nets. Hinton subsequently started a branch of Google Brain in Toronto.

Google Brain graduates

With the team afforded the space and time to prove the Google Brain technology, there was growing confidence that it could be used in other applications.

Google Brain thus graduated from Google X (now known as X, the moonshot factory) in late 2012 and became part of Google AI. 

TensorFlow

Google announced in November 2015 that it had created a new machine learning system known as TensorFlow. The software was made open source to both benefit Google and advance the wider industry, and some saw it as the point at which Google started to pivot from a search company to an AI company.

TensorFlow was the successor to DistBelief and powers features such as Android’s speech recognition system, the search function in Google Photos, and the “smart reply” function in the Inbox app. It can also be found in YouTube video recommendations and many other contexts.
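The core idea TensorFlow popularized is the dataflow graph: a computation is described as a graph of operations that can then be evaluated (and distributed across hardware). As a rough illustration of that idea – in plain Python, not TensorFlow’s actual API – a toy graph might look like this:

```python
# A minimal sketch of the dataflow-graph idea behind TensorFlow,
# written in plain Python. This is an illustration of the concept,
# not the TensorFlow library itself.

class Node:
    """A graph node: an operation plus the nodes it takes as inputs."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def run(self):
        # Evaluate input nodes first, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    # A leaf node that simply returns a fixed value.
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Build the graph y = (2 + 3) * 4, then evaluate it.
y = mul(add(constant(2), constant(3)), constant(4))
print(y.run())  # 20
```

Separating the description of the computation from its execution is what let TensorFlow run the same graph on CPUs, GPUs, or clusters of machines.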

Google Brain is verified

On April 6, 2016, Hacker News shared the Google Brain team page on its website. This date potentially marked the first time the team name had been used in the public arena.

Two months later, Google Brain released Magenta, a machine learning project that could generate art and music. In September, Google introduced the Google Neural Machine Translation (GNMT) system to increase the fluency and accuracy of Google Translate.

The announcement post, written by Google research scientists, once more publicly acknowledged that the Google Brain team existed by thanking its members for their contributions.

Key Highlights

  • Introduction to Google Brain:
    • Google Brain is an artificial intelligence research team within Google AI, which is dedicated to AI research and development.
    • It was founded in 2011 by Andrew Ng, initially named the “Deep Learning Project at Google.”
    • The early work on Google Brain was conducted part-time in employees’ “20 percent time.”
  • Early Achievements:
    • Google Brain aimed to integrate deep learning processes into Google’s existing infrastructure.
    • A significant achievement was training a neural network to recognize cats in images by exposing it to unlabeled YouTube videos.
    • This achievement demonstrated the system’s ability to learn features without explicit labeling.
  • DNNresearch Inc. Acquisition:
    • In 2013, Google acquired DNNresearch Inc., a neural networks startup based in Toronto.
    • Geoffrey Hinton, a prominent neural net researcher, and his graduate students joined Google as part of the acquisition.
    • This acquisition contributed to the advancement of Google Brain’s research and expertise.
  • Google Brain Graduates and TensorFlow:
    • Google Brain transitioned from Google X to Google AI, signifying its growth and relevance.
    • In 2015, Google introduced TensorFlow, an open-source machine learning system.
    • TensorFlow replaced DistBelief and powered various Google products, including speech recognition and Google Photos.
  • Public Recognition and Contributions:
    • In 2016, the Google Brain team’s name was publicly acknowledged on Hacker News and the team page.
    • Google Brain released Magenta, an AI project generating art and music.
    • The Google Neural Machine Translation (GNMT) system was introduced to enhance Google Translate’s accuracy.
  • Key Takeaways:
    • Google Brain is an AI research team under Google AI, founded by Andrew Ng.
    • It initially aimed to integrate deep learning into Google’s infrastructure, with significant early successes in image recognition.
    • TensorFlow, introduced in 2015, became a cornerstone of Google’s AI efforts and was open-sourced.
    • Google Brain’s contributions expanded to various AI projects and research areas over the years.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Connected Business Model Analyses

AI Paradigm


Pre-Training


Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.

Generative Models


Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that utilizes prompts to classify images and captions from over 400 million image-caption pairs.

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., which is a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, which is a capped for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as a General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors like Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models while being able to plug these models into their products and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem

