Google Brain is an artificial intelligence research team that works at Google AI – a Google division dedicated exclusively to AI research and development.
The division was started in 2011 by Andrew Ng, who named it the “Deep Learning Project at Google”. Ng was soon joined by fellow Google engineer Jeff Dean and researcher Greg Corrado, with much of the initial work conducted part-time in employees’ “20 percent time”.
Below, we’ll chart a brief history of Google Brain and highlight some of the division’s proudest achievements to date.
Google Brain was initially conceived as a way to build deep learning processes on top of Google’s existing infrastructure. This vision was clarified in a 2012 post in which Ng and Dean described a system (later known as DistBelief) that could distinguish between pictures of motorcycles and cars.
Instead of feeding the system labeled images, the pair showed it 10 million YouTube videos over the course of a week, based on the belief that it would learn to identify unlabeled images. After that week, the neural network, comprising 16,000 computer processors, had, without ever being instructed to, learned to identify cats.
As Ng explained in The New York Times: “The remarkable thing was that [the system] had discovered the concept of a cat itself. No one had ever told it what a cat is. That was a milestone in machine learning.”
While neither the blog post nor the accompanying paper mentions Google Brain explicitly, the work undertaken was part of the Google Brain project.
DNNresearch Inc. acquisition
In March 2013, Google acquired Toronto-based neural networks start-up DNNresearch Inc.
As part of the deal, DNNresearch founder Geoffrey Hinton, a world-renowned neural net researcher, joined Google along with two of his graduate students, Alex Krizhevsky and Ilya Sutskever (who would later co-found OpenAI).
With the team afforded the space and time to prove the Google Brain technology, there was growing confidence that it could be applied in other areas of the company.
Google Brain thus graduated from Google X (now known as X, the moonshot factory) in late 2012 and became part of Google AI.
TensorFlow
Google announced in November 2015 that it had created a new machine learning system known as TensorFlow. The software was made open source both to benefit Google and to advance the wider industry, and some saw it as the point at which Google started to pivot from a search company to an AI company.
TensorFlow was the successor to DistBelief and powers features such as Android’s speech recognition system, the search function in Google Photos, and the “smart reply” function in the Inbox app. It can also be found in YouTube video recommendations and many other contexts.
Google Brain is verified
On April 6, 2016, Hacker News shared the Google Brain team page on its website. This date potentially marked the first time the team name had been used in the public arena.
Two months later, Google Brain released Magenta, a machine learning project that could generate art and music. In September, Google introduced the Google Neural Machine Translation (GNMT) system to increase the fluency and accuracy of Google Translate.
The announcement post, written by Google research scientists, once more publicly acknowledged the Google Brain team’s existence by thanking it for its contributions.
Key Highlights
Introduction to Google Brain:
Google Brain is an artificial intelligence research team within Google AI, which is dedicated to AI research and development.
It was founded in 2011 by Andrew Ng, initially named the “Deep Learning Project at Google.”
The early work on Google Brain was conducted part-time in employees’ “20 percent time.”
Early Achievements:
Google Brain aimed to integrate deep learning processes into Google’s existing infrastructure.
A significant achievement was training a neural network to recognize cats in images by exposing it to unlabeled YouTube videos.
This achievement demonstrated the system’s ability to learn features without explicit labeling.
DNNresearch Inc. Acquisition:
In 2013, Google acquired DNNresearch Inc., a neural networks startup based in Toronto.
Geoffrey Hinton, a prominent neural net researcher, and his graduate students joined Google as part of the acquisition.
This acquisition contributed to the advancement of Google Brain’s research and expertise.
Google Brain Graduates and TensorFlow:
Google Brain transitioned from Google X to Google AI, signifying its growth and relevance.
In 2015, Google introduced TensorFlow, an open-source machine learning system.
TensorFlow replaced DistBelief and powered various Google products, including speech recognition and Google Photos.
Public Recognition and Contributions:
In 2016, the Google Brain team’s name was publicly acknowledged on Hacker News and the team page.
Google Brain released Magenta, an AI project generating art and music.
The Google Neural Machine Translation (GNMT) system was introduced to enhance Google Translate’s accuracy.
Key Takeaways:
Google Brain is an AI research team under Google AI, founded by Andrew Ng.
It initially aimed to integrate deep learning into Google’s infrastructure, with significant early successes in image recognition.
TensorFlow, introduced in 2015, became a cornerstone of Google’s AI efforts and was open-sourced.
Google Brain’s contributions expanded to various AI projects and research areas over the years.
Google is primarily owned by its founders, Larry Page and Sergey Brin, who hold more than 51% of voting power. Other individual shareholders include John Doerr (1.5%), a venture capitalist and early investor in Google, and CEO Sundar Pichai. Former Google CEO Eric Schmidt holds 4.2% of voting power. The most prominent institutional shareholders are mutual funds BlackRock and The Vanguard Group, with 2.7% and 3.1%, respectively.
Google (now Alphabet) primarily makes money through advertising. The Google search engine, while free, is monetized with paid advertising. In 2023, Alphabet generated over $175B from Google Search, $31.51B from YouTube Ads, $31.31B from Network members (AdSense and AdMob), $33B from Google Cloud, $34.69B from other sources (Google Play, hardware devices, and other services), and $1.53B from its other bets.
Google is an attention merchant that – in 2022 – generated over $224 billion (almost 80% of revenues) from ads (Google Search, YouTube Ads, and Network sites), followed by Google Play, Pixel phones, and YouTube Premium (together a $29 billion segment), and Google Cloud ($26.2 billion).
Of Alphabet’s (Google) more than $307.39 billion in revenue for 2023, well over $1.5 billion came – for the first time – from its other bets, which Google considers potential moonshots (companies that might open up new industries). The other bets also generated a loss for the company of over $4 billion in the same year. In short, Google is using the money generated by search and betting it on other innovative industries, which were still ramping up in 2023.
In 2023, Alphabet’s (Google) Cloud business generated over $33 billion within Alphabet’s overall business model, and it was also profitable, with over $1.7 billion in profits. Google Cloud is instrumental to Google’s AI strategy.
Google is an attention merchant that – in 2023 – generated $237.85 billion (over 77% of its total revenues) from ads (Google Search, YouTube Ads, and Network sites), followed by Google Play, Pixel phones, and YouTube Premium (together a $31.5 billion segment), and Google Cloud (over $33 billion).
Traffic acquisition cost (TAC) represents the expenses incurred by an internet company, like Google, to acquire qualified traffic – on its pages – for monetization. Over the years, Google has managed to reduce its traffic acquisition costs, or at least keep them stable. In 2023, Google spent 21.39% ($50.9 billion) of its total advertising revenues ($237.8 billion) to secure its traffic across desktop and mobile devices on the web.
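A quick sanity check of the TAC arithmetic, using the rounded figures quoted above:

```python
# Traffic acquisition cost (TAC) as a share of advertising revenue, 2023.
# Both inputs are the rounded figures quoted in the text, in $B.
ad_revenue_2023 = 237.8  # total advertising revenue
tac_2023 = 50.9          # traffic acquisition cost

tac_share = tac_2023 / ad_revenue_2023 * 100
print(f"TAC share of ad revenue: {tac_share:.2f}%")
```

With these rounded inputs the share comes out at roughly 21.4%, matching the 21.39% figure within rounding.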
YouTube was acquired by Google for almost $1.7 billion in 2006. It makes money through advertising and subscription revenues. The YouTube advertising network is part of Google Ads, and it reported more than $31B in revenues in 2023. YouTube also makes money with its paid memberships and premium content.
In 2023, Google’s search advertising machine generated over 175 billion dollars, whereas Microsoft’s Bing generated 12.2 billion dollars. Thus, as of 2023, Google’s search advertising machine is over 14x larger than Microsoft’s.
Google makes most of its money from advertising. Indeed, total advertising revenue represented nearly 78% of Google’s (Alphabet) overall revenues for 2023. Google Search represented nearly 57% of Google’s total revenues. Google generated $307.39B in revenues in 2023, and $73.79B in net profits.
In 2023, Google generated $307.39 billion in revenue, comprising $175B from Google Search, $31.51B from YouTube ads, $31.31B from Google Network, $34.69B from other revenue, $33B from Google Cloud, and $1.53B from other bets.
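Since the segment figures above are rounded, it is worth checking that they roughly add up to the stated total:

```python
# 2023 revenue breakdown, in $B, using the rounded figures quoted above.
segments = {
    "Google Search": 175.0,
    "YouTube ads": 31.51,
    "Google Network": 31.31,
    "Other revenue": 34.69,
    "Google Cloud": 33.0,
    "Other bets": 1.53,
}

total = sum(segments.values())
print(f"Sum of segments: ${total:.2f}B")
```

The components sum to about $307.04B, within rounding distance of the stated $307.39B total (the Search figure in particular is rounded down).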
In 2023, Google generated $237.85B in advertising revenue, which represented over 77% of its total revenues of $307.39B. In 2022, Google generated $224.47B in revenues from advertising, almost 80% of its $282.83B in total revenues. Therefore, most of the revenues of Alphabet, Google’s parent company, come from advertising.
At the end of December 2022, Google had over 190,000 employees. On January 20, 2023, Google announced the layoff of 12,000 employees; by December 2023, the company counted 182,502 full-time employees.
Google generated $1,684,332 in revenue per employee in 2023, compared to $1,486,779 in 2022. As of January 2023, when the company announced the mass layoff, revenue per employee was projected at $1,586,880, still behind the 2021 peak of $1,840,330.
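Revenue per employee is simply total revenue divided by headcount; using the 2023 figures quoted above:

```python
# Revenue per employee, using the figures quoted in the text.
revenue_2023 = 307.39e9      # total Alphabet revenue, in dollars
employees_2023 = 182_502     # full-time employees, December 2023

rev_per_employee = revenue_2023 / employees_2023
print(f"${rev_per_employee:,.0f} per employee")
```

This lands within rounding distance of the $1,684,332 figure cited above.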
Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.
Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results.
Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual.
Developed by OpenAI, CLIP (Contrastive Language-Image Pre-training) is an example of a model that uses natural-language prompts to classify images; it was trained on over 400 million image-caption pairs.
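As a minimal illustration of the idea, zero-shot classifiers in the CLIP style score an image against candidate labels wrapped in natural-language templates; designing those templates is the prompt engineering. The labels and templates below are hypothetical examples, not taken from any particular model’s documentation:

```python
# Hypothetical prompt templates in the CLIP zero-shot style: each candidate
# label is wrapped in a natural-language template before being scored
# against an image embedding.
labels = ["cat", "dog", "motorcycle"]
templates = [
    "a photo of a {}",
    "a blurry photo of a {}",
    "a close-up photo of a {}",
]

# Build every (template, label) combination; a CLIP-like model would embed
# these strings and compare each one to the image.
prompts = [template.format(label) for template in templates for label in labels]
print(prompts[0])    # "a photo of a cat"
print(len(prompts))  # 9
```

Averaging scores across several templates like this is a common way to make zero-shot classification less sensitive to the exact phrasing of any single prompt.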
OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, a capped-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as general partner, while the limited partners comprise employees of the LP, some of the board members, and other investors such as Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.
OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plugging those models into their products and customizing them with proprietary data and additional AI features. OpenAI has also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.
OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).
Stability AI is the entity behind Stable Diffusion. It makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. In addition, Stability AI makes money via enterprise services, where its core development team offers enterprise customers the ability to service, scale, and customize Stable Diffusion or other large generative models to their needs.
Gennaro is the creator of FourWeekMBA, which reached about four million business people, comprising C-level executives, investors, analysts, product managers, and aspiring digital entrepreneurs in 2022 alone | He is also Director of Sales for a high-tech scaleup in the AI Industry | In 2012, Gennaro earned an International MBA with emphasis on Corporate Finance and Business Strategy.