Who is Timnit Gebru?

Timnit Gebru is a computer scientist of Ethiopian heritage who is passionate about diversity in technology, data mining, and addressing algorithmic bias. Gebru also spent a much-publicized period at Google AI, where she researched ethical and social issues around artificial intelligence.

Early career

Gebru arrived in the United States as a 15-year-old after fleeing the Eritrean-Ethiopian War. She started her career as a hardware intern at Apple, working on projects related to the audio quality of its computers.

In 2005, Gebru became an audio systems engineer at Apple, where she designed the audio circuitry for the MacBook, MacBook Pro, iMac, and Apple TV. Over her six years at the company, she became increasingly interested in computer vision software that could detect human figures.

Gebru also founded MotionThink in 2011, a company that used design thinking to devise solutions for small businesses. Later, she became a student at the Recurse Center in New York, where she contributed to open-source initiatives and honed her programming skills.

Stanford University

Gebru received a Master of Science in electrical engineering from Stanford University in 2010 and then completed her doctorate under the supervision of renowned researcher and computer scientist Fei-Fei Li.

At Stanford’s Artificial Intelligence Lab, Gebru wrote a seminal paper that exposed racial and gender biases in AI-powered facial recognition software. She also utilized computer vision algorithms to detect and then classify vehicles from 50 million Google Street View images.

The characteristics of these vehicles were used to estimate residents’ education, race, income level, voting patterns, and degree of income segregation. Ultimately, the AI technology enabled researchers to detect demographic and economic shifts in communities in near real time.
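To illustrate the second stage of such a pipeline, the sketch below shows how aggregated vehicle-classifier output for each region might be fed into a simple regression that predicts a demographic statistic. It is a minimal, hypothetical Python sketch on synthetic data: the feature construction and the choice of ridge regression are assumptions for illustration, not the study’s actual method.

```python
# Hypothetical sketch: regress a demographic statistic on the mix of vehicle
# types detected in each region's street-view images. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_regions, n_vehicle_types = 500, 20

# Stand-in for aggregated classifier output: counts of each vehicle type per
# region, normalized so each row describes that region's vehicle mix.
counts = rng.poisson(lam=5.0, size=(n_regions, n_vehicle_types)).astype(float)
vehicle_mix = counts / counts.sum(axis=1, keepdims=True)

# Synthetic target standing in for, e.g., median household income per region.
true_weights = rng.normal(size=n_vehicle_types)
income = vehicle_mix @ true_weights + rng.normal(scale=0.05, size=n_regions)

X_train, X_test, y_train, y_test = train_test_split(
    vehicle_mix, income, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"R^2 on held-out regions: {model.score(X_test, y_test):.2f}")
```

In the published research, most of the work sat in the first stage, where deep networks classified the make, model, and year of each detected car; the sketch assumes that output already exists.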

Black in AI

Along with fellow Ethiopian computer scientist Rediet Abebe, Gebru founded the tech research organization and affinity group Black in AI in May 2017. 

The organization was started to increase the representation of black people in AI-related roles, and according to its GitHub page, addresses “the lack of visibility and support for those who are already in the field, leading them to leave or not realize their full potential.”

Google AI

Gebru joined Google AI in September 2018 as the co-leader of the Ethical AI Research Team. With computer scientist Margaret Mitchell, she researched the implications of artificial intelligence and its capacity to benefit society.

Gebru’s doctoral work at Stanford and the increasing popularity of deep learning set her up for a lucrative career in Silicon Valley. However, it soon became apparent that a contradiction existed between Gebru’s personal values and her technical work on algorithms and automation: “I’m not worried about machines taking over the world,” she once wrote. “I’m worried about groupthink, insularity, and arrogance in the AI community.”

Remembering Li’s advice to find a way to connect social justice with tech, Gebru decided that authoring research papers was a better way to promote change in AI ethics than raising those issues privately with her superiors at Google.

She ultimately left the company in December 2020 after a dispute over one of those papers.

DAIR

After Google, Gebru founded the Distributed AI Research Institute (DAIR) in August 2021. DAIR is a global, interdisciplinary AI effort with the core belief that the downsides of AI are preventable when its production and deployment involve diverse perspectives. 

Gebru said that DAIR would become part of an existing ecosystem of smaller institutes such as Data for Black Lives, Algorithmic Justice League, and Data & Society.

Key takeaways:

  • Timnit Gebru is a computer scientist of Ethiopian heritage who is passionate about diversity in technology, data mining, and addressing algorithmic bias. Gebru also spent a much-publicized period at Google where she researched ethical and social issues around AI.
  • Gebru completed her doctorate at Stanford under the supervision of renowned researcher and computer scientist Fei-Fei Li. At the university’s Artificial Intelligence Lab, she wrote a seminal paper that exposed racial and gender biases in AI-powered facial recognition software.
  • Gebru joined Google AI in September 2018 as the co-leader of the Ethical AI Research Team, but left the company two years later after deciding that authoring academic papers was a better way to institute ethical change in AI.

Key Highlights

  • Overview of Timnit Gebru:
    • Timnit Gebru is a computer scientist of Ethiopian heritage who is passionate about diversity in technology, data mining, and addressing algorithmic bias.
    • She gained significant recognition during her time at Google AI, where she conducted research on ethical and social issues related to artificial intelligence.
  • Early Career and Apple Involvement:
    • Gebru arrived in the United States after fleeing the Eritrean-Ethiopian War and worked at Apple as a hardware intern.
    • She later became an audio systems engineer at Apple, contributing to projects related to audio quality in computers.
  • Stanford University and Research:
    • Gebru earned her Master of Science in electrical engineering from Stanford University and completed her Ph.D. under the guidance of renowned computer scientist Fei-Fei Li.
    • Her research at Stanford included a pivotal paper that revealed racial and gender biases in AI-driven facial recognition software.
    • Gebru utilized computer vision algorithms to analyze Google Street View images and extract demographic and economic insights from vehicle characteristics.
  • Founding Black in AI:
    • Alongside fellow computer scientist Rediet Abebe, Gebru founded Black in AI, an organization aimed at increasing the representation of black individuals in AI-related fields.
  • Google AI and Ethical Research:
    • Gebru joined Google AI in 2018 as the co-leader of the Ethical AI Research Team, where she collaborated with Margaret Mitchell.
    • Her doctoral work and growing interest in AI ethics led her to research the societal implications of artificial intelligence.
    • Gebru’s concerns about ethical issues in AI led her to leave Google in December 2020.
  • DAIR and Continued Contributions:
    • After departing from Google, Gebru founded the Distributed AI Research Institute (DAIR) with a focus on diversity and ethical AI efforts.
    • DAIR is part of a larger ecosystem of organizations promoting ethical AI, including Data for Black Lives and the Algorithmic Justice League.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Connected Business Model Analyses

AI Paradigm

Pre-Training

Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.
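As a minimal illustration of the word-prediction idea, the sketch below uses the Hugging Face transformers library with GPT-2, chosen only because it is a small, freely available example model; it is not a model discussed in this article.

```python
# Minimal next-word-prediction sketch using the `transformers` library.
# GPT-2 is used purely as a small, freely available example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Large language models can read, summarize, and",
    max_new_tokens=10,       # ask for ten more tokens
    num_return_sequences=1,  # a single continuation
)
print(result[0]["generated_text"])
```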

Generative Models

Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model, trained on over 400 million image-caption pairs, is an example of a model that uses prompts to classify images against candidate captions.
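As a rough sketch of how prompts drive CLIP’s zero-shot image classification, the example below follows the standard Hugging Face transformers usage of the publicly released CLIP checkpoint; the test image URL and the candidate prompts are arbitrary choices for illustration.

```python
# Zero-shot image classification with CLIP: the candidate captions act as
# prompts, and the model scores how well each one matches the image.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# An arbitrary example image (two cats on a couch, from the COCO dataset).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

prompts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability means the prompt is a better description of the image.
probs = outputs.logits_per_image.softmax(dim=1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```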

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., which is a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, which is a capped, for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as a General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors like Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models while being able to plug these models into their products and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem
