Who is Emily Bender?

Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics. Bender is also the director of the university’s Computational Linguistics Master of Science (CLMS) program and has voiced concerns about the potential societal risks of large language models.

Education and early career

Bender received her Ph.D. in Linguistics from Stanford University in 2000 and soon after spent around ten months as a lecturer at the University of California, Berkeley.

In 2001, she briefly worked as a Grammar Engineer at YY Technologies before returning to Stanford in September 2002 as an Acting Assistant Professor.

University of Washington

Twelve months later, Bender took up an assistant professor position in linguistics at the University of Washington. She is now an Adjunct Professor in Computer Science & Engineering, a Professor of Linguistics, and, as noted earlier, director of the CLMS program.

Bender is also involved in various university institutions:

  • Tech Policy Lab – an interdisciplinary collaboration that promotes and strengthens tech policy through research and education.
  • Value Sensitive Design Lab – an initiative centered on value-sensitive design, an approach pioneered in the 1990s that provides theory and methods for incorporating human values throughout the design process.
  • RAISE – short for Responsibility in AI Systems & Experiences, RAISE conducts research into AI systems and their interactions with human values. It also aims to devise systems for underserved contexts across critical areas such as education, finance, policy, and health.

Stochastic parrots

In response to the rapid progress in NLP over the preceding three years, Bender co-authored the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, published in March 2021.

The paper asked important questions about the risks associated with LLMs and how those risks could be mitigated. To that end, Bender proposed that the financial and environmental costs be considered first and foremost.

She also argued that resources should be invested in improving the quality of LLM training data. In other words, datasets should be carefully curated and documented rather than models simply consuming all of the information on the internet.

Google’s LaMDA chatbot

When Google engineer Blake Lemoine publicly stated that the company’s LaMDA chatbot was sentient, Bender stressed that this misconception “shows the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate?”

The crux of the article she wrote for The Guardian is that people instinctively assume the words produced by a chatbot were created by a human mind. She also argued that the question-and-answer, concierge-type service now incorporated into Google Search increases the likelihood that a user will take information scraped from the internet as fact.

To prevent the spread of misinformation and design systems that “don’t abuse our empathy or trust”, Bender noted that transparency was key. What was the model trained to do? What information was it trained on? Who chose the data, and for what purpose?

Key takeaways:

  • Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics. 
  • At the University of Washington, Bender is involved in several organizations that deal with AI as well as tech design, policy, and impact. These include RAISE, Value Sensitive Design Lab, and Tech Policy Lab.
  • In a 2021 academic paper, Bender asked several important questions about the risks associated with LLMs and how those risks could be mitigated. She has also written on the role of AI chatbots and their contribution to the spread of misinformation.

Key Highlights

  • Background and Expertise:
    • Emily M. Bender is a University of Washington professor specializing in natural language processing (NLP) and computational linguistics.
    • She is the director of the university’s Computational Linguistics Master of Science (CLMS) program and holds positions in both Linguistics and Computer Science & Engineering departments.
  • Education and Career:
    • Bender earned her Ph.D. in Linguistics from Stanford University in 2000.
    • She held positions at the University of California, Berkeley, and Stanford University before settling at the University of Washington.
  • Involvement in University Institutions:
    • Bender is affiliated with several University of Washington institutions, including the Tech Policy Lab, Value Sensitive Design Lab, and RAISE (Responsibility in AI Systems & Experiences).
    • These affiliations reflect her commitment to addressing AI’s societal impact, ethics, and human values.
  • Concerns and Contributions:
    • Bender authored a significant paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” in 2021.
    • In the paper, she raised concerns about the risks associated with large language models (LLMs) and proposed mitigations, such as considering financial and environmental costs, improving data quality, and enhancing transparency.
  • Critique of AI Chatbots and Misinformation:
    • Bender has critiqued AI chatbots’ potential to spread misinformation, emphasizing that people may perceive generated text as the output of a human mind.
    • She stressed the importance of transparency in the design of AI systems to prevent the abuse of empathy and trust and to ensure accountability for data selection and model training.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Connected Business Model Analyses

AI Paradigm

Pre-Training

Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.

Generative Models

Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. As with most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model, trained on over 400 million image-caption pairs, is an example of a model that uses prompts to classify images.

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., which is a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, which is a capped for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as a General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors such as Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plug these models into their products, and customize them with proprietary data and additional AI features. OpenAI has also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft are partnered from a commercial standpoint. The partnership started in 2016 and was consolidated in 2019, with Microsoft investing a billion dollars in it. It is now taking a leap forward, with Microsoft in talks to put $10 billion into the partnership. Through OpenAI, Microsoft is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. It monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing the model open-source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem
