Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics. Bender is also the director of the university’s Computational Linguistics Master of Science (CLMS) program and, at times, has voiced her concerns over the potential societal risks of large language models.
Education and early career
Bender received a Ph.D. in Linguistics from Stanford University in 2000 and, soon after, spent around ten months as a lecturer at the University of California, Berkeley.
In 2001, she briefly worked as a Grammar Engineer at YY Technologies before returning to Stanford in September 2002 as an Acting Assistant Professor.
University of Washington
Twelve months later, Bender took up an assistant professor position in linguistics at the University of Washington. She is now a Professor of Linguistics, an Adjunct Professor in Computer Science & Engineering, and director of the CLMS program.
Bender is also involved in various university institutions:
- Tech Policy Lab – an interdisciplinary collaboration to promote and enhance tech policy via research and education.
- Value Sensitive Design Lab – an initiative centered around value-sensitive design. Pioneered in the 1990s, the approach establishes theory and method to incorporate human values throughout the design process.
- RAISE – an acronym for Responsibility in AI Systems & Experiences. RAISE's mission is to conduct research into AI systems and their interactions with human values. It also intends to devise systems for underserved contexts across critical areas such as education, finance, policy, and health.
On the Dangers of Stochastic Parrots
In response to rapid progress in NLP, Bender co-authored the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, published in March 2021.
The paper asked important questions about the risks associated with LLMs and how those risks could be mitigated. To that end, Bender proposed that the financial and environmental costs be considered first and foremost.
She also argued that resources should be invested in improving the quality of LLM input. In other words, datasets should be carefully curated and documented rather than models simply consuming all of the information on the internet.
Google’s LaMDA chatbot
When Google engineer Blake Lemoine publicly stated that the company’s LaMDA chatbot was sentient, Bender stressed that the obvious misconception “shows the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate?”
The crux of the article she wrote for The Guardian is that people instinctively believe the words produced by a chatbot were created by a human mind. She also argued that the question-and-answer, concierge-type service now incorporated into Google Search increases the likelihood that a user will take information scraped from the internet as fact.
To prevent the spread of misinformation and design systems that “don’t abuse our empathy or trust”, Bender noted that transparency was key. What was the model trained to do? What information was it trained on? Who chose the data, and for what purpose?
Key takeaways
- Emily M. Bender is a University of Washington professor who specializes in natural language processing (NLP) and computational linguistics.
- At the University of Washington, Bender is involved in several organizations that deal with AI as well as tech design, policy, and impact. These include RAISE, Value Sensitive Design Lab, and Tech Policy Lab.
- In a 2021 academic paper, Bender asked several important questions about the risks associated with LLMs and how those risks could be mitigated. She has also written on the role of AI chatbots and their contribution to the spread of misinformation.