Who is Kate Crawford?

Kate Crawford is an Australian composer, producer, writer, and academic whose research focuses on the social change brought about by media technologies. Crawford is also the author of the 2021 book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

Early career

Crawford completed a Ph.D. in Media Studies at the University of Sydney in 2007. Soon after, she wrote a book based on her dissertation about generational stereotypes and myths around adolescence and adulthood.

She then joined the University of New South Wales as an Associate Professor and continued to publish academic papers on various topics. These included the interplay between mobile devices and gender, the culture of technology use, and various topics related to big data.

MIT Media Lab

Beginning in January 2013, Crawford spent five years as a Visiting Professor at the MIT Media Lab in Cambridge, Massachusetts. 

Crawford was part of the Comparative Media Studies Program as a researcher and also instructed courses on topics such as the social and political implications of AI, data ethics, and critical approaches to technology.

She also collaborated with other faculty members on key issues and projects in the domain of AI ethics and governance.

Microsoft Research

Crawford joined Microsoft Research in February 2012 as a Senior Principal Researcher. She continues to work at the company’s research lab in New York City today.

At Microsoft, Crawford has worked on data documentation projects that enable dataset creators to identify biases or hidden assumptions. One project also enabled consumers to determine whether such datasets met their requirements.

She has also been involved in research into the technical, social, and ethical implications of autonomous and semi-autonomous systems – particularly during the COVID-19 pandemic and in the realm of responsible big data research.

AI Now Institute

Crawford co-founded New York University’s AI Now Institute in early 2017 and served as a Distinguished Research Professor until June 2020. 

The institute “produces diagnosis and policy research to address the concentration of power in the tech industry” and also sets the standard for “discourse-shaping work that focuses on the social consequences of AI and the industry behind it.”

AI Now was spun out as an independent organization in mid-2022.

École Normale Supérieure 

Crawford joined the Parisian university École Normale Supérieure (ENS) in February 2019 as the Inaugural Visiting Chair of AI and Justice, where she now leads a collaborative international team working on machine learning.

One of ENS’s principal partners in this work is Fondation Abeona, a foundation that works with citizens, companies, and decision-makers to create a sustainable, responsible, and inclusive digital and AI transition. 

Views on AI

Crawford is a leading thinker and speaker on the social, political, ethical, and environmental implications of artificial intelligence and large-scale data systems.

At times, she has also been a critic of big tech’s wider ambitions in the industry. For example, in a 2021 article titled “Artificial Intelligence Is Misreading Human Emotion,” she argued that AI could not be trained to accurately infer human emotions from facial expressions.

In the article, she referenced a 2019 paper by neuroscientist Lisa Feldman Barrett, who noted that “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts.”

Key takeaways:

  • Kate Crawford is a composer, producer, writer, and academic whose research focuses on the social change brought about by media technologies. Crawford is also the author of Atlas of AI, a 2021 book on the politics and planetary costs of artificial intelligence.
  • Crawford spent five years as a Visiting Professor at the MIT Media Lab in Cambridge, Massachusetts. There, she was involved in research and instruction on the social and political implications of AI, data ethics, and critical approaches to technology.
  • Crawford joined Microsoft Research in February 2012 as a Senior Principal Researcher to continue her work on the implications of AI. She also co-founded the AI Now Institute in 2017 to analyze big tech’s role in AI development and the wider industry.

Key Highlights

  • Kate Crawford: Kate Crawford is a multi-talented individual known for her work as a composer, producer, writer, and academic. Her focus lies in understanding the societal impacts and changes brought about by media technologies, particularly in the realm of artificial intelligence.
  • Atlas of AI: Crawford is the author of the 2021 book “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” The book delves into the complex interplay between AI, power dynamics, politics, and the global environmental impact of AI technologies.
  • Early Career and Academic Pursuits: Crawford earned her Ph.D. in Media Studies from the University of Sydney in 2007. She began her academic journey by exploring generational stereotypes and myths, particularly those surrounding adolescence and adulthood. Her research extended to various topics such as mobile devices, gender, technology culture, and big data.
  • MIT Media Lab: Crawford’s career led her to the prestigious MIT Media Lab, where she spent five years as a Visiting Professor. During her tenure, she contributed to the Comparative Media Studies Program, taught courses on AI’s social and political implications, data ethics, and critical technology approaches. She engaged in collaborative projects addressing AI ethics and governance.
  • Microsoft Research: Crawford joined Microsoft Research in February 2012 as a Senior Principal Researcher. Her work at Microsoft involved projects that focused on data documentation, uncovering biases, and examining the ethical and societal implications of autonomous systems, especially during the COVID-19 pandemic.
  • AI Now Institute: In 2017, Crawford co-founded New York University’s AI Now Institute, where she served as a Distinguished Research Professor until 2020. The institute’s mission revolves around researching and addressing the concentration of power in the tech industry. It critically examines AI’s social consequences and influences industry discourse.
  • École Normale Supérieure: Crawford’s expertise extended to École Normale Supérieure (ENS) in Paris, where she assumed the role of Inaugural Visiting Chair of AI and Justice. She leads an international team focused on machine learning and collaborates with partners like Fondation Abeona to promote responsible and inclusive digital and AI transitions.
  • Views on AI: Crawford is recognized as a prominent thinker and speaker on AI’s social, political, ethical, and environmental implications. She’s a critic of big tech’s ambitious goals, challenging common assumptions. She has pointed out the limitations of AI’s ability to accurately infer human emotions from facial expressions, emphasizing the importance of understanding scientific nuances.


Connected Business Model Analyses


Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.
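The word prediction described above can be illustrated with a vastly simplified, purely illustrative sketch. The toy bigram model below only counts word pairs in a tiny corpus – real LLMs use neural networks with billions of parameters – but the core task of predicting the next token is the same idea:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word as the one that
# most often followed the current word in the training text.
corpus = "the cat sat on the mat and the cat slept on a mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("the cat" occurs twice, "the mat" once)
```

An LLM generates text by repeating this predict-and-append step over learned token probabilities rather than raw counts.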


Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that utilizes prompts to classify images and captions from over 400 million image-caption pairs.
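A minimal sketch of the idea, assuming a hypothetical `build_prompt` helper: the same question can be framed zero-shot or with an instruction and few-shot examples, and in practice the assembled string would be sent to an LLM API.

```python
def build_prompt(question, examples=None, instruction=None):
    """Assemble a prompt from an optional instruction, optional
    few-shot Q/A examples, and the user's question."""
    parts = []
    if instruction:
        parts.append(instruction)
    for q, a in (examples or []):
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Zero-shot: just the bare question.
zero_shot = build_prompt("What is the capital of France?")

# Few-shot with an instruction: the added context steers the model
# toward the desired answer format.
few_shot = build_prompt(
    "What is the capital of France?",
    examples=[("What is the capital of Japan?", "Tokyo")],
    instruction="Answer with the city name only.",
)
print(few_shot)
```

Comparing the model’s responses to the two prompts is the basic experimental loop of prompt engineering: vary the framing, keep the question fixed, and measure which input yields the more useful output.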

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, a capped-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors such as Reid Hoffman’s charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plug these models into their products, and customize them with proprietary data and additional AI features. OpenAI has also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft are commercial partners. The partnership began in 2016 and was consolidated in 2019, when Microsoft invested a billion dollars in it. It is now taking a leap forward, with Microsoft in talks to put $10 billion into the partnership. Through OpenAI, Microsoft is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. It makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

