Scarlett Johansson vs. OpenAI, what happened?

The case involving Scarlett Johansson and the AI app Lisa AI: 90s Yearbook & Avatar revolves around the unauthorized use of Johansson’s likeness and voice by the app, leading to significant legal and ethical concerns.

Key Details of the Case

  • Unauthorized Use: Scarlett Johansson took legal action against the AI app Lisa AI for using her name, likeness, and an AI-generated version of her voice in an advertisement without her consent.
  • Advertisement Content: The 22-second advertisement featured actual footage of Johansson from behind the scenes of the Marvel film “Black Widow.” The ad included a clip where Johansson says, “What’s up guys? It’s Scarlett and I want you to come with me.” This was followed by AI-generated images and a voice that closely resembled Johansson’s, promoting the app.
  • Legal Claims: Johansson’s legal team claims that this blatant voice cloning not only infringes upon her intellectual property rights but also raises concerns about the potential misuse of AI technology. The lawsuit aims to address these violations and establish a precedent for protecting celebrities’ voices from unauthorized replication.
  • Damages and Injunction: Johansson is seeking substantial damages, as well as an injunction to prevent any further dissemination of the infringing material. Her attorney, Kevin Yorn, emphasized that they do not take such matters lightly and will pursue all available legal remedies.

Broader Implications and Ethical Concerns

  • Ethical Use of AI: The case has ignited a broader debate about the ethical use of AI in advertising, underscoring the need for robust regulations to safeguard individuals’ voices and likenesses in the digital age.
  • Legal Framework: The legal landscape surrounding AI voice cloning is complex and largely uncharted. Existing laws lack precision on intellectual property rights for voices, leaving room for misuse like “deepfakes.” Privacy torts inadequately cover AI aspects, necessitating a comprehensive legal framework.
  • Regulatory Actions: The Federal Trade Commission (FTC) has been actively involved in addressing the harms of AI-enabled voice cloning. The FTC’s Voice Cloning Challenge aims to foster multidisciplinary solutions to prevent, monitor, and evaluate malicious voice cloning. This initiative reflects the need for a combined effort from technology developers, policymakers, and legal experts to address the risks posed by AI technologies.

Other Notable Incidents

  • Tom Hanks: Similar to Johansson, Tom Hanks saw his likeness used without permission in a dental plan advertisement featuring an AI-generated version of him. Rather than taking legal action, Hanks took to social media to distance himself from the company.
  • Drake and The Weeknd: An AI-generated track imitating the voices of Drake and The Weeknd went viral and was even submitted to the Grammys for consideration. Neither artist was involved in its creation, highlighting the potential for misuse of AI-generated content in the music industry.

Legal and Ethical Considerations

  • Intellectual Property Rights: The unauthorized use of a person’s voice, especially for commercial purposes, can lead to lawsuits for violation of publicity rights and, where protected recordings are copied, copyright infringement. This is particularly relevant in jurisdictions like California and New York, which have specific laws addressing the right of publicity and the unauthorized use of a person’s voice.
  • Privacy and Consent: AI voice cloning raises significant privacy concerns, especially when done without the individual’s explicit permission. This can lead to violations of privacy and the right of publicity, particularly if the voice clone is used for commercial purposes.
  • Ethical Use: The ethical use of AI for voice cloning depends on consent, transparency, and the purpose of use. Issues arise when AI-generated voices or deepfakes are used for deception or without the original voice owner’s consent.

Conclusion

The Scarlett Johansson case against Lisa AI highlights the urgent need for clearer legal frameworks and ethical guidelines to govern the use of AI voice cloning technology. As AI continues to advance, it is crucial to balance technological innovation with the protection of individual rights and privacy. The case may set a precedent for future legal actions and regulatory measures addressing the misuse of AI technologies.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Connected Business Model Analyses

AI Paradigm


Pre-Training


Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.
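The "predict words" idea above can be illustrated with a toy model. Real LLMs are neural networks with billions of parameters; this bigram counter is only a minimal sketch of the same next-word-prediction mechanic, with all names and the tiny corpus invented for illustration.

```python
# Toy sketch of next-word prediction, the core mechanic behind LLMs.
# A real LLM learns these statistics with a neural network; here we
# simply count which word tends to follow which in a tiny corpus.
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent follower of `word`, or '' if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else ""

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat
```

Scaling this idea up, with context windows far longer than one word and learned representations instead of raw counts, is what lets LLMs craft fluent sentences.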

Generative Models


Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. As in most processes, the quality of the inputs determines the quality of the outputs. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. OpenAI’s CLIP (Contrastive Language-Image Pre-training) model, trained on over 400 million image-caption pairs, is an example of a model that uses natural-language prompts to classify images.
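Structuring the input is the heart of prompt engineering. The sketch below, with an invented helper and an example drawn from this article, contrasts a vague prompt with one that spells out role, task, context, and output format; it builds the prompt string only and assumes no particular model or API.

```python
# Hedged sketch: prompt engineering as structuring the input.
# `build_prompt` is a hypothetical helper; no real API is called.

def build_prompt(task: str, context: str, output_format: str) -> str:
    """Compose a structured prompt from explicit components."""
    return (
        "You are an expert assistant.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Answer in the following format: {output_format}"
    )

# A vague prompt leaves the model guessing about scope and format.
vague = "Tell me about this case."

# A structured prompt makes the desired response far more likely.
structured = build_prompt(
    task="Summarize the legal claims in the case.",
    context="Scarlett Johansson sued Lisa AI over unauthorized voice cloning.",
    output_format="three bullet points",
)
print(structured)
```

The same template idea underlies most practical prompt engineering: iterate on the components until the model's responses become reliably useful.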

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plug these models into their products, and customize them with proprietary data and additional AI features. OpenAI also released ChatGPT, which developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft are partners from a commercial standpoint. The partnership started in 2016 and was consolidated in 2019, with Microsoft investing a billion dollars in it. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into the partnership. Through OpenAI, Microsoft is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. It monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to serve, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem

