In a surprising turn of events, Scarlett Johansson has revealed her shock and dismay at what she describes as the unauthorized imitation of her voice by OpenAI in ChatGPT. Despite her declining the company's proposal to lend her voice, she alleges that OpenAI released a voice that closely replicates her distinct vocal qualities without her consent. This has sparked a heated debate about the ethics of AI and the protection of personal identity. Let's dive into the details of this controversy.
Scarlett Johansson’s Statement
In a statement released on Monday, Scarlett Johansson expressed her astonishment and disappointment at the actions of Sam Altman, the CEO of OpenAI, the company behind ChatGPT. She revealed that Altman had approached her in September with an offer to voice the ChatGPT 4.0 system. However, after careful consideration, Johansson declined the offer for personal reasons.
The Eerie Resemblance
To Johansson’s surprise, just nine months later, the ChatGPT voices, including ‘Sky’, were released, bearing an uncanny resemblance to her own. The similarity was so pronounced that friends, family, news outlets, and the general public all noticed it, leaving Johansson shocked and angered.
Altman’s Intentional Reference
In a tweet, Altman hinted at the intentional nature of the similarity by simply posting the word ‘her.’ This reference alluded to the Spike Jonze movie “Her,” in which Johansson voiced an AI character named Samantha who formed an intimate relationship with a human. This connection only fueled Johansson’s suspicion that the replication of her voice was not a mere coincidence.
Legal Battle
Two days before the release of the ChatGPT 4.0 demo, Altman reached out to Johansson’s agent, asking that she reconsider her decision. Before the two sides could connect, however, the system was already out in the open, prompting Johansson to seek legal counsel. Her legal team sent two letters to Altman and OpenAI, demanding an explanation of the exact process used to create the ‘Sky’ voice.
OpenAI’s Response
OpenAI initially denied any wrongdoing. In a blog post titled “How the voices for ChatGPT were chosen,” the company sought to debunk the internet’s theories, stating that the voice used for ‘Sky’ belonged to a different professional actress and was not an imitation of Scarlett Johansson. However, under mounting pressure and in the midst of a growing controversy, OpenAI eventually agreed to pause use of the ‘Sky’ voice.
The Broader Implications
Johansson’s case highlights the challenges and ethical dilemmas that arise with the advancement of AI technology. As deepfake technology and voice replication become more sophisticated, the protection of individual rights and personal identity becomes crucial. The unauthorized use of someone’s voice raises concerns about consent, privacy, and the potential for misuse.
Seeking Clarity and Legislation
In her statement, Johansson called for transparency and appropriate legislation to safeguard individual rights in the face of deepfake technology. She emphasized the importance of protecting one’s likeness, work, and identity in this AI era.
Conclusion
The incident with Scarlett Johansson and OpenAI raises important questions about the use of AI-generated voices and its limits. As AI technology continues to advance, it is crucial to establish ethical guidelines to protect individuals’ rights and privacy. Clear consent and transparency are essential when deploying AI voices that resemble real individuals.