Digital Product Studio

OpenAI Removes Top AI Safety Leader Aleksander Madry From His Role

OpenAI is one of the leading AI research companies working on developing safe and beneficial AGI. However, recent changes in its safety leadership have raised concerns. In a surprising move, OpenAI has removed Aleksander Madry, a prominent figure in AI safety, from his safety leadership role.

Madry Removed From His Role at OpenAI

On July 23, 2024, CNBC reported that OpenAI had removed Aleksander Madry from his role as head of its preparedness team. According to sources, Madry was one of OpenAI's top safety executives, responsible for assessing the catastrophic risks posed by frontier AI models.

Madry Assigned to New AI Reasoning Project

Madry has now been reassigned to a new research project focused on AI reasoning. In this role, he will reportedly continue working on core AI safety challenges.

However, sources suggest he will take on broader responsibilities within OpenAI's research division following the reassignment. It is unclear whether the move will affect Madry's position at MIT, where he is currently on leave.

Details about the new project remain sparse, and it is unclear how it will compare in scope to his previous safety-focused role.

Leadership Transition

With Madry's reassignment, OpenAI executives Joaquin Quinonero Candela and Lilian Weng will lead the preparedness team on an interim basis. Both are experienced researchers, but this is a significant leadership change for such a critical function.

Questions Raised About OpenAI’s Safety Practices

The decision to remove Madry from his previous role came less than a week after US senators sent a letter to OpenAI's CEO questioning the company's safety practices. The lawmakers sought information on the steps OpenAI has taken to ensure the safety of its advanced AI systems.

Ongoing Concerns Around OpenAI’s Commitment to Safety

Madry's removal comes amid growing concerns about a lack of oversight in the AI industry. Some experts believe companies may not voluntarily share critical safety information.

Recent departures of other key staff have also raised questions about whether OpenAI is prioritizing products over safety. This latest development adds to the scrutiny of the company's safety practices and priorities.

Faizan Ali Naqvi

Research is my hobby and I love to learn new skills. I make sure that every piece of content that you read on this blog is easy to understand and fact checked!
