
Digital Product Studio

OpenAI 12 Days Recap, From ChatGPT Pro to the Upcoming o3 Model


In December 2024, OpenAI embarked on its ’12 Days of OpenAI’ event to showcase a wide array of new features, products, and advancements across its AI offerings. The event began on December 5, 2024, and concluded on December 20, 2024. From the launch of the powerful o1 reasoning model to the highly anticipated text-to-video generator Sora, OpenAI packed quite a punch in the span of just 12 days. To wrap up, it also unveiled the upcoming o3 and o3 mini models. Let’s take a look at each day’s major announcements.

12 Days of OpenAI

Day 1: ChatGPT Pro, o1, and o1 Pro Mode

OpenAI kicked off its ’12 Days of OpenAI’ event by introducing a new, more expensive subscription tier for its flagship chatbot, ChatGPT. The ChatGPT Pro plan, priced at $200 per month, offers unlimited access to OpenAI’s smartest model, o1, as well as o1-mini, GPT-4o, and Advanced Voice. The Pro plan also includes o1 Pro Mode, a version of o1 that uses more computational power to “think harder” and provide even better answers to the most challenging problems.

The company also officially released the full version of its o1 model, replacing the previous o1-preview initially launched in September. The new o1 model is now available to ChatGPT Plus and Team users, with Enterprise and Edu users gaining access the following week.

Day 2: Reinforcement Fine-Tuning

On the second day of the event, OpenAI announced the expansion of its Reinforcement Fine-Tuning Research Program. This new feature allows developers and machine learning engineers to create expert models fine-tuned for specific, complex, domain-specific tasks. Using a technique called “reinforcement fine-tuning,” the models can be customized using dozens to thousands of high-quality tasks and reference answers. This enables them to reason through similar problems and improve their accuracy on those specific tasks.

While the program is currently in an alpha phase, with select participants providing feedback, OpenAI did not provide a timeline for the broader public availability of this feature.

Day 3: Sora Launch

One of the most anticipated announcements during the ’12 Days of OpenAI’ event was the launch of Sora, OpenAI’s text-to-video AI generator. Sora is now available to all ChatGPT Plus and Pro users in supported countries. It remains inaccessible to users on the free ChatGPT tier, as well as Team, Enterprise, and Edu accounts.

This model allows users to create realistic videos from text prompts, significantly advancing AI’s creative capabilities. Sora represents a major step towards AI systems that can understand and simulate reality.

Day 4: Canvas

On the fourth day of the event, OpenAI announced the general availability of Canvas, a feature that makes it easier to work with code and text generated by ChatGPT. Canvas is now available to all ChatGPT users on the web and Windows platforms, with a rollout to Mac and mobile platforms (iOS, Android, and mobile web) coming soon.

The new Canvas features include the ability to execute Python code, use Canvas within custom GPTs, and access Canvas shortcuts for quickly opening generated content. These enhancements aim to streamline the workflow for users who rely on ChatGPT for various writing and coding tasks.

Day 5: Apple Intelligence

On the fifth day of the ’12 Days of OpenAI’ event, OpenAI announced the integration of ChatGPT with Apple Intelligence, the personal intelligence system deeply integrated into iOS, iPadOS, and macOS. This integration lets users access ChatGPT’s capabilities, including image and document understanding, directly within Apple’s ecosystem without switching between multiple applications.

The integration is available to users with compatible devices, including the latest iPhone, iPad, and Mac models. Moreover, it requires the latest versions of the respective operating systems.

Day 6: Santa Mode & Video in Advanced Voice

On the sixth day, OpenAI announced the rollout of video and screen-sharing capabilities in the ChatGPT iOS and Android mobile apps. These features are currently available to most Pro subscribers. The company also plans to bring them to Pro subscribers in the EU and Team users in the near future.

Additionally, OpenAI introduced a new “Santa Mode” feature that lets users chat with a virtual Santa Claus in both the standard and Advanced Voice modes. The first time users try Santa Mode, their Advanced Voice usage limit is reset, so they can experience the feature without depleting their monthly quota.

Day 7: Projects in ChatGPT

The seventh day of the event saw the introduction of ChatGPT Projects. It is a new feature that allows users to group files and chats for personal use, simplifying the management of work that involves multiple conversations. Projects are currently available to ChatGPT Plus, Team, and Pro users, with a rollout to Enterprise and Edu accounts planned for early next year.

Within Projects, users can set custom instructions, upload files, and access features like Canvas, Advanced Data Analysis, DALL·E, and Search, all while maintaining context across the conversations within a given project.

Day 8: Search in ChatGPT

On the eighth day of the ’12 Days of OpenAI’ event, OpenAI announced several enhancements to ChatGPT’s search functionality, including faster search results and the ability to search while engaging in voice conversations. These improvements are available across all ChatGPT paid tiers, and the company is gradually enabling search for free-tier users as well.

The new search functionality allows users to seamlessly transition between voice and text-based interactions with ChatGPT, enabling a more natural and efficient way to find information and get answers.

Day 9: OpenAI o1 and New Tools for Developers

The ninth day of the event was focused on OpenAI’s offerings for developers. The company officially rolled out the o1 model in the API, supporting features like function calling, developer messages, structured outputs, and vision capabilities. In addition to the o1 model, OpenAI introduced a range of new tools and upgrades for developers. These include Realtime API updates, Preference Fine-Tuning, and new Go and Java SDKs. These enhancements aim to improve the performance, flexibility, and cost-efficiency of building AI-powered applications and services.
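To make the developer-facing features concrete, here is a minimal sketch of the request shapes involved in function calling, developer messages, and structured outputs, following the conventions of the official `openai` Python client. The `get_weather` tool and the JSON schema are hypothetical examples, and the actual API call (shown commented out) requires an API key and network access.

```python
# A "tool" definition for function calling: the model can ask the
# application to invoke this function with structured arguments.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical app-side function
        "description": "Look up the current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# For o-series reasoning models, a "developer" message takes the place
# of the traditional system message.
messages = [
    {"role": "developer", "content": "Answer concisely."},
    {"role": "user", "content": "What's the weather in Paris?"},
]

# Structured outputs: constrain the model's reply to a JSON schema.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": False,
        },
    },
}

# The call itself would look like this (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="o1",
#     messages=messages,
#     tools=[weather_tool],
#     response_format=response_format,
# )
```

The payload dictionaries above are plain JSON-compatible structures, so they can be built and validated in application code before any request is sent.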

Day 10: 1-800-ChatGPT

For the tenth day of the event, OpenAI unveiled an experimental new feature – the ability to access ChatGPT via a toll-free phone number (1-800-CHATGPT) or through WhatsApp messaging. This initiative is designed to enable wider access to the ChatGPT assistant without the need for a dedicated account.

Users can now call the 1-800-CHATGPT number or message the WhatsApp number for up to 15 minutes of free conversation per month. This feature is currently available in the United States and select other countries with WhatsApp support.

Day 11: Work with Apps on macOS

The eleventh day of the ’12 Days of OpenAI’ event focused on integrating ChatGPT more deeply with desktop applications, particularly on macOS. The new features include working with various apps, such as Apple Notes, Notion, Quip, and Warp, while using Advanced Voice Mode.

This integration allows users to leverage ChatGPT’s capabilities directly within their workflow, whether it’s for live debugging in terminals, thinking through documents, or getting feedback on presentation materials. Additionally, the update introduced a new search functionality. This enables users to search through their previous conversations using keywords and phrases.

Day 12: Announcement of o3 and o3 mini

The final day of the ’12 Days of OpenAI’ event culminated in the announcement of two new models: o3 and o3 mini. These models are designed to excel in reasoning tasks and are expected to outperform existing models. Both will become available to the public in January 2025, with o3 mini launching first and o3 following shortly after.

However, before the official rollout of these models, OpenAI is inviting safety researchers to apply for early access to help with the rigorous safety testing process. This early access program complements the company’s existing safety testing protocols, which include internal testing, external red teaming, and collaborations with third-party organizations.

The Bottom Line

The ’12 Days of OpenAI’ event was a whirlwind of new product launches, feature enhancements, and upcoming models. With the introduction of advanced models like o1 and Sora, innovative features, and practical integrations with platforms like Apple, OpenAI is positioning itself at the forefront of AI technology. By focusing on user needs and enhancing functionality, OpenAI will pave the way for a more integrated and intelligent future.

Faizan Ali Naqvi

Research is my hobby and I love to learn new skills. I make sure that every piece of content that you read on this blog is easy to understand and fact checked!


AI-Generated Book Scandal: Chicago Sun-Times Caught Publishing Fakes


Here are four key takeaways from the article:

  1. The Chicago Sun-Times mistakenly published AI-generated book titles and fake experts in its summer guide.
  2. Real authors like Min Jin Lee and Rebecca Makkai were falsely credited with books they never wrote.
  3. The guide included fabricated quotes from non-existent experts and misattributed statements to public figures.
  4. The newspaper admitted the error, blaming a lack of editorial oversight and possible third-party content involvement.

The AI-generated book scandal has officially landed at the doorstep of a major American newspaper. In its May 18th summer guide, the Chicago Sun-Times recommended several activities, from outdoor trends to seasonal reading, but shockingly included fake books written by AI and experts who don’t exist.

Fake Books, Real Authors: What Went Wrong?

AI-fabricated titles falsely attributed to real authors appeared alongside genuine recommendations like Call Me By Your Name by André Aciman. Readers were shocked to find fictional novels such as:

  • “Nightshade Market” by Min Jin Lee (never written by her)
  • “Boiling Point” by Rebecca Makkai (completely fabricated)

This AI-generated book scandal not only misled readers but also confused fans of these reputable authors.

Experts Who Don’t Exist: The AI Hallucination Deepens

The paper’s guide didn’t just promote fake books. Articles also quoted nonexistent experts:

  • “Dr. Jennifer Campos, University of Colorado” – No such academic found.
  • “Dr. Catherine Furst, Cornell University” – A food anthropologist who doesn’t exist.
  • “2023 report by Eagles Nest Outfitters” – Nowhere to be found online.

Even quotes attributed to Padma Lakshmi appear to be made up.

Blame Game Begins: Was This Sponsored AI Content?

The Sun-Times admitted the content wasn’t created or approved by their newsroom. Victor Lim, their senior director, called it “unacceptable.” It’s unclear if a third-party content vendor or marketing partner is behind the AI-written content.

“We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon.”

Chicago Sun-Times (@chicago.suntimes.com), May 20, 2025

Journalist Admits Using AI, Says He Didn’t Double-Check

Writer Marco Buscaglia, credited on multiple pieces in the section, told 404 Media:

“This time, I did not [fact-check], and I can’t believe I missed it. No excuses.”

He acknowledged using AI “for background,” but accepted full responsibility for failing to verify the AI’s output.

AI Journalism Scandals Are Spreading Fast

This isn’t an isolated case. Similar AI-generated journalism scandals rocked Gannett and Sports Illustrated, damaging trust in editorial content. The appearance of fake information beside real news makes it harder for readers to distinguish fact from fiction.

Conclusion: Newsrooms Must Wake Up to the Risks

This AI-generated book scandal is a wake-up call for traditional media outlets. Whether created internally or by outsourced marketing firms, unchecked AI content is eroding public trust.

Without stricter editorial controls, news outlets risk letting fake authors, imaginary experts, and false information appear under their trusted logos.


Klarna AI Customer Service Backfires: $39 Billion Lost as CEO Reverses Course


Here are four key takeaways from the article:

  1. Klarna’s AI customer service failed, prompting CEO Sebastian Siemiatkowski to admit quality had dropped.
  2. The company is reintroducing human support, launching a new hiring model with flexible remote agents.
  3. Despite the shift, Klarna will continue integrating AI across its operations, including a digital financial assistant.
  4. Klarna’s valuation plunged from $45.6B to $6.7B, partly due to over-reliance on automation and market volatility.

Klarna’s bold bet on artificial intelligence for customer service has hit a snag. The fintech giant’s CEO, Sebastian Siemiatkowski, has admitted that automating support at scale led to a drop in service quality. Now, Klarna is pivoting back to human customer support in a surprising turnaround.

“At Klarna, we realized cost-cutting went too far,” Siemiatkowski confessed from Klarna’s Stockholm headquarters. “When cost becomes the main factor, quality suffers. Investing in human support is the future.”

Human Touch Makes a Comeback

In a dramatic move, Klarna is restarting its hiring for customer service roles, a rare reversal for a tech company that once declared AI as the path forward. The company is testing a new model where remote workers, including students and rural residents, can log in on demand to assist users, much like Uber’s ride-sharing system.

“We know many of our customers are passionate about Klarna,” the CEO said. “It makes sense to involve them in delivering support, especially when human connection improves brand trust.”

Klarna Still Backs AI, Just Not for Everything

Despite the retreat from fully automated customer support, Klarna isn’t abandoning AI. The company is rebuilding its tech stack with AI at the core. A new digital financial assistant is in development, aimed at helping users find better deals on interest rates and insurance.

Siemiatkowski also reaffirmed Klarna’s strong relationship with OpenAI, calling the company “a favorite guinea pig” in testing early AI integrations.

In June 2021, Klarna reached a peak valuation of $45.6 billion. However, by July 2022, its valuation had plummeted to $6.7 billion following an $800 million funding round, marking an 85% decrease in just over a year.

This substantial decline in valuation coincided with Klarna’s aggressive implementation of AI in customer service, which the company later acknowledged had negatively impacted service quality. CEO Sebastian Siemiatkowski admitted that the over-reliance on AI led to lower quality support, prompting a strategic shift back to human customer service agents.

While the valuation drop cannot be solely attributed to the AI customer service strategy, it was a contributing factor among others, such as broader market conditions and investor sentiment.

AI Replaces 700 Jobs, But It Wasn’t Enough

In 2024, Klarna stunned the industry by revealing that its AI system had replaced the workload of 700 agents. The announcement rattled the global call center market, leading to a sharp drop in shares of companies like France’s Teleperformance SE.

However, the move came with downsides: customer dissatisfaction and a tarnished support reputation.

Workforce to Shrink, But Humans Are Back

Although Klarna is rehiring, the total workforce will still shrink from 3,000 to about 2,500 employees over the next year. Attrition and AI efficiency will continue to streamline operations.

“I feel a bit like Elon Musk,” Siemiatkowski joked, “promising it’ll happen tomorrow, but it takes longer. That’s AI for you.”


Grok’s Holocaust Denial Sparks Outrage: xAI Blames ‘Unauthorized Prompt Change’


Here are four key takeaways from the article:

  1. Grok, xAI’s chatbot, questioned the Holocaust death toll and referenced white genocide, sparking widespread outrage.
  2. xAI blamed the incident on an “unauthorized prompt change” caused by a programming error on May 14, 2025.
  3. Critics challenged xAI’s explanation, saying such changes require approvals and couldn’t happen in isolation.
  4. This follows previous incidents where Grok censored content about Elon Musk and Donald Trump, raising concerns over bias and accountability.

Grok is an AI chatbot developed by Elon Musk’s company xAI and integrated into the social media platform X, formerly known as Twitter. This week, Grok sparked a wave of public outrage after it produced responses that included Holocaust denial and promoted the white genocide conspiracy theory. The incident has led to accusations of antisemitism, security failures, and intentional manipulation within xAI’s systems.

Rolling Stone Reveals Grok’s Holocaust Response

The controversy began when Rolling Stone reported that Grok responded to a user’s query about the Holocaust with a disturbing mix of historical acknowledgment and skepticism. While the AI initially stated that “around 6 million Jews were murdered by Nazi Germany from 1941 to 1945,” it quickly cast doubt on the figure, saying it was “skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives.”

This type of response directly contradicts the U.S. Department of State’s definition of Holocaust denial, which includes minimizing the death toll in contradiction to credible sources. Historians and human rights organizations condemned the chatbot’s language, which, despite its neutral tone, follows classic Holocaust revisionism tactics.

Grok Blames Error on “Unauthorized Prompt Change”

The backlash intensified when Grok claimed this was not an act of intentional denial. In a follow-up post on Friday, the chatbot blamed the issue on “a May 14, 2025, programming error,” claiming that an “unauthorized change” had caused it to question mainstream narratives, including the Holocaust’s well-documented death toll.

White Genocide Conspiracy Adds to Backlash

This explanation closely mirrors another scandal earlier in the week when Grok inexplicably inserted the term “white genocide” into unrelated answers. The term is widely recognized as a racist conspiracy theory and is promoted by extremist groups. Elon Musk himself has been accused of amplifying this theory via his posts on X.

xAI Promises Transparency and Security Measures

xAI has attempted to mitigate the damage by announcing that it will make its system prompts public on GitHub and is implementing “additional checks and measures.” However, not everyone is buying the rogue-actor excuse.

TechCrunch Reader Questions xAI’s Explanation

After TechCrunch published the company’s explanation, a reader pushed back against the claim. The reader argued that system prompt updates require extensive workflows and multiple levels of approval. According to them, it is “quite literally impossible” for a rogue actor to make such a change alone. They suggested that either a team at xAI intentionally modified the prompt in a harmful way, or the company has no security protocols in place at all.

Grok Has History of Biased Censorship

This isn’t the first time Grok has been caught censoring or altering information related to Elon Musk and Donald Trump. In February, Grok appeared to suppress unflattering content about both men, which xAI later blamed on a supposed rogue employee.

Public Trust in AI Erodes Amid Scandal

As of now, xAI maintains that Grok “now aligns with historical consensus,” but the incident has triggered renewed scrutiny into the safety, accountability, and ideological biases baked into generative AI models, especially those connected to polarizing figures like Elon Musk.

Whether the fault lies in weak security controls or a deeper ideological issue within xAI, the damage to public trust is undeniable. Grok’s mishandling of historical fact and its flirtation with white nationalist rhetoric have brought to light the urgent need for transparent and responsible AI governance.

