In December 2024, OpenAI embarked on its ’12 Days of OpenAI’ event to showcase a wide array of new features, products, and advancements across its AI offerings. The event began on December 5, 2024, and concluded on December 20, 2024. From the launch of the powerful o1 reasoning model to the highly anticipated text-to-video generator Sora, OpenAI packed quite a punch in the span of just 12 days. While wrapping up, it also unveiled the upcoming release of the o3 and o3 mini models. Let’s take a look through each day’s major announcements.
Table of contents
- 12 Days of OpenAI
- Day 1: ChatGPT Pro, o1, and o1 Pro Mode
- Day 2: Reinforcement Fine-Tuning
- Day 3: Sora Launch
- Day 4: Canvas
- Day 5: Apple Intelligence
- Day 6: Santa Mode & Video in Advanced Voice
- Day 7: Projects in ChatGPT
- Day 8: Search in ChatGPT
- Day 9: OpenAI o1 and new tools for developers
- Day 10: 1-800-ChatGPT
- Day 11: Work with Apps on macOS
- Day 12: Announcement of o3 and o3 mini
12 Days of OpenAI
Day 1: ChatGPT Pro, o1, and o1 Pro Mode
OpenAI kicked off its ’12 Days of OpenAI’ event by introducing a new, more expensive subscription tier for its flagship chatbot, ChatGPT. The ChatGPT Pro plan, priced at $200 per month, offers users unlimited access to OpenAI’s smartest model, o1, as well as o1-mini, GPT-4o, and Advanced Voice. The Pro plan also includes o1 Pro Mode, a version of the o1 model that uses more computational power to “think harder” and deliver better answers to the most challenging problems.
The company also officially released the full version of its o1 model, replacing the previous o1-preview initially launched in September. The new o1 model is now available to ChatGPT Plus and Team users, with Enterprise and Edu users gaining access the following week.
Day 2: Reinforcement Fine-Tuning
On the second day of the event, OpenAI announced the expansion of its Reinforcement Fine-Tuning Research Program. This new feature allows developers and machine learning engineers to create expert models fine-tuned for specific, complex, domain-specific tasks. Using a technique called “reinforcement fine-tuning,” the models can be customized using dozens to thousands of high-quality tasks and reference answers. This enables them to reason through similar problems and improve their accuracy on those specific tasks.
While the program is currently in an alpha phase, with select participants providing feedback, OpenAI did not provide a timeline for the broader public availability of this feature.
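The “dozens to thousands of high-quality tasks and reference answers” described above are essentially a graded dataset. As a rough illustration only, such a dataset might be prepared as JSONL, the format commonly used for fine-tuning jobs. The field names and example tasks below are hypothetical; OpenAI did not publish the exact schema in this announcement.

```python
import json

# Hypothetical reinforcement fine-tuning dataset: each task pairs a
# domain-specific prompt with a reference answer a grader can score against.
# Field names ("prompt", "reference_answer") are illustrative assumptions.
tasks = [
    {
        "prompt": "Given these clinical findings, which gene is most likely implicated?",
        "reference_answer": "FBN1",
    },
    {
        "prompt": "Classify the following contract clause by type.",
        "reference_answer": "indemnification",
    },
]

# Serialize to JSONL: one JSON object per line, as fine-tuning APIs expect.
jsonl = "\n".join(json.dumps(task) for task in tasks)
print(jsonl)
```

The key difference from ordinary supervised fine-tuning is that the reference answers feed a grader that scores the model’s reasoning, rather than serving as literal completions to imitate.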
Day 3: Sora Launch
One of the most anticipated announcements of the ’12 Days of OpenAI’ event was the launch of Sora, the company’s text-to-video AI generator. Sora is now available to all ChatGPT Plus and Pro users in supported countries. It remains inaccessible to users on the free ChatGPT tier, as well as Team, Enterprise, and Edu accounts.
This model allows users to create realistic videos from text prompts, significantly advancing AI’s creative capabilities. Sora represents a major step towards AI systems that can understand and simulate reality.
Day 4: Canvas
On the fourth day of the event, OpenAI announced the general availability of Canvas, a feature that makes it easier to work with code and text generated by ChatGPT. Canvas is now available to all ChatGPT users on the web and Windows platforms, with a rollout to Mac and mobile platforms (iOS, Android, and mobile web) coming soon.
The new Canvas features include the ability to execute Python code, use Canvas within custom GPTs, and access Canvas shortcuts for quickly opening generated content. These enhancements aim to streamline the workflow for users who rely on ChatGPT for various writing and coding tasks.
Day 5: Apple Intelligence
On the fifth day of the ’12 Days of OpenAI’ event, the company announced the integration of ChatGPT with Apple Intelligence, the personal intelligence system deeply integrated into iOS, iPadOS, and macOS. This integration allows users to access ChatGPT’s expertise and capabilities directly within Apple’s ecosystem, including image and document understanding, without switching between multiple applications.
The integration is available to users with compatible devices, including the latest iPhone, iPad, and Mac models. Moreover, it requires the latest versions of the respective operating systems.
Day 6: Santa Mode & Video in Advanced Voice
On the sixth day, OpenAI announced the rollout of video and screen-sharing capabilities in the ChatGPT iOS and Android mobile apps. These features are currently available to most Pro subscribers. The company also plans to bring them to Pro subscribers in the EU and Team users in the near future.
Additionally, OpenAI introduced a new “Santa Mode” feature. It allows users to chat with a virtual Santa Claus using both the standard and Advanced Voice modes. The first-time use of the Santa Mode feature resets the user’s Advanced Voice usage limits, allowing users to experience the feature without depleting their monthly quota.
Day 7: Projects in ChatGPT
The seventh day of the event saw the introduction of ChatGPT Projects, a new feature that allows users to group files and chats for personal use, simplifying the management of work that involves multiple conversations. Projects are currently available to ChatGPT Plus, Team, and Pro users, with a rollout to Enterprise and Edu accounts planned for early next year.
Within Projects, users can set custom instructions, upload files, and access features like Canvas, Advanced Data Analysis, DALL·E, and Search, all while maintaining context across the conversations within a given project.
Day 8: Search in ChatGPT
On the eighth day of the ’12 Days of OpenAI’ event, OpenAI announced several enhancements to the ChatGPT search functionality, including faster search results and the ability to search while engaging in voice conversations. These improvements are available to all ChatGPT paid tiers, and the company is gradually enabling access to search for free-tier users as well.
The new search functionality allows users to seamlessly transition between voice and text-based interactions with ChatGPT, enabling a more natural and efficient way to find information and get answers.
Day 9: OpenAI o1 and new tools for developers
The ninth day of the event was focused on OpenAI’s offerings for developers. The company officially rolled out the o1 model in the API, supporting features like function calling, developer messages, structured outputs, and vision capabilities. In addition to the o1 model, OpenAI introduced a range of new tools and upgrades for developers. These include Realtime API updates, Preference Fine-Tuning, and new Go and Java SDKs. These enhancements aim to improve the performance, flexibility, and cost-efficiency of building AI-powered applications and services.
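To make the function calling support concrete, here is a minimal sketch of how a tool definition might be passed to the o1 model through the Chat Completions API. The `get_weather` function, its parameters, and the message contents are illustrative assumptions, not examples from OpenAI’s announcement; actually sending the request would require the `openai` SDK and an API key.

```python
# Hedged sketch: a function-calling request for the o1 model in the API.
# The tool name and schema below are hypothetical placeholders.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The request body that would be sent to the Chat Completions endpoint.
# With the openai SDK configured, it could be dispatched as, e.g.:
#   client.chat.completions.create(**request)
request = {
    "model": "o1",
    "messages": [
        {"role": "developer", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [get_weather_tool],
}

print(request["tools"][0]["function"]["name"])
```

Note the “developer” role in the messages: developer messages are one of the newly supported o1 API features mentioned above, replacing system messages for this model family.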
Day 10: 1-800-ChatGPT
For the tenth day of the event, OpenAI unveiled an experimental new feature: the ability to access ChatGPT via a toll-free phone number (1-800-CHATGPT) or through WhatsApp messaging. This initiative is designed to enable wider access to the ChatGPT assistant without the need for a dedicated account.
Users can now call the 1-800-CHATGPT number for up to 15 minutes of free conversation per month, or message the same number on WhatsApp. This feature is currently available in the United States and select other countries with WhatsApp support.
Day 11: Work with Apps on macOS
The eleventh day of the ’12 Days of OpenAI’ event focused on integrating ChatGPT more deeply with desktop applications, particularly on the macOS platform. The new features include working with various apps, such as Apple Notes, Notion, Quip, and Warp, while using the Advanced Voice Mode.
This integration allows users to leverage ChatGPT’s capabilities directly within their workflow, whether it’s for live debugging in terminals, thinking through documents, or getting feedback on presentation materials. Additionally, the update introduced a new search functionality that enables users to search through their previous conversations using keywords and phrases.
Day 12: Announcement of o3 and o3 mini
The final day of the ’12 Days of OpenAI’ event culminated in the announcement of two new models: o3 and o3 mini. These models are designed to excel in reasoning tasks and are expected to outperform existing models. The o3 and o3 mini models will be available to the public in January 2025, with o3 mini launching first and o3 following shortly after.
However, before the official rollout of these models, OpenAI is inviting safety researchers to apply for early access to help with the rigorous safety testing process. This early access program complements the company’s existing safety testing protocols, which include internal testing, external red teaming, and collaborations with third-party organizations.
The Bottom Line
The ’12 Days of OpenAI’ event was a whirlwind of new product launches, feature enhancements, and upcoming models. With the introduction of advanced models like o1 and Sora, innovative features, and practical integrations with platforms like Apple, OpenAI is positioning itself at the forefront of AI technology. By focusing on user needs and enhancing functionality, OpenAI will pave the way for a more integrated and intelligent future.