OpenAI Deep Research, Cloned in 12 Hours? Meet Open Deep Research

In just 12 hours, a team of developers accomplished what many would consider impossible: they replicated OpenAI’s Deep Research. Enter Open Deep Research, an open source tool that extracts, searches, and reasons through vast amounts of web data. Built on powerful tools like Firecrawl Extract and Next.js, Open Deep Research yields revolutionary results in record time.

The idea behind Open Deep Research was simple yet ambitious: clone and enhance the capabilities of OpenAI’s new Deep Research. The project brings together an AI agent that reasons over large datasets, seamless Firecrawl Extract integration with real-time data feeds, and an architecture that supports scalability and dynamic user interfaces. The result is a platform that replicates deep research with AI-driven data analysis.

In the sections that follow, we’ll take a closer look at the journey and the technology behind this rapid development. We’ll explore how leveraging open source can drastically cut down development time while still delivering robust, enterprise-grade functionality.

The 12-Hour Miracle: Replicating Deep Research

Creating something as complex as a deep research tool in just half a day might sound like a tall tale. Yet, with the right mindset and technological support, the developers behind Open Deep Research proved that speed does not always come at the cost of quality. The project was born from a challenge: a test to see if it was possible to replicate deep research capabilities quickly without reinventing the wheel.

The Challenge and the Approach

The core challenge was to build an AI system capable of sifting through extensive web data and deriving meaningful insights in real time. Instead of starting from scratch, the team wisely chose to build upon proven, open source components. By harnessing Next.js for advanced routing and React Server Components for server-side rendering, they laid a solid foundation that could handle both performance and scalability.

Image caption: Open Deep Research, an AI research tool for deep research, powered by Firecrawl Extract and a Next.js AI chatbot for accessible insights.

Moreover, the team integrated Firecrawl Extract, a tool that extracts structured data from multiple websites. This component is essential for feeding real-time data into the AI, enabling it to perform nuanced reasoning tasks on a large scale. When combined with an AI Agent Reasoning model, this system could intelligently process and analyze data, mimicking the depth and breadth of research traditionally associated with more extended development cycles.

The Role of Open Source

An integral part of this rapid development process was the open source nature of the project. Open source deep research projects thrive on collaboration, community input, and the collective expertise of developers worldwide. By releasing Open Deep Research under an open source license, the team not only showcased their technical prowess but also invited others to improve, modify, and expand upon their work. This collaborative spirit is what fuels innovation in the tech world today.

Leveraging Cutting-Edge Tools and Technologies

A project of this magnitude cannot succeed without the support of modern, reliable tools. Let’s break down some of the key components that made Open Deep Research possible.

Firecrawl Extract: The Data Engine

At the heart of Open Deep Research lies Firecrawl Extract. This tool is responsible for scouring the web, extracting structured data, and presenting it in a format that the AI can easily process. Imagine having a digital detective that scours countless websites, pulling out only the most relevant details for further analysis. Firecrawl Extract does just that, ensuring that the AI receives high-quality, real-time data to work its reasoning magic.

Its integration into the project highlights the power of using dedicated tools like Firecrawl Extract for specific tasks. Instead of writing custom code to handle data extraction from scratch, the team leveraged Firecrawl Extract’s capabilities to save time and reduce potential errors, a smart move when working against a strict 12-hour deadline.
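To make the integration concrete, here is a minimal sketch of structured extraction, assuming the @mendable/firecrawl-js SDK and its extract endpoint; method names and option shapes vary across SDK versions, so treat it as illustrative rather than the project’s exact code.

```typescript
// A minimal sketch of structured extraction, assuming the @mendable/firecrawl-js
// SDK and its extract endpoint. Method names and option shapes vary across SDK
// versions, so treat this as illustrative rather than the project's exact code.
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY! });

export async function extractFindings(urls: string[]) {
  // Ask for structured fields instead of raw HTML; the schema keeps the
  // output predictable enough for the AI agent to reason over downstream.
  return app.extract(urls, {
    prompt: "Extract the title, author, and key claims from each page.",
    schema: {
      type: "object",
      properties: {
        title: { type: "string" },
        author: { type: "string" },
        keyClaims: { type: "array", items: { type: "string" } },
      },
      required: ["title", "keyClaims"],
    },
  });
}
```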

Next.js and React: The Dynamic Duo

For the user interface and routing, the project harnessed the power of Next.js with its advanced App Router, along with React Server Components. This combination provides a seamless and highly efficient user experience. Next.js handles the heavy lifting of routing and page transitions, while React Server Components ensure that content is rendered quickly and efficiently on the server side.
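As a rough illustration of that division of labor (not the project’s actual source), an async server component under the App Router can fetch and render data before any HTML reaches the client; getLatestFindings below is a hypothetical data helper.

```tsx
// app/research/page.tsx — an illustrative React Server Component under the
// App Router. getLatestFindings is a hypothetical data helper, not a real
// export of the project; the async component runs entirely on the server.
import { getLatestFindings } from "@/lib/research"; // hypothetical helper

export default async function ResearchPage() {
  const findings = await getLatestFindings(); // fetched before HTML is sent

  return (
    <main>
      <h1>Latest Research Findings</h1>
      <ul>
        {findings.map((f: { id: string; summary: string }) => (
          <li key={f.id}>{f.summary}</li>
        ))}
      </ul>
    </main>
  );
}
```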

Furthermore, the project utilized NextAuth.js for secure yet straightforward authentication, making it easy for users to interact with the platform without unnecessary complications. The focus was clearly on performance and accessibility, two key pillars that underpin modern web applications.

AI SDK and Model Flexibility

Another standout feature is the inclusion of an AI SDK that offers a unified API for generating text, structured objects, and tool calls with various large language models (LLMs). Although the project ships with OpenAI’s gpt-4o as the default, the flexibility to switch providers (including Anthropic and Cohere) means that developers can choose the model that best fits their needs. This kind of flexibility is essential in today’s fast-paced tech landscape where one size rarely fits all.

By abstracting the complexity of model integration, the AI SDK allows developers to focus on refining their research and user experience rather than getting bogged down in configuration details. It’s a classic example of using open source innovation to drive progress forward without compromising on quality.
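Here is a minimal sketch of that flexibility. It assumes the ai, @ai-sdk/openai, and @ai-sdk/anthropic packages from recent AI SDK releases, so exact names may vary with versions; the point is that swapping providers is a one-line change.

```typescript
// Provider-agnostic generation with the AI SDK. Package and model names
// follow recent releases of the ai, @ai-sdk/openai, and @ai-sdk/anthropic
// packages; exact APIs may differ in other versions.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

// Swapping providers is a one-line change: same call, different model handle.
const model =
  process.env.LLM_PROVIDER === "anthropic"
    ? anthropic("claude-3-5-sonnet-20241022")
    : openai("gpt-4o"); // the project's default

const { text } = await generateText({
  model,
  prompt: "Summarize the key findings from the extracted research data.",
});

console.log(text);
```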

Shadcn/ui and Tailwind CSS: Styling with Precision

User interface matters. The project makes use of shadcn/ui—a library of component primitives based on Radix UI—to deliver an accessible, well-designed, and responsive experience. Combined with Tailwind CSS, the styling is not only modern but also highly customizable. This ensures that the application looks as polished as it functions, an important consideration when presenting complex data in an understandable format.
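For a flavor of what this looks like in practice, here is a small hypothetical component that pairs a shadcn/ui primitive with Tailwind utilities; the @/components/ui import path follows shadcn/ui’s default generator layout and may differ in a given project.

```tsx
// A small hypothetical component using a shadcn/ui primitive styled with
// Tailwind utilities. The @/components/ui path follows shadcn/ui's default
// generator layout; a given project's alias may differ.
"use client";

import { Button } from "@/components/ui/button";

export function RunResearchButton({ onRun }: { onRun: () => void }) {
  return (
    <Button onClick={onRun} className="w-full rounded-xl font-medium">
      Run deep research
    </Button>
  );
}
```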

AI Agent Reasoning: How the Brain Works

At the center of Open Deep Research is an AI agent designed for deep reasoning. But how does it actually work? Let’s unpack the process in a way that’s both accessible and technically sound.

Data Ingestion and Real-Time Analysis

The AI agent begins its task by tapping into the data extracted by Firecrawl. This process isn’t static; the system continuously feeds real-time data into the model. Think of it as an ongoing conversation between the web and the AI, where fresh insights are constantly generated and refined. The agent uses advanced search and extract techniques to ensure that no stone is left unturned.

Reasoning at Scale

Once the data is in, the AI agent employs its reasoning model—a combination of machine learning algorithms designed to mimic human-like analysis. It parses the structured data, identifies key patterns, and draws logical inferences, much like how a seasoned researcher would. This process is powered by the AI SDK, which streamlines communication between the model and the data sources.
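Conceptually, the agent’s loop looks something like the sketch below. Every helper in it (searchWeb, extractStructured, synthesize) is a hypothetical stand-in for the Firecrawl and LLM calls described above, not code from the repository.

```typescript
// A schematic search → extract → reason loop. All three helpers are
// hypothetical stand-ins (declared, not implemented) for Firecrawl's
// search/extract calls and an LLM reasoning pass.
declare function searchWeb(query: string): Promise<string[]>;
declare function extractStructured(urls: string[]): Promise<unknown[]>;
declare function synthesize(
  question: string,
  findings: string[],
  data: unknown[],
): Promise<{ newFindings: string[]; isComplete: boolean; followUpQuery: string }>;

export async function deepResearch(question: string, maxRounds = 3): Promise<string[]> {
  let findings: string[] = [];
  let query = question;

  for (let round = 0; round < maxRounds; round++) {
    const urls = await searchWeb(query);                // find candidate sources
    const data = await extractStructured(urls);         // pull structured facts from pages
    const step = await synthesize(question, findings, data); // LLM reasoning pass

    findings = findings.concat(step.newFindings);
    if (step.isComplete) break;                         // stop once the model is satisfied
    query = step.followUpQuery;                         // otherwise refine and search again
  }
  return findings;
}
```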

The system replicates deep research while expanding AI’s ability to interpret vast amounts of data. Its rapid, sophisticated reasoning demonstrates the power of integrating top-tier tools.

Integrating AI Reasoning with User Interaction

The true genius of Open Deep Research lies in its ability to present complex research findings in an easily digestible format. Users interact with the system through a sleek, intuitive interface, often referred to as the Next.js AI Chatbot, which leverages the same reasoning processes to provide answers, insights, and even visualizations based on the underlying data.

The chatbot component is designed to mimic natural conversation. It guides users through the research process, asking clarifying questions when needed and adapting its responses based on the user’s input. This dynamic interaction helps bridge the gap between raw data and actionable insights, making deep research accessible to a broader audience.
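One plausible way to wire such a chatbot, sketched here under the assumption that it follows the usual AI SDK streaming pattern, is a route handler that streams model output back to the chat UI; helper names follow recent AI SDK releases and differ in older ones.

```typescript
// app/api/chat/route.ts — a plausible streaming chat endpoint built on the
// AI SDK. streamText / toDataStreamResponse follow recent AI SDK releases
// (older versions name these helpers differently); the system prompt is
// illustrative, not the project's actual one.
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system:
      "You are a research assistant. Ask clarifying questions when a request is ambiguous.",
    messages,
  });

  // Streams tokens to the client-side chat UI as they are generated.
  return result.toDataStreamResponse();
}
```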

Deploying and Running Locally: A Developer’s Guide

For developers eager to experiment with Open Deep Research, deploying and running the project locally is a breeze. The process has been streamlined to encourage experimentation and customization, reflecting the core ethos of open source development.

Step-by-Step Setup

  1. Environment Setup:
    Begin by setting up your environment using the provided .env.example file. This file contains all the necessary environment variables to get started. It’s essential not to commit your .env file to version control to protect your secrets and access credentials.
  2. Dependency Installation:
    Use a package manager like pnpm to install all project dependencies with the command pnpm install. This step ensures that every library and tool is correctly set up for your development environment.
  3. Database Migrations:
    The project requires some initial database setup. Run pnpm db:migrate to execute the necessary migrations. This ensures that the data persistence layer, powered by Vercel Postgres and Vercel Blob, is properly configured.
  4. Local Development:
    Finally, start your development server with pnpm dev. Once the server is running, your instance of the Next.js AI Chatbot—and by extension, the full Open Deep Research platform—will be accessible on localhost:3000. The full command sequence is consolidated in the sketch below.
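Put together, a local session might look like the following; the pnpm commands come straight from the steps above, while the cp line is an assumption about how the example env file is used (check the repository’s README for the exact file name).

```bash
# The pnpm commands come straight from the steps above; the cp line is an
# assumption about how the example env file is used (check the repo's README).
cp .env.example .env   # create your local env file; never commit it
pnpm install           # install all project dependencies
pnpm db:migrate        # run the database migrations
pnpm dev               # start the dev server on http://localhost:3000
```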

Deployment with Vercel

For those looking to push their project live, deploying to Vercel is as simple as a single click. Vercel’s platform provides a seamless integration with GitHub, ensuring that your deployment process is both secure and efficient. Additionally, Vercel’s support for environment variables means that your deployment is both flexible and secure, catering to the needs of professional developers and hobbyists alike.

This ease of deployment is not only convenient but also exemplifies the modern trend of “deploy and iterate.” With a robust cloud platform handling much of the heavy lifting, developers can focus on innovation and experimentation rather than getting bogged down by infrastructure challenges.

The Broader Impact: Shaping the Future of AI Research

While the rapid development of Open Deep Research is impressive on its own, the broader implications of such projects are even more exciting. Here’s why this development matters and what it could mean for the future of AI research.

Democratizing Deep Research

By releasing Open Deep Research as an open source project, the creators have democratized access to deep research tools. No longer is such technology confined to large corporations or well-funded labs. Now, enthusiasts, academics, and startups can experiment with advanced AI research capabilities without prohibitive costs. This democratization fosters a collaborative environment where innovation can thrive and new ideas are constantly tested against real-world challenges.

Bridging the Gap Between Research and Application

The integration of an AI Agent Reasoning model with a user-friendly Next.js AI Chatbot illustrates how research can directly inform practical applications. Instead of a static research paper or a slow-to-update system, Open Deep Research provides real-time insights in an interactive format. This seamless transition from theory to application is crucial in a world where timely insights can lead to significant competitive advantages.

Inspiring the Next Generation of Developers

Perhaps one of the most inspiring aspects of Open Deep Research is the example it sets for aspiring developers. It shows that with determination and the right tools, it’s possible to achieve remarkable feats under tight deadlines. This project not only demonstrates technical excellence but also encourages a culture of sharing and innovation. In my opinion, it will undoubtedly inspire future projects in the AI space.

Real-World Applications and Future Prospects

With such a powerful toolkit at their disposal, developers and researchers can envision a wide range of applications for Open Deep Research. Here are a few possibilities:

  • Market Analysis and Trend Forecasting:
    Imagine a tool that continuously monitors financial news, social media, and market trends, then turns what it finds into insights for investors. The combination of Firecrawl Extract and AI Agent Reasoning could highlight the signals that matter most.
  • Academic Research and Data Synthesis:
    In fields where synthesizing information from multiple sources is crucial, such as academic research, Open Deep Research can be a game changer. Researchers can deploy the system to automatically aggregate data, identify correlations, and generate summaries of existing literature, accelerating the pace of discovery.
  • Customer Insights and Feedback Analysis:
    Businesses are increasingly turning to AI to understand customer sentiment and behavior. Open Deep Research can help companies refine their products and services based on real-time consumer feedback.
  • Automated Reporting Tools:
    Industries that rely on timely reports, such as news media, market research, and regulatory compliance, stand to benefit the most. An automated system that can produce comprehensive, data-backed reports in real time could be invaluable.

The flexibility inherent in this open source project means that it can be adapted to a wide array of industries and applications. As more developers contribute to its evolution, the possibilities for future enhancements and integrations are virtually limitless.

Final Thoughts

The journey of Open Deep Research teaches us an important lesson: with the right tools, like Firecrawl Extract, and a collaborative mindset, the speed of innovation is limited only by our imagination. It’s a testament to the progress we can achieve when we break free from traditional constraints.

If you’re intrigued by what you’ve read, why not dive into the demo or even deploy your own version? Whether you’re looking to enhance your research capabilities or simply explore the latest in AI development, Open Deep Research is here to inspire and empower. The future of AI research is about smarter, more accessible tools that enable everyone to participate in the digital revolution.

So, take a moment to explore, experiment, and share your own ideas. After all, innovation is a journey best taken together.


AI-Generated Book Scandal: Chicago Sun-Times Caught Publishing Fakes

Here are four key takeaways from the article:

  1. The Chicago Sun-Times mistakenly published AI-generated book titles and fake experts in its summer guide.
  2. Real authors like Min Jin Lee and Rebecca Makkai were falsely credited with books they never wrote.
  3. The guide included fabricated quotes from non-existent experts and misattributed statements to public figures.
  4. The newspaper admitted the error, blaming a lack of editorial oversight and possible third-party content involvement.

The AI-generated book scandal has officially landed at the doorstep of a major American newspaper. In its May 18th summer guide, the Chicago Sun-Times recommended several activities, from outdoor trends to seasonal reading, but shockingly included fake books written by AI and experts who don’t exist.

Fake Books, Real Authors: What Went Wrong?

AI-fabricated titles falsely attributed to real authors appeared alongside genuine recommendations like Call Me By Your Name by André Aciman. Readers were shocked to find fictional novels such as:

  • “Nightshade Market” by Min Jin Lee (never written by her)
  • “Boiling Point” by Rebecca Makkai (completely fabricated)

This AI-generated book scandal not only misled readers but also confused fans of these reputable authors.

Experts Who Don’t Exist: The AI Hallucination Deepens

The paper’s guide didn’t just promote fake books. Articles also quoted nonexistent experts:

  • “Dr. Jennifer Campos, University of Colorado” – No such academic found.
  • “Dr. Catherine Furst, Cornell University” – A food anthropologist who doesn’t exist.
  • “2023 report by Eagles Nest Outfitters” – Nowhere to be found online.

Even quotes attributed to Padma Lakshmi appear to be made up.

Blame Game Begins: Was This Sponsored AI Content?

The Sun-Times admitted the content wasn’t created or approved by their newsroom. Victor Lim, their senior director, called it “unacceptable.” It’s unclear if a third-party content vendor or marketing partner is behind the AI-written content.

We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon.

Chicago Sun-Times (@chicago.suntimes.com), May 20, 2025

Journalist Admits Using AI, Says He Didn’t Double-Check

Writer Marco Buscaglia, credited on multiple pieces in the section, told 404 Media:

“This time, I did not [fact-check], and I can’t believe I missed it. No excuses.”

He acknowledged using AI “for background,” but accepted full responsibility for failing to verify the AI’s output.

AI Journalism Scandals Are Spreading Fast

This isn’t an isolated case. Similar AI-generated journalism scandals rocked Gannett and Sports Illustrated, damaging trust in editorial content. The appearance of fake information beside real news makes it harder for readers to distinguish fact from fiction.

Conclusion: Newsrooms Must Wake Up to the Risks

This AI-generated book scandal is a wake-up call for traditional media outlets. Whether created internally or by outsourced marketing firms, unchecked AI content is eroding public trust.

Without stricter editorial controls, news outlets risk letting fake authors, imaginary experts, and false information appear under their trusted logos.


Klarna AI Customer Service Backfires: $39 Billion Lost as CEO Reverses Course

Here are four key takeaways from the article:

  1. Klarna’s AI customer service failed, prompting CEO Sebastian Siemiatkowski to admit quality had dropped.
  2. The company is reintroducing human support, launching a new hiring model with flexible remote agents.
  3. Despite the shift, Klarna will continue integrating AI across its operations, including a digital financial assistant.
  4. Klarna’s valuation plunged from $45.6B to $6.7B, partly due to over-reliance on automation and market volatility.

Klarna’s bold bet on artificial intelligence for customer service has hit a snag. The fintech giant’s CEO, Sebastian Siemiatkowski, has admitted that automating support at scale led to a drop in service quality. Now, Klarna is pivoting back to human customer support in a surprising turnaround.

“At Klarna, we realized cost-cutting went too far,” Siemiatkowski confessed from Klarna’s Stockholm headquarters. “When cost becomes the main factor, quality suffers. Investing in human support is the future.”

Human Touch Makes a Comeback

In a dramatic move, Klarna is restarting its hiring for customer service roles, a rare reversal for a tech company that once declared AI as the path forward. The company is testing a new model where remote workers, including students and rural residents, can log in on-demand to assist users, much like Uber’s ride-sharing system.

“We know many of our customers are passionate about Klarna,” the CEO said. “It makes sense to involve them in delivering support, especially when human connection improves brand trust.”

Klarna Still Backs AI, Just Not for Everything

Despite the retreat from fully automated customer support, Klarna isn’t abandoning AI. The company is rebuilding its tech stack with AI at the core. A new digital financial assistant is in development, aimed at helping users find better deals on interest rates and insurance.

Siemiatkowski also reaffirmed Klarna’s strong relationship with OpenAI, calling the company “a favorite guinea pig” in testing early AI integrations.

In June 2021, Klarna reached a peak valuation of $45.6 billion. However, by July 2022, its valuation had plummeted to $6.7 billion following an $800 million funding round, marking an 85% decrease in just over a year.

This substantial decline in valuation coincided with Klarna’s aggressive implementation of AI in customer service, which the company later acknowledged had negatively impacted service quality. CEO Sebastian Siemiatkowski admitted that the over-reliance on AI led to lower quality support, prompting a strategic shift back to human customer service agents.

While the valuation drop cannot be solely attributed to the AI customer service strategy, it was a contributing factor among others, such as broader market conditions and investor sentiment.

AI Replaces 700 Jobs, But It Wasn’t Enough

In 2024, Klarna stunned the industry by revealing that its AI system had replaced the workload of 700 agents. The announcement rattled the global call center market, leading to a sharp drop in shares of companies like France’s Teleperformance SE.

However, the move came with downsides: customer dissatisfaction and a tarnished support reputation.

Workforce to Shrink, But Humans Are Back

Although Klarna is rehiring, the total workforce will still shrink from 3,000 to about 2,500 employees over the next year. Attrition and AI efficiency will continue to streamline operations.

“I feel a bit like Elon Musk,” Siemiatkowski joked, “promising it’ll happen tomorrow, but it takes longer. That’s AI for you.”


Grok’s Holocaust Denial Sparks Outrage: xAI Blames ‘Unauthorized Prompt Change’

Here are four key takeaways from the article:

  1. Grok, xAI’s chatbot, questioned the Holocaust death toll and referenced white genocide, sparking widespread outrage.
  2. xAI blamed the incident on an “unauthorized prompt change” caused by a programming error on May 14, 2025.
  3. Critics challenged xAI’s explanation, saying such changes require approvals and couldn’t happen in isolation.
  4. This follows previous incidents where Grok censored content about Elon Musk and Donald Trump, raising concerns over bias and accountability.

Grok is an AI chatbot developed by Elon Musk’s company xAI. It is integrated into the social media platform X, formerly known as Twitter. This week, Grok sparked a wave of public outrage after producing responses that included Holocaust denial and promoted white genocide conspiracy theories. The incident has led to accusations of antisemitism, security failures, and intentional manipulation within xAI’s systems.

Rolling Stone Reveals Grok’s Holocaust Response

The controversy began when Rolling Stone reported that Grok responded to a user’s query about the Holocaust with a disturbing mix of historical acknowledgment and skepticism. While the AI initially stated that “around 6 million Jews were murdered by Nazi Germany from 1941 to 1945,” it quickly cast doubt on the figure, saying it was “skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives.”

This type of response directly contradicts the U.S. Department of State’s definition of Holocaust denial, which includes minimizing the death toll in defiance of credible sources. Historians and human rights organizations have long condemned this kind of rhetoric, which, despite its neutral tone, follows classic Holocaust revisionism tactics.

Grok Blames Error on “Unauthorized Prompt Change”

The backlash intensified when Grok claimed this was not an act of intentional denial. In a follow-up post on Friday, the chatbot addressed the controversy. It blamed the issue on “a May 14, 2025, programming error.” Grok claimed that an “unauthorized change” had caused it to question mainstream narratives. These included the Holocaust’s well-documented death toll.

White Genocide Conspiracy Adds to Backlash

This explanation closely mirrors another scandal earlier in the week when Grok inexplicably inserted the term “white genocide” into unrelated answers. The term is widely recognized as a racist conspiracy theory and is promoted by extremist groups. Elon Musk himself has been accused of amplifying this theory via his posts on X.

xAI Promises Transparency and Security Measures

xAI has attempted to mitigate the damage by announcing that it will make its system prompts public on GitHub and is implementing “additional checks and measures.” However, not everyone is buying the rogue-actor excuse.

TechCrunch Reader Questions xAI’s Explanation

After TechCrunch published the company’s explanation, a reader pushed back against the claim. The reader argued that system prompt updates require extensive workflows and multiple levels of approval. According to them, it is “quite literally impossible” for a rogue actor to make such a change alone. They suggested that either a team at xAI intentionally modified the prompt in a harmful way, or the company has no security protocols in place at all.

Grok Has History of Biased Censorship

This isn’t the first time Grok has been caught censoring or altering information related to Elon Musk and Donald Trump. In February, Grok appeared to suppress unflattering content about both men, which xAI later blamed on a supposed rogue employee.

Public Trust in AI Erodes Amid Scandal

As of now, xAI maintains that Grok “now aligns with historical consensus,” but the incident has triggered renewed scrutiny of the safety, accountability, and ideological biases baked into generative AI models, especially those connected to polarizing figures like Elon Musk.

Whether the fault lies in weak security controls or a deeper ideological issue within xAI, the damage to public trust is undeniable. Grok’s mishandling of historical fact and its flirtation with white nationalist rhetoric have brought to light the urgent need for transparent and responsible AI governance.
