Artificial intelligence offers incredible potential for customer support, promising instant answers and efficiency. However, a recent incident involving Cursor, the company behind an AI-powered code editor, highlights the significant risks when AI gets it wrong. An AI support agent invented a policy that didn't exist, leading to frustrated users and a public relations scramble.
This event serves as a critical cautionary tale for any business considering customer-facing AI. Let’s dive into what happened and what we can learn.
Table of contents
- What Went Wrong at Cursor?
- The Fallout: Confusion and Cancellations
- Cursor Steps In: Clarification and Correction
- Understanding AI Confabulations (Hallucinations)
- The Business Risks of Unguarded AI Support
- Cursor’s Response and Lessons Learned
- Key Takeaways for Businesses Using AI
- Conclusion: Proceed with Caution
What Went Wrong at Cursor?
It started with a common scenario for developers. A user of Cursor, an AI-powered code editor, noticed they were being logged out whenever they switched between their different computers (desktop, laptop, remote server). This broke a standard workflow for many programmers.
Confused, the developer reached out to Cursor support via email. They quickly received a reply from an agent named “Sam.”
Sam’s response was clear but incorrect: “Cursor is designed to work with one device per subscription as a core security feature.” The message sounded official and definitive. The user had no reason to believe Sam wasn’t a human representative stating a new, albeit frustrating, company policy.
The Fallout: Confusion and Cancellations
The user shared their experience and Sam’s response on platforms like Reddit. Other Cursor users took this as official confirmation of a highly unpopular policy change. Developers rely on multi-device access; restricting it felt like a major step backward.
The reaction was swift. Comments flooded in expressing frustration. Several users publicly stated they were canceling their Cursor subscriptions specifically because of this non-existent "one device per subscription" policy. The original poster confirmed they had canceled, and their workplace was removing the software. The situation escalated quickly.
Cursor Steps In: Clarification and Correction
About three hours after the initial posts gained traction, a human representative from Cursor jumped into the Reddit discussion. “Hey! We have no such policy,” they clarified. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
It turned out “Sam” was not a person but an AI model. The AI hadn’t relayed an existing policy; it had completely fabricated one. This is a phenomenon known as an AI “confabulation” or “hallucination.”
Understanding AI Confabulations (Hallucinations)
AI models like the one Cursor used are trained on vast amounts of data. They excel at identifying patterns and generating plausible-sounding text. However, they don’t “understand” information in the human sense.
When faced with a query they don’t have a direct answer for, some AI models will essentially “fill in the gaps” creatively. They prioritize generating a confident, coherent response over admitting uncertainty or stating “I don’t know.” In this case, the AI invented a logical-sounding (though incorrect) reason for the user’s login issue.
This isn’t malicious behavior by the AI, but rather a limitation of the current technology. It highlights the danger of deploying these systems without checks and balances.
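One practical check is to ground a bot's replies in a vetted policy source and escalate to a human whenever nothing supports the claim. Below is a minimal, illustrative sketch of that pattern; the policy snippets, the word-overlap heuristic, and the 0.4 threshold are assumptions made for demonstration, not a description of Cursor's actual system.

```python
import re

# Minimal sketch of a "grounded reply or escalate" guardrail for an AI
# support bot. The policy snippets, the word-overlap heuristic, and the
# 0.4 threshold are illustrative assumptions, not Cursor's real system.

POLICY_SNIPPETS = [
    "Subscriptions may be used on multiple devices.",
    "Refunds are available within 14 days of purchase.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_grounded(draft_reply: str, sources: list[str]) -> bool:
    """Crude grounding check: the draft must share a substantial
    fraction of its words with at least one vetted policy snippet."""
    reply_words = tokens(draft_reply)
    for snippet in sources:
        snippet_words = tokens(snippet)
        if len(reply_words & snippet_words) / len(snippet_words) >= 0.4:
            return True
    return False

def answer_or_escalate(draft_reply: str) -> str:
    """Send the draft only if a vetted source supports it; otherwise
    hand off to a human instead of confidently inventing a policy."""
    if is_grounded(draft_reply, POLICY_SNIPPETS):
        return draft_reply
    return ("I'm not certain about this one, so I've escalated your "
            "question to a human support agent.")

# The fabricated "one device per subscription" claim matches nothing in
# the policy base, so it gets escalated rather than sent.
print(answer_or_escalate(
    "Cursor is designed to work with one device per subscription."))
# A claim backed by the policy base passes through unchanged.
print(answer_or_escalate(
    "You can use your Cursor subscription on multiple devices."))
```

Production systems typically do this grounding with retrieval over embedded documentation rather than raw word overlap, but the principle is the same: if no approved source supports the draft, don't send it.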
The Business Risks of Unguarded AI Support
The Cursor incident wasn't the first time an AI support agent caused problems. In early 2024, Air Canada was famously ordered by a tribunal to honor a refund policy completely invented by its own support chatbot. The tribunal rejected Air Canada's argument that the chatbot was a separate entity responsible for its own mistakes.
These cases demonstrate clear business risks:
- Customer Frustration: Incorrect information leads to angry customers.
- Damaged Trust: Users lose faith in the company and its support channels.
- Negative Publicity: Public complaints on social media and forums can harm brand reputation.
- Financial Loss: Canceled subscriptions, refunds, and potential legal costs add up.
Deploying AI in customer-facing roles without human oversight or clear disclosure can backfire spectacularly.
Cursor’s Response and Lessons Learned
To their credit, Cursor handled the situation differently than Air Canada. They quickly acknowledged the error, apologized for the confusion, and clarified that the AI bot, not a human, had provided the wrong information.
Cursor co-creator Michael Truell explained the situation on Hacker News. He confirmed the user was refunded and that the original logout problem stemmed from a backend security update that had unintended side effects for some users; the underlying issue has since been fixed.
Crucially, Truell stated, “Any AI responses used for email support are now clearly labeled as such.” This addresses a key point raised by users: the lack of transparency. Many felt deceived because they believed “Sam” was a human agent. Naming the bot “Sam” without indicating it was AI contributed to this perception.
Key Takeaways for Businesses Using AI
The Cursor episode offers valuable lessons for any company using or considering AI for customer interactions:
- Transparency is Crucial: Always clearly label AI agents. Users should know if they are interacting with a bot or a human (see the sketch after this list).
- Human Oversight is Necessary: AI should assist, not replace, human support, especially for complex or sensitive issues. Have clear escalation paths to human agents.
- Understand AI Limitations: Be aware of confabulations. AI can generate plausible falsehoods. Don’t treat AI responses as infallible truth.
- Test Rigorously: Thoroughly test AI support systems in various scenarios before deploying them to customers.
- Monitor Performance: Continuously monitor AI interactions for accuracy and customer satisfaction.
- Own the Output: Remember, your company is responsible for the information provided by its AI tools, correct or not.
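To make the transparency, oversight, and monitoring points concrete, here is a small hypothetical sketch of what labeling AI-drafted replies and sampling them for human audit could look like. The SupportReply type, the disclosure text, and the 10% audit rate are invented for illustration; they are not drawn from Cursor's or anyone else's real pipeline.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of labeling AI-drafted replies and sampling them
# for human audit. The SupportReply type, disclosure text, and 10%
# audit rate are invented for illustration, not any vendor's pipeline.

AI_DISCLOSURE = ("This reply was drafted by an AI assistant. "
                 "Reply 'human' at any time to reach a person.")
REVIEW_SAMPLE_RATE = 0.10  # fraction of AI replies audited by humans

@dataclass
class SupportReply:
    body: str
    ai_generated: bool
    needs_human_review: bool = False

def prepare_reply(body: str, ai_generated: bool) -> SupportReply:
    reply = SupportReply(body=body, ai_generated=ai_generated)
    if ai_generated:
        # Transparency: every AI-drafted message carries a disclosure,
        # so no bot can silently pass as a human named "Sam".
        reply.body += f"\n\n--\n{AI_DISCLOSURE}"
        # Oversight and monitoring: route a random sample of AI replies
        # to a human queue so accuracy drift is caught early.
        reply.needs_human_review = random.random() < REVIEW_SAMPLE_RATE
    return reply

reply = prepare_reply(
    "You are free to use Cursor on multiple machines.",
    ai_generated=True,
)
print(reply.body)
print("Queued for human audit:", reply.needs_human_review)
```

The exact audit rate is a tuning knob; the important design decision is that the disclosure is attached automatically, so no AI reply can silently pass as coming from a human agent.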
Conclusion: Proceed with Caution
AI holds immense promise for enhancing customer support and efficiency. However, the Cursor incident is a stark reminder that the technology is not perfect. AI confabulations are a real risk with potentially serious consequences for customer relationships and brand reputation.
Businesses must implement AI support thoughtfully, prioritizing transparency, incorporating human oversight, and understanding the inherent limitations of the technology. As AI continues to evolve, responsible deployment will be key to harnessing its benefits without falling victim to its pitfalls. The story of “Sam” the AI bot serves as a powerful cautionary tale.