Real World Omelettes
The 260 Chicken McNuggets Problem
When voice AI goes viral for all the wrong reasons: what McDonald’s AI disaster teaches us.
In June 2024, McDonald’s quietly killed its AI drive-thru experiment after three years of development and testing across more than 100 US locations. According to CIO, the reason was a series of TikTok videos showing confused customers pleading with the system to stop adding items to their orders — including one infamous clip where the AI kept piling on Chicken McNuggets until the total hit 260.
This was not some scrappy startup failing fast. As reported by TeaCode, McDonald’s had partnered with IBM and invested years into the Automated Order Taking system. They had resources, time, and test markets. And still, the technology could not reliably distinguish between “that’s all” and “add more.”
“After working with IBM for three years to leverage AI to take drive-thru orders, McDonald’s called the whole thing off in June 2024. The reason? A slew of social media videos showing confused and frustrated customers trying to get the AI to understand their orders.” — CIO
There is a lesson here that goes beyond fast food.
What Actually Went Wrong
The Automated Order Taking system had one job: understand what customers wanted and get it right. In controlled tests, it probably worked fine. But drive-thrus are chaos. Background noise from cars, kids, and weather. Accents and speech patterns the model had not been trained on. Customers changing their minds mid-sentence.
According to DigitalDefynd’s analysis of AI disasters, the system frequently misheard customers and placed ridiculous orders that went viral on social media. Reports described the AI adding butter packets unprompted, putting bacon on ice cream sundaes, and interpreting ambient noise as menu items.
“In one clip, the bot kept adding ‘hundreds of dollars of McNuggets’ to an order despite customers’ pleas to stop.” — DigitalDefynd
Some errors were funny. Others left customers frustrated and employees scrambling to fix orders manually — defeating the entire point of the automation.
The Klarna Comparison
McDonald’s is not alone. According to TeaCode, Klarna made headlines earlier in 2024 boasting that its AI customer service assistant handled work equivalent to 700 human agents. The efficiency story wrote itself.
But the cost-cutting narrative backfired. As TeaCode reported, customers started sharing experiences of the AI giving incorrect information and failing to resolve actual problems. The company later had to bring back human support for complex issues — effectively admitting the AI was not ready for prime time.
“Klarna’s AI Support Experiment Harmed Their Reputation: The Klarna experience serves as a stark reminder that while AI offers powerful efficiencies, an automation strategy driven too heavily by cost reduction can be detrimental.” — TeaCode
The pattern repeats: company deploys AI to cut costs, AI fails in ways humans would not, customers lose trust, company has to walk it back.
When Automation Should Stay in the Back Office
Here is what both cases get wrong: they put immature AI directly in front of customers with no safety net. The humans who would normally catch errors were removed from the loop entirely.
According to IIInigence’s analysis of AI implementation failures, successful automation works differently. The AI handles initial processing, flags edge cases, and routes problems to humans when confidence is low. A hybrid approach where machines do the heavy lifting but humans handle the exceptions.
One framework that works: automate the 80 percent of tasks that are predictable and routine, but keep humans on the 20 percent that involves judgment, ambiguity, or high stakes. Drive-thru ordering turned out to be way more than 20 percent edge cases.
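The hybrid pattern described above can be sketched in a few lines. This is a minimal illustration, not any real ordering system: the `ParsedOrder` type, the confidence scores, and the `route_order` function are all hypothetical, standing in for whatever a speech model actually produces.

```python
# Sketch of confidence-based routing: the AI confirms routine,
# high-confidence orders on its own and escalates everything else
# to a human. All names and thresholds here are illustrative.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a person takes over


@dataclass
class ParsedOrder:
    items: list[str]
    confidence: float  # the speech model's own certainty, 0.0 to 1.0


def route_order(parsed: ParsedOrder) -> str:
    """Send routine orders straight through; flag edge cases for staff."""
    if parsed.confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: confirmed {', '.join(parsed.items)}"
    return "HUMAN: low confidence, escalating to staff"


print(route_order(ParsedOrder(["Big Mac", "medium fries"], 0.95)))
print(route_order(ParsedOrder(["260 McNuggets?"], 0.40)))
```

The point of the threshold is that the cost of a false escalation (a human briefly steps in) is far lower than the cost of a confidently wrong order going viral.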
What to Ask Before Deploying Customer-Facing AI
- What happens when the AI is wrong? Is the cost low enough to tolerate errors, or does every mistake create a recovery problem?
- How variable is the input? Controlled data is easy. Real-world messiness is hard.
- Can you fail gracefully? If the AI gets confused, does it escalate to a human, or does it just keep guessing?
- What is the viral risk? If someone films your AI failing, does it become a funny anecdote or a brand crisis?
The Lesson for the Rest of Us
According to CIO, McDonald’s has not given up on voice AI. In a June 2024 internal memo obtained by trade publication Restaurant Business, the company announced it would end the IBM partnership but still saw a future in voice-ordering solutions. The feature became opt-in for franchisees while the company works on improvements.
For anyone implementing AI in their own business, the 260 McNuggets problem is a useful reminder. Automation works best when it makes your operation more reliable, not less. When it handles the predictable stuff so humans can focus on the exceptions. When failure is a data point, not a viral moment.
Start with internal processes. Prove the technology works in low-stakes environments. Build in human checkpoints. And maybe wait until your AI can tell the difference between “no more” and “some more” before putting it on the front line.
