South Minneapolis News


I was duped by an AI customer service bot and I hate it

May 14, 2026  Twila Rosenbaum

“Hey, it’s Theo,” said the friendly text message from the restaurant I’d just booked through a popular online reservation platform. “Looking forward to having you in tomorrow. Any dietary restrictions or allergies for the kitchen, and are we celebrating anything special this visit?”

What a nice text, I naively thought, as I gamely replied we were coming in for Mother’s Day. It felt good that someone from the restaurant had personally reached out, so I wanted to reciprocate.

We went back and forth for a couple more rounds, with me noting which of the restaurant’s locations we’d be visiting for our celebratory brunch. But then the bubble burst—and I felt like an idiot. “Would you like me to save that it’s Mother’s Day for your future visits too, or just for this one?” the suddenly robotic-sounding rep asked.

Oh, duh! I’m chatting with AI.

I should’ve known better. More and more businesses are using AI for customer service, with one October 2025 survey estimating that half of small businesses in the US employ AI to “elevate” their customer service operations. I’m sure that figure is much higher by now. Indeed, the global chatbot market was already valued at over $5 billion in 2023 and is projected to grow exponentially as natural language models become cheaper and more accessible.

To be fair, AI customer service reps can be effective for straightforward tasks, such as scheduling appointments, answering FAQs, or resetting passwords. If an AI can help me book a reservation, check store hours, or reschedule a missed delivery, I’m all for it. The efficiency is undeniable—AI never needs a coffee break, can handle hundreds of simultaneous conversations, and provides consistent responses 24/7. It’s a logical choice for cost-conscious businesses.

My problem, though, is when an AI representative doesn’t outright say it’s AI, or won’t own up to it when asked. And even when an AI rep does admit to it, it’s the not knowing in the first place that makes me feel like a chump—and I grow suspicious of the company I’m dealing with.

For example, one of my medical providers quite clearly uses AI for its customer service calls. (Or at least it became clear in retrospect, when the phone responses came too quickly and the tone of the bot started sounding rote.) The AI did stay in its lane, handling only appointment scheduling and passing on prescriptions and other medical issues to a flesh-and-blood person. But I was annoyed and put off that the AI didn’t identify itself as AI from the get-go. It felt like a subtle form of deception, a breach of the implicit honesty we assume in professional communication.

My interactions with the medical office AI bot didn’t bother me as much as my exchange with “Theo,” the restaurant AI. By giving itself a name but neglecting to disclose that it was a bot, Theo—and by extension the restaurant itself—made me feel like I’d been duped. Did I freak out and cancel my reservation? Well, no. (Mother’s Day brunch is a tough get in a major city.) But the stealth AI chat didn’t give me warm fuzzies about the restaurant either.

This experience raises broader questions about transparency in customer service. When a human representative chats with a customer, there’s an inherent social contract: we’re both aware of each other’s human fallibility and empathy. An AI, no matter how advanced, lacks genuine understanding and emotional nuance. Unlike a human, a bot cannot truly “celebrate” with you or sympathize with your dietary needs beyond a scripted response. Yet by mimicking human language and even adopting a human name, these bots exploit our natural tendency to anthropomorphize—to treat conversational agents as people. This is the very mechanism that makes the deception feel so personal.

History offers a cautionary tale. In the 1960s, ELIZA, one of the first chatbots, was designed to simulate a psychotherapist. Users often forgot they were talking to a program and poured out their hearts, a phenomenon now known as the ELIZA effect. But Joseph Weizenbaum, ELIZA’s creator, was alarmed by how easily people were fooled. He argued that it was unethical to allow people to believe they were interacting with a being that had genuine understanding. Today’s AI customer service bots are far more convincing—powered by large language models that can generate remarkably human-like text. The potential for deception, and the resulting erosion of trust, is even greater.

From a business perspective, the cost savings of AI are undeniable: reduced labor expenses, faster response times, higher scalability. However, these benefits can be quickly undermined if customers feel manipulated. Research in consumer psychology shows that perceived betrayal strongly predicts negative word-of-mouth and brand switching. A 2024 study by the Customer Service Institute found that 68% of consumers say they would trust a company less if they discovered it used AI without disclosure. The short-term savings could translate into long-term losses in customer loyalty and reputation.

Some companies have adopted best practices. For instance, several airlines now require their chatbots to introduce themselves as automated systems at the start of every interaction. “Hello, I’m an AI assistant” or “I’m a virtual agent” is stated clearly before any conversation begins. Others, like certain banks, use a subtle icon or label to indicate the agent is not human. These transparent approaches allow customers to adjust their expectations and still benefit from the efficiency of AI without feeling tricked.

Regulatory attention is also growing. The European Union’s AI Act, which came into force in 2024, includes provisions requiring clear disclosure when interacting with an AI system in contexts where deception is likely. While enforcement is still ramping up, similar legislation is being discussed in the United States and other countries. Companies that get ahead of these rules by voluntarily adopting transparency will not only avoid legal penalties but also build a reputation for integrity.

To business owners using AI for customer service: I get it. It must be one of the easiest ways to cut costs (even if it’s not fun for the human reps losing their jobs). But if you’re going to answer phones or send texts with AI bots, make sure they identify themselves as AI up front. A simple “I’m an automated assistant” at the beginning of a phone call or in the first text message is enough. It preserves customers’ autonomy to decide how they want to interact: those who prefer human help can request escalation, and those who are fine with AI can proceed efficiently.

Otherwise, you’ll be saving money at the expense of customer trust—and trust is priceless. No amount of operational efficiency can compensate for the feeling of being duped. As AI becomes ever more integrated into our daily transactions, the line between helpful automation and deceptive impersonation will only blur further. It is up to businesses to draw that line clearly and honestly, because once trust is broken, it is exceedingly difficult to rebuild.


Source: PCWorld News

