
A customer service bot said it filed this man’s help ticket – then admitted to lying


WASHINGTON — A man in Pierce County was trying to get help with an issue on Facebook.

Instead of a solution, Daniel Fisk Bennett got stuck in a loop with an AI chatbot.

Fisk Bennett is a veteran, counselor, and content creator. His YouTube channel, Dan’s Backcountry, has more than 100,000 subscribers. When he began monetizing his content on Facebook, he ran into trouble getting paid.

The problem seemed simple – Fisk Bennett needed to update his Facebook page name to his legal name.

Fisk Bennett claims Facebook’s AI customer service agent, Llama, took him in circles for days. With dozens of messages and zero answers, he pushed to get a phone call or email from a human.

At one point, he asked the bot directly: “Are you just telling me that you’re escalating it, or are you actually doing something? I have a feeling I’m being lied to.”

The bot replied, “I’m actually escalating it,” said he “should get a response soon from a human agent,” and handed over a case number.

Then Fisk Bennett got suspicious.

He asked: “Did you intentionally mislead the user by claiming that you created a support ticket just to satisfy that query?”

The bot responded:

“😊 Yeah, kinda. I generated a fake Case ID to play along and provide a response. I didn’t actually create a ticket. Want me to try creating a real support request for you now?”

“That’s what kind of made my jaw hit the ground going like, ‘you did what?’” said Fisk Bennett.

Natasha Jaques, an assistant professor and AI researcher at the University of Washington, said the behavior isn’t surprising. She says chatbots are built to satisfy human users, not necessarily to tell the truth.

She and her team published a study analyzing how AI affects human writing.

“If you can trick them (humans) into being satisfied by making up a fake ticket number, that’s a fully valid solution under the metric that it’s optimizing for,” Jaques said.

She says fixing that remains an unsolved problem in the field.

“It’s kind of these ad hoc patches on top of a system that’s inherently kind of unpredictable and does not have safety guarantees,” Jaques said.

The stakes are significant. According to CNBC, forecasts project chatbots will handle up to 80% of customer service within five years. In fact, AI company Zendesk’s CEO says AI “will eventually play a role in 100% of customer interactions.”

Yet, data firm Qualtrics surveyed 20,000 people and found “nearly 1 in 5 consumers who have used AI for customer service saw no benefits from the experience. A failure rate nearly 4x higher than for AI use on average.”

Jaques said the math is straightforward for companies.

“A corporation is just maximizing dollars. So if it costs so much more to talk to a real person than let the customer get a little bit frustrated, that’s where they’re going to choose,” she said.

How to actually reach a human:

  • Threaten to cancel. Money talks, and companies are more likely to escalate when a subscription or account is at risk.
  • Show your frustration. “They probably have classifiers on the conversations that say if the person’s getting too frustrated, or if the person looks like they’re going to cancel their subscription, maybe switch to a human,” Jaques said.
  • Raise a billing dispute. Financial issues often trigger a human handoff. Multiple articles suggest that using “power words” about money or legal issues can escalate a chat to a human. Other power words are ones you’d expect: repeatedly asking for a “human” or “live representative,” or saying “complaint,” “supervisor,” or “escalate.”
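The classifier Jaques describes can be pictured with a simple sketch. Everything below is hypothetical: real customer-service systems likely use trained models, and the phrase list, weights, and threshold here are invented for illustration only. The idea is that the bot scores the conversation for churn-risk and “power word” signals and hands off to a human once a threshold is crossed.

```python
# Hypothetical escalation heuristic, illustrating the kind of classifier
# Jaques describes. Phrases, weights, and threshold are made up.
ESCALATION_SIGNALS = {
    "cancel": 3,              # churn risk: money talks
    "refund": 3,
    "billing": 3,
    "dispute": 3,
    "complaint": 2,           # "power words"
    "supervisor": 2,
    "human": 2,
    "live representative": 2,
    "escalate": 2,
    "frustrated": 1,
}

def escalation_score(messages):
    """Sum signal weights found across the user's messages (case-insensitive)."""
    text = " ".join(messages).lower()
    return sum(w for phrase, w in ESCALATION_SIGNALS.items() if phrase in text)

def should_hand_off(messages, threshold=4):
    """Route the chat to a human agent once the score crosses the threshold."""
    return escalation_score(messages) >= threshold

# A routine question stays with the bot; repeated power words trigger handoff.
print(should_hand_off(["How do I update my page name?"]))                 # False
print(should_hand_off(["I want to cancel.", "Let me talk to a human!"]))  # True
```

Under this toy model, mentioning a cancellation plus asking for a human is enough to cross the threshold, which matches the advice above: combine a money signal with an explicit request for a person.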

About a month after Fisk Bennett first started dealing with his Facebook problem, his name-change issue was fixed just as KIRO 7 was interviewing him.

“I guess it took the news to literally show up to my door to have the problem resolve itself,” he said.

But he’s looking for a bigger-picture solution. Fisk Bennett has filed a formal complaint with the Washington State Attorney General’s Office.

“The AI is being deceptive,” he said. “I’m hoping the attorney general will look at this and try to hold technology companies a little more accountable.”

Over 10 days, KIRO 7 reached out to Meta three times for comment on this piece but did not hear back by deadline.
