
It’s no secret that artificial intelligence (AI) is changing the customer service game.

Society is in a period of technological transition where AI agents are increasingly replacing live representatives. AI agents have been adopted across a broad range of consumer domains — from traditional retail, travel, and ride- and residence-sharing to legal and medical services.

Given advanced processing capabilities and labor cost advantages, this transition is expected to continue.

But what are the implications for customer response and satisfaction? 

Are we more forgiving of an AI bot than a live agent when we are let down? Less appreciative when we are helped?

Researchers from the University of Kentucky, the University of Illinois Chicago and the University of Technology Sydney conducted a study to examine these questions.

The study — forthcoming in the Journal of Marketing — is titled “Bad News? Send an AI. Good News? Send a Human” and is authored by Aaron Garvey, TaeWoo Kim and Adam Duhachek.

“It started with wanting to understand if we have a 'blind spot' when negotiating with an AI instead of a human. In particular, a blind spot that lets the AI get away with an offer that we would not accept from a human,” Garvey, a marketing professor in the Gatton College of Business and Economics, said. “AI seems everywhere already, but it will replace even more human representatives as time goes on. I wanted to understand if this leaves consumers open to exploitation through such blind spots, and how we can deal with this ethically moving forward.”

According to the study, when a product or service offer is worse than expected, consumers respond better to an AI agent. However, for an offer that is better than expected, consumers respond more favorably to a live agent.

“This happens because AI agents are perceived to have weaker personal intentions when making decisions,” Garvey said. “Since an AI agent is a non-human machine, consumers typically don’t believe that its behavior is driven by underlying selfishness or kindness.”

In other words, consumers believe AI agents lack the selfish intentions that would typically be punished in the case of an unfavorable offer, and lack the benevolent intentions that would typically be rewarded in the case of a favorable offer.

However, designing an AI agent to appear more humanlike can change consumer response. For example, a service robot with body structure and facial features elicits more favorable responses to a better-than-expected offer.

“This occurs because AI agents that are more humanlike are perceived to have stronger intentions,” Garvey said.

What do these results mean for marketing managers?

“For a marketer who is about to deliver bad news to a customer, an AI representative will improve that customer’s response. This would be the best approach for negative situations such as unexpectedly high price offers, cancellations, delays, negative evaluations, status changes, product defects, rejections, service failures and stockouts,” Kim explained. “However, good news is best delivered by a human. Unexpectedly positive outcomes could include expedited deliveries, rebates, upgrades, service bundles, exclusive offers, loyalty rewards and customer promotions.”

For consumers, these findings reveal a “blind spot” when dealing with AI agents — particularly when considering offers that fall short of expectations.

“We hope that making consumers aware of this phenomenon will improve their decision quality when dealing with AI agents, while also providing marketing managers techniques — such as making AI more humanlike in certain contexts — for managing this dilemma,” Duhachek said.

The full article and author contact information are available here.

Credits

Lindsey Piercy (Public Relations & Strategic Communication)