When Chinese AI voice startup Timedomain launched an app called Him earlier this year, the company couldn’t have predicted just how madly in love users would fall with its AI boyfriends.
Voiced by AI, the romantic chatbot companions on Him called users every morning, read them poems, provided company during dinner, sent affectionate messages throughout the day, and held deep conversations about their (fictional) lives.
“In a world full of uncertainties, I would like to be your certainty. Go ahead and do whatever you want. I will always be here,” said one Him chatbot, according to an article in Rest of World.
So when it all came crashing down (the app wasn’t bringing in enough revenue to cover its costs and shuttered operations), the lovestruck users were devastated. Some rushed to clone their AI boyfriends’ voices and record as many calls as they could; others even tried to find new investors to keep the app going. When none of it worked, they mourned, with one user telling Rest of World, “The days after he left, I felt I had lost my soul.”
The saga of Him raises a lot of questions about relationships, loneliness, and modern society. But one thing is for sure: AI has become quite the smooth talker.
Recent advances in artificial intelligence have made chatbots increasingly conversational, to the point that it feels like science fiction come to life. Character.AI lets users chat with an AI version of almost anyone, “live or dead, real or (especially) imagined.” Meta recently unveiled a collection of AI chatbots based on celebrities like Snoop Dogg and Kendall Jenner, which, although largely received as useless, have also been called “surreal.” And there’s no denying the conversational skills of ChatGPT, which took human-computer interaction to such new heights that it kicked off an entirely new wave of AI, not to mention the feeling that humanity is on the brink of a whole new chapter.
Clearly, we’re getting deeper into the uncanny valley than ever before. The term describes the uneasiness humans often feel toward robots and computers that display human-like characteristics. The best practice in customer service has always been to make clear to customers when they’re talking to a bot. But as AI chatbots become more advanced and capable of sounding just like people, it’s becoming important to interrogate just how human-like we really want our chatbots to be. Should they sound just like us, or retain some bot-ness?
Tae Rang Choi, an assistant professor at Texas Christian University who has researched perceptions of voice AI assistants, says it depends: users’ motivations for speaking with chatbots are a major factor.
“If they use voice AI for social interactions, then they probably prefer it to be more human-like in conversation and voice. On the other hand, people who use voice AI for utilitarian reasons prefer that it’s not so conversational and feels more like AI,” she said in conversation for this article.
She added that people who use chatbots for social interaction perceive voice AI as a friend, whereas people who use it for utilitarian reasons perceive it as just an assistant. With this insight, it’s reasonable to think a company’s own brand could be a factor in how human- or bot-like its chatbot should be. A company whose brand aims for a friendly, personal feel might opt for a more human-sounding bot, for example.
Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, told IEEE Spectrum that the goal is not to avoid the uncanny valley, but rather to avoid bad character animations and instead match the appearance of robots and computer systems to their abilities.
Of course, people are different and may have varied feelings about interacting with AI technologies and about how those technologies present themselves. And it’s important to recognize that this is uncharted territory that society as a whole is feeling out in real time. Perceptions of technology also shift as people become more accustomed to it and discover new use cases and incentives.
“As a researcher of human-computer systems for over 30 years, I believe this is a positive step forward, as natural language is one of the most effective ways for people and machines to interact,” wrote computer scientist Louis Rosenberg in an article for Big Think about the creepiness of conversational AI. “On the other hand, conversational AI will unleash significant dangers that need to be addressed.”
And it’s true that we’ve already seen several uses of increasingly human-sounding AI voice technology that could give many people pause.
Concern about deepfakes, for example, is only growing as the technology gets better at replicating real people’s voices in addition to sounding more human-like in general. Within the span of just a few days, actor Tom Hanks, CBS host Gayle King, and YouTube sensation MrBeast all recently had to put out public statements warning fans not to fall for AI-generated versions of their likenesses being used in fraudulent social media advertisements. The technology has also given rise to some serious scams: bad actors are using AI voice technology to make kidnapping scams more realistic, cloning people’s voices to trick relatives into believing a loved one has been kidnapped in order to collect ransom money.
Malicious uses like these chip away at trust, which is vital for the public to accept any emerging technology, let alone a technology meant to mimic and stand in for humans. For this reason, transparency is another important factor in considering how human- or bot-like we want AI chatbots to sound.
In the case of customer interactions, trust is paramount. While Choi hasn’t specifically researched the perceptions of AI bots in the customer service domain, her previous research suggests that it’s an arena where more human-sounding AI could flourish — but only if there’s trust.
“Customers have an expectation for interpersonal rapport,” she said. “They’re looking for service with a kind smile.”
Above all, AI requires transparency. The goal should never be to fool a customer who’s interacting with an AI chatbot into thinking they’re conversing with a human.
“The company has to tell the customer if they’re talking to an AI so they can be aware of who they're talking to and whether it's a person versus the technology,” Choi said.
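To make that guidance concrete, here’s a minimal sketch (in Python, with invented names; it isn’t drawn from any real product) of a customer service chat session that discloses the AI up front, before any other message is exchanged:

```python
# Hypothetical example: the class and message text are invented
# to illustrate up-front AI disclosure, not taken from a real product.
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    customer_name: str
    transcript: list = field(default_factory=list)

    def open(self) -> None:
        # Disclose the AI before anything else happens in the session.
        self.send(
            f"Hi {self.customer_name}! I'm a virtual assistant powered by AI. "
            "I can help with most questions, or hand you to a person anytime."
        )

    def send(self, message: str) -> None:
        self.transcript.append(("bot", message))
        print(message)

session = ChatSession("Ada")
session.open()  # the disclosure is the first thing the customer sees
```

The design point is that the disclosure runs as part of opening the session rather than being an optional message a conversation flow might skip.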
A study from Label Insight covered by Inc. found that 94% of consumers are more likely to be loyal to a brand when it commits to full transparency, and 73% will even pay higher prices for services and products from companies that operate transparently. And now we’re starting to get a sense of how this is playing out with human-sounding AI chatbots in particular.
In a recent Zendesk-sponsored article in Raconteur, Zendesk’s VP of EMEA enterprise sales Eric Jorgensen pointed to cryptocurrency platform Luno as one example of how such transparency is promoting greater chatbot usage among customers.
“Actually, they [Luno] have had way better adoption of chatting with their chatbot because they’re upfront about it. Rather than those who pretend and then people know that some part of the language or context isn’t quite right, and [their] perception of the quality of that service drops,” Jorgensen said.
Across use cases and the tech industry at large, the recent generative AI boom in particular has sparked a conversation about transparency and what technological steps could be taken to ensure we’re able to distinguish between AI- and human-generated content.
There are ongoing efforts, and even government policy discussions, around requiring AI-generated content to be watermarked as such. Conversely, others are working to create an industry-wide standard for verifying content created by humans, such as the work coming out of the Content Authenticity Initiative. These initiatives, and the discussions around transparency for customer service bots, are important for finding ways to stamp out AI-generated misinformation, enforce copyright, and ensure customer trust. They also say a lot about our continued interest in defining the boundaries between humans and AI.
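As a toy illustration of the provenance idea (emphatically not the Content Authenticity Initiative’s actual specification, which uses public-key signatures and rich metadata), here is a sketch in which a publisher tags content with a keyed hash so that any later edit is detectable:

```python
# Toy provenance sketch: a shared-secret HMAC stands in for the
# public-key signatures a real content-credential system would use.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # placeholder, not a real key

def sign(content: str) -> str:
    """Return a tag binding the content to whoever holds the key."""
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    """Check that the content is unchanged since it was signed."""
    return hmac.compare_digest(sign(content), tag)

article = "Human-written copy the publisher wants to vouch for."
tag = sign(article)
print(verify(article, tag))        # True: content matches its tag
print(verify(article + "!", tag))  # False: any edit breaks the tag
```

A real system would use asymmetric signatures so anyone can verify without holding the publisher’s secret, but the underlying idea is the same: bind content to its origin in a tamper-evident way.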
Decades before we had AI that could sound like a human, we had a test for evaluating if it could pass for one.
Originally called “the imitation game,” the Turing Test was developed by British computer scientist Alan Turing in 1950. It involves having a person pose questions to both a machine respondent and a human respondent and then try to determine which is which. If the machine is able to fool the human “interrogator” into thinking it’s actually the human respondent, Turing believed this could prove the machine’s intelligence.
While an imperfect method, the Turing Test holds something of a legendary status in computer science. No computer has ever been able to pass it; however, we’re getting close.
“Given the rapid progress achieved in the design of natural language processing systems, we may see AI pass Turing’s original test within the next few years,” wrote Simon Goldstein and Cameron Domenico Kirk-Giannini, professors of philosophy at Australian Catholic University and Rutgers University, respectively, in The Conversation.
In a recent test of three large language models (including GPT-4), the professors found that when they prompted a model to include spelling mistakes, testers could correctly guess whether they were talking to an AI system only 60% of the time.
“The AI did a good job of fooling the testers,” they wrote.
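For a sense of the mechanics being tested, here’s a toy sketch of scoring imitation-game rounds. The respondents and the judge are stubs invented for illustration and bear no relation to the professors’ actual setup:

```python
# Toy imitation game: a judge sees two unlabeled answers and must
# guess which came from the machine. All behavior here is invented.
import random

def human(question: str) -> str:
    return "hmm, tuff one... proably tea if im honest"

def machine(question: str) -> str:
    return "Both beverages are popular; tea is often considered calming."

def trial(question: str, judge) -> bool:
    """Run one round; return True if the judge spots the machine."""
    respondents = [("human", human), ("machine", machine)]
    random.shuffle(respondents)  # hide which answer is which
    answers = [fn(question) for _, fn in respondents]
    guess = judge(answers)  # index of the answer the judge thinks is AI
    return respondents[guess][0] == "machine"

def judge(answers):
    # Naive heuristic: the answer with fewer typos must be the machine.
    typos = ("tuff", "proably", " im ")
    return min(range(2), key=lambda i: sum(t in answers[i] for t in typos))

rounds = 10_000
wins = sum(trial("Tea or coffee?", judge) for _ in range(rounds))
print(f"Judge identified the machine in {wins / rounds:.0%} of rounds")
```

The naive judge here wins every round because it keys on surface cues like spelling, which is exactly why prompting a model to make spelling mistakes, as the professors did, pushes a real tester’s accuracy back toward chance.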
With machines getting closer to passing, the test is also more relevant than ever. According to Google Search data, searches for “AI Turing test” have trended upward over the past year compared with the previous decade. Additionally, ChatGPT and Microsoft’s AI-powered Bing have shown they’re capable of solving captchas, which are also meant to distinguish humans from machines.
Overall, what makes us human in a world of machine intelligence has never been more top of mind. Take, for example, the most recent issue of Wired magazine. Not only is it focused on AI from cover to cover, but discussion of deciphering humans from AI is a strong throughline.
One article, titled “To Catch A Bot,” dives into the burgeoning industry of tools popping up to identify AI-generated text and explores what it means that these systems are grading student essays and other works on a scale of humanness. Another discusses AI’s ability to beat humans at strategic games like Go, chess, and, more recently, the niche and distinctly human game Diplomacy, stating at one point that “humans have a dread fear of non-humans passing as the real thing.”
Even the issue’s advice column centers this philosophical exploration, featuring the reader question: “I failed two captcha tests this week. Am I still human?”
Cloud, as the advice columnist is affectionately known, responded with 1,000 words interrogating what it means to prove one’s humanity in the age of advanced AI and why we remain so determined to tell human from machine.
“In truth, the Turing test has always been less about machine intelligence than our anxiety over what it means to be human,” Cloud said.
So as businesses embrace the newest capabilities of AI chatbots, it’s important for customer service leaders to remember this technology is so much more than bits, bytes, and progress for progress’s sake. AI might be on one end, but people will always be on the other, and people still have a lot of very human feelings about it all.