We have conversations with artificial intelligence both in private, with voice assistants like Alexa and Siri, and in our professional lives, often without really noticing it. Many of our customers use conversational AI bots for their helpdesks, for example. But why is it “intelligent” to process a simple command to set up a printer, for instance? Well, it’s not that simple.
People who programmed in assembly language in the infancy of digitalisation saw algorithms as the only foundation of logic and often could not see why a chatbot needed to be intelligent, since the bot’s reactions are surely nothing more than predefined responses matched to the user’s input. If I ask a chatbot “What’s the weather in Manchester?”, the keywords “weather” and “Manchester” are recognised and the answer would be something like: “It’s currently raining in Manchester with a temperature of 11 degrees.” But which Manchester did I mean? And when? Now? Or in general? And in which part of Manchester?
This is an attempt to explain, in clear terms, what sets a bot apart and makes it seem intelligent to the person using it, because artificial intelligence (AI) is actually rather complex. But let’s back up a bit...
As a rule, chatbots follow scripted, keyword-based conversations (e.g. weather plus Manchester). This means they are not suitable for conversations where they have to intelligently understand what the customer says. Their output is simple and intelligible, but it has its limitations. Chatbots can still be useful tools for companies, in menu-based systems for example, where you can instruct customers to give certain answers. You are probably familiar with the vocal variant, voice response units or VRUs: “If you wish to XYZ, press or say ‘one’,” and so on. Alternatively, answers or requests for information are preformulated and then processed step by step: “Would you like information on starters, main courses, or desserts?” Deviations from this flow, however, can lead to errors.
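To make the keyword-based approach concrete, here is a minimal sketch of such a scripted bot. The keyword table and canned reply are purely illustrative, not the logic of any real product:

```python
# Minimal sketch of a scripted, keyword-based chatbot.
# A reply is returned only if ALL keywords for an entry appear in the message.
RESPONSES = {
    ("weather", "manchester"):
        "It's currently raining in Manchester with a temperature of 11 degrees.",
}

def keyword_bot(message: str) -> str:
    words = set(message.lower().replace("?", "").split())
    for keywords, reply in RESPONSES.items():
        if all(k in words for k in keywords):  # every keyword must appear
            return reply
    return "Sorry, I didn't understand that."  # anything off-script fails

print(keyword_bot("What's the weather in Manchester"))
```

Note how brittle this is: any phrasing that omits the exact keywords falls straight through to the fallback answer.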
But if you don’t need anything more complex than the text equivalent of a user interface, chatbots are an easy and affordable choice. For complex tasks such as handling customer complaints or IT service management, where context and content need to be understood, conversational AI is the better choice.
Conversational AI makes sense of context by using natural language understanding (NLU). Let’s take the weather again:
To understand context, a bot needs to be trained with sample sentences. Not every user asks “What is the weather like in Manchester?”. How about the question “Will it drizzle on Tuesday in Didsbury?”? A classic chatbot would probably not be able to answer. It wouldn’t know that today is Monday and the question is therefore about tomorrow, or that Didsbury is an area of Manchester. An untrained (!) NLU, however, could answer: “I’m not sure, did you mean whether it will rain tomorrow in Manchester?”
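The two resolution steps described here, mapping a district to its city and a weekday name to a concrete date, can be sketched as simple lookups. Both tables below are illustrative stand-ins for the gazetteers and date logic a real NLU would use:

```python
from datetime import date, timedelta

# Illustrative gazetteer: district -> parent city.
DISTRICTS = {"didsbury": "Manchester"}

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve_place(place: str) -> str:
    """Map a district name to its city, or return the place unchanged."""
    return DISTRICTS.get(place.lower(), place)

def resolve_weekday(day: str, today: date) -> date:
    """Next occurrence of the named weekday, counted from `today`."""
    offset = (WEEKDAYS.index(day.lower()) - today.weekday()) % 7
    return today + timedelta(days=offset or 7)

monday = date(2022, 5, 2)  # a Monday
print(resolve_place("Didsbury"))           # Manchester
print(resolve_weekday("Tuesday", monday))  # 2022-05-03, i.e. tomorrow
```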
A conversational AI can actually train itself and learn from correct context. If the bot’s question “Did you mean...” receives a positive answer, the bot saves it as correct and will be able to recognise it more quickly next time.
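The confirm-and-learn loop can be sketched as follows. When the user confirms a “Did you mean...?” clarification, the bot stores that phrasing so it maps straight to the intent next time. All names here are illustrative:

```python
# Sketch of learning from confirmed clarifications.
class LearningBot:
    def __init__(self):
        self.known_phrasings = {}  # learned utterance -> intent

    def handle(self, utterance, guessed_intent=None):
        if utterance in self.known_phrasings:
            return self.known_phrasings[utterance]  # recognised immediately
        return f"Did you mean: {guessed_intent}?"   # ask for confirmation

    def confirm(self, utterance, intent):
        # User answered "yes" -> save the pairing as correct training data.
        self.known_phrasings[utterance] = intent

bot = LearningBot()
q = "Will it drizzle on Tuesday in Didsbury?"
print(bot.handle(q, "weather_forecast"))  # first time: bot has to ask
bot.confirm(q, "weather_forecast")
print(bot.handle(q))                      # now recognised directly
```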
NLU helps the bot understand the intent and context of user interactions. It doesn’t just match a predetermined list of keywords (weather, Manchester) and, in the best case, respond with a canned answer.
Take a natural conversation in which you’re planning a trip and ask one of the following questions:
“I want to go to Salford this weekend. Will it rain?” or
“When is the best time to travel to Manchester?”
Both requests involve travel, Manchester, and the weather, but carry a hidden intent that a standard bot would not be able to identify without more information. For the bot to recognise the context, it needs to be trained with sample sentences. The bot also uses lexicons to identify entities: a lexicon for time can contain days of the week or time frames, and access to an online database of global locations makes the bot a comprehensive travel agent. Training sentences for the first example above could be something like:
“What is the weather like on <TIME> in <PLACE>?” or “What should I wear at <TIME> in <PLACE>?”
The second training sentence doesn’t look like it’s about the weather at first glance, and that’s exactly why the AI needs training. In general, it takes about ten training sentences for the bot to get the hang of an intent. If it still doesn’t understand the intent, the NLU results can also be manually checked and optimised via training systems. Over time, the AI adapts to understand and react to personal preferences.
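Filling the <TIME> and <PLACE> slots against lexicons can be sketched like this. The lexicon entries are illustrative examples, not a complete vocabulary:

```python
import re

# Illustrative lexicons for the <TIME> and <PLACE> slots.
TIME_LEXICON = {"monday", "tuesday", "today", "tomorrow", "this weekend"}
PLACE_LEXICON = {"manchester", "salford", "didsbury"}

def extract_slots(utterance: str) -> dict:
    """Scan the utterance for lexicon entries and fill the matching slots."""
    text = utterance.lower()
    slots = {}
    for value in TIME_LEXICON:
        if re.search(r"\b" + re.escape(value) + r"\b", text):
            slots["TIME"] = value
    for value in PLACE_LEXICON:
        if re.search(r"\b" + re.escape(value) + r"\b", text):
            slots["PLACE"] = value
    return slots

print(extract_slots("What should I wear this weekend in Salford?"))
# {'TIME': 'this weekend', 'PLACE': 'salford'}
```

Once the slots are filled, the trained intent (“weather query”) plus the slot values give the bot everything it needs to answer, even for a phrasing like “What should I wear...” that never mentions the weather.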
I hope this weather example gives you a better understanding of the differences between chatbots and AI bots. If you would like to find out more about the uses of conversational AI in your organisation, or want to request a demo, send me an email or visit our conversational AI page.
What’s more, our Partner Cognigy has just been named a Leader in the Gartner® Magic Quadrant™. Download the report here.