In this edition of ‘How To Build a Chatbot 101’, we’re diving into the finer details of building intent-based chatbots.
Mapping responses and solutions to intents (or ‘triggers’) remains a go-to methodology for businesses building chatbots and virtual assistants today.
With the arrival of Natural Language Understanding, or ‘NLU’ (a branch of Natural Language Processing), chatbots can now decipher unstructured utterances from the user and structure them to formulate responses. This layer of intelligence on top of the intents helps the chatbot produce relevant responses, through either syntactic or semantic analysis.
While the intent-based methodology remains the standard favourite, we want to show you why organisations need to start thinking beyond it.
A study by Uberall showed that over 43% of customers who interacted with a chatbot had a negative experience, with the bot failing to grasp their question or query. [1]
Hence, we arrive at the three problem areas: retrieval intents, multiple intents and undefined intents.
Retrieval Intents: Merely an FAQ Bot
Much like a lookup table, chatbots built on retrieval intents recall the information, data or response mapped to a particular intent.
This seems simple and easy to scale. However, the user’s varied, unclassified utterances are all clubbed into one bucket so that every variation receives the same response.
“Hi, where is my order?”
- Response A
“Why is my order taking so long?”
- Response A
So on, and so forth. While this might be effective, the problem with this type of intent is that it doesn’t take the customer’s context into account, often leading to friction on the CX side.
Also, what happens when the chatbot needs to answer complex queries? In that scenario, the team has to flesh out retrieval intents for every possible customer query that could arise.
The sheer scale of FTE (full-time equivalent) developer hours required is enough to make even a Fortune 500 institution cry.
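To make the mechanics concrete, here is a minimal sketch in Python. It is purely illustrative: the intent name, training phrases, response text and keyword-overlap classifier are our own assumptions, not any particular vendor’s API. It shows how a retrieval-intent bot buckets different utterances into one intent and hands back the same canned response.

```python
# Minimal retrieval-intent sketch (illustrative only): every utterance
# classified into an intent bucket gets the same pre-written response.

# Hypothetical training phrases per intent.
INTENT_EXAMPLES = {
    "order_status": [
        "hi, where is my order?",
        "why is my order taking so long?",
        "has my parcel shipped yet?",
    ],
    # Every new type of customer query needs its own intent entry here...
}

# ...and its own canned answer here. "order_status" maps to the single
# "Response A" from the example above.
INTENT_RESPONSES = {
    "order_status": "Your order is on its way and should arrive in 3-5 days.",
}

FALLBACK = "I'm sorry, I don't understand."


def classify(utterance: str) -> str | None:
    """Naive keyword-overlap classifier standing in for a real NLU model."""
    tokens = set(utterance.lower().split())
    best_intent, best_overlap = None, 0
    for intent, examples in INTENT_EXAMPLES.items():
        overlap = max(len(tokens & set(e.split())) for e in examples)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent


def respond(utterance: str) -> str:
    intent = classify(utterance)
    return INTENT_RESPONSES.get(intent, FALLBACK)


print(respond("Hi, where is my order?"))           # -> Response A
print(respond("Why is my order taking so long?"))  # -> the same Response A
```

Both utterances land in the same bucket and receive the same context-free answer, and every new type of customer query means hand-authoring another entry in both dictionaries.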
Multiple Intents: When users want more from a single interaction
More often than not, users communicate with chatbots and digital assistants the way they would with actual human beings. Customers might club several questions into a single query, or respond to a chatbot’s answer with an immediate follow-up question.
Consider a query like “Thanks, that helps. Can you also update my billing address?” Here, a single query carries two distinct intents: the first is the acknowledgement, while the second initiates another query.
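One rough way to handle this, sketched below under our own assumptions (the intent names, keyword cues and threshold are illustrative, not a specific product’s behaviour), is to score every intent independently and keep everything that clears a confidence cut-off, instead of forcing the classifier to pick a single winner.

```python
# Sketch of multi-intent detection: score each intent independently and
# keep every intent that clears a confidence threshold, rather than
# returning only the single best match.

# Hypothetical keyword cues per intent (illustrative only).
INTENT_CUES = {
    "acknowledgement": {"thanks", "thank", "great", "perfect", "cheers"},
    "billing_update": {"billing", "address", "update", "change"},
    "order_status": {"order", "delivery", "shipped", "arrive"},
}

THRESHOLD = 0.2  # assumed cut-off; a real system would tune this


def detect_intents(utterance: str) -> list[str]:
    cleaned = utterance.lower().replace(",", " ").replace(".", " ").replace("?", " ")
    tokens = set(cleaned.split())
    detected = []
    for intent, cues in INTENT_CUES.items():
        score = len(tokens & cues) / len(cues)
        if score >= THRESHOLD:
            detected.append(intent)
    return detected


# A single query carrying two intents: an acknowledgement plus a new question.
print(detect_intents("Thanks, that helps. Can you also update my billing address?"))
# -> ['acknowledgement', 'billing_update']
```

A classic single-intent classifier, by contrast, has to pick one of the two and silently drop the other.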
Back in the day, a single query could only map to one intent. Today, there is a fallback mechanism where the chatbot can either:
- respond to the first intent it detects and ignore the rest,
- ask the user to rephrase or split the query, or
- hand the conversation over to a live agent.
In all three scenarios, the customer tends to pay the price on the front end. Whilst organisations constantly strive to provide seamless experiences, these outcomes pull that effort a few steps back.
Undefined Intents: The end of the road
The last stage is when the virtual assistant comes across queries or questions it simply cannot understand. This is often attributed to a lack of data or missing content updates across the organisational pipeline.
- Can I file a personal claim if my dependent is in the hospital?
- I’m sorry, I don’t understand.
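Under the hood, this dead end is usually a confidence threshold: when no trained intent scores high enough against the query, the bot gives up and returns a generic apology. Here is a minimal sketch, again under our own illustrative assumptions about the intents and threshold value.

```python
# Sketch of the undefined-intent dead end: when no known intent scores
# above a confidence threshold, the bot falls back to a generic apology.

# Hypothetical intents the bot was trained on (illustrative only).
KNOWN_INTENT_CUES = {
    "order_status": {"order", "delivery", "shipped", "track"},
    "password_reset": {"password", "reset", "login", "forgot"},
}

CONFIDENCE_THRESHOLD = 0.25  # assumed value; real systems tune this
FALLBACK = "I'm sorry, I don't understand."


def answer(utterance: str) -> str:
    tokens = set(utterance.lower().rstrip("?").split())
    best_intent, best_score = None, 0.0
    for intent, cues in KNOWN_INTENT_CUES.items():
        score = len(tokens & cues) / len(cues)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score < CONFIDENCE_THRESHOLD:
        # No trained intent covers this query -- the end of the road.
        return FALLBACK
    return f"[routed to intent: {best_intent}]"


# Nothing in the training data covers personal claims for a hospitalised
# dependent, so there is no intent to route this query to.
print(answer("Can I file a personal claim if my dependent is in the hospital?"))
# -> I'm sorry, I don't understand.
```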
From all of this, it becomes pretty clear what we’re hinting at: the time and effort it takes to build intelligent virtual assistants using only ‘intents’. While the methodology is a successful way to bootstrap the project, it isn’t always effective.
This is why Enterprise Bot brought its product geniuses and engineers together in a room to rack their brains and come up with a solution. And it’s ready. The latest product developed by the team eliminates the redundancies associated with updating intent knowledge bases. That reduces effort, saves time and makes digital assistant deployment even more agile.
Get in touch with the team directly, or learn more over here.