The Complete Guide to Better CxD Workflows
Canned responses are fine—until they’re not.
We’ve all been there. There’s nothing more frustrating than asking a chatbot a nuanced question and receiving a paragraph-long response that doesn’t actually provide the answer you need. It’s what gives them a bad rap and makes customers hesitant to try them.
Most businesses delving into conversational AI know that the ideal outcome is an agent that feels dynamic and attuned to users' needs. In other words, an agent that's capable of connection. You ask a question, and it asks a clarifying question before offering an answer. It feels like an actual conversation rather than a keyword-guessing game.
For these interactions to feel conversational, a lot of structure is required. Designers and developers can't just plot out dialogues; they need to weave in personality, harness data for insights, and stay attuned to users' expectations. When executed well, users don't just interact with the assistant—they form a connection.
Adopting a clear conversation design process is key, and it’s what we’re going to cover in this article. This guide will walk you through the complete three-stage process you’ll need to build a conversational flow that never returns a rage-inducing canned response again. Take it step by step; each stage requires many hands, but the outcome is worth it.
Stage one: Build a blueprint
This stage is all about translating your vision into a tangible plan. If you want to build an agent that guides users through refund processing or answers FAQs without a human, that's your vision. With the help of stakeholders, product owners, and natural language understanding (NLU) designers, you can build a blueprint—a tactical vision that marries potential interactions with the expertise necessary to bring them to fruition. Here's how to put together your collaborative blueprint.
One more note on this stage: many designers opt to create static sample dialogues between the briefing and design stages. My advice? That's a relic of the past. Modern conversation design tools (like Voiceflow) let you leap straight into interactive prototypes, streamlining the process and facilitating richer feedback and more agile iterations.
Stage two: Fill in the details
This phase bridges the gap between the vision you built with your blueprint and the experience you'll execute IRL. It requires meticulous collaboration between designers and developers to craft each conversational turn. Each turn needs to mirror the overarching objectives while leaving room for real-world complexities.
Here are the 8 steps you need to follow to make your initial vision a reality:
Stage three: BAU (business as usual) monitoring
Once the design is live, it’s up to the designers to monitor how users interact with the agent, which will help them find weak points and understand areas of friction.
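The kind of monitoring described here can start as a simple scan of conversation logs. The sketch below is illustrative only: the session format and the "fallback" intent label are assumptions, not a real product's schema. It flags sessions where the agent repeatedly failed to understand the user, which is one concrete way to surface friction.

```python
from collections import Counter

# Hypothetical transcript format: each session maps to the ordered list
# of intents the NLU matched, with "fallback" marking turns where the
# agent failed to understand the user.
sessions = {
    "s1": ["greet", "refund_request", "fallback", "fallback", "goodbye"],
    "s2": ["greet", "faq_shipping", "goodbye"],
}

def friction_sessions(sessions, max_fallbacks=1):
    """Flag sessions whose fallback count exceeds a tolerance threshold."""
    flagged = {}
    for session_id, intents in sessions.items():
        fallbacks = Counter(intents)["fallback"]
        if fallbacks > max_fallbacks:
            flagged[session_id] = fallbacks
    return flagged

print(friction_sessions(sessions))  # → {'s1': 2}
```

Flagged sessions can then be reviewed by hand, which is usually where the real design insight comes from.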
During this stage, each conversation is an opportunity to optimize for increased conversion or ROI. The team works iteratively, updating responses or refining conversation flows to address feedback before rolling out changes gradually. With each new insight, the cycle begins again, driving improvements that may inspire entirely different journeys. The dialog and response designers will look to refine:
Meanwhile, the NLU designer will look to refine:
Measuring your assistant’s effectiveness
Measuring performance helps determine what’s working and what’s not. For conversational AI, cross-functional teams monitor metrics in natural language understanding, dialog design, and response quality to pinpoint weaknesses across components and interactions. By scrutinizing confidence scores, turns to complete, and fallback frequency, a team can address pain points piece by piece.
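The three signals named above (confidence scores, turns to complete, and fallback frequency) can be aggregated from per-turn records. This is a minimal sketch under assumed data shapes: the `Turn` record and the field names are hypothetical, standing in for whatever your analytics pipeline actually exports.

```python
from dataclasses import dataclass

# Hypothetical per-turn record: the NLU's confidence for the matched
# intent, and whether the turn fell back to a default response.
@dataclass
class Turn:
    confidence: float
    is_fallback: bool

def summarize(conversations):
    """Aggregate average confidence, average turns per conversation,
    and fallback rate across a list of conversations."""
    turns = [t for convo in conversations for t in convo]
    return {
        "avg_confidence": sum(t.confidence for t in turns) / len(turns),
        "avg_turns_to_complete": len(turns) / len(conversations),
        "fallback_rate": sum(t.is_fallback for t in turns) / len(turns),
    }

convos = [
    [Turn(0.9, False), Turn(0.4, True), Turn(0.8, False)],
    [Turn(0.95, False)],
]
print(summarize(convos))
```

Tracking these aggregates over time, rather than as one-off snapshots, is what makes it possible to tell whether a design change actually moved the needle.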
How to measure NLU design
Teams measure intent coverage, entity accuracy, confidence, and other NLP metrics to determine how to improve the AI's language comprehension. Here’s what we recommend looking at:
How to measure dialog design
Analyzing turns, dead ends, and backtracks highlights how to simplify conversation flows and smooth the user experience. Here’s what we tend to measure:
How to measure response design
Monitoring fallback use, repeats, length, and ratings identifies how to improve responses by rewriting or developing new content that better resonates with users. This is what we recommend measuring:
CxD never ends, and that’s a good thing
It may go without saying that the work of CxD is never really finished. Because the process is iterative, it will be ongoing for as long as your agent is available. By following the right steps, you can make this process as painless as possible for everyone involved—most importantly, your users, who just want to find the answer they're looking for.