Thoughts on Conversational Design - 2024
While at UX Brighton, Lorraine Burrell kindly invited me to The AI Meet-up South - Conversational AI Edition event. The theme of the event was reflections on 2024. There were three great talks focused on chatbots and the world of conversational product design. As with many of these events, the discussion afterwards was just as enlightening. I wanted to highlight four themes or reflections that interested me.
Questions and answers, not helping people with tasks.
This manifests in two key ways. Current product and feature designs are often too focused on Q&A, yet humans naturally mix discovery questions with task-oriented goals. And many products lack the sense of joint action and journey planning needed to steer conversations toward positive task outcomes. Directional repair, that is, guiding users through a series of discovery questions and then helping them take action, is missing from many current chat experiences. Too often, these are empty-calorie conversations.
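To make this concrete, here is a minimal sketch, in Python, of a dialogue policy that practises something like directional repair: it keeps asking discovery questions until it knows enough to propose a task action. The slots and questions are hypothetical, invented for the example rather than drawn from any of the talks.

```python
# Minimal sketch of a task-oriented dialogue policy (hypothetical slots).
# Instead of answering one question at a time, the policy tracks what it
# still needs to know and steers the conversation toward a concrete action.

DISCOVERY_QUESTIONS = {
    "destination": "Where would you like to travel?",
    "dates": "When are you planning to go?",
    "budget": "Roughly what budget do you have in mind?",
}

def next_turn(slots: dict) -> str:
    """Pick the next move: repair direction with a discovery question,
    or move to the task action once enough is known."""
    for slot, question in DISCOVERY_QUESTIONS.items():
        if slot not in slots:
            return question  # directional repair: ask, don't just answer
    # All discovery slots are filled: propose the joint action.
    return (f"Great - shall I search for trips to {slots['destination']} "
            f"around {slots['dates']} within {slots['budget']}?")

# Example: the conversation so far has only established a destination.
print(next_turn({"destination": "Lisbon"}))
# -> "When are you planning to go?"
```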
The cost of flexibility is always control.

As teams move from rule-based systems for managing conversations to LLM-powered conversational elements, there's a noticeable loss of fine-grained control over design. This often shows up as product teams saying LLMs are not terse enough, overly polite, or too generalist in their responses. The language used by LLMs rarely matches the brand voice of the deploying company and fails to adjust as the context of the conversation evolves. It's been fascinating to hear real examples of the benefits of out-of-the-box LLM responses, but also of how these outputs water down the stylistic standards carefully put in place by conversational designers.
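As one illustration of how teams try to claw back that control, here is a minimal sketch of pinning a brand voice through the system prompt, assuming the OpenAI Python SDK; the brand and style rules are invented for the example, and in practice the model still tends to drift back toward its defaults.

```python
# Sketch: pinning brand voice with an explicit style guide in the system
# prompt. Assumes the OpenAI Python SDK and an API key in the environment;
# the brand and its style rules are hypothetical.
from openai import OpenAI

BRAND_VOICE = """You are the assistant for Acme Bank (hypothetical brand).
Style rules:
- Be terse: one or two sentences per reply.
- No apologies or filler politeness ("I'd be happy to...").
- Use British English and the customer's own terminology.
"""

client = OpenAI()

def branded_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": BRAND_VOICE},
            {"role": "user", "content": user_message},
        ],
        temperature=0.3,  # lower temperature reins in stylistic drift a little
    )
    return response.choices[0].message.content

# Even with this in place, long conversations tend to drift back toward the
# model's default register, which is exactly the loss of control described above.
```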
If RAG does not work.

If your RAG implementation isn't working, the problem is more likely content design or information transformation than something a new technical layer can fix. For instance, dumping a website into RAG often destroys its usefulness: even high-quality web content loses impact when stripped of its original visual and informational architecture and forced to function in an entirely different medium and context. The lesson is not that LLMs or RAG are "bad"; it's about where, how, and whether these tools are truly needed to solve your users' problems.
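To make the content-design point concrete, here is a rough sketch contrasting the naive dump with a content-aware transformation. It assumes BeautifulSoup and pages where headings sit alongside their body text; the function names and chunking strategy are illustrative, not from any particular RAG framework.

```python
# Sketch: transforming web content for retrieval instead of dumping it raw.
# BeautifulSoup is a real library; the chunking strategy is an illustration.
from bs4 import BeautifulSoup

def naive_dump(html: str) -> list[str]:
    """The anti-pattern: strip tags and split blindly. Headings, tables,
    and surrounding context are lost, so retrieved chunks lack meaning."""
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    return [text[i:i + 500] for i in range(0, len(text), 500)]

def structured_chunks(html: str) -> list[str]:
    """Content-aware alternative: keep each section with its heading so a
    chunk still answers 'what is this about?' outside its original page."""
    soup = BeautifulSoup(html, "html.parser")
    chunks = []
    for heading in soup.find_all(["h1", "h2", "h3"]):
        body = []
        for sibling in heading.find_next_siblings():
            if sibling.name in ("h1", "h2", "h3"):
                break  # next section starts here
            body.append(sibling.get_text(" ", strip=True))
        chunks.append(f"{heading.get_text(strip=True)}\n{' '.join(body)}")
    return chunks
```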
Solutionizing chat is so 2024.

One major issue with AI chat interfaces, as popularized by OpenAI, is that they give the facade of substantial utility but rarely deliver in real-world use cases. This year, that problem seems to have driven a wave of "solutionizing innovation" projects in many corporations, and there is plenty of anecdotal evidence across the tech and design industries to support this. I believe this trend will be short-lived, and in 2025, user-need-driven product design will push back. OpenAI's own move to focus its products on specific use cases, like writing and coding through the canvas interface, is an early sign.
I also enjoyed a conversation with Kane Simms about using ML models to predict content for a user based on the history of their navigational path.
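As a toy illustration of that idea, the sketch below uses a first-order Markov model, a deliberately simple stand-in for whatever model you would use in production, to predict the most likely next page from observed navigational paths. The log data is made up.

```python
# Sketch: predicting a user's next page from their navigational history
# using a first-order Markov model over observed paths (example data).
from collections import Counter, defaultdict

def train(paths: list[list[str]]) -> dict[str, Counter]:
    """Count page-to-page transitions across all observed paths."""
    transitions: dict[str, Counter] = defaultdict(Counter)
    for path in paths:
        for current, nxt in zip(path, path[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions: dict[str, Counter], current: str) -> str | None:
    """Most frequently observed page after `current`, if any."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical navigation logs.
logs = [
    ["home", "pricing", "signup"],
    ["home", "pricing", "signup"],
    ["home", "pricing", "faq"],
    ["home", "docs", "api"],
]
model = train(logs)
print(predict_next(model, "pricing"))  # -> "signup"
```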