Conversational Design and Implementation

Ref.: salesforce.com Einstein

Since ChatGPT spread across the Internet like wildfire, GenAI tools have increasingly been used to support design activities such as empathizing, interpreting, ideating, prototyping, and testing. OpenAI's GPT-4o and Google's latest Gemini differ from earlier AI platforms because they are multi-modal.

As a result, they demonstrate remarkable conversational competency and reasoning capabilities across many domains. This opens the door to building intelligent, fully conversational agents that can support architects in an interactive solution design process. We can also imagine the emergent roles such agents might take in the broader area of human-AI collaboration, and how these new capabilities expand the design space of conversational design overall.

As Erika Hall states in her book “Conversational Design”: “The tricky part about designing interactions with interconnected digital systems is making them feel like they’ve been designed for humans by humans. It’s easy to let the material of machine logic dominate in a sort of software brutalism. Typically, once we’ve defined the basic structure and logic, only then will we attempt to fit meaning to form and functionality.”

Hopefully, multi-modal GenAI can help make these interactions more human-like, taking on the roles of a designer, a user, and a product to support the design process. Envisioning a solution with a voice user interface allows us to explore its conversational capabilities.

It can generate personas representing various categories of stakeholders, ask us questions to refine our ideas, simulate interviews with fictional users, create design ideas and select the best of them, envision them as detailed product features, simulate fictional usage scenarios and conversations between a prototype app and fictional users, and, finally, evaluate the user experience of the prototype app.
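As a minimal sketch of how the persona-generation step might be driven, assuming the OpenAI Python SDK and a GPT-4o model (the prompt wording and the helper function name are illustrative assumptions, not a prescribed method):

  # Sketch: asking a multi-modal GenAI model to play the "designer" role
  # and generate stakeholder personas. Prompt text and function name are
  # illustrative assumptions.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def generate_personas(product_idea: str, count: int = 3) -> str:
      prompt = (
          f"You are supporting a solution design process for: {product_idea}. "
          f"Generate {count} personas representing different stakeholder "
          "categories, each with goals, frustrations, and a sample quote."
      )
      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  print(generate_personas("a voice-based appointment booking assistant"))

The same pattern extends to the other activities listed above: interview simulation, idea selection, and usage-scenario role-play are mostly a matter of changing the prompt and keeping the conversation history.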

However, having access to a multi-modal GenAI platform doesn’t exempt us from the obligation to have good design and prompting skills.

Erika Hall outlines several principles for effective conversational design (a sketch showing one way to encode them in an agent’s system prompt follows the list):

Clarity:

  • Ensure that the conversation is clear and easy to understand. Avoid jargon and overly complex language.
  • Use concise and direct language to convey information.

Brevity:

  • Keep messages short and to the point.
  • Users should be able to quickly grasp the information and respond.

Context:

  • Understand the context in which the conversation is taking place. This includes the user’s environment, the device they’re using, and their previous interactions.
  • Tailor responses based on the context to provide relevant and useful information.

Feedback:

  • Provide immediate feedback to user input. This reassures users that their input has been received and understood.
  • Use appropriate cues (like typing indicators or loading messages) to manage user expectations.

Personality:

  • Infuse the conversation with a personality that aligns with the brand or service. This makes the interaction more engaging and relatable.
  • Maintain a consistent tone and style throughout the conversation.
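One practical way to apply these principles with GenAI is to carry them as standing instructions in the agent’s system prompt. The sketch below is an illustrative assumption (the wording, the fictional “Acme” brand, and the helper function are not Hall’s own formulation):

  # Sketch: encoding the five principles as standing instructions for an
  # LLM-based assistant. The wording and the "Acme" brand are illustrative.
  DESIGN_PRINCIPLES_PROMPT = """
  You are the voice of the Acme support assistant.
  - Clarity: use plain, direct language; avoid jargon.
  - Brevity: keep replies to one or two short sentences.
  - Context: take the user's device, channel, and prior turns into account.
  - Feedback: acknowledge every request before answering it.
  - Personality: stay friendly and calm; keep tone and style consistent.
  """

  def build_messages(history: list[dict], user_input: str) -> list[dict]:
      # The system message carries the design principles on every turn.
      return [
          {"role": "system", "content": DESIGN_PRINCIPLES_PROMPT},
          *history,
          {"role": "user", "content": user_input},
      ]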

It is essential to design the conversation with user goals in mind: identify common user intents and provide dialog flows that cater to them. Scripted dialogs offer some control over interactions, but until recently genuinely AI-driven conversations were often out of reach. Now, with LLMs becoming almost a commodity, open source, and multi-modal, the door is open for the practical use of GenAI in the overall architecture of intelligent agents in challenging domains like customer service support or healthcare assistance.
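As a sketch of intent-driven flow design, the routine below maps an utterance to a known intent and dispatches to the flow that serves it. The intent names and the keyword-based classifier are simplified assumptions; in practice an LLM or NLU model would do the classification:

  # Sketch: map a user utterance to a known intent and dispatch to the
  # dialog flow that serves it. Intents and rules are illustrative.
  INTENT_KEYWORDS = {
      "book_appointment": ["book", "appointment", "schedule"],
      "check_status": ["status", "where is", "track"],
  }

  def detect_intent(utterance: str) -> str:
      text = utterance.lower()
      for intent, keywords in INTENT_KEYWORDS.items():
          if any(keyword in text for keyword in keywords):
              return intent
      return "fallback"  # hand off to the GenAI model or a human agent

  def route(utterance: str) -> str:
      flows = {
          "book_appointment": "Sure - which day works best for you?",
          "check_status": "I can check that. What is your reference number?",
          "fallback": "Let me make sure I understand. Could you say more?",
      }
      return flows[detect_intent(utterance)]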

We are still far from delivering objective-driven AI, as stated recently by one of the fathers of deep learning, Yann LeCun. Professor LeCun, who is also Meta’s Chief AI Scientist, has recently explored the path towards a system capable of learning, remembering, reasoning, planning, and having common sense, all while remaining controllable. When we reach that level of GenAI development, we will also be able to use it in a design process that today primarily focuses on the front-end aspects of creating effective conversational interfaces, such as chatbots and voice assistants. Conversational design in this sense delves deeply into user experience, interaction design, and dialogue principles, but it does not extensively cover the design of back-end systems, data architecture, business rules, or the integration of GenAI.

Ultimately, we need to extend conversational design by bridging Front-End and Back-End Design. In doing so, we should consider the following:

API Design:

Create robust APIs that allow the front-end to communicate seamlessly with the back end. Ensure these APIs are well-documented and capable of handling the necessary data transactions.
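As a minimal illustration, assuming a Python back end built with FastAPI (the endpoint name and payload fields are hypothetical), a chat API might look like this:

  # Hypothetical chat API sketch: a single endpoint the conversational
  # front-end can call. Names and fields are illustrative assumptions.
  from fastapi import FastAPI
  from pydantic import BaseModel

  app = FastAPI(title="Conversational Back End")

  class ChatRequest(BaseModel):
      session_id: str   # lets the back end retrieve conversation context
      message: str      # the user's utterance from the front-end client

  class ChatResponse(BaseModel):
      reply: str

  @app.post("/chat", response_model=ChatResponse)
  def chat(request: ChatRequest) -> ChatResponse:
      # In a real system this would call middleware, business rules, and a
      # GenAI model; here we simply echo to keep the sketch runnable.
      return ChatResponse(reply=f"Received: {request.message}")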

Middleware:

Implement middleware that can process user inputs, apply business rules, and interact with various data sources before sending a response back to the user.
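A middleware layer might sit between the chat endpoint and the model, as in this sketch (the function name, session fields, and the business rule are hypothetical):

  # Sketch: middleware that cleans the input, applies a business rule,
  # and enriches the request before the model is called. All names and
  # rules here are hypothetical.
  def apply_middleware(session: dict, raw_input: str) -> dict:
      message = raw_input.strip()

      # Business rule (hypothetical): high-value orders go to a human agent.
      if session.get("order_value", 0) > 10_000:
          return {"handoff": True, "reason": "high-value order"}

      # Enrich with data the back end already holds about the user.
      context = {"customer_tier": session.get("tier", "standard")}
      return {"handoff": False, "message": message, "context": context}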

Data Privacy and Security:

Design the back end with strong data privacy and security measures to protect user information, complying with relevant regulations and ensuring trust.
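For example, a back end might redact obvious personal data before a message is logged or forwarded to an external GenAI service. The patterns below are a deliberately minimal, illustrative subset of what real compliance work would require:

  # Sketch: redact e-mail addresses and phone-like numbers before logging
  # or sending text to an external model. Patterns are deliberately simple
  # and would need hardening for real regulatory compliance.
  import re

  EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
  PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

  def redact(text: str) -> str:
      text = EMAIL.sub("[EMAIL]", text)
      return PHONE.sub("[PHONE]", text)

  print(redact("Call me at +1 555 123 4567 or mail jane.doe@example.com"))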

Performance Optimization:

Optimize the back end for performance to provide quick and accurate responses, minimizing latency in conversational interactions.
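One common optimization is to cache answers to repeated questions so the conversational front end is not waiting on the model or the database every turn. The cache key and time-to-live below are illustrative choices, not a recommended configuration:

  # Sketch: a small time-bounded cache so identical questions do not hit
  # the model or database again within a short window. TTL is illustrative.
  import time

  _CACHE: dict[str, tuple[float, str]] = {}
  TTL_SECONDS = 300

  def cached_answer(question: str, compute) -> str:
      key = question.strip().lower()
      now = time.time()
      hit = _CACHE.get(key)
      if hit and now - hit[0] < TTL_SECONDS:
          return hit[1]                # fresh enough: skip the slow path
      answer = compute(question)       # slow path: model call, DB lookup...
      _CACHE[key] = (now, answer)
      return answer

  print(cached_answer("What are your opening hours?", lambda q: "9am-5pm"))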

By understanding and applying Erika Hall’s principles with support from GenAI, we can create a conversational interface that not only engages users effectively, but also interacts seamlessly with sophisticated back-end systems, ensuring a holistic and efficient user experience.

Ref.: Erika Hall at https://www.muledesign.com/

 Authored by Alex Wyka, EA Principals Senior Consultant and Principal