Chat tools for Power Virtual Agents

Power Virtual Agents launched in 2020 to help businesses solve customer issues with bots. As one of six designers, I was responsible for designing the conversation tools and testing features related to authoring. The design centered on a node graph, a popular way to organize conversational elements in game design.

How the Virtual Agent system works

The Virtual Agent is always listening for human intent. The AI determines which conversation to launch based on the human's intent. As the number of conversations grows and the list of intents becomes more refined, the system appears smarter and capable of doing more. Additional improvements come from networking conversations together and leveraging API services to perform transactions or fetch data.
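
The intent-matching step above can be sketched in code. This is a minimal illustration, not the product's actual AI: the topic names, trigger phrases, and scoring function are all invented for the example.

```typescript
// Invented sketch of intent-based routing: each topic lists trigger
// phrases, and the bot launches the topic whose triggers best match
// the user's message. Not product code.
interface Topic {
  name: string;
  triggerPhrases: string[];
}

const topics: Topic[] = [
  { name: "Store hours", triggerPhrases: ["when are you open", "store hours"] },
  { name: "Order status", triggerPhrases: ["where is my order", "track order"] },
];

// Score a topic by the fraction of trigger words present in the message.
function scoreTopic(topic: Topic, message: string): number {
  const words = new Set(message.toLowerCase().split(/\s+/));
  let best = 0;
  for (const phrase of topic.triggerPhrases) {
    const parts = phrase.split(" ");
    const hits = parts.filter((w) => words.has(w)).length;
    best = Math.max(best, hits / parts.length);
  }
  return best;
}

// Launch the highest-scoring topic, or fall back to human escalation.
function routeMessage(message: string): string {
  let bestTopic = "Escalate to human";
  let bestScore = 0;
  for (const topic of topics) {
    const s = scoreTopic(topic, message);
    if (s > bestScore) {
      bestScore = s;
      bestTopic = topic.name;
    }
  }
  return bestTopic;
}
```

As more topics and trigger phrases are added, more messages find a match, which is why the system appears to grow smarter over time.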

App structure and concept

During a two-day sprint, various teams worked out the core app. Each team was responsible for prototyping feature ideas and reviewing them with the group. The core concept emerged quickly: create code-free conversation bots using visual tools.

I worked on the design of the conversation authoring and testing tools

The building blocks of conversations

A conversation can be made up of many elements. For example, a bot can say a greeting, ask a question, make a decision, and call an API or connect to a human agent.
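
Those building blocks can be modeled as a small set of typed elements. The sketch below is hypothetical TypeScript, not product code; it simply names each block kind described above.

```typescript
// Invented model of conversation building blocks as a discriminated union.
type Block =
  | { kind: "message"; text: string }                       // bot says something
  | { kind: "question"; text: string; options: string[] }   // ask and branch
  | { kind: "condition"; variable: string; equals: string } // make a decision
  | { kind: "action"; apiName: string }                     // call an API
  | { kind: "handoff"; queue: string };                     // connect to a human agent

// A simple greeting conversation assembled from blocks.
const greeting: Block[] = [
  { kind: "message", text: "Hi! How can I help?" },
  { kind: "question", text: "What do you need?", options: ["Billing", "Support"] },
];
```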

A graph of conversation blocks

Graphs have been used since the early days of video games to represent user choices. They are now used to create algorithms, show relationships, and build conversations. As the project progressed I made a competitive analysis of six established products that use graphs.

The Virtual Agent team decided to use a node graph because it's the most visual way to represent a conversation with branching choices and linear flow. Additionally, complementary products like Microsoft Flow already use a similar UI pattern to represent serial events.

The graph became the organizing principle for authoring conversations
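
As a rough illustration of the data structure behind such a canvas, a conversation graph can be held as nodes with outgoing edges; walking the edges replays one branch of the conversation. Everything here is an invented sketch, not the product's implementation.

```typescript
// Invented sketch of a conversation as a node graph: nodes hold content,
// edges represent linear flow (one edge) or branching choices (many edges).
interface GraphNode {
  id: string;
  text: string;
  next: string[]; // ids of downstream nodes
}

const graph: Record<string, GraphNode> = {
  greet: { id: "greet", text: "Hello!", next: ["ask"] },
  ask: { id: "ask", text: "Billing or support?", next: ["billing", "support"] },
  billing: { id: "billing", text: "Let's look at your bill.", next: [] },
  support: { id: "support", text: "Connecting you to support.", next: [] },
};

// Walk one path through the graph; each branch consumes the next choice.
function walk(start: string, choices: number[]): string[] {
  const visited: string[] = [];
  let current: GraphNode | undefined = graph[start];
  let choiceIndex = 0;
  while (current) {
    visited.push(current.id);
    const branch = current.next.length > 1 ? choices[choiceIndex++] ?? 0 : 0;
    const nextId = current.next[branch];
    current = nextId !== undefined ? graph[nextId] : undefined;
  }
  return visited;
}
```

The same walk is what makes visual testing possible later: a live conversation is just one path through this structure.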


Wireframing began with establishing nodes and their content, then testing the conversation visually. For each wireframe iteration I used a real-world conversation scenario so that we could compare the variations against each other. Each wireframe sequence typically started at the beginning of writing a conversation and ended on testing the conversation with the bot. We made several discoveries from wireframing. Primarily, content should not hide inside a panel but be directly editable in the node. We also debated vertical vs. horizontal layouts, routing, and a library for content re-use.

Early wireframing led the team to a direct-editing approach of all the content on the graph
This sequence showed how a few conversation blocks could work together to make a simple conversation

Prototyping the interaction model

For about two weeks I paired with a developer to create a simple UI that could build graphs for our conversations. This prototype was tested for ease of editing and node creation. While it satisfied those criteria, it revealed some issues with rerouting and organizing the graph.

This prototype established the basic UI of the graph

UX principles from wireframing

After about three months of wireframing, a few key principles emerged. First, content should be editable directly in nodes; this improves legibility and creates a more direct mapping between the graph and the conversation that happens with the bot. Second, conversations should be modular, each encapsulating a specific piece of functionality. Last, all conversations should be testable visually on the canvas. Together these principles informed how we would design the conversation tools.

Visual Design

Visual design focused on distinguishing node types, routing, and showing errors. I worked on this phase with two other designers. Once the visual language was established, I managed a component library that tracked all the variations of content nodes. Lastly, I documented accessibility requirements using Microsoft standards.

An example showing all conversation blocks in use
Documenting ARIA labels for accessibility
The node component library

Integrating with MS Flow

MS Flow is a process automation tool that can access databases and APIs. The general concept for integrating with an outside app like MS Flow is based on object-oriented programming: data is passed to a function, the function does its work, and new data is returned. The node design showed data passing from Virtual Agent variables to Flow variables, and the return back to Virtual Agent variables.

Concept for graphically passing data between Power VA and Flow
An example of a MS Flow
Early wireframes showing Flow integration from a library
Finalized design before development
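
The data hand-off described above can be sketched as a plain function call. The Flow itself is simulated here, and every name is invented for illustration; none of this is product code.

```typescript
// Invented sketch of the bot-to-Flow hand-off: map bot variables to Flow
// inputs, run the Flow, and map its outputs back to bot variables.
type Vars = Record<string, string>;

// Stand-in for an external Flow, e.g. one that looks up an order's status.
function runFlow(inputs: Vars): Vars {
  return { status: inputs.orderNumber === "1001" ? "Shipped" : "Unknown" };
}

function callFlowFromBot(botVars: Vars): Vars {
  // 1. Map bot variables to Flow inputs
  const flowInputs: Vars = { orderNumber: botVars.order };
  // 2. The Flow does its work
  const flowOutputs = runFlow(flowInputs);
  // 3. Map Flow outputs back to bot variables
  return { ...botVars, orderStatus: flowOutputs.status };
}
```

The node design made each of these three mapping steps visible on the canvas, rather than hiding them in configuration.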

Tracing conversations

Since the beginning of the project we needed a visual way of testing a conversation at various stages of completion. We called this feature tracing. Early wireframes showed a conversation launching in a panel and parts of the graph lighting up as the conversation progressed.

Early version showing conversation blocks lighting up through the conversation
Design showing automatic update based on the conversation with the bot
Design showing manual update based on the conversation with the bot

In later phases I proposed a feature to automatically update the canvas based on the triggered conversation with the chat bot. Additionally, I promoted navigating to nodes via the chat to allow immediate content editing. Together, both features would allow authors to trace complex conversations and improve content.

Auto-update and navigating to node from chat
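
One way to picture the tracing mechanism, purely as an illustrative sketch with invented names: the test chat reports the id of each node it triggers, the canvas records those ids for highlighting, and clicking a chat message navigates back to its node.

```typescript
// Invented sketch of tracing: triggered nodes light up on the canvas,
// and each chat message remembers which node produced it.
class ConversationTracer {
  private litNodes = new Set<string>();
  private messageToNode = new Map<string, string>();

  // Called as the test conversation advances through the graph.
  onNodeTriggered(nodeId: string, chatMessageId: string): void {
    this.litNodes.add(nodeId); // canvas re-renders this node as highlighted
    this.messageToNode.set(chatMessageId, nodeId);
  }

  isLit(nodeId: string): boolean {
    return this.litNodes.has(nodeId);
  }

  // Clicking a chat message returns the node id the canvas should pan to.
  nodeForMessage(chatMessageId: string): string | undefined {
    return this.messageToNode.get(chatMessageId);
  }
}
```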

Error framework

Conversations often generate errors, whether from missing data or out-of-date components like Flows. I designed an error panel to consolidate all errors and warnings. Leveraging the navigation feature from tracing, the panel could be used to jump around the canvas to quickly fix errors.

Sketching out permutations of errors
Error framework as designed for development
The topic checker implemented
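
The error panel's jump-to-node behavior can be sketched the same way; the shapes below are invented for illustration and are not the shipped implementation.

```typescript
// Invented sketch of the error panel: each entry keeps the id of the node
// that produced it, so selecting an entry can navigate the canvas there.
interface TopicError {
  nodeId: string;
  severity: "error" | "warning";
  message: string;
}

const panel: TopicError[] = [
  { nodeId: "ask", severity: "error", message: "Question has no options" },
  { nodeId: "flow-1", severity: "warning", message: "Flow is out of date" },
];

// Summary line shown at the top of the panel.
function summarize(errors: TopicError[]): string {
  const e = errors.filter((x) => x.severity === "error").length;
  const w = errors.filter((x) => x.severity === "warning").length;
  return `${e} error(s), ${w} warning(s)`;
}

// Selecting an entry returns the node id the canvas should jump to.
function jumpTarget(errors: TopicError[], index: number): string | undefined {
  return errors[index]?.nodeId;
}
```

Because every entry carries a node id, the panel doubles as a navigation tool for fixing errors quickly.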

Public preview launch

After 18 months of design, development, and testing, Power Virtual Agents launched in public preview in late 2019. Below is a simple example of starting a conversation with a few triggers, writing the conversation, and then testing it with tracing.