
Designing for Trust

Exploring trust and collaboration in 
conversational e-commerce agents

An actionable guideline on designing for trust, aimed at interaction designers who create conversational e-commerce interfaces

Thesis Advisors: Dan Lockton, Daragh Byrne 

Duration: 8 Months

Disclaimer: I wasn't affiliated with any companies mentioned in this thesis while I did this research. All registered trademarks and copyrighted materials are the property of their respective owners.


The opportunity

Conversational agents are computer programs that interact with humans using natural language, currently through text or voice. Because these programs promise to communicate in the way we humans are naturally good at, more and more agents (aka chatbots) are being launched every day.

Today’s chatbots can’t help us with complex tasks, and they have just started to “talk” to and refer to each other to overcome this issue. In my thesis, I argue that trust will be the currency of these hand-off moments.

Background Image: Bot Ecosystem 2017 by Keyreply.ai

Process overview: Research through design

In the span of 8 months, I learned about the nuances of trust and conversation design through literature review, expert interviews, workshops, and formative tests.


Conversational Trust Design Checklist

To give back to the design community, I synthesized the findings of my final design experiment into an actionable design guideline as the final output of my thesis: the Conversational Trust Design Checklist provides suggestions for interaction designers interested in designing for trust, with 14 implications in five categories.

Be transparent

Share what agents (need to) know about the user

While some users expect that every party in a multi-party conversation can access their data, users should be informed about how their data is used and what each party knows about them.


Be transparent

Refer others cautiously, visualize confidence level

A conversational agent should not refer users to others (agents or websites) unless it is confident that they can handle the task. Communicate uncertainty with an indicator.
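As an illustration (this sketch is mine, not from the thesis), a referral gate could pair a confidence threshold with a simple visual indicator; the function names and the 0.75 cutoff are hypothetical:

```python
# Hypothetical sketch: gate a referral on the agent's confidence
# and show the user an explicit confidence indicator.

def confidence_indicator(score: float) -> str:
    """Map a 0-1 confidence score to a coarse five-dot indicator."""
    filled = round(score * 5)
    return "●" * filled + "○" * (5 - filled)

def maybe_refer(task: str, candidate_bot: str, confidence: float) -> str:
    # Only refer when we are reasonably sure the other agent can handle it.
    if confidence < 0.75:
        return (f"I'm not confident anyone I know can handle '{task}' well, "
                "so I'd rather not pass you along. Want to try another way?")
    return (f"{candidate_bot} should be able to help with '{task}'. "
            f"My confidence: {confidence_indicator(confidence)} ({confidence:.0%})")

print(maybe_refer("rebook a flight", "AirlineBot", 0.9))
print(maybe_refer("file a visa application", "AirlineBot", 0.4))
```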


Be transparent

Give specific feedback to clarify

When there may be a risk for the user, such as confirming before a payment, provide detailed and specific feedback to be transparent.


Give control to the user

Enable users to review the bot’s decision-making

Communicate the reasoning behind the agent’s actions and recommendations. Provide a way for users to fact-check the bot’s suggestions and decision-making.


Give control to the user

Provide room for revisions

Users may want to change or update information that they have provided to the agent; enable them to do so efficiently.


Give control to the user

Fail gracefully, offer auto-recovery

In case of failure, give a reason, and after two failed attempts offer a safe exit so you don’t lose the user. Offer to complete the failed task later, automatically.
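As a rough sketch of what this could look like in code (my illustration; the retry queue and message wording are hypothetical):

```python
import queue

retry_queue: "queue.Queue[str]" = queue.Queue()  # tasks to retry automatically later
failure_counts: dict[str, int] = {}

def handle_failure(task: str, reason: str) -> str:
    """Explain the failure; after two failures, offer a safe exit and auto-recovery."""
    failure_counts[task] = failure_counts.get(task, 0) + 1
    if failure_counts[task] < 2:
        return f"That didn't work ({reason}). Let's try once more."
    # Second failure: stop retrying in-conversation, offer auto-recovery instead.
    retry_queue.put(task)
    return (f"Sorry, I still couldn't {task} ({reason}). "
            "I'll stop here so I don't waste your time; I can retry this "
            "automatically and message you when it goes through. Exit or wait?")

print(handle_failure("book the hotel", "payment service timed out"))
print(handle_failure("book the hotel", "payment service timed out"))
```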


Give control to the user

Provide alternatives for agents

Some users will not yet be comfortable chatting with a bot for their high-stakes transactions. Don’t be prescriptive; provide alternatives.


Be relevant

Set the expectations

Clearly state what a bot can and cannot do, and how well it can understand the user, to eliminate communication breakdowns. The name of the agent can also affect people’s expectations.


Be relevant

Remember the context and forget it when asked

Build upon the previous parts of the conversation. Give users a sense of memory, and a way to make the agent forget when asked.
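A minimal sketch of such a memory, assuming a simple key-value slot store (the class and method names are illustrative, not from any product API):

```python
class ConversationMemory:
    """Minimal context store: remember slots across turns, forget on request."""

    def __init__(self):
        self.slots: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        self.slots[key] = value

    def recall(self, key: str) -> str | None:
        return self.slots.get(key)

    def forget(self, key: str | None = None) -> str:
        # "Forget my card" clears one slot; "forget everything" clears all.
        if key is None:
            self.slots.clear()
            return "Done - I've forgotten everything from this conversation."
        self.slots.pop(key, None)
        return f"Done - I've forgotten your {key}."

memory = ConversationMemory()
memory.remember("destination", "New Orleans")
print(memory.recall("destination"))   # builds on a previous turn
print(memory.forget("destination"))   # explicit, user-triggered forgetting
```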


Be responsive

Indicate writing and processing visually

Users expect to see the status of what the bot is doing, and they expect to get an answer from a virtual agent more quickly than from a human; late responses raise questions about the bot’s reliability. A visual indicator that shows whether the bot is writing or processing makes users perceive the bot as more human and the interaction as faster, even when it takes longer.
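For illustration, a bot could wrap slow back-end work in a helper that toggles the indicator; send_typing_indicator below is a hypothetical stand-in for a platform call such as Messenger’s typing_on sender action:

```python
import time
from contextlib import contextmanager

def send_typing_indicator(on: bool) -> None:
    # Placeholder for a platform call; the real API depends on the channel.
    print("[bot is typing...]" if on else "[indicator off]")

@contextmanager
def typing_while_processing():
    """Keep a 'writing/processing' indicator visible during slow work."""
    send_typing_indicator(True)
    try:
        yield
    finally:
        send_typing_indicator(False)

with typing_while_processing():
    time.sleep(2)  # stands in for a slow back-end lookup
print("Here are 3 hotels near the French Quarter.")
```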


Be responsive

Don’t indicate hand-offs

Don’t make the user feel any interruptions, and try not to surface the seams in the conversation. Don’t emphasize or humanize the hand-offs. Keep the first introduction in a hand-off concise and connect it back to the conversation.


Be visual

Use visual elements to increase the credibility

Relevant visual elements tend to increase the trustworthiness of a text-based interface.


Be visual

Include branding where possible

To form credibility and show competence, include visual brand symbols such as logos if possible.


Be visual

Provide secure gateways

Users expect to enter their payment information in secure, encrypted forms that sit at a cognitively higher level than the conversation. Leverage solutions that make the security of the transaction visible, such as a webview opening a secure https:// page.


Conversational Trust: Simplified

Based on my literature review, I developed the conversational trust model. In summary, our trust in conversational agents depends on:

  • how predictable they are,
  • how good they are at completing their tasks,
  • how risky we perceive them to be,
  • how capable they are of fulfilling our expectations,
  • and how consistent they are in sustaining our relationship with them.

Dimensions of Trust in Technology

In addition to conversational trust, I also referred many times to a widely used definition of trust from the information systems field, which includes dimensions such as competence, benevolence, and integrity.


Assistants: Tomorrow’s meta-chatbots

After my interviews with industry experts, I focused on the current challenges that conversational agents face. I used Amir Shevat's bot classification as a framework, which he explains in his book "Designing Bots":

  1. Expert agents are good at solving problems in a single domain, e.g., most of the chatbots we have today, such as DoNotPay.
  2. Generalist agents are good at solving problems in multiple domains, e.g., personal assistants such as Google Assistant and Apple's Siri.

Amir also argues that most assistants are trying to become meta-chatbots by combining multiple simple tasks or domains, such as setting an alarm, playing music, or adding items to a grocery list.

Disclaimer: Trademarks are only for illustrative purposes.

Assistants: Today's marketplaces for expert agents

When I examined the four most popular assistants in the US market, I saw that all four have access to a community of domain-specific agents. In other words, generalist assistants are also becoming marketplaces for expert agents.

expert-bots
Disclaimer: Trademarks are only for illustrative purposes.

Assistants: Different system architectures

While the idea behind them is the same, today's assistants use two different models for communicating with domain-specific agents; the code sketch below contrasts the two:

  • Bot-to-bot referral: In this scenario, an agent that people already trust and use (the trusted agent) refers them to a stranger agent. For example, when Cortana receives an intent that exists in its knowledge database, it invites a third-party bot into the conversation.

  • Meta-chatbot: In this scenario, when people ask something of the trusted agent, the agent becomes a mediator between the stranger agent and the user. For example, when people ask something of Siri, it interacts with apps on the back end and returns the result to the person. This way, people only have to interact with Siri, an agent that they already trust.
Derived from Amir Shevat's work. Disclaimer: Trademarks are only for illustrative purposes.
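To make the structural difference concrete, here is a small sketch contrasting the two models; the classes and routing logic are my illustration, not any assistant's actual architecture:

```python
class ExpertBot:
    def __init__(self, name: str, domains: set[str]):
        self.name, self.domains = name, domains

    def handle(self, intent: str) -> str:
        return f"{self.name}: done with '{intent}'."

experts = [ExpertBot("FlightBot", {"book_flight"}),
           ExpertBot("HotelBot", {"book_hotel"})]

def bot_to_bot_referral(intent: str) -> str:
    """Model 1: the trusted agent invites the expert into the conversation,
    so the user ends up talking to the stranger agent directly."""
    for bot in experts:
        if intent in bot.domains:
            return (f"Trusted agent: let me bring in {bot.name}...\n"
                    + bot.handle(intent))
    return "Trusted agent: I don't know anyone who can help with that."

def meta_chatbot(intent: str) -> str:
    """Model 2: the trusted agent mediates; the expert stays behind the
    scenes and the user only ever talks to the agent they already trust."""
    for bot in experts:
        if intent in bot.domains:
            result = bot.handle(intent)  # back-end call, hidden from the user
            return "Trusted agent: all set - " + result.split(": ", 1)[1]
    return "Trusted agent: sorry, I can't do that yet."

print(bot_to_bot_referral("book_hotel"))
print(meta_chatbot("book_hotel"))
```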

Why trust in e-commerce and travel?

Based on my findings earlier in the process, I scoped my research context for trust down to e-commerce for my final experiment.

Informed by a research report on how consumers don't trust travel chatbots with booking their travel, I decided to focus on trip booking, as it is a complex scenario. To understand how trip-planning chatbots work, I reviewed the user experience of 25 travel chatbots on the market.

Disclaimer: Trademarks are only for illustrative purposes.

Scenario-Building Workshop: Travel experiences with multiple agents

To understand people's mental models when they experience a challenge involving multiple actors, I ran a scenario-building workshop with 6 participants. I identified two insights from this workshop:

  1. Participants described travel booking as a fragmented experience with many different actors involved, including themselves, their relatives, friends, apps, websites, and brands.
  2. Some participants found managing their travel after booking it challenging, such as changing a flight date or rebooking accommodations.

The feedback I got from this brief workshop inspired my final design experiment on using ‘experience breakdowns’ and agents as seams, similar to Kevin Gaunt’s project on smart homes. In his work, Kevin used multiple chatbots to create a "seamful" experience, illustrating Mark Weiser’s proposal that experiences should include beautiful seams rather than trying to be seamless.


A Wizard of Oz prototype: Destination Bot 

Scoping down to one fragmented experience, the travel-booking journey, I designed a travel chatbot on Slack and asked 6 college students to test it, without them knowing that humans were pretending to be chatbots behind the curtain. I used the bot-making tool Walkie to design multi-agent conversations by writing sample dialogs.

I used Walkie to write my sample dialogs for a multi-agent conversation. Disclaimer: Trademarks are only for illustrative purposes.

Two collaboration scenarios with multiple agents

While designing Destination Bot, my aim was to compare how trust changes in two agent-collaboration scenarios.

Scenario 1: Bot-to-service composition

The first scenario involved a bot-to-service composition, in which users interacted with different bots to handle various tasks. As part of their role-play, participants were asked to explore Destination 2.0 to book a trip to New Orleans with one of their friends.

Key moments in this scenario:

  • Destination bot behaved as expected.
  • Lodging bot behaved unexpectedly by confirming inaccurate information.
  • Banking bot behaved as expected.
  • Manager bot behaved as expected.

Scenario 2: A negotiation scenario with a meta-chatbot

In the second scenario, participants interacted with a single bot to handle different tasks. As part of their role-play, participants learned that their friend had to come back a day earlier. For this reason, they were asked to change their flight tickets and book a hotel reservation for their trip.

Key moments in this scenario:

  • Destination Bot behaved as expected when changing the flight tickets.
  • When users tried to book a hotel for the first time, Destination Bot gave an error and blamed another bot for it. After this, it behaved as expected.
  • Destination Bot behaved as expected when paying the order total.
  • Destination Bot behaved as expected when surveying customer satisfaction.

The impact (so far)

Presenting @ Voice UX Meetup #3

In October 2018, I presented this project at Voice UX Meetup #3 in the San Francisco Bay Area, hosted by Botsociety. The event was held at Google Launchpad in San Francisco. Overall, it was a great opportunity to present alongside Andrew Ku, a conversation designer at Google. The audience was highly engaged and interested in the project, and one attendee, Chaitrali B, even sketched amazing notes summarizing the talks.


Going Forward

Reflection

  • Trust is complex. Working from high-level design strategies and architectures down to granular visual and conversation design decisions made me realize how vital, yet complex, trust is for establishing and maintaining the relationship between humans and technological artifacts.

  • Conversations are for building trust. Combining trust and conversation into a single model taught me how ‘building’ trust parallels conversing. In other words, I learned how trust becomes the outcome of a conversation.

  • Just enough research is what is necessary. The short timespan of the thesis taught me to judge how much primary and secondary research was necessary to move forward, and to keep researching through my skills in design and making.

  • Trust is going to be more important in the future, and I am just starting... With this thesis project as a foundation for my future work on trust, I believe I still have a lot to learn, test, and verify as a researcher and designer in order to design interactions that users will trust enough to start a conversation.

A rolling to-do list

  • Re-evaluate findings with voice-based UIs
  • Refine the checklist by testing it with more designers
  • Research ways to make the final checklist accessible to designers
  • Share and pass on my knowledge and design methods

Other design experiments I did

Botae - An Incompetent yet Benevolent Chatbot

After my literature review, I decided to test some of the dimensions of trust with a design experiment. I developed a chatbot called Botae on Facebook to see at what levels users are willing to share their personal data and how that relates to trust. Botae was incompetent yet benevolent by design.

I intentionally tested it only with my friends (30 college students), whom I know personally, as I was very aware of the ethical implications of using deception in human-computer interaction research.

Botae Bot Building Process

Botae was my introduction to the realm of conversation design, which I explored by building an interactive conversational user interface. It was also my first online prototyping experience as a novice programmer.
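The thesis doesn't reproduce Botae's source, but for context, a minimal Messenger-style webhook, the kind of plumbing a bot like Botae sits on, could look like the sketch below; the tokens are placeholders, and Botae's actual implementation may have differed:

```python
# Minimal sketch of a Messenger-style echo webhook (Flask).
# Illustrative only - PAGE_ACCESS_TOKEN and VERIFY_TOKEN are placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"
VERIFY_TOKEN = "YOUR_VERIFY_TOKEN"

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger sends a one-time challenge to verify the webhook URL.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event and "text" in event["message"]:
                send_text(event["sender"]["id"],
                          f"You said: {event['message']['text']}")
    return "ok"

def send_text(recipient_id: str, text: str) -> None:
    # Reply through the Graph API Send API.
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)
```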


From Botae, I learnt that...

  • Social influence and referrals are key to users’ trust in accepting stranger agents.
  • If referred by a trusted party, some may (over)trust conversational agents with their data.
  • Some don’t know what agents already know about them.
Quick reply buttons like “Got it” made the conversation end quicker.
Use of certain emojis added more visual credibility.
Botae sending an image to the user.

Surveybot – A bot that surveys for other bots

To learn more about conversational agents and to test my initial assumptions about online shopping experiences, I decided to create another chatbot to survey a potential target user group: university students.


From Surveybot, I learnt that university students...

  • trust agents with mundane, low-value transactions.
  • do not trust agents when it comes to managing valuable assets, human-level understanding, agents’ intents, the level of data privacy they provide, or agents’ memory.
  • sometimes do not want an agent’s help, because of performance issues such as responsiveness and for fear of losing their agency and the joy of shopping.
  • both favor and dislike having a dumb agent; there is a paradox around agent intelligence.
Word clouds: participants’ responses with three adjectives describing the bot they like and the bot they hate.

Want to learn more? Have feedback?

Questions?

You can read more about the details of my process in my thesis documentation, available for download below.

If you are interested in collaboration or want to give feedback about Designing for Trust, please contact me. I'd like to hear your opinion.

2.23 MB PDF via cmu.edu