

Designing for Trust

Building Trust in Conversational Agents for E-commerce Through Collaboration and Personality

Designing for Trust addresses trust issues with conversational e-commerce agents by suggesting a collaboration between different expert agents with unique personalities.

This work is currently in progress. Please check back before May 2018 for more design sprints and initial research findings from my thesis.

Artifact #1 (Botae) Source Code: https://github.com/mericda/botae_v2
Artifact #2 Source Code: TBD
Artifact #3 Source Code: TBD
Critical Design Sprint: Medium Article
Relevant Courses: Programming for Online Prototyping, Understanding Game Engines
Thesis Advisors: Dan Lockton, Daragh Byrne
Expected Publication: May 2018

Starting from concepts such as black-box algorithms, algorithm explainability, and the relationship between deception and trust, I decided to scope down to agents that we, as users, interact with through conversational interfaces. These interfaces make algorithms invisible, and I believe these agents often don't communicate their intent well, which makes them fail to build trust with users.

While researching trust in conversational interfaces, one of my personal goals is to learn how to develop working digital prototypes to test with users, alongside human-centered design research. To do this, I am learning how to prototype zero UIs and web services with Ruby, Sinatra, Node.js, and Dialogflow.

So far, I have explored:

  1. how agent characteristics such as personality, etiquette, and multimedia use affect users' trust regarding data privacy, through a working prototype of a Facebook Messenger bot, and
  2. how introducing the psychological techniques of cold readers (illusionists, fortune tellers, palm readers, etc.) to interaction designers can make them think critically before using such techniques, through a critical design sprint tool.

WIP Artifact #1: Botae

Botae is a Messenger bot that informs its users about their trust level with other bots by walking them through a food recommendation scenario. It finds the best nearby places for food and coffee using the user's location, and promises to find the most popular places among their Facebook friends by accessing their Facebook data.

What can it do for users?

Botae works similarly to Surebot.io. As users go through the flow of getting recommendations for nearby places by providing their location, it aims to establish initial trust by working as they expect. It then seeks users' consent to access their Facebook data by asking them to click a pseudo-authorization button. After users 'authorize', it reveals its real intention: showing users how easily they give access to their data.
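
The pseudo-authorization step can be sketched as a Messenger button template. The helper, button titles, and payload names below are illustrative, not Botae's actual code; in the real bot, a hash like this would be passed to `message.reply` via the facebook-messenger gem.

```ruby
# Sketch: build the Messenger button-template payload for the
# pseudo-authorization step. Names and copy are illustrative.
def pseudo_auth_prompt(question)
  {
    attachment: {
      type: 'template',
      payload: {
        template_type: 'button',
        text: question,
        buttons: [
          { type: 'postback', title: 'Allow access', payload: 'FB_AUTHORIZE' },
          { type: 'postback', title: 'No, thanks',   payload: 'FB_DECLINE' }
        ]
      }
    }
  }
end

prompt = pseudo_auth_prompt('Can I check your Facebook friends for popular places?')
```

Clicking the 'Allow access' button never actually reads any data; the postback handler simply reveals the bot's real intent.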


What can it do for the researcher?

Botae is primarily a bot conversation research tool. Because its replies are tied to numerous conversation flows with slightly different content, it enables testing different dimensions of content, such as personality, etiquette, and the use of media like GIFs, emojis, and photos, for gaining user trust in relation to persuasive design.

Because Botae keeps a log of user actions, it also serves as a point of data collection. It provides insights into how many participants used the system and what level of trust they had with it. In the current scenario, the level of trust is measured as:

Not Trusted: Users do not give access to any of their data.
Low Trust: Users give access only to their location data.
Medium Trust: Users give access only to their Facebook data.
High Trust: Users approve access to both types of data.
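
Derived from the logged consents, this measurement can be sketched as a small Ruby method (method and flag names are illustrative, not Botae's actual code):

```ruby
# Sketch: map the two logged consents to a trust level.
def trust_level(location_granted, fb_granted)
  if location_granted && fb_granted
    'High Trust'
  elsif fb_granted
    'Medium Trust'
  elsif location_granted
    'Low Trust'
  else
    'Not Trusted'
  end
end
```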


Changing Personalities, Faces as Words

By default, Botae is smart, somewhat poker-faced, and caring. Its most important characteristic is being poker-faced, a little mysterious, until it builds up trust with its user. It is task-driven, but also has a sense of humor, especially when things don't go as planned.

Since it can't understand many general commands that people may expect from a general-purpose bot such as Alexa or Siri, it is forgiving: it informs the user about what it can do. No matter how people interact with it, it is polite.
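
This forgiving fallback can be sketched as a simple check on the intent the NLP step extracts. The intent names and reply copy below are illustrative, not Botae's actual vocabulary:

```ruby
# Sketch: a polite fallback for intents the NLP step can't match.
KNOWN_INTENTS = %w[find_food find_coffee tell_me_more].freeze

def reply_for(intent)
  if KNOWN_INTENTS.include?(intent)
    'On it!'
  else
    # Instead of a dead end, explain what the bot can actually do.
    "Sorry, I can only help you find food and coffee nearby. " \
    "Try asking for a place to eat, or say 'tell me more'."
  end
end
```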


How does it work?

Botae uses Facebook Messenger as its platform. It is powered by several Ruby gems and a PostgreSQL database hosted on Heroku. Its technology stack is as follows:

  • Sinatra gem as the main web app structure.
  • Facebook Messenger API and Graph API, through the facebook-messenger gem and the Rubotnik boilerplate.
  • Facebook's Wit.AI NLP for understanding natural language and turning user intentions into actionable entities.
  • GMaps API for location inquiries, via the httparty and json gems.
  • Puma as a basic web server.
  • PostgreSQL database through the pg and ActiveRecord gems.
  • Heroku for hosting the app and other back-end tasks.
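
The stack above maps to a Gemfile roughly like the sketch below; gem versions are omitted and the authoritative list lives in the linked repository.

```ruby
# Gemfile (sketch; see the linked repo for the authoritative version)
source 'https://rubygems.org'

gem 'sinatra'            # main web app structure / webhook endpoint
gem 'facebook-messenger' # Messenger Send/Receive API wrapper
gem 'httparty'           # GMaps API requests
gem 'json'               # parsing API responses
gem 'puma'               # basic web server
gem 'pg'                 # PostgreSQL driver
gem 'activerecord'       # ORM over PostgreSQL
```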

Dialog Flow

In its first version, Botae consists of two flows: a main flow and a persuasion flow. If the user needs more explanation before entering the main flow to try its "functions", a separate 'persuasion' flow gives information at different levels of detail. In the next iteration, I will combine the two flows into one, so that users can get answers about how the system works in context.
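
The routing between the two flows can be sketched as a tiny state machine; the state and intent names here are illustrative, not Botae's actual flow labels.

```ruby
# Sketch: routing a user between the main flow and the persuasion flow.
def next_state(state, intent)
  case [state, intent]
  when [:start, :get_recommendation]  then :main_flow
  when [:start, :tell_me_more]        then :persuasion_flow
  when [:persuasion_flow, :convinced] then :main_flow
  else state # stay in the current flow on anything unrecognized
  end
end
```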


Data Structure

I also experimented with a PostgreSQL database for reading all of the bot's responses and for storing user data and replies.
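
A minimal version of that schema could look like the ActiveRecord migration sketched below. Table and column names are my illustration, not Botae's actual schema; the real structure is in the linked repository.

```ruby
# Sketch: a minimal schema for stored bot replies and logged user actions.
class CreateBotTables < ActiveRecord::Migration[5.1]
  def change
    create_table :bot_replies do |t|
      t.string :flow    # e.g. 'main' or 'persuasion'
      t.string :step    # position within the conversation flow
      t.text   :content # reply text or media reference
    end

    create_table :user_actions do |t|
      t.string  :user_id
      t.string  :intent           # intent extracted by Wit.AI
      t.boolean :location_granted # consent to location data
      t.boolean :fb_granted       # consent to Facebook data
      t.timestamps
    end
  end
end
```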


Initial User/Participant Reactions

While I was developing Botae, I was able to test it with my close friends and record their initial reactions to specific patterns, such as bouncing back their profile information or not replying in the way they expected. For example, when I bounced back their public profile data, many of my friends were shocked and questioned the intentions of the bot.

Learnings from Botae

So far, while developing Botae, I have learned:

  • The privacy paradox is real, but people are also hesitant to start a conversation with a "stranger" agent, even if the agent promises them value in exchange for their data.
  • The main reason my pilot study participants gave their permissions was that they trusted me; social influence is crucial when people have to try and interact with a new agent.
  • The "Tell me more" flow shouldn't be a separate flow; a bot should ideally provide an in-context explanation when users ask.
  • Although FB Messenger helps users overcome onboarding friction, they tend to end the conversation quickly, especially if the chatbot is task-oriented. This inspired me to make the conversation intentionally long.

© 2017 Meriç Dağlı