
SocialVR Lab

Virtual Reality Authoring Tool

Easy-to-use Virtual Reality Authoring Platform

A browser-based virtual reality authoring tool designed for non-technical users and youth to create immersive stories.

This work is currently in progress.

Role: research, VR/editor UI/UX 

Team: Ali Momeni, Aparna Wilder, Luke Anderson, Ben Scott, Molly Bernsten, and Vikas Yadav

About SocialVR Lab: SocialVR started at ArtFab in the School of Art at Carnegie Mellon University in 2015. Currently, it is being developed under IRL Labs with the goal of securing seed funding for commercialization.

Side Project: Spring 2017 - Ongoing

SocialVR Lab uses simple drag-and-drop interactions to combine 360° photographs and videos with user-generated annotations. Stories can be viewed with a low-cost VR headset, using an Android app.

We launched our first stable beta MVP last year and have been user testing it at EdTech events and in-class workshops in Pittsburgh.

For now, you can create stories with our beta web editor and view them with the SocialVR beta app from the Google Play Store.

Users create their immersive stories by adding text, images, and audio on top of a 360° panorama image, which forms a room. They can then connect multiple rooms by creating doors between them.
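A minimal sketch of how such a story could be modeled as rooms connected by doors. All type and field names here are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical data model for a SocialVR-style story: rooms built on 360°
// panoramas, annotated with hotspots and linked together by doors.

interface Hotspot {
  position: { yaw: number; pitch: number }; // placement on the panorama sphere
  content:
    | { kind: "text"; body: string }
    | { kind: "image"; url: string }
    | { kind: "audio"; url: string };
}

interface Door {
  position: { yaw: number; pitch: number };
  targetRoomId: string; // the room this door leads to
}

interface Room {
  id: string;
  panoramaUrl: string; // the 360° background image
  hotspots: Hotspot[];
  doors: Door[];
}

interface Story {
  title: string;
  startRoomId: string; // where the viewer begins
  rooms: Room[];
}
```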

Storytellers can enhance their stories by guiding the reader through a room with narration, or by adding a soundtrack and gaze-activated sounds to give the story a more uniform feel.

Currently, I am helping the team launch an all-mobile, browser-based editor and viewer by Fall 2017.

Ongoing Design Process

When we started to work on SocialVR Lab, our main focus was creating two separate experiences: a web-browser experience for creating stories and a Unity-based mobile app for viewing them. We therefore started by exploring interaction styles with annotation hotspots for the mobile app.

Since one goal of SocialVR Lab is accessibility, we designed gaze-only interactions, which do not require any physical interface such as a button. With the help of our VR developer, we were able to quickly preview our reticle and hotspot designs on the device and iterate on them.
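To make the mechanic concrete, here is a small sketch of a dwell-based gaze timer, written as plain TypeScript rather than the project's actual code. The dwell duration and the activate callback are assumptions for illustration:

```typescript
// Dwell-based ("gaze-only") activation: a frame loop reports which hotspot
// the reticle currently rests on; holding the gaze for DWELL_MS triggers
// the hotspot without any button press.

const DWELL_MS = 1500; // assumed dwell duration, not the project's value

class GazeSelector {
  private gazedId: string | null = null;
  private gazeStart = 0;
  private fired = false;

  // Call once per frame with the hotspot under the reticle (or null).
  // Returns dwell progress in [0, 1], useful for animating the reticle.
  update(hotspotId: string | null, now: number, activate: (id: string) => void): number {
    if (hotspotId !== this.gazedId) {
      // Gaze moved to a new target (or off all targets): restart the timer.
      this.gazedId = hotspotId;
      this.gazeStart = now;
      this.fired = false;
      return 0;
    }
    if (hotspotId === null || this.fired) return this.fired ? 1 : 0;
    const progress = (now - this.gazeStart) / DWELL_MS;
    if (progress >= 1) {
      this.fired = true; // fire exactly once per continuous gaze
      activate(hotspotId);
      return 1;
    }
    return progress;
  }
}
```

The returned 0-to-1 progress can drive a filling-ring animation on the reticle, giving the viewer feedback before the hotspot activates.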

In parallel, we defined a visual design system to create a uniform look across platforms and mediums.

We also explored different layouts for the editor. We researched best practices for designing web-based editors and proposed familiar patterns such as drag-and-drop interactions for backgrounds and bounding boxes for active hotspots.
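The drag-and-drop pattern for backgrounds could look roughly like the sketch below, using the standard DOM drag events. The element ID and the setPanorama call are hypothetical, stand-ins for whatever the editor actually exposes:

```typescript
// Dropping an image file onto the editor canvas to set a room's
// 360° background. Uses the standard HTML5 drag-and-drop events.

declare function setPanorama(url: string): void; // assumed editor API

const canvas = document.querySelector<HTMLElement>("#editor-canvas")!;

canvas.addEventListener("dragover", (e: DragEvent) => {
  e.preventDefault(); // required so the browser allows the drop
});

canvas.addEventListener("drop", (e: DragEvent) => {
  e.preventDefault();
  const file = e.dataTransfer?.files[0];
  if (file && file.type.startsWith("image/")) {
    // Create a local URL for the dropped file and swap in the background.
    const url = URL.createObjectURL(file);
    setPanorama(url);
  }
});
```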

Our early explorations for the editor layout included drop-down and slide-up style layouts with fixed menus. We decided against a fixed menu bar because it would not work on smaller displays; to make it responsive across screen sizes, we would have had to shrink the working area.

We also proposed additional functions such as undo/redo, help pages, and sharing and downloading a created story.

Some of the functions we proposed were implemented and merged as we moved to a cloud-powered back end. Before the AWS integration, users downloaded their stories as .zip files and transferred them to their devices for viewing. Our later layout explorations also included slide-in/out panels, which we iterated on in our latest proposal.

Our latest design iteration for the editor includes a content drawer that lets users manage their uploaded assets across their stories. We decided to go with multi-directional slide-in panels, in which each direction has a different function.

The left panel is for managing stories, the right panel for managing user details, and the bottom panel for managing assets. We also used our logo as an escape button: by clicking on it, users can learn about the project or ask for help.
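A minimal sketch of how the multi-directional panels could be wired up, assuming one panel open at a time so the working area stays visible. The element IDs and the "open" CSS class are illustrative, not the editor's actual markup:

```typescript
// Three slide-in panels, one per direction, each with its own function.
type PanelSide = "left" | "right" | "bottom";

const panels: Record<PanelSide, HTMLElement> = {
  left: document.querySelector("#stories-panel")!,  // manage stories
  right: document.querySelector("#account-panel")!, // manage user details
  bottom: document.querySelector("#assets-panel")!, // manage uploaded assets
};

function togglePanel(side: PanelSide): void {
  // Open the requested panel (or close it if already open) and close the rest.
  for (const [key, el] of Object.entries(panels) as [PanelSide, HTMLElement][]) {
    el.classList.toggle("open", key === side && !el.classList.contains("open"));
  }
}
```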

Thanks for scrolling!

© 2017 Meriç Dağlı