A browser-based virtual reality authoring tool designed for non-technical users and youth to create immersive stories.
This work is currently in progress.
While there are many technology-driven solutions for education that aim to make students better storytellers, few are both designed for immersive storytelling and accessible to students. With the increasing availability of smartphones and inexpensive virtual reality headsets, our challenge was to create a simple tool that would improve the storytelling skills of students across grade levels.
To enable students to tell their stories more immersively, SocialVR Lab uses simple drag-and-drop interactions to combine 360° photographs and videos with user-generated annotations. Stories can be viewed with a low-cost VR headset such as Google Cardboard, using an Android app.
We launched our first iteration last year and have been user testing it at EdTech events and in-class workshops in the US.
Users create their immersive stories by adding text, images, and audio on top of a 360° panorama image, which forms a room. They can then connect multiple rooms by creating doors between them.
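The room-and-door structure described above can be sketched as a simple data model. This is an illustrative sketch, not SocialVR Lab's actual schema; all type and function names here are hypothetical.

```typescript
// Hypothetical data model for a story: rooms built on 360° panoramas,
// annotations layered on top, and doors linking rooms together.
interface Annotation {
  kind: "text" | "image" | "audio";
  content: string;       // text body or asset URL
  yaw: number;           // horizontal placement on the panorama, in degrees
  pitch: number;         // vertical placement, in degrees
}

interface Door {
  targetRoomId: string;  // the room this door leads to
  yaw: number;
  pitch: number;
}

interface Room {
  id: string;
  panoramaUrl: string;   // 360° background photo or video
  annotations: Annotation[];
  doors: Door[];
}

interface Story {
  title: string;
  rooms: Room[];
}

// Connect two rooms with a pair of doors so readers can move back and forth.
function connectRooms(a: Room, b: Room, yaw = 0, pitch = 0): void {
  a.doors.push({ targetRoomId: b.id, yaw, pitch });
  b.doors.push({ targetRoomId: a.id, yaw, pitch });
}
```

Modeling doors as one-way links that are created in pairs keeps single-direction passages possible while making the common two-way case a single call.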
Storytellers can enhance their stories by guiding the reader through a room with narration, or by adding a soundtrack and gaze-activated sounds to create a more uniform story through audio.
Currently, I am helping the team launch an all-mobile, browser-based editor and viewer by Fall 2017.
DECEMBER 2017 UPDATE
Create and view your immersive stories with our beta platform on both desktop and mobile.
When we started working on SocialVR Lab, it consisted of two separate experiences: a web-browser editor for creating stories and a Unity-based mobile app for viewing them. Our first design exploration focused on interaction styles for the annotation hotspots in the mobile app.
Since the most important goal of SocialVR Lab is accessibility, we designed gaze-only interactions that do not require any physical input such as Google Cardboard's button. With the help of our VR developer, we were able to view our reticle and hotspot designs on the device quickly and iterate on them.
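Gaze-only activation of this kind is commonly implemented as a "fuse" timer: looking at a hotspot fills the reticle, and the hotspot fires once the dwell time crosses a threshold. The sketch below is a minimal, hypothetical version of that pattern (class and method names are my own, and the 1.5-second threshold is an assumed default, not a documented SocialVR Lab value).

```typescript
// Minimal gaze-"fuse" sketch: the host app calls update() each frame with
// whether the reticle is over a hotspot, and the frame time in milliseconds.
class GazeFuse {
  private dwellMs = 0;

  constructor(private readonly thresholdMs: number = 1500) {}

  // Returns true on the single frame the hotspot activates.
  update(gazingAtHotspot: boolean, deltaMs: number): boolean {
    if (!gazingAtHotspot) {
      this.dwellMs = 0; // looking away resets the fuse
      return false;
    }
    this.dwellMs += deltaMs;
    if (this.dwellMs >= this.thresholdMs) {
      this.dwellMs = 0; // fire once, then reset
      return true;
    }
    return false;
  }

  // Fraction of the fuse completed (0..1), useful for animating the reticle fill.
  progress(): number {
    return Math.min(this.dwellMs / this.thresholdMs, 1);
  }
}
```

Exposing `progress()` separately lets the reticle animation communicate how long the user must keep gazing, which is important feedback when there is no physical button to press.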
In parallel, we began defining a visual design system to create a uniform look across platforms and mediums.
We explored different layouts for the editor, researched best practices, and used familiar patterns such as drag-and-drop interactions for backgrounds and bounding boxes for active hotspots.
Our early explorations for the editor layout included drop-down and slide-up style layouts with fixed menus. We decided against a fixed menu bar because it would not work on smaller displays; to make it work, we would have had to shrink the working area and make it responsive.
We also proposed additional functions such as undo/redo, help pages, and sharing and downloading a created story.
Some of the proposed functions were implemented and merged as we moved to a cloud-powered back-end. Before the AWS integration, users downloaded their stories as .zip files and transferred them to their devices to view them. Our later layout explorations also included slide-in/out panels, which we iterated on in our latest proposal.
Our latest iteration of the editor included a content drawer that lets users manage their uploaded assets across stories. We chose multi-directional slide-in panels and assigned a different function to each direction.
The left panel manages stories, the right panel manages user details, and the bottom panel manages assets. We also used our logo as an escape button, through which users can learn about the project or ask for help.
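The direction-to-function mapping above can be captured in a small piece of state. This is an illustrative sketch only; the panel names are from the text, but the single-open-panel behavior is an assumption I've made to keep the working area maximized, not confirmed behavior.

```typescript
// Each slide-in direction owns one editor function.
type PanelSide = "left" | "right" | "bottom";

const panelFunctions: Record<PanelSide, string> = {
  left: "manage stories",
  right: "manage user details",
  bottom: "manage assets",
};

// Assumed behavior: at most one panel is open at a time, so opening a
// panel implicitly closes any other.
class PanelState {
  private open: PanelSide | null = null;

  toggle(side: PanelSide): void {
    this.open = this.open === side ? null : side;
  }

  isOpen(side: PanelSide): boolean {
    return this.open === side;
  }
}
```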