VR Proprioception Experiment

Experimental flow of the VR Proprioception Experiment


An experiment to gauge how effectively people grab objects outside their field of view in VR. We use proprioception daily to interact with objects outside our vision, yet it proves challenging in a virtual environment. This loss of proprioception is one way our physical abilities do not carry over to the virtual world.

The Problem

Virtual Reality (VR) provides a wonderful escape to simulated environments, but at the cost of not engaging all our senses. Research has yet to address how to effectively trick our sense of touch in VR environments. Combine this with the restricted peripheral vision of a head-mounted display, and the result is losing one's sense of proprioception: the awareness of the position and movement of one's own body.

In the real world, we often interact with objects outside our field of view (FoV); for example, reaching for our phones while fixated on a TV show. Such kinesthetic interactions are very challenging to execute in VR. Imagine a VR shooter where, at a specific point, one has to fend off zombies while turning a wheel to open a door. The task is often split: shoot the zombies first, then open the door, or vice versa. Multitasked interactions with objects outside our FoV are manageable in reality but become challenging in VR.

The Experiment

A quick experiment was designed to test how well VR users grab objects outside their FoV. In each trial, an object is placed outside the FoV while the user is forced to stare at an object in front of them. The user then reaches for the object and brings it into their FoV to complete the trial. We measured variables such as the number of grab attempts per object and the time it took to bring the object forward. We varied the following conditions in the experiment:
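The per-trial measurements described above can be sketched as a small log record. This is a minimal illustration, not the study's actual schema; the field and method names are assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TrialLog:
    """Per-trial measurements. Field names are illustrative assumptions."""
    condition: tuple                  # e.g., (proprioception, feedback, training)
    grab_attempts: int = 0            # number of grab attempts on the object
    start_time: float = field(default_factory=time.monotonic)
    completion_time: float = None     # seconds until the object is brought into the FoV

    def record_grab_attempt(self):
        # Called each time the user closes the grip on (or near) the object.
        self.grab_attempts += 1

    def complete(self):
        # Called when the object enters the user's FoV, ending the trial.
        self.completion_time = time.monotonic() - self.start_time
```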

Proprioception - Body vs. Environment: Grabbing an object attached to one's body is different than grabbing an object farther in the environment. Games like Arizona Sunshine and Hot Dogs, Horseshoes & Hand Grenades are examples of inventory systems that are attached to the player's virtual body.
Feedback - Haptic vs. None: Haptic feedback (i.e., controller vibration) remains the predominant mass-produced method of indicating an event. Haptics play a strong role in recognizing a touch event during the trials, hence we expect that the presence of vibration allows for a higher number of successful grabs.
Training - Allowed vs. Disallowed: Proprioception and muscle memory go hand in hand. If we allow users to practice grabbing the object outside the FoV before the trial (i.e., training), they should build a limited amount of muscle memory that allows for more successful grabs compared to no training at all.

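The three two-level factors above form a 2×2×2 design. A common way to run such a study is to repeat every condition combination and shuffle the trial order; the sketch below shows one way to do that. The level names, repetition count, and seed are assumptions for illustration.

```python
import itertools
import random

# Hypothetical level names mirroring the three factors above.
PROPRIOCEPTION = ["body", "environment"]
FEEDBACK = ["haptic", "none"]
TRAINING = ["allowed", "disallowed"]

def build_trial_order(trials_per_condition=3, seed=None):
    """Return a shuffled list of trials covering the full 2x2x2 design."""
    conditions = list(itertools.product(PROPRIOCEPTION, FEEDBACK, TRAINING))
    trials = conditions * trials_per_condition   # every combination repeated
    rng = random.Random(seed)
    rng.shuffle(trials)                          # randomize presentation order
    return trials

# 8 condition combinations x 3 repetitions = 24 trials
trials = build_trial_order(trials_per_condition=3, seed=42)
```

Shuffling the full repeated list (rather than randomizing each trial independently) keeps the design balanced: every condition appears the same number of times.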
Each object's position is randomly selected from either a location adjacent to the body (like a body inventory) or farther behind the player as part of the environment (like pulling a lever behind the avatar). All possible locations are shown in these three figures:

Front View
Top View
Side View
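The body-versus-environment placement described above can be sketched as sampling from two sets of candidate anchor points. The coordinates below are purely illustrative assumptions; the actual experiment's spawn locations are those shown in the figures.

```python
import random

# Illustrative spawn points in meters (x right, y up, z forward, player at origin).
# These coordinates are assumptions, not the experiment's actual anchor points.
BODY_SLOTS = [(-0.25, 1.0, 0.0), (0.25, 1.0, 0.0), (0.0, 1.3, -0.2)]   # on/near the avatar
ENV_SLOTS = [(-0.6, 1.2, -0.8), (0.6, 1.2, -0.8), (0.0, 1.5, -1.0)]    # behind the player

def sample_object_position(proprioception_condition, rng=random):
    """Pick a spawn point outside the FoV for the given proprioception condition."""
    slots = BODY_SLOTS if proprioception_condition == "body" else ENV_SLOTS
    return rng.choice(slots)
```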

The Outcome

This experiment was conducted under the University of Illinois at Urbana-Champaign Class Assignment IRB policy (TL;DR: "do the experiment, but you can't publish your results").

There was plenty of conversation and feedback! A couple of important points brought up:
1) Proprioception relative to one's body is much easier to execute than proprioception relative to the environment. This is why VR inventory systems attached to the player's body become usable with enough training and muscle memory. Everyone had a rough time reaching for objects farther than their own bodies!
2) 360° tracking is still needed to precisely track objects behind us (such interactions are common: backpacks, swords, levers behind us, etc.), so inside-out tracking systems face limitations when controllers fall outside the cameras' field of view.