Integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
First Claim
1. A system for creating a multi-user interactive virtual environment, the system including:
a plurality of wearable sensor systems for displaying a virtual reality to a user wearing the system, including a first wearable sensor system engaged by a first user and a second wearable sensor system engaged by a second user;
a plurality of cameras, wherein at least one camera is electronically coupled to each wearable sensor system of the plurality of wearable sensor systems; and
one or more processors coupled to a memory storing instructions for creating a multi-user interactive virtual environment, which instructions, when executed by the one or more processors, perform:
capturing a first video stream of a real world space using at least one camera electronically coupled to the first wearable sensor system engaged by the first user;
capturing a second video stream of the real world space using at least one camera electronically coupled to the second wearable sensor system engaged by the second user;
using images from the first and second video streams captured by the camera coupled to the first wearable sensor system and by the camera coupled to the second wearable sensor system, generating respective three-dimensional maps of the real world space by extracting one or more feature values from the real world space from the first and second video streams;
determining motion information of the first and second wearable sensor systems with respect to each other based on a comparison between the respective three-dimensional maps of the real world space;
extracting body portion movement information captured by the first and second wearable sensor systems with respect to a moving reference frame, including:
repeatedly determining movement information for the wearable sensor system and at least one body portion at successive times to form a sequence of movement information; and
analyzing the sequence of movement information formed to determine a path of the body portion;
tracking movement of the body portion along the path of the body portion over a region of the real world space;
comparing the path tracked to a plurality of path templates and identifying a template that best matches the path; and
using the template that matches the path to obtain control information to control an external system.
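The last two steps of the claim, comparing a tracked path against a plurality of path templates and using the best match to obtain control information, can be sketched as follows. This is only an illustrative reading: the template names, the command table, and the mean-point-distance metric are hypothetical, and a practical matcher would first resample paths to a common length and likely use a more robust comparison such as dynamic time warping.

```python
import math

# Hypothetical path templates: each gesture is a short polyline of
# (x, y) points. A real system would store many templates per gesture.
PATH_TEMPLATES = {
    "swipe_right": [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
    "swipe_up":    [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],
}

# Hypothetical mapping from the best-matching template to control
# information for an external system.
CONTROL_INFO = {
    "swipe_right": "next_slide",
    "swipe_up":    "volume_up",
}

def mean_point_distance(path_a, path_b):
    """Average Euclidean distance between corresponding points.
    Assumes both paths were already resampled to the same length."""
    return sum(math.dist(p, q) for p, q in zip(path_a, path_b)) / len(path_a)

def best_matching_template(tracked_path):
    """Compare the tracked path to every template and return the name
    of the template with the smallest mean point distance."""
    return min(PATH_TEMPLATES,
               key=lambda name: mean_point_distance(tracked_path,
                                                    PATH_TEMPLATES[name]))

def control_for_path(tracked_path):
    """Obtain control information from the best-matching template."""
    return CONTROL_INFO[best_matching_template(tracked_path)]
```

Under this sketch, a noisy rightward swipe such as `[(0.0, 0.0), (0.48, 0.02), (1.0, -0.01)]` resolves to the `swipe_right` template and yields the `next_slide` control command.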
Abstract
The technology disclosed relates to tracking motion of a wearable sensor system using a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. In particular, it relates to capturing gross features and feature values of a real world space using RGB pixels and capturing fine features and feature values of the real world space using IR pixels. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. It also relates to capturing different sceneries of a shared real world space from the perspectives of multiple users. It further relates to sharing content between wearable sensor systems. It further relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video streams to a second user of a wearable sensor system.
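One way to picture the RGB/IR split described in the abstract: coarse block statistics over RGB intensities give gross features of the space, while per-pixel gradients in the IR channel pick out fine, edge-like detail. The sketch below, on toy intensity grids, is purely illustrative; the actual feature-extraction methods of the disclosed system are not specified here, and the function names and thresholds are invented.

```python
def gross_features(rgb_intensity, block=2):
    """Gross features from RGB pixels: the average intensity of each
    block x block tile (a deliberately coarse summary of the scene)."""
    n = len(rgb_intensity)
    feats = []
    for i in range(0, n, block):
        for j in range(0, n, block):
            tile = [rgb_intensity[a][b]
                    for a in range(i, i + block)
                    for b in range(j, j + block)]
            feats.append(sum(tile) / len(tile))
    return feats

def fine_features(ir_intensity, threshold=0.5):
    """Fine features from IR pixels: coordinates where the horizontal
    intensity gradient exceeds a threshold (edge-like detail)."""
    feats = []
    for i, row in enumerate(ir_intensity):
        for j in range(len(row) - 1):
            if abs(row[j + 1] - row[j]) > threshold:
                feats.append((i, j))
    return feats
```

For a 4x4 RGB grid split into four 2x2 tiles, `gross_features` returns four tile means; `fine_features` returns only the pixel positions where the IR signal changes sharply.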
15 Claims
1. (Set out in full above as the first claim.) Dependent claims: 2–11.
12. A method for creating a multi-user interactive virtual environment, the method including:
capturing a first video stream of a real world space using at least one camera electronically coupled to a first wearable sensor system engaged by a first user;
capturing a second video stream of the real world space using at least one camera electronically coupled to a second wearable sensor system engaged by a second user;
using images from the first and second video streams captured by the camera coupled to the first wearable sensor system and by the camera coupled to the second wearable sensor system, generating respective three-dimensional maps of the real world space by extracting one or more feature values from the real world space from the first and second video streams;
determining motion information of the first and second wearable sensor systems with respect to each other based on a comparison between the respective three-dimensional maps of the real world space;
extracting body portion movement information captured by the first and second wearable sensor systems with respect to a moving reference frame, including:
repeatedly determining movement information for the wearable sensor system and at least one body portion at successive times to form a sequence of movement information; and
analyzing the sequence of movement information formed to determine a path of the body portion;
tracking movement of the body portion along the path of the body portion over a region of the real world space;
comparing the path tracked to a plurality of path templates and identifying a template that best matches the path; and
using the template that matches the path to obtain control information to control an external system.
Dependent claim: 13.
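The step of determining motion information of the two wearable sensor systems with respect to each other "based on comparison between the respective three-dimensional maps" can be pictured as aligning features the two maps share. Below is a deliberately simplified sketch that estimates only the relative translation between the two systems from co-observed feature points; a real system would also solve for rotation (e.g. via least-squares rigid alignment) and update the estimate over time. The map layout and all names are hypothetical.

```python
def relative_translation(map_a, map_b):
    """Estimate the translation offset between two sensor systems'
    three-dimensional maps using feature points observed in both.
    Each map is {feature_id: (x, y, z)} in that system's own frame.
    For a pure translation t, shared points satisfy p_a = p_b + t,
    so t is the average componentwise difference over shared features."""
    shared = map_a.keys() & map_b.keys()
    if not shared:
        raise ValueError("no co-observed features to compare")
    diffs = [tuple(a - b for a, b in zip(map_a[f], map_b[f]))
             for f in shared]
    n = len(diffs)
    return tuple(sum(d[k] for d in diffs) / n for k in range(3))
```

If both maps see the same room corner and the same lamp, and every shared feature is displaced by the same vector between the two frames, that vector is the estimated relative offset of the two sensor systems.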
14. A non-transitory computer readable medium storing instructions for creating a multi-user interactive virtual environment, which instructions when executed by one or more processors perform:
capturing a first video stream of a real world space using at least one camera electronically coupled to a first wearable sensor system engaged by a first user;
capturing a second video stream of the real world space using at least one camera electronically coupled to a second wearable sensor system engaged by a second user;
using images from the first and second video streams captured by the camera coupled to the first wearable sensor system and by the camera coupled to the second wearable sensor system, generating respective three-dimensional maps of the real world space by extracting one or more feature values from the real world space from the first and second video streams;
determining motion information of the first and second wearable sensor systems with respect to each other based on a comparison between the respective three-dimensional maps of the real world space;
extracting body portion movement information captured by the first and second wearable sensor systems with respect to a moving reference frame, including:
repeatedly determining movement information for the wearable sensor system and at least one body portion at successive times to form a sequence of movement information; and
analyzing the sequence of movement information formed to determine a path of the body portion;
tracking movement of the body portion along the path of the body portion over a region of the real world space;
comparing the path tracked to a plurality of path templates and identifying a template that best matches the path; and
using the template that matches the path to obtain control information to control an external system.
Dependent claim: 15.
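The "moving reference frame" language in these claims matters: the cameras ride on the wearer's head, so the body portion's path must be separated from the wearer's own motion. A minimal sketch of forming that sequence of movement information at successive times, assuming world-frame positions for both the sensor system and the tracked body portion are already available (a hypothetical data layout, not the disclosed tracking pipeline):

```python
def body_path_in_moving_frame(sensor_positions, body_positions):
    """At each successive time, subtract the wearable sensor system's
    position from the body portion's position, yielding the body
    portion's path expressed in the sensor's (moving) reference frame."""
    return [tuple(b - s for b, s in zip(body, sensor))
            for sensor, body in zip(sensor_positions, body_positions)]
```

For example, if the head moves forward while the hand stays still in the room, the hand appears to drift backward in the moving frame; a path analyzer downstream must account for exactly this effect.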
Specification