User interface for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
Abstract
The technology disclosed relates to user interfaces for controlling augmented reality environments. Real and virtual objects can be seamlessly integrated to form an augmented reality by tracking motion of one or more real objects within view of a wearable sensor system using a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. In particular, it relates to capturing different sceneries of a shared real world space from the perspective of multiple users. The technology disclosed further relates to sharing content between wearable sensor systems. In particular, it relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video streams to a second user of the wearable sensor system.
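As an illustrative aside, the sketch below shows one way RGB and IR pixels from registered frames could be combined to isolate a nearby hand, as the abstract describes. It is a minimal sketch only, assuming aligned uint8 frames and a crude skin-tone heuristic; the function name and threshold are hypothetical and are not taken from the patent.

```python
import numpy as np

def segment_hand(rgb: np.ndarray, ir: np.ndarray, ir_threshold: int = 120) -> np.ndarray:
    """Return a boolean mask of likely hand pixels.

    Assumes `rgb` is an (H, W, 3) uint8 frame and `ir` is an (H, W) uint8
    infrared frame registered to the same viewpoint. A hand close to the
    IR emitter appears bright in `ir`; the RGB channels are used only to
    reject bright background surfaces with a very rough skin-tone test.
    """
    ir_mask = ir > ir_threshold                  # bright = close to the IR emitter
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin_like = (r > g) & (r > b)                # crude skin heuristic (assumption)
    return ir_mask & skin_like

if __name__ == "__main__":
    rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    ir = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
    print(f"{segment_hand(rgb, ir).sum()} candidate hand pixels")
```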
Claims (19)
1. A method of tracking motion of a wearable sensor system, the method including:
capturing a video stream of a scene of a real world space using at least one camera electronically coupled to the wearable sensor system;

detecting, from the captured video stream, (i) a hand of a user, (ii) multiple electronic devices included in the scene of the real world space, and (iii) one or more features of the detected hand;

determining at least one point on the hand to which a virtual device including a contextual menu can be affixed, the virtual device providing a virtual interface for interacting with the multiple electronic devices included in the scene of the real world space;

generating for display, a presentation output based on information from the at least one camera, the presentation output including (i) a visual rendering of the hand and (ii) at least one instance of the virtual device affixed to the visual rendering of the hand, wherein the contextual menu included in the virtual device includes menu items facilitating a control interface for changing operational modes of the multiple electronic devices included in the scene of the real world space; and

responsive to a selection of one of the menu items of the contextual menu by a detected finger gesture to select an operation of a particular electronic device of the multiple electronic devices, (i) updating the contextual menu included in the virtual device to include menu items specifically for the particular electronic device and (ii) detecting, from the captured video stream, a finger gesture made freely in 3D space and determining that the finger gesture detected from the captured video stream and made freely in the 3D space indicates a request to interact with the particular electronic device of the multiple electronic devices included in the scene of the real world space.

Dependent claims: 2-12.
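The final two limitations of claim 1 describe a two-level contextual menu: a device is chosen first, after which the menu repopulates with items specific to that device. A minimal sketch of that state change, with hypothetical device and operation names and no claim to being the patented implementation, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDevice:
    """Hand-affixed menu: starts with one item per detected device, then
    swaps to device-specific operations once a device is selected."""
    devices: dict[str, list[str]]               # device name -> its operations
    selected: str | None = None
    menu_items: list[str] = field(init=False)

    def __post_init__(self) -> None:
        self.menu_items = list(self.devices)    # top level: the devices themselves

    def select(self, item: str) -> None:
        if self.selected is None:
            # First selection picks a device; repopulate the menu with
            # items specific to that device (claim step (i)).
            self.selected = item
            self.menu_items = self.devices[item]
        else:
            # A later free-space finger gesture on a device-specific item
            # is treated as a request to interact with the device (step (ii)).
            print(f"send '{item}' to {self.selected}")

menu = VirtualDevice({"lamp": ["on", "off", "dim"], "tv": ["on", "off", "mute"]})
menu.select("lamp")     # contextual menu now shows lamp operations
menu.select("dim")      # -> send 'dim' to lamp
```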
13. A method of providing an augmented reality environment, the method including:
capturing a position of a user body portion and multiple electronic devices within a field of view of one or more cameras;

rendering for display, a representation of the user body portion;

determining at least one point on the rendered representation of the user body portion to which a virtual device including a contextual command input can be affixed;

integrating the virtual device including the contextual command input onto the rendered representation of the user body portion;

generating for display, a presentation output based on information from the one or more cameras, the presentation output including (i) a visual rendering of the user body portion and (ii) at least one instance of the virtual device affixed to the visual rendering of the user body portion, wherein the contextual command input included in the virtual device facilitates a control interface for changing operational modes of the multiple electronic devices; and

responsive to a selection of one item of the contextual command input by a detected finger gesture to select an operation of a particular electronic device of the multiple electronic devices, (i) updating the contextual command input included in the virtual device to include items specifically for the particular electronic device and (ii) detecting, from a captured video stream, a finger gesture made freely in 3D space and determining that the finger gesture detected from the captured video stream and made freely in the 3D space indicates a request to interact with the particular electronic device of the multiple electronic devices included in the field of view.

Dependent claims: 14-17.
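Claim 13 turns on affixing the virtual device to a point on the rendered body portion, which implies recomputing an anchor each frame so the device tracks the body. One plausible reading, assuming 3D keypoints are available for the tracked body portion (all names and the offset are hypothetical):

```python
import numpy as np

def attach_point(landmarks: np.ndarray) -> np.ndarray:
    """Pick a point on the tracked body portion where the virtual device
    is drawn. Assumes `landmarks` is an (N, 3) array of 3D keypoints in
    camera coordinates; the centroid is a stable 'palm' anchor for a hand."""
    return landmarks.mean(axis=0)

def render_affixed(landmarks: np.ndarray, offset=(0.0, 0.02, 0.0)) -> np.ndarray:
    """Re-anchor the virtual device every frame so it moves with the body."""
    return attach_point(landmarks) + np.asarray(offset)  # float slightly above anchor

hand = np.random.rand(21, 3)   # 21 joints, as in common hand-tracking models
print("device drawn at", render_affixed(hand))
```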
18. A system, comprising:
a sensory system including one or more optical sensors configured to sense information of a real world space;

a processing system configured to (i) determine, from the information sensed by the sensory system, at least one of a position and a motion of a hand of a user, (ii) determine at least one point on the hand to which a virtual device including a contextual menu item can be affixed for interacting with multiple electronic devices included in a scene of the real world space, (iii) detect multiple electronic devices included in a scene of the real world space and (iv) detect, from a captured video stream of the scene of the real world space, a finger gesture made freely in 3D space; and

a wearable rendering subsystem configured to display a presentation including (i) a visual rendering of the hand and (ii) at least one instance of the virtual device affixed to the visual rendering of the hand, wherein the contextual menu item included in the virtual device includes menu items facilitating a control interface for changing operational modes of the multiple electronic devices included in the scene of the real world space,

wherein the processing system and the wearable rendering subsystem, responsive to a selection of one of the menu items of the contextual menu by a detected finger gesture to select an operation of a particular electronic device of the multiple electronic devices, (i) update the contextual menu included in the virtual device to include menu items specifically for the particular electronic device and (ii) detect, from the captured video stream, a finger gesture made freely in 3D space and determine that the finger gesture detected from the captured video stream and made freely in the 3D space indicates a request to interact with the particular electronic device of the multiple electronic devices included in the scene of the real world space.
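Claim 18 partitions the system into three cooperating subsystems: sensing, processing, and wearable rendering. Sketching them as interfaces makes that division of labor concrete; the method names below are hypothetical placeholders, not the patent's API:

```python
from typing import Protocol, Sequence

class SensorySystem(Protocol):
    """One or more optical sensors observing the real world space."""
    def capture(self) -> bytes: ...

class ProcessingSystem(Protocol):
    """Derives hand pose, device list, and gestures from sensor data."""
    def hand_point(self, frame: bytes) -> tuple[float, float, float]: ...
    def devices(self, frame: bytes) -> Sequence[str]: ...
    def gesture(self, frame: bytes) -> str | None: ...

class RenderingSubsystem(Protocol):
    """Wearable display that draws the hand plus the affixed menu."""
    def show(self, anchor: tuple[float, float, float], items: Sequence[str]) -> None: ...

def tick(sensors: SensorySystem, proc: ProcessingSystem,
         display: RenderingSubsystem, menu: Sequence[str]) -> None:
    """One frame of the loop: sense, process, render the affixed menu."""
    frame = sensors.capture()
    display.show(proc.hand_point(frame), menu or list(proc.devices(frame)))
```

A concrete system would implement each protocol and call tick() once per captured frame; keeping the boundaries explicit mirrors the claim's split between the processing system and the wearable rendering subsystem.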
19. A non-transitory computer readable storage medium impressed with computer program instructions to track motion of a wearable sensor system, the instructions, when executed on a processor, implement a method comprising:
capturing a video stream of a scene of a real world space using at least one camera electronically coupled to the wearable sensor system;

detecting, from the captured video stream, (i) a hand of a user, (ii) multiple electronic devices included in the scene of the real world space, and (iii) one or more features of the detected hand;

determining at least one point on the hand to which a virtual device including a contextual menu item can be affixed, the virtual device providing a virtual interface for interacting with the multiple electronic devices included in the scene of the real world space;

generating for display, a presentation output based on information from the at least one camera, the presentation output including (i) a visual rendering of the hand and (ii) at least one instance of the virtual device affixed to the visual rendering of the hand, wherein the contextual menu item included in the virtual device is determined so as to facilitate a control interface for changing operational modes of the multiple electronic devices included in the scene of the real world space; and

responsive to a selection of one of the menu items of the contextual menu by a detected finger gesture to select an operation of a particular electronic device of the multiple electronic devices, (i) updating the contextual menu included in the virtual device to include menu items specifically for the particular electronic device and (ii) detecting, from the captured video stream, a finger gesture made freely in 3D space and determining that the finger gesture detected from the captured video stream and made freely in the 3D space indicates a request to interact with the particular electronic device of the multiple electronic devices included in the scene of the real world space.
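The recurring "finger gesture made freely in 3D space" limitation presupposes some classifier over tracked fingertip motion. As one hedged example only, a forward-and-back "tap" can be read off the fingertip's depth track; the window length and excursion threshold here are assumptions, not values from the specification:

```python
def detect_tap(tip_depths: list[float], dip: float = 0.03) -> bool:
    """Classify a free-space finger 'tap': the fingertip moves toward the
    camera and back within the window. `tip_depths` holds per-frame
    fingertip depth in metres; `dip` is the assumed minimum excursion."""
    if len(tip_depths) < 3:
        return False
    lowest = min(tip_depths)
    # A tap starts and ends at least `dip` metres farther than its nearest point.
    return tip_depths[0] - lowest > dip and tip_depths[-1] - lowest > dip

print(detect_tap([0.40, 0.36, 0.35, 0.39]))  # True: push in, pull back
```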
Specification