Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
Abstract
A “Shared Tactile Immersive Virtual Environment Generator” (STIVE Generator) constructs fully immersive shared virtual reality (VR) environments in which multiple users share tactile interactions via virtual elements that are mapped and rendered to real objects that can be touched and manipulated by multiple users. Generation of real-time environmental models of shared real-world spaces enables mapping of virtual interactive elements to real objects, combined with multi-viewpoint presentation of the immersive VR environment to multiple users. The real-time environmental models classify the geometry, positions, and motions of real-world surfaces and objects. Further, a unified real-time tracking model comprising position, orientation, and skeleton and hand models is generated for each user. The STIVE Generator then renders frames of the shared immersive virtual environment corresponding to a real-time field of view of each particular user. Each of these frames is jointly constrained by both the real-time environmental model and the unified real-time tracking model.
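The abstract's central mechanism — per-user frames jointly constrained by an environmental model and a per-user tracking model — can be illustrated with a minimal sketch. All names here (`EnvironmentalModel`, `UserTrackingModel`, `render_frame`) are hypothetical illustrations, not identifiers from the patent; skeleton and hand models are omitted and the field of view is reduced to a simple view cone.

```python
import math
from dataclasses import dataclass, field

@dataclass
class EnvironmentalModel:
    """Real-time model of the shared real-world space: tracked objects with
    3D positions, onto which virtual interactive elements can be mapped."""
    objects: list = field(default_factory=list)  # [(name, (x, y, z)), ...]

@dataclass
class UserTrackingModel:
    """Unified per-user tracking model: position plus a forward-facing unit
    vector for orientation (skeleton/hand models omitted for brevity)."""
    position: tuple
    forward: tuple

def in_field_of_view(user: UserTrackingModel, point: tuple,
                     half_angle_deg: float = 55.0) -> bool:
    """True if `point` lies inside the cone of the user's field of view."""
    to_point = tuple(p - u for p, u in zip(point, user.position))
    norm = math.sqrt(sum(c * c for c in to_point))
    if norm == 0.0:
        return True
    cos_angle = sum(f * c for f, c in zip(user.forward, to_point)) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def render_frame(env: EnvironmentalModel, user: UserTrackingModel) -> dict:
    """One frame of the shared environment, jointly constrained by the
    environmental model (what exists in the room) and the user's tracking
    model (what falls inside that user's real-time field of view)."""
    visible = [name for name, pos in env.objects if in_field_of_view(user, pos)]
    return {"viewpoint": user.position, "visible_elements": visible}
```

Running `render_frame` once per user against the same shared `EnvironmentalModel` yields the multi-viewpoint presentation the abstract describes: every user sees the same mapped elements, each from their own tracked pose.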
17 Claims
1. A computer-implemented process for constructing shared virtual environments, comprising:

applying a computer to perform process actions for:
generating a real-time environmental model of a real-world environment in which a shared immersive virtual environment is being presented to two or more users;
generating a unified real-time tracking model for each of the two or more users;
for each particular user, applying the real-time environmental model and the real-time tracking model to generate frames of the shared immersive virtual environment corresponding to a real-time field of view of the particular user;
rendering virtual representations of one or more non-participant persons into the shared immersive virtual environment in positions corresponding to a real-world position of each non-participant person; and
presenting the virtual representation of one or more of the non-participant persons as a translucent rendering that increases in solidity whenever it is determined that the non-participant person is speaking to any user.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
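The final limitation of claim 1 — a non-participant rendered translucently, growing more solid while that person speaks to a user — can be sketched as a simple opacity ramp. The function name, the rates, and the base opacity below are illustrative assumptions, not values from the patent; how speech directed at a user is detected is left outside the sketch.

```python
def non_participant_alpha(is_speaking_to_user: bool, current_alpha: float,
                          dt: float, fade_rate: float = 2.0,
                          base_alpha: float = 0.3) -> float:
    """Opacity of a non-participant's translucent rendering.

    Ramps toward fully solid (1.0) while the person is determined to be
    speaking to any user, and decays back toward a faint base level
    otherwise. `dt` is the frame time in seconds; `fade_rate` is the
    opacity change per second (both assumed values for illustration).
    """
    target = 1.0 if is_speaking_to_user else base_alpha
    step = fade_rate * dt
    if current_alpha < target:
        return min(target, current_alpha + step)
    return max(target, current_alpha - step)
```

Called once per rendered frame, this keeps non-participants faintly visible for safety and awareness, while making them unmistakably solid when they address a user.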
10. A system for constructing interactive virtual reality environments, comprising:

a general purpose computing device; and
a computer program comprising program modules executable by the computing device, wherein the computing device is directed by the program modules of the computer program to:
apply one or more sensors to capture real-time tracking information relating to real motions and positions in a shared real-world environment of two or more users participating in a real-time rendering of a shared immersive virtual environment, and further relating to motions and positions of one or more non-participant persons in the shared real-world environment;
apply the tracking information to generate a real-time environmental model of the real-world environment;
apply the tracking information to generate a real-time position and skeleton model for each of the two or more users and one or more of the non-participant persons;
apply the real-time environmental model and the real-time position and skeleton models to generate frames of the shared immersive virtual environment that are tailored to a corresponding real-time field of view of each user;
render virtual representations of one or more of the non-participant persons into the shared immersive virtual environment in positions corresponding to a real-world position of each non-participant person; and
present the virtual representation of one or more of the non-participant persons as a translucent rendering that increases in solidity whenever it is determined that the non-participant person is speaking to any user.

Dependent claims: 11, 12, 13, 14.
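Claim 10 has the sensors produce position and skeleton models for both users and non-participants in the same shared space. A minimal sketch of that step, assuming a hypothetical detection format (`id`, `position`, optional `joints`) and a known set of participant IDs — none of which come from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PersonModel:
    """Real-time position and skeleton model for one tracked person,
    flagged as a participant (wearing an HMD) or a non-participant."""
    person_id: str
    participant: bool
    position: tuple
    skeleton: dict = field(default_factory=dict)

def build_tracking_models(sensor_detections: list, participant_ids: set) -> list:
    """Turn raw per-person sensor detections into position/skeleton models.

    Each detection is assumed to be a dict with an "id", a 3D "position",
    and optionally a "joints" mapping; persons whose id is not in
    `participant_ids` are modeled as non-participants sharing the space.
    """
    return [
        PersonModel(
            person_id=det["id"],
            participant=det["id"] in participant_ids,
            position=det["position"],
            skeleton=det.get("joints", {}),
        )
        for det in sensor_detections
    ]
```

Downstream, the participant models drive per-user frame generation, while the non-participant models drive the translucent safety renderings of the final limitation.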
15. A computer-readable storage device having computer executable instructions stored therein, said instructions causing a computing device to execute a method comprising:

applying one or more sensors to capture real-time tracking information of users, non-participant persons, and objects in a shared real-world environment;
applying the tracking information to generate a real-time environmental model of the real-world environment;
applying the tracking information to generate a real-time position and skeleton model for each of two or more users and one or more of the non-participant persons;
generating frames of a shared immersive virtual environment that are tailored to a corresponding real-time field of view of each user;
applying the real-time environmental model and the real-time position and skeleton models to render virtual representations of one or more non-participant persons into the shared immersive virtual environment in positions corresponding to a real-world position of each non-participant person; and
presenting one or more of the virtual representations of one or more non-participant persons as a translucent rendering that increases in solidity whenever it is determined that the non-participant person is speaking to any user.

Dependent claims: 16, 17.