Virtual reality system with control command gestures
First Claim
1. A virtual reality system with control command gestures, comprising:
- at least one display viewable by a user;
- at least one sensor that generates sensor data that measures one or more aspects of a pose of one or more body parts of said user;
- a pose analyzer coupled to said at least one sensor, that calculates pose data of said pose of one or more body parts of said user, based on said sensor data generated by said at least one sensor;
- a control state;
- one or more control commands, each configured to modify said control state when executed, each associated with one or more gestures of one or more of said one or more body parts of said user;
- a gesture recognizer coupled to said pose analyzer and to said one or more control commands, wherein said gesture recognizer
  - receives said pose data from said pose analyzer;
  - determines whether said user has performed a gesture associated with a control command; and
  - executes said control command to modify said control state when said user has performed said gesture associated with said control command;
- a 3D model of a scene; and
- a scene renderer coupled to said at least one display, said pose analyzer, said control state, and said 3D model, wherein said scene renderer
  - optionally modifies or selects said 3D model of a scene based on said control state;
  - receives said pose data from said pose analyzer;
  - calculates one or more rendering virtual camera poses, based on said pose data and on said control state;
  - calculates one or more 2D projections of said 3D model, based on said one or more rendering virtual camera poses and on said control state; and
  - transmits said one or more 2D projections to said at least one display;

wherein said control state comprises
  - a user input in progress flag that can be either true or false; and
  - a user selection value;

wherein said one or more control commands comprise
  - a start user input command that sets said user input in progress flag to true;
  - one or more modify user selection commands that change said user selection value when said user input in progress flag is true; and
  - a complete user input command that sets said user input in progress flag to false;

wherein said scene renderer
  - overlays a user input control onto one or more of said one or more 2D projections while said user input in progress flag is true; and
  - modifies an appearance of said user input control based on said user selection value;

wherein said one or more body parts of said user comprise a head of said user;

wherein said one or more gestures comprise gesture motions of said head of said user, and said scene renderer freezes said one or more rendering virtual camera poses while said user input in progress flag is true; and

wherein said complete user input command is associated with a gesture motion of said head of said user comprising said head remaining substantially still for a period of time exceeding a complete input time threshold value.
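The start / modify / complete command structure recited above can be read as a small dwell-gesture state machine: a start gesture raises the in-progress flag, nod-like motions adjust the selection value, and holding the head still past the threshold completes the input. The sketch below is only an illustrative reading of the claim, not code from the patent; the class names, the `on_pose` interface, and the 1.5-second threshold are all hypothetical.

```python
import time

class ControlState:
    """Control state per claim 1: an in-progress flag plus a selection value."""
    def __init__(self):
        self.user_input_in_progress = False
        self.user_selection = 0

# Hypothetical "complete input time threshold value" (seconds of stillness).
COMPLETE_INPUT_TIME_THRESHOLD = 1.5

class GestureRecognizer:
    """Maps head-pose observations to the start / modify / complete commands."""
    def __init__(self, state):
        self.state = state
        self.still_since = None  # when the head last became still, or None

    def on_pose(self, head_is_still, nod_delta, start_gesture_detected, now=None):
        now = time.monotonic() if now is None else now
        if start_gesture_detected and not self.state.user_input_in_progress:
            # Start user input command: raise the in-progress flag.
            self.state.user_input_in_progress = True
            self.still_since = None
            return
        if not self.state.user_input_in_progress:
            return
        if nod_delta != 0:
            # Modify user selection command: only active while flag is true.
            self.state.user_selection += nod_delta
        if head_is_still:
            if self.still_since is None:
                self.still_since = now
            elif now - self.still_since > COMPLETE_INPUT_TIME_THRESHOLD:
                # Complete user input command: head stayed still past threshold.
                self.state.user_input_in_progress = False
        else:
            self.still_since = None  # movement resets the dwell timer
```

While the flag is true, a renderer following the claim would also freeze the virtual camera poses and overlay a selection control, so head motion drives the menu rather than the viewpoint.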
Abstract
A virtual reality system that uses gestures to obtain commands from a user. Embodiments may use sensors mounted on a virtual reality headset to detect head movements, and may recognize selected head motions as gestures associated with commands. Commands associated with gestures may modify the user's virtual reality experience, for example by selecting or modifying a virtual world or by altering the user's viewpoint within the virtual world. Embodiments may define specific gestures to place the system into command mode or user input mode, for example to temporarily disable normal head tracking within the virtual environment. Embodiments may also recognize gestures of other body parts, such as wrist movements measured by a smart watch.
15 Claims
1. A virtual reality system with control command gestures, as recited in full under First Claim above. Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
14. The system of claim 13, wherein said calculating said pixel translation vector comprises
  - approximating said change in pose as a rotation around a unit vector ω̂, comprising ω̂_y and ω̂_x, by an angle Δθ;
  - calculating a spatial translation vector (ω̂_y Δθ, −ω̂_x Δθ);
  - calculating a scaling factor to convert spatial distances to pixels based on pixel dimensions and fields of view of said one or more 2D projections; and
  - calculating said pixel translation vector by scaling said spatial translation vector by said scaling factor.
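The steps of claim 14 can be sketched numerically: a small rotation of Δθ about axis ω̂ produces a spatial shift (ω̂_y Δθ, −ω̂_x Δθ), which a scaling factor converts to pixels. The sketch below is an illustrative reading only; the function name is hypothetical, and the scaling factor assumes a simple pinhole-style model where pixels per radian is the display width divided by the horizontal field of view.

```python
def pixel_translation_vector(omega_hat, delta_theta, width_px, fov_rad):
    """Approximate screen-space shift for a small head rotation.

    omega_hat: (wx, wy, wz) unit rotation axis; delta_theta: angle in radians.
    Returns the pixel translation vector (dx, dy) per claim 14's recipe.
    """
    wx, wy = omega_hat[0], omega_hat[1]
    # Spatial translation vector from the claim: (omega_y * dtheta, -omega_x * dtheta).
    spatial = (wy * delta_theta, -wx * delta_theta)
    # Scaling factor converting angular distance to pixels (hypothetical pinhole model).
    scale = width_px / fov_rad
    return (spatial[0] * scale, spatial[1] * scale)
```

For example, a pure yaw (ω̂ = (0, 1, 0)) of 0.1 rad on a 1000-pixel-wide display with a 1-radian field of view shifts the image about 100 pixels horizontally and not at all vertically.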
15. A virtual reality system with control command gestures, comprising:
- at least one display viewable by a user;
- at least one sensor that generates sensor data that measures one or more aspects of a pose of one or more body parts of said user;
- a pose analyzer coupled to said at least one sensor, that calculates pose data of said pose of one or more body parts of said user, based on said sensor data generated by said at least one sensor;
- a control state;
- one or more control commands, each configured to modify said control state when executed, each associated with one or more gestures of one or more of said one or more body parts of said user;
- a gesture recognizer coupled to said pose analyzer and to said one or more control commands, wherein said gesture recognizer
  - receives said pose data from said pose analyzer;
  - determines whether said user has performed a gesture associated with a control command; and
  - executes said control command to modify said control state when said user has performed said gesture associated with said control command;
- a 3D model of a scene;
- a scene renderer coupled to said at least one display, said pose analyzer, said control state, and said 3D model, wherein said scene renderer
  - optionally modifies or selects said 3D model of a scene based on said control state;
  - receives said pose data from said pose analyzer;
  - calculates one or more rendering virtual camera poses, based on said pose data and on said control state;
  - calculates one or more 2D projections of said 3D model, based on said one or more rendering virtual camera poses and on said control state; and
  - transmits said one or more 2D projections to said at least one display; and
- an image warper coupled to said at least one display, said scene renderer, and said pose analyzer, wherein said image warper
  - receives said one or more rendering virtual camera poses from said scene renderer;
  - receives said pose data from said pose analyzer;
  - calculates a change in pose between said one or more virtual camera poses and said pose data;
  - generates a rerendering approximation of said one or more 2D projections of said 3D model on said at least one display based on said change in pose; and
  - modifies one or more pixels of said at least one display based on said rerendering approximation;

wherein said rerendering approximation comprises
  - calculating a pixel translation vector; and
  - translating one or more pixels of said one or more 2D projections by said pixel translation vector; and

wherein said calculating said pixel translation vector comprises
  - approximating said change in pose as a rotation around a unit vector ω̂, comprising ω̂_y and ω̂_x, by an angle Δθ;
  - calculating a spatial translation vector (ω̂_y Δθ, −ω̂_x Δθ);
  - calculating a scaling factor to convert spatial distances to pixels based on pixel dimensions and fields of view of said one or more 2D projections; and
  - calculating said pixel translation vector by scaling said spatial translation vector by said scaling factor.
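The image warper recited in claim 15 avoids rerendering the 3D scene by shifting already-rendered pixels by the pixel translation vector. A minimal sketch of that translation step follows; it is an illustration only, using a hypothetical function name, a plain list-of-lists image, integer offsets, and a fill value for pixels exposed at the borders.

```python
def warp_by_translation(image, dx, dy, fill=0):
    """Translate a 2D pixel grid by integer (dx, dy) pixels.

    Mimics the claim's rerendering approximation: instead of reprojecting the
    3D model, existing pixels are shifted by the pixel translation vector, and
    newly exposed border pixels receive `fill`.
    """
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy  # source pixel for this destination
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out
```

A production warper would operate on GPU textures and handle subpixel offsets, but the shift-and-fill structure is the essence of the approximation.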
Specification