Virtual reality presentation of eye movement and eye contact
Abstract
A computing system and method to implement a three-dimensional virtual reality world with avatar eye movements without user eye tracking. A position and orientation of a respective avatar in the virtual reality world is tracked to generate a view of the virtual world for the avatar and to present the avatar to others. In response to detecting a predetermined event, the computing system predicts a point (e.g., the eye of another avatar) that is of interest to the respective avatar responsive to the event, and computes, according to an eye movement model, an animation of the eyes of the respective avatar in which the gaze of the avatar moves from an initial point to the predicted point (or its vicinity), dwells there for a period of time, and returns to the initial point.
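The gaze animation described in the abstract (move to the predicted point, dwell, return) could be sketched as keyframed interpolation. This is a minimal illustration, not the patent's implementation; the class, method names, and timing constants are all hypothetical.

```python
import math
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class GazeAnimation:
    """Moves an avatar's gaze from an initial point to a predicted point
    of interest, dwells there, then returns. Illustrative only; names and
    timing constants are assumptions, not from the patent."""
    initial: Vec3
    target: Vec3
    dwell_seconds: float = 1.5
    saccade_seconds: float = 0.2

    def gaze_at(self, t: float) -> Vec3:
        """Return the gaze point at time t (seconds since the event)."""
        def lerp(a: Vec3, b: Vec3, s: float) -> Vec3:
            s = max(0.0, min(1.0, s))
            # Ease-in/ease-out to approximate a natural saccade profile.
            s = 0.5 - 0.5 * math.cos(math.pi * s)
            return tuple(a[i] + (b[i] - a[i]) * s for i in range(3))
        if t < self.saccade_seconds:          # move toward the target
            return lerp(self.initial, self.target, t / self.saccade_seconds)
        t -= self.saccade_seconds
        if t < self.dwell_seconds:            # hold on the point of interest
            return self.target
        t -= self.dwell_seconds
        return lerp(self.target, self.initial, t / self.saccade_seconds)

anim = GazeAnimation(initial=(0.0, 0.0, 1.0), target=(1.0, 0.5, 1.0))
print(anim.gaze_at(0.0))   # still at the initial point
print(anim.gaze_at(0.5))   # dwelling on the point of interest
```

A richer eye movement model could replace the cosine easing with measured saccade velocity profiles; the abstract leaves the model's internals open.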
15 Citations
17 Claims
1. A method implemented in a three-dimensional virtual reality world, the method comprising:
detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye; and
in response to the event, predicting a point of interest to the avatar without tracking eyes of a user of the avatar;
computing an animation of the eye of the avatar, wherein the animation is configured to move a gaze of the avatar from an initial point at a time of the event to the point of interest; and
switching from a first input mode before the animation to a second input mode during the animation, wherein:
    when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
    when in the second input mode, inputs from the one or more devices control the eye movements;
wherein the animation is computed according to a predetermined eye movement model, comprising:
determining, using a machine learning technique, personalization parameters of the eye movement model based on inputs received in the second input mode.
View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
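Claim 1's switch between input modes amounts to routing the same device inputs to different avatar state depending on whether an eye animation is active. A minimal sketch, assuming a simple 2D input stream; all class and method names are illustrative, not from the patent:

```python
from enum import Enum, auto

class InputMode(Enum):
    AVATAR = auto()   # first input mode: devices drive position/orientation
    EYES = auto()     # second input mode: devices drive eye movements

class AvatarController:
    """Routes device input by mode, as in the claimed mode switch.
    Names and state layout are assumptions for illustration."""
    def __init__(self):
        self.mode = InputMode.AVATAR
        self.position = [0.0, 0.0, 0.0]
        self.gaze_offset = [0.0, 0.0]
        self.eye_inputs = []          # kept to personalize the eye model later

    def on_animation_start(self):
        self.mode = InputMode.EYES    # switch to the second input mode

    def on_animation_end(self):
        self.mode = InputMode.AVATAR  # restore the first input mode

    def handle_input(self, dx: float, dy: float):
        if self.mode is InputMode.AVATAR:
            self.position[0] += dx
            self.position[1] += dy
        else:
            self.gaze_offset[0] += dx
            self.gaze_offset[1] += dy
            self.eye_inputs.append((dx, dy))

ctrl = AvatarController()
ctrl.handle_input(1.0, 0.0)      # first mode: moves the avatar
ctrl.on_animation_start()
ctrl.handle_input(0.0, 0.5)      # second mode: adjusts the gaze instead
```

Recording `eye_inputs` during the second mode gives the claim's machine learning step something to learn from.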
13. A non-transitory computer storage medium storing instructions configured to instruct a computer device to perform a method implemented in a three-dimensional virtual reality world, the method comprising:
detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye; and
in response to the event, predicting a point of interest to the avatar without tracking eyes of a user of the avatar;
computing an animation of the eye of the avatar, wherein the animation is configured to move a gaze of the avatar from an initial point at a time of the event to the point of interest; and
switching from a first input mode before the animation to a second input mode during the animation, wherein:
    when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
    when in the second input mode, inputs from the one or more devices control the eye movements;
wherein the animation is computed according to a predetermined eye movement model, comprising:
determining, using a machine learning technique, personalization parameters of the eye movement model based on inputs received in the second input mode.
14. A computing system to implement a three-dimensional virtual reality world, the system comprising:
a server system; and
a data storage device storing:
    a three-dimensional model of the virtual reality world; and
    avatar models representing residents of the virtual reality world;
wherein the server system generates, from the three-dimensional model of the virtual reality world and the avatar models, data streams to provide views of the virtual reality world to client devices that are connected to the server system via a computer network;
wherein the computing system tracks a position and orientation of a respective avatar in the virtual reality world; and
wherein, in response to detecting a predetermined event related to the respective avatar, the computing system is configured to:
predict a point that is of interest to the respective avatar;
compute an animation of one or more eyes of the respective avatar by moving a gaze of the respective avatar from an initial point at a time of the event to the predicted point of interest to the avatar; and
switch from a first input mode before the animation to a second input mode during the animation, wherein:
    when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
    when in the second input mode, inputs from the one or more devices control the eye movements;
wherein the animation is computed according to a predetermined eye movement model, comprising:
determine, using a machine learning technique, personalization parameters of the eye movement model based on inputs received in the second input mode.
View Dependent Claims: 15, 16, 17
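The claims leave the "machine learning technique" for personalizing the eye movement model unspecified. As one deliberately simple stand-in, the gaze corrections a user makes in the second input mode could be averaged into an aim-bias update; the parameter names and update rule below are hypothetical, not from the patent.

```python
from statistics import fmean

def personalize_eye_model(params: dict, corrections: list[tuple[float, float]],
                          learning_rate: float = 0.3) -> dict:
    """Update eye-movement-model parameters from the gaze corrections a user
    made while in the second input mode. A simple running-average stand-in
    for the claim's 'machine learning technique'; parameter names such as
    'bias_x'/'bias_y' are assumptions for illustration."""
    if not corrections:
        return dict(params)
    # Mean horizontal/vertical correction applied after automated saccades.
    mean_dx = fmean(dx for dx, _ in corrections)
    mean_dy = fmean(dy for _, dy in corrections)
    updated = dict(params)
    # Nudge the model's aim bias toward where the user actually looked.
    updated["bias_x"] = params["bias_x"] + learning_rate * mean_dx
    updated["bias_y"] = params["bias_y"] + learning_rate * mean_dy
    return updated

model = {"bias_x": 0.0, "bias_y": 0.0}
model = personalize_eye_model(model, [(0.2, -0.1), (0.4, -0.3)])
```

A production system could swap this for any fitted model (e.g., regression over event type and gaze-target features) without changing the claimed structure: second-mode inputs in, personalized model parameters out.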
Specification