Virtual reality presentation of eye movement and eye contact
First Claim
1. A method implemented in a three-dimensional virtual reality world, the method comprising:
detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye;
predicting a point of interest to the avatar responsive to the event that triggers eye movement, wherein the point of interest includes an eye of a second avatar and the event corresponds to a real-time communication to or from the second avatar;
computing an animation of the eye of the avatar to move a gaze of the avatar from an initial point to the point of interest without tracking eyes of a user of the avatar, wherein:
the animation moves the gaze of the avatar to the point of interest for a period of time and turns the gaze back to the initial point,
the animation is computed without using user inputs during the animation, and
the animation is computed according to an eye movement model and based on a context determined from the event and personalization parameters of the avatar for which the animation is computed;
switching from a first input mode before the animation to a second input mode during the animation, wherein:
when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
when in the second input mode, inputs from the one or more devices control the eye movements;
presenting eye movements of the animation to avatars that have visibility into the animated eye; and
adjusting a view of the virtual world computed for the avatar according to the eye movements of the animation.
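The claimed input-mode switch can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the patented implementation; the `Avatar` class and the `predict_point_of_interest` and `compute_animation` callables are hypothetical stand-ins for the claimed steps.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class InputMode(Enum):
    POSITION_ORIENTATION = auto()  # first input mode: devices steer the avatar
    EYE_MOVEMENT = auto()          # second input mode: devices steer the gaze

@dataclass
class Avatar:
    position: tuple
    orientation: tuple
    gaze: tuple
    input_mode: InputMode = field(default=InputMode.POSITION_ORIENTATION)

def handle_event(avatar, event, predict_point_of_interest, compute_animation):
    """On a triggering event (e.g. a real-time communication), predict the
    point of interest, switch input modes, run the gaze animation without
    tracking the user's eyes, then restore the first input mode."""
    target = predict_point_of_interest(avatar, event)   # e.g. another avatar's eye
    frames = compute_animation(avatar.gaze, target)     # per the eye movement model
    avatar.input_mode = InputMode.EYE_MOVEMENT          # second mode during animation
    for gaze_point in frames:
        avatar.gaze = gaze_point                        # presented to observing avatars;
                                                        # the avatar's own view is adjusted
    avatar.input_mode = InputMode.POSITION_ORIENTATION  # back to the first mode
    return avatar
```

Note that the animation itself consumes no user input: `compute_animation` is driven only by the initial gaze and the predicted target, matching the claim's "without using user inputs during the animation."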
Abstract
A computing system and method to implement a three-dimensional virtual reality world with avatar eye movements without user eye tracking. A position and orientation of a respective avatar in the virtual reality world is tracked to generate a view of the virtual world for the avatar and to present the avatar to others. In response to detecting a predetermined event, the computing system predicts a point (e.g., the eye of another avatar) that is of interest to the respective avatar responsive to the event, and computes, according to an eye movement model, an animation of the eyes of the respective avatar where the gaze of the avatar moves from an initial point to the predicted point and/or its vicinity for a period of time and back to the initial point.
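The move-hold-return gaze path described in the abstract can be sketched as below. The patent claims the eye movement model only abstractly; the linear interpolation and frame counts here are illustrative assumptions, not the disclosed model.

```python
def gaze_animation(initial, target, move_frames=10, hold_frames=20):
    """Generate a gaze path that moves from `initial` to `target`,
    dwells at the target for a period of time, then returns to
    `initial`. Points are 3-D tuples; interpolation is linear."""
    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    # move toward the predicted point of interest
    path = [lerp(initial, target, i / move_frames) for i in range(move_frames + 1)]
    # hold the gaze at (or near) the point of interest
    path += [target] * hold_frames
    # turn the gaze back to the initial point
    path += [lerp(target, initial, i / move_frames) for i in range(1, move_frames + 1)]
    return path
```

A real implementation would presumably shape this path with the personalization parameters (e.g. per-avatar dwell time) discussed in the claims.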
17 Claims
1. A method implemented in a three-dimensional virtual reality world, the method comprising:

detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye;
predicting a point of interest to the avatar responsive to the event that triggers eye movement, wherein the point of interest includes an eye of a second avatar and the event corresponds to a real-time communication to or from the second avatar;
computing an animation of the eye of the avatar to move a gaze of the avatar from an initial point to the point of interest without tracking eyes of a user of the avatar, wherein:
the animation moves the gaze of the avatar to the point of interest for a period of time and turns the gaze back to the initial point,
the animation is computed without using user inputs during the animation, and
the animation is computed according to an eye movement model and based on a context determined from the event and personalization parameters of the avatar for which the animation is computed;
switching from a first input mode before the animation to a second input mode during the animation, wherein:
when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
when in the second input mode, inputs from the one or more devices control the eye movements;
presenting eye movements of the animation to avatars that have visibility into the animated eye; and
adjusting a view of the virtual world computed for the avatar according to the eye movements of the animation.

Dependent claims: 2, 3, 4, 5.
6. A method implemented in a three-dimensional virtual reality world, the method comprising:

detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye;
predicting a point of interest to the avatar responsive to the event that triggers eye movement;
computing an animation of the eye of the avatar to move a gaze of the avatar from an initial point to the point of interest without tracking eyes of a user of the avatar;
determining personalization parameters of an eye movement model based on inputs received in a second input mode, wherein the determining personalization parameters is performed using a machine learning technique, and wherein the eye movement model is trained via machine learning using eye movements captured in video images of people engaging in social activities; and
switching from a first input mode before the animation to the second input mode during the animation, wherein:
when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
when in the second input mode, inputs from the one or more devices control the eye movements.

Dependent claims: 7, 8, 9, 10, 11, 12, 13, 14.
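Claim 6 determines personalization parameters via an unspecified machine learning technique. The patent discloses no concrete algorithm at this point, so the sketch below substitutes a trivial empirical fit (mean dwell time and mean gaze speed from observed second-mode inputs) purely to show where per-avatar parameters would plug into an eye movement model; the parameter names are hypothetical.

```python
def fit_personalization(observed_dwell_times, observed_speeds):
    """Estimate per-avatar personalization parameters for the eye movement
    model from inputs captured while in the second input mode.

    A production system would use a learned model (the claim recites a
    machine learning technique); a simple empirical fit stands in here."""
    if not observed_dwell_times or not observed_speeds:
        raise ValueError("need at least one observation of each kind")
    return {
        # how long this user tends to hold eye contact, in seconds
        "mean_dwell": sum(observed_dwell_times) / len(observed_dwell_times),
        # how quickly this user tends to shift gaze, in degrees/second
        "mean_speed": sum(observed_speeds) / len(observed_speeds),
    }
```

These fitted values would then parameterize the animation computation, so different avatars exhibit individually characteristic eye contact.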
15. A non-transitory computer storage medium storing instructions configured to instruct a computer device to perform a method implemented in a three-dimensional virtual reality world, the method comprising:

detecting an event related to an avatar having a position and orientation in the virtual reality world, the avatar having at least one eye;
predicting a point of interest to the avatar responsive to the event that triggers eye movement;
computing an animation of the eye of the avatar to move a gaze of the avatar from an initial point to the point of interest without tracking eyes of a user of the avatar;
determining personalization parameters of an eye movement model based on inputs received in a second input mode, wherein the determining personalization parameters is performed using a machine learning technique, and wherein the eye movement model is trained via machine learning using eye movements captured using eye tracking devices of users using the virtual reality world; and
switching from a first input mode before the animation to the second input mode during the animation, wherein:
when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
when in the second input mode, inputs from the one or more devices control the eye movements.
16. A computing system to implement a three-dimensional virtual reality world, the system comprising:

a server system; and
a data storage device storing:
a three-dimensional model of the virtual reality world; and
avatar models representing residents of the virtual reality world;
wherein the server system generates, from the three-dimensional model of the virtual reality world and the avatar models, data streams to provide views of the virtual reality world to client devices that are connected to the server system via a computer network;
wherein the computing system tracks a position and orientation of a respective avatar in the virtual reality world;
wherein, in response to detecting a predetermined event related to the respective avatar, the computing system predicts a point that is of interest to the respective avatar responsive to the event, and computes an animation of one or more eyes of the respective avatar in which animation a gaze of the respective avatar is moved from an initial point to the predicted point of interest to the avatar;
wherein the computing system is configured to switch from a first input mode before the animation to a second input mode during the animation, wherein:
when in the first input mode, inputs from one or more devices control the position and orientation of the avatar; and
when in the second input mode, inputs from the one or more devices control the eye movements;
wherein the animation is computed without input from a client device tracking eyes of a user represented by the respective avatar in the virtual reality world;
wherein a view of the virtual world generated for the respective avatar is adjusted in accordance with a shift of the gaze of the respective avatar; and
wherein, during a period of the animation, a user input device that controls the position and orientation of the respective avatar before the period of the animation is automatically reconfigured to control the movement of the gaze during the period of the animation.

Dependent claim: 17.
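The automatic reconfiguration of the user input device recited in claim 16 amounts to rerouting the same device input between two targets. A minimal sketch, assuming a simple router class (the `InputRouter` name and its fields are hypothetical, not from the patent):

```python
class InputRouter:
    """Route deltas from one input device either to avatar position
    (first input mode) or to gaze movement (second input mode)."""

    def __init__(self, avatar):
        self.avatar = avatar       # dict with "position" and "gaze" 3-D tuples
        self.animating = False

    def begin_animation(self):
        self.animating = True      # device is reconfigured to drive the gaze

    def end_animation(self):
        self.animating = False     # device again drives position/orientation

    def on_input(self, delta):
        """Apply one device delta to whichever target the mode selects."""
        dx, dy, dz = delta
        if self.animating:
            gx, gy, gz = self.avatar["gaze"]
            self.avatar["gaze"] = (gx + dx, gy + dy, gz + dz)
        else:
            px, py, pz = self.avatar["position"]
            self.avatar["position"] = (px + dx, py + dy, pz + dz)
```

The point of the design is that the user never picks up a different controller: the mode switch at animation start and end silently retargets the same device.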
Specification