Wireless head mounted display with differential rendering and sound localization
First Claim
1. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images of the interactive environment and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
tracking a gaze of a user of the HMD, wherein tracking the gaze of the user includes capturing images of an eye of the user by a gaze tracking camera in the HMD;
generating video depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered based on the tracked gaze of the user;
tracking a trajectory of the gaze of the user over a predetermined period of time, wherein tracking the trajectory of the gaze uses the captured images of the eye of the user;
tracking a trajectory of the HMD over the predetermined period of time;
predicting, while tracking the trajectory of the gaze, a movement of the gaze of the user to a predicted future region where the user will look next in the virtual environment based on analyzing a trend in the tracked trajectory of the gaze of the user and based on analyzing a trend in the tracked trajectory of the HMD;
wherein the regions of the view are differentially rendered based on the predicted movement of the gaze of the user, wherein the predicted future region starts to render before the gaze of the user is at the predicted future region;
generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user when rendered to headphones that are connected to the HMD;
wirelessly transmitting the video and the audio data via the RF transceiver to the HMD using the adjusted beamforming direction.
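The location-prediction and beam-steering steps recited above can be sketched as follows. This is a minimal illustration, not the patented method: the function names, the linear-extrapolation motion model, and the 2D azimuth-only beam direction are all assumptions standing in for the claim's fusion of captured images and inertial data.

```python
import math

def predict_future_location(current_pos, velocity, lookahead_s):
    """Linearly extrapolate the HMD position (a simple stand-in for the
    claim's image + inertial-data fusion)."""
    return tuple(p + v * lookahead_s for p, v in zip(current_pos, velocity))

def beamforming_azimuth(transceiver_pos, target_pos):
    """Azimuth (radians) from the RF transceiver toward the predicted
    future location of the HMD."""
    dx = target_pos[0] - transceiver_pos[0]
    dy = target_pos[1] - transceiver_pos[1]
    return math.atan2(dy, dx)

# Example: HMD at (2, 1) m moving +x at 0.5 m/s, transceiver at the origin,
# steering the beam 200 ms ahead.
future = predict_future_location((2.0, 1.0), (0.5, 0.0), lookahead_s=0.2)
angle = beamforming_azimuth((0.0, 0.0), future)  # → atan2(1.0, 2.1)
```

In practice the lookahead would be tied to the rendering and transmission latency, so the beam points where the HMD will be when the frame arrives.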
Abstract
A method is provided, including the following method operations: receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed; receiving inertial data processed from at least one inertial sensor of the HMD; analyzing the captured images and the inertial data to determine a current and predicted future location of the HMD; using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver towards the predicted future location of the HMD; tracking a gaze of a user of the HMD; generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered; generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user; transmitting the image data and the audio data via the RF transceiver to the HMD.
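The differential rendering described in the abstract can be sketched as a foveation policy: full detail near the (predicted) gaze point, coarser detail in the periphery. The function name, the normalized screen coordinates, the fovea radius, and the three discrete detail levels are illustrative assumptions, not values taken from the patent.

```python
def region_lod(region_center, gaze_point, fovea_radius=0.15):
    """Pick a render level of detail for a screen region: full resolution
    near the gaze point, progressively coarser away from it.
    Coordinates are normalized to [0, 1] across the display."""
    dx = region_center[0] - gaze_point[0]
    dy = region_center[1] - gaze_point[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < fovea_radius:
        return 1.0       # full resolution at the fovea
    if dist < 2 * fovea_radius:
        return 0.5       # half resolution in the near periphery
    return 0.25          # quarter resolution in the far periphery

# Gaze at screen center: nearby regions render sharp, distant ones coarse.
center_lod = region_lod((0.5, 0.5), (0.5, 0.5))   # → 1.0
edge_lod = region_lod((0.95, 0.5), (0.5, 0.5))    # → 0.25
```

Feeding this policy a *predicted* gaze point, rather than the current one, is what lets the predicted future region start rendering before the eye arrives there.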
18 Claims
1. A method, comprising: the limitations recited above as the First Claim. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images of the interactive environment and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
tracking a gaze of a user of the HMD, wherein tracking the gaze of the user includes capturing images of an eye of the user by a gaze tracking camera in the HMD;
generating video depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered based on the tracked gaze of the user;
tracking a trajectory of the gaze of the user over a predetermined period of time, wherein tracking the trajectory of the gaze uses the captured images of the eye of the user;
tracking a trajectory of the HMD over the predetermined period of time;
predicting, while tracking the trajectory of the gaze, a movement of the gaze of the user to a predicted future region where the user will look next in the virtual environment based on analyzing a trend in the tracked trajectory of the gaze of the user and based on analyzing a trend in the tracked trajectory of the HMD;
wherein the regions of the view are differentially rendered based on the predicted movement of the gaze of the user, wherein the predicted future region starts to render before the gaze of the user is at the predicted future region;
transmitting the video via the RF transceiver to the HMD using the adjusted beamforming direction. - View Dependent Claims (12, 13, 14)
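The trend analysis over the tracked gaze trajectory can be sketched as a least-squares fit over recent gaze samples, extrapolated to a future time. All names, the linear trend model, and the normalized gaze coordinates are assumptions for illustration; the claim does not specify the trend-analysis method, and a fuller sketch would also fold in the HMD trajectory trend the claim recites.

```python
def linear_trend(times, values):
    """Least-squares slope and intercept — a minimal 'trend' estimate
    over a window of samples."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    slope = num / den
    return slope, mv - slope * mt

def predict_gaze(times, xs, ys, t_future):
    """Extrapolate gaze screen coordinates to a future time by fitting
    each axis independently."""
    sx, bx = linear_trend(times, xs)
    sy, by = linear_trend(times, ys)
    return sx * t_future + bx, sy * t_future + by

# Gaze drifting steadily rightward across a normalized screen,
# sampled every 100 ms; predict where it will be at t = 0.5 s.
times = [0.0, 0.1, 0.2, 0.3]
xs = [0.40, 0.45, 0.50, 0.55]
ys = [0.50, 0.50, 0.50, 0.50]
future_region = predict_gaze(times, xs, ys, t_future=0.5)  # ≈ (0.65, 0.5)
```

The predicted point identifies the future region to begin rendering at full detail before the gaze reaches it.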
15. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images of the interactive environment and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by a user when rendered to headphones that are connected to the HMD;
transmitting the audio data via the RF transceiver to the HMD using the adjusted beamforming direction;
tracking a trajectory of a gaze of the user over a predetermined period of time, wherein tracking the trajectory of the gaze uses captured images of an eye of the user captured by a gaze tracking camera in the HMD;
predicting, while tracking the trajectory of the gaze, a movement of the gaze of the user to a predicted future region where the user will look next in the virtual environment based on analyzing a trend in the tracked trajectory of the gaze of the user;
wherein regions of a view of a virtual environment for the HMD are differentially rendered based on the predicted movement of the gaze of the user, wherein the predicted future region starts to render before the gaze of the user is at the predicted future region. - View Dependent Claims (16, 17, 18)
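Audio data that enables localization of sounds over headphones typically encodes interaural time and level differences. The sketch below uses the classic Woodworth spherical-head approximation for time difference and a constant-power pan for level difference; these specific formulas, constants, and names are assumptions for illustration, not the encoding the patent describes.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.09      # m, approximate adult head radius

def interaural_cues(azimuth_rad):
    """Rough binaural cues for a source at the given azimuth
    (0 = straight ahead, positive = listener's right).
    Returns (time difference in seconds, left gain, right gain)."""
    # Woodworth spherical-head model for interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))
    # Constant-power pan for the interaural level difference
    pan = (math.sin(azimuth_rad) + 1.0) / 2.0
    left_gain = math.cos(pan * math.pi / 2.0)
    right_gain = math.sin(pan * math.pi / 2.0)
    return itd, left_gain, right_gain

# A source directly ahead produces no time difference and equal gains;
# a source to the right delays and attenuates the left ear.
front = interaural_cues(0.0)
right = interaural_cues(math.pi / 4)
```

Production systems would instead convolve each source with a head-related transfer function (HRTF), but the underlying cues are the same.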
Specification