Wireless Head Mounted Display with Differential Rendering and Sound Localization
First Claim
1. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
tracking a gaze of a user of the HMD;
generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered based on the tracked gaze of the user;
generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user when rendered to headphones that are connected to the HMD;
transmitting the image data and the audio data via the RF transceiver to the HMD using the adjusted beamforming direction.
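The prediction-and-steering steps of claim 1 can be illustrated with a minimal sketch. This is not the patent's implementation; it assumes simple dead reckoning (position plus inertial-derived velocity over a lookahead interval) and computes an azimuth/elevation pair toward the predicted location for the RF transceiver to steer to. The function names, the 200 ms lookahead, and the coordinate conventions are all illustrative assumptions.

```python
import math

def predict_future_location(position, velocity, lookahead_s):
    """Dead-reckon the HMD's future position (meters) from its current
    position and an inertial-derived velocity (m/s). Illustrative only;
    a real tracker would fuse camera and inertial data, e.g. with a
    Kalman filter."""
    return tuple(p + v * lookahead_s for p, v in zip(position, velocity))

def beamforming_direction(transceiver_pos, target_pos):
    """Azimuth/elevation (radians) from the transceiver toward a target,
    with azimuth 0 along +x and elevation 0 in the horizontal plane."""
    dx = target_pos[0] - transceiver_pos[0]
    dy = target_pos[1] - transceiver_pos[1]
    dz = target_pos[2] - transceiver_pos[2]
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# HMD at (2, 0, 1.6) m moving +y at 0.5 m/s; steer toward its position
# 200 ms ahead from a transceiver mounted at (0, 0, 2) m.
future = predict_future_location((2.0, 0.0, 1.6), (0.0, 0.5, 0.0), 0.2)
az, el = beamforming_direction((0.0, 0.0, 2.0), future)
```

Steering toward the predicted rather than the current location compensates for the latency between measuring the HMD's pose and the beam actually being redirected.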
Abstract
A method is provided, including the following method operations: receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed; receiving inertial data processed from at least one inertial sensor of the HMD; analyzing the captured images and the inertial data to determine a current and predicted future location of the HMD; using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver towards the predicted future location of the HMD; tracking a gaze of a user of the HMD; generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered; generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user; transmitting the image data and the audio data via the RF transceiver to the HMD.
143 Citations
20 Claims
1. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
tracking a gaze of a user of the HMD;
generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered based on the tracked gaze of the user;
generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user when rendered to headphones that are connected to the HMD;
transmitting the image data and the audio data via the RF transceiver to the HMD using the adjusted beamforming direction.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 20)
11. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
tracking a gaze of a user of the HMD;
generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered based on the tracked gaze of the user;
transmitting the image data via the RF transceiver to the HMD using the adjusted beamforming direction.
(Dependent claims: 12, 13, 14, 15)
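The differential rendering recited in claims 1 and 11 (often called foveated rendering) can be sketched as follows. This is an illustrative assumption, not the patent's method: screen regions are assigned a resolution scale based on their angular distance from the tracked gaze point, so the foveal region gets full detail while the periphery is rendered at reduced cost. The 10 and 30 degree thresholds and the scale factors are hypothetical values.

```python
def render_scale(region_center, gaze_point, fovea_deg=10.0, mid_deg=30.0):
    """Pick a resolution scale for a screen region based on its angular
    distance (degrees) from the tracked gaze point. Thresholds are
    illustrative, not taken from the patent."""
    dx = region_center[0] - gaze_point[0]
    dy = region_center[1] - gaze_point[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= fovea_deg:
        return 1.0    # full resolution where the user is looking
    if dist <= mid_deg:
        return 0.5    # half resolution in the near periphery
    return 0.25       # quarter resolution in the far periphery

# A 3x3 grid of regions (centers in degrees), gaze at the center region:
scales = [render_scale((x, y), (0.0, 0.0))
          for y in (-40, 0, 40) for x in (-40, 0, 40)]
```

Because the eye resolves fine detail only within a few degrees of the fixation point, rendering the periphery at lower resolution reduces GPU load and, relevant here, the amount of image data that must cross the wireless link.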
16. A method, comprising:
receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed;
receiving inertial data processed from at least one inertial sensor of the HMD;
analyzing the captured images and the inertial data to determine a current location of the HMD and a predicted future location of the HMD;
using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver in a direction that is towards the predicted future location of the HMD;
generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by a user when rendered to headphones that are connected to the HMD;
transmitting the audio data via the RF transceiver to the HMD using the adjusted beamforming direction.
(Dependent claims: 17, 18, 19)
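The sound-localization step in claims 1 and 16 relies on binaural cues delivered over the HMD's headphones. A minimal sketch of two such cues follows, assuming the Woodworth far-field approximation for interaural time difference and constant-power stereo panning for level difference; production systems typically use full HRTF convolution instead, and none of these formulas are asserted to be the patent's method.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def interaural_time_delay(azimuth_rad):
    """Woodworth approximation of the interaural time difference (seconds)
    for a far-field source at the given azimuth (0 = straight ahead,
    positive = to the listener's right)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def pan_gains(azimuth_rad):
    """Constant-power stereo gains (left, right) for headphone playback,
    mapping azimuth in [-pi/2, pi/2] onto the panning arc."""
    theta = (azimuth_rad + math.pi / 2) / 2
    return math.cos(theta), math.sin(theta)

# A virtual source 45 degrees to the right: the sound should arrive
# slightly earlier and louder at the right ear.
itd = interaural_time_delay(math.radians(45))
left, right = pan_gains(math.radians(45))
```

Encoding these interaural time and level differences into the audio stream lets the user perceive virtual sounds as coming from specific directions, which is what the claim language means by enabling localization of the sounds when rendered to the headphones.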
Specification