WEARABLE HEAD-MOUNTED DISPLAY AND CAMERA SYSTEM WITH MULTIPLE MODES
Abstract
Embodiments of the present disclosure include systems and methods for a wearable head-mounted display and camera system with multiple modes of user interaction including, in some embodiments, a natural reality mode, an augmented reality mode, and a virtual reality mode.
28 Claims
1. A head-mounted device for providing multiple modes of user interaction, the head-mounted device comprising:
    a display device configured to present visual information to the user; and
    a user input device configured to receive an input from the user selecting one of a plurality of interaction modes, including at least a reality mode, an augmented reality mode, and a virtual reality mode;
    wherein when in reality mode, the visual information includes a live video feed from an image capture device associated with the head-mounted display device, the image capture device configured to capture images of a physical environment;
    wherein when in augmented reality mode, the visual information includes live video feed from the image capture device along with computer-generated simulated objects or computer image processing; and
    wherein when in virtual reality mode, the visual information includes a computer-generated simulated environment.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9
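As a rough illustration of the mode logic recited in claim 1, the minimal Python sketch below composes the visual information a display might present in each mode. All names and data representations here are hypothetical stand-ins, not from the patent: frames, objects, and environments are plain strings.

```python
from enum import Enum, auto

class InteractionMode(Enum):
    """The three interaction modes recited in claim 1 (names are illustrative)."""
    REALITY = auto()
    AUGMENTED_REALITY = auto()
    VIRTUAL_REALITY = auto()

def compose_visual_information(mode, live_feed, simulated_objects, simulated_environment):
    """Return the list of visual elements to present for the selected mode."""
    if mode is InteractionMode.REALITY:
        # Reality mode: present the live camera feed unmodified.
        return [live_feed]
    if mode is InteractionMode.AUGMENTED_REALITY:
        # Augmented reality mode: the live feed plus computer-generated objects.
        return [live_feed] + list(simulated_objects)
    # Virtual reality mode: only the computer-generated simulated environment.
    return [simulated_environment]
```

The sketch deliberately keeps the claim's distinction visible: the live feed appears in the first two modes, while the third mode replaces it entirely with generated content.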
10. A system for providing multiple modes of user interaction via a head-mounted device, the system comprising:
    a sensor unit configured to capture sensor data from the physical environment surrounding the head-mounted device;
    a sensory output unit configured to present sensory output information to a user of the head-mounted device;
    a user input unit configured to receive a user input;
    a processor unit; and
    a memory unit having instructions stored thereon, which when executed by the processor unit, cause the system to:
        receive, via the user input unit, a user selection of a first, second, or third user interaction mode; and
        if the user selection is the first user interaction mode:
            capture a live sensor data feed via the sensor unit; and
            present sensory information including the live sensor data feed via the sensory output unit;
        if the user selection is the second user interaction mode:
            capture a live sensor data feed via the sensor unit;
            generate one or more simulated objects, the one or more simulated objects including user perceptible characteristics;
            present sensory output information including the live sensor data feed and the one or more simulated objects via the sensory output unit; and
        if the user selection is the third user interaction mode:
            generate a simulated environment, the simulated environment including user perceptible characteristics; and
            present sensory output information including the simulated environment via the sensory output unit.

Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
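The conditional branches of claim 10 (and of the method counterpart in claim 21) amount to a three-way dispatch on the user's selected mode: capture and/or generate, then present. A minimal sketch of that dispatch, assuming hypothetical callables standing in for the claimed sensor unit, object generator, environment generator, and sensory output unit:

```python
def run_selected_mode(selection, capture_live_feed, generate_objects,
                      generate_environment, present):
    """Dispatch on the selected interaction mode (1, 2, or 3).

    The four callables are illustrative stand-ins, not names from the patent:
    capture_live_feed() -> live sensor data, generate_objects() -> simulated
    objects, generate_environment() -> simulated environment, present(info)
    -> deliver sensory output information to the user.
    """
    if selection == 1:
        # First mode: present the live sensor data feed only.
        present({"live_feed": capture_live_feed()})
    elif selection == 2:
        # Second mode: live feed together with generated simulated objects.
        present({"live_feed": capture_live_feed(),
                 "objects": generate_objects()})
    elif selection == 3:
        # Third mode: generated simulated environment only.
        present({"environment": generate_environment()})
    else:
        raise ValueError(f"unknown interaction mode: {selection!r}")
```

In this reading, the three modes differ only in which sources feed the single presentation step, which is why the system (claim 10) and method (claim 21) versions share the same branch structure.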
21. A method for providing multiple modes of user interaction via a head-mounted device, the head-mounted device associated with a sensor unit configured to capture sensor data from the physical environment surrounding the head-mounted device, a sensory output unit configured to present sensory output information to a user of the head-mounted device, and a user input unit configured to receive a user input, the method comprising:
    receiving, via the user input unit, a user selection of a first, second, or third user interaction mode; and
    if the user selection is the first user interaction mode:
        capturing a live sensor data feed via the sensor unit; and
        presenting sensory output information including the live sensor data feed via the sensory output unit;
    if the user selection is the second user interaction mode:
        capturing a live sensor data feed via the sensor unit;
        generating one or more simulated objects, the one or more simulated objects including user perceptible characteristics;
        presenting sensory output information including the live sensor data feed with the one or more simulated objects via the sensory output unit; and
    if the user selection is the third user interaction mode:
        generating a simulated environment, the simulated environment including user perceptible characteristics; and
        presenting sensory output information including the simulated environment via the sensory output unit.

Dependent claims: 22, 23, 24, 25, 26, 27, 28
Specification