Systems and methods for computer assisted operation
First Claim
1. A system, comprising:
a display in a wearable housing with camera and sensors;
a mirror coupled to the display to selectably turn on or off view of an outside environment in front of the person's eye to provide augmented reality, virtual reality, or a combination thereof;
a processor in the wearable housing coupled to the camera and to the variable reflectance mirror to selectably switch between augmented reality and virtual reality, the processor including code to perform motion tracking, to understand an environment with points or planes using an accelerometer sensor, and to estimate light or color in the environment using one video camera without a depth sensor in a mobile phone;
code to capture images from a plurality of angles of the environment;
code to acquire sensor data from sensors and optimize features extracted from each image and sensor data, where a feature conveys data unique to the image at a specific pixel location;
a learning machine in the wearable housing including a graphical processing unit (GPU), wherein the GPU discriminates between learned user gestures in real time and wherein the learning machine determines content to be presented based on previous activities or selections while generating content including virtual images to be integrated with surrounding images; and
a wireless transceiver coupled to the processor to communicate with a remote processor.
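The motion-tracking element of the claim can be illustrated with a minimal sketch: dead-reckoning position from accelerometer samples by numerical double integration. The function name and the rectangle-rule integration scheme are illustrative assumptions, not the patent's actual method.

```python
import numpy as np

def track_position(accel, dt):
    """Estimate device position from accelerometer samples.

    accel: (n, 3) array of acceleration samples in m/s^2
    dt:    sampling interval in seconds
    Returns an (n, 3) array of estimated positions, assuming the
    device starts at rest at the origin (rectangle-rule integration).
    """
    vel = np.cumsum(accel, axis=0) * dt   # integrate acceleration -> velocity
    pos = np.cumsum(vel, axis=0) * dt     # integrate velocity -> position
    return pos
```

In practice such accelerometer-only dead reckoning drifts quickly, which is why the claim pairs the accelerometer with camera-based understanding of the environment using points or planes.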
Abstract
A reality system includes a display aimed at a retina, the display providing 3D images with different depth view points; a variable reflectance mirror to selectably turn on or off view of an outside environment in front of the person's eye; a processor coupled to the camera and to the variable reflectance mirror to selectably switch between augmented reality and virtual reality; and a wireless transceiver coupled to the processor to communicate with a remote processor.
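The abstract's selectable switch between augmented and virtual reality can be sketched as a simple mode controller for the variable reflectance mirror. The class and method names below are hypothetical illustrations, not identifiers from the patent.

```python
class VariableReflectanceMirror:
    """Toggle between AR (outside view passes through) and VR (blocked)."""

    def __init__(self):
        self.reflectance = 0.0  # 0.0 = fully transmissive (AR mode)

    def set_mode(self, mode):
        """Switch the mirror between 'AR' and 'VR'; returns the new reflectance."""
        if mode == "AR":
            self.reflectance = 0.0   # let the outside environment through
        elif mode == "VR":
            self.reflectance = 1.0   # block the outside view entirely
        else:
            raise ValueError(f"unknown mode: {mode!r}")
        return self.reflectance

    @property
    def outside_visible(self):
        return self.reflectance < 1.0
```

A real device would drive actual mirror hardware in `set_mode`; the point here is only that one control surface switches the headset between the two modes, as the abstract describes.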
20 Claims
1. A system, as recited in the First Claim above.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20.
Specification