Three dimensional user interface effects on a display
First Claim
1. A graphical user interface method, comprising:
receiving optical data from one or more optical sensors disposed within a device, wherein the optical data comprises one or more of: two-dimensional image data, stereoscopic image data, structured light data, depth map data, and Lidar data;
receiving non-optical data from one or more non-optical sensors;
determining a position of a user of the device's head based, at least in part, on the received optical data and the received non-optical data;
generating a virtual 3D depiction of at least part of a graphical user interface on a display of the device; and
applying an appropriate perspective transformation to the virtual 3D depiction of the at least part of the graphical user interface on the display of the device,
wherein the acts of generating and applying are based, at least in part, on the determined position of the user of the device's head, the received optical data, and the received non-optical data, and
wherein the at least part of the graphical user interface is represented in a virtual 3D operating system environment.
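The claim does not specify how the "appropriate perspective transformation" is computed. One conventional realization is the off-axis ("fish-tank VR") asymmetric frustum, which skews the projection according to where the viewer's head sits relative to the screen plane. The sketch below is a minimal illustration under assumed conventions (screen-centered coordinates in metres, z pointing out of the display toward the viewer); the function name and defaults are hypothetical and not taken from the patent.

```python
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.01, far=10.0):
    """Asymmetric view frustum for a head-coupled display.

    `eye` is the user's head position in screen-centered coordinates
    (metres): x right, y up, z out of the display toward the viewer.
    """
    ex, ey, ez = eye
    # Frustum edges where the screen rectangle projects onto the near plane
    # (similar triangles between the eye, the screen, and the near plane).
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    # Standard OpenGL-style asymmetric frustum matrix.
    proj = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
    # View matrix: translate the world so the eye sits at the origin.
    view = np.eye(4)
    view[:3, 3] = [-ex, -ey, -ez]
    return proj @ view
```

For example, off_axis_projection((0.05, -0.02, 0.45), 0.15, 0.10) would suit a head about 45 cm from a roughly phone-sized display; as the head estimate moves, the frustum re-skews and the on-screen 3D content appears to hold a stable position behind the glass.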
Abstract
The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
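The abstract leaves the tracking method itself open. A common way to maintain such a real-time device frame from the named inertial sensors is gyroscope integration corrected by an accelerometer-driven complementary filter; the following is a minimal sketch under that assumption, with all names and the gravity-axis convention chosen for illustration rather than taken from the patent.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0, -wz, wy],
                     [wz, 0, -wx],
                     [-wy, wx, 0]])

class DeviceFrameTracker:
    """Tracks a device-fixed 3D frame from gyro + accelerometer samples.

    R maps device-body coordinates to world coordinates; its columns are
    the device's instantaneous axes, a discrete stand-in for the
    continuously updated frame of reference the abstract describes.
    """

    def __init__(self, alpha=0.02):
        self.R = np.eye(3)     # body-to-world rotation
        self.alpha = alpha     # accelerometer correction gain

    def update(self, gyro, accel, dt):
        # Propagate orientation with the angular rate (first-order
        # exponential-map integration of R' = R [w]_x).
        self.R = self.R @ (np.eye(3) + skew(gyro * dt))
        # Complementary filter: nudge the frame so the measured gravity
        # direction lines up with world "down", correcting gyro drift.
        # Assumes the accelerometer reads +g along body z when level.
        g_meas = self.R @ (accel / np.linalg.norm(accel))
        g_ref = np.array([0.0, 0.0, 1.0])
        corr = np.cross(g_meas, g_ref) * self.alpha
        self.R = (np.eye(3) + skew(corr)) @ self.R
        # Re-orthonormalize to keep R a valid rotation.
        u, _, vt = np.linalg.svd(self.R)
        self.R = u @ vt
        return self.R
```

Calling update() once per sensor sample (e.g., at 100 Hz with gyro rates in rad/s) keeps the frame responsive to fast rotation while the low-gain gravity correction absorbs slow drift, which matches the "instantaneous (or continuous)" framing above.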
20 Claims
1. A graphical user interface method, comprising:
receiving optical data from one or more optical sensors disposed within a device, wherein the optical data comprises one or more of: two-dimensional image data, stereoscopic image data, structured light data, depth map data, and Lidar data;
receiving non-optical data from one or more non-optical sensors;
determining a position of a user of the device's head based, at least in part, on the received optical data and the received non-optical data;
generating a virtual 3D depiction of at least part of a graphical user interface on a display of the device; and
applying an appropriate perspective transformation to the virtual 3D depiction of the at least part of the graphical user interface on the display of the device,
wherein the acts of generating and applying are based, at least in part, on the determined position of the user of the device's head, the received optical data, and the received non-optical data, and
wherein the at least part of the graphical user interface is represented in a virtual 3D operating system environment.
Dependent claims: 2, 3, 4, 5, 6, 7.
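Claim 1's determining step fuses optical and non-optical data but does not prescribe an algorithm. One plausible reading, sketched below, back-projects a detected 2D face box to a 3D head position with a pinhole camera model and bridges the gaps between camera frames with motion data; the face detector, constants, and blending scheme are all assumptions for illustration, not the patent's method.

```python
import numpy as np

# Assumed constants: average human face width and the camera's
# intrinsics; real code would calibrate these per device.
FACE_WIDTH_M = 0.15          # typical head breadth, metres
FOCAL_PX = 1000.0            # focal length in pixels (assumed)
CX, CY = 640.0, 360.0        # principal point for a 1280x720 sensor

def head_from_face_box(box_px):
    """Back-project a detected 2D face box to a 3D head position.

    `box_px` is (u, v, width) in pixels from any face detector
    (hypothetical upstream component). Pinhole model: depth follows
    from the apparent size of a known-width object.
    """
    u, v, w = box_px
    z = FOCAL_PX * FACE_WIDTH_M / w              # similar triangles
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return np.array([x, y, z])

class HeadTracker:
    """Blends optical fixes with inertially predicted motion.

    Between camera frames the last estimate is advanced with the
    device's own velocity (sign-flipped: if the device moves right,
    the head moves left in device coordinates), then pulled toward
    the next optical fix. A simple stand-in for the sensor fusion
    the claim leaves unspecified.
    """

    def __init__(self, beta=0.5):
        self.pos = np.array([0.0, 0.0, 0.5])   # start 0.5 m away
        self.beta = beta                        # optical blend weight

    def predict(self, device_vel, dt):
        self.pos = self.pos - device_vel * dt
        return self.pos

    def correct(self, face_box_px):
        optical = head_from_face_box(face_box_px)
        self.pos = (1 - self.beta) * self.pos + self.beta * optical
        return self.pos
```

The resulting head position is exactly the input the generating and applying steps need: feed it to an off-axis projection such as the one sketched under the First Claim above.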
8. A device, comprising:
a display;
one or more optical sensors;
one or more positional sensors;
a memory; and
one or more programmable control devices communicatively coupled to the display, the optical sensors, the positional sensors, and the memory, wherein the memory includes instructions for causing the one or more programmable control devices to:
receive optical data from the one or more optical sensors, wherein the optical data comprises one or more of: two-dimensional image data, stereoscopic image data, structured light data, depth map data, and Lidar data;
receive non-optical data from one or more non-optical sensors;
determine a position of a user of the device's head based, at least in part, on the received optical data and the received non-optical data;
generate a virtual 3D depiction of at least part of a graphical user interface on the display; and
apply an appropriate perspective transformation to the virtual 3D depiction of the at least part of the graphical user interface on the display of the device,
wherein the instructions to generate and apply are based, at least in part, on the determined position of the user of the device's head, the received optical data, and the received non-optical data, and
wherein the at least part of the graphical user interface is represented in a virtual 3D operating system environment.
Dependent claims: 9, 10, 11, 12, 13, 14.
15. A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to:
receive optical data from one or more optical sensors in a device, wherein the optical data comprises one or more of: two-dimensional image data, stereoscopic image data, structured light data, depth map data, and Lidar data;
receive non-optical data from one or more non-optical sensors;
determine a position of a user of the device's head based, at least in part, on the received optical data and the received non-optical data;
generate a virtual 3D depiction of at least part of a graphical user interface on a display of the device; and
apply an appropriate perspective transformation to the virtual 3D depiction of the at least part of the graphical user interface on the display of the device,
wherein the instructions to generate and apply are based, at least in part, on the determined position of the user of the device's head, the received optical data, and the received non-optical data, and
wherein the at least part of the graphical user interface is represented in a virtual 3D operating system environment.
Dependent claims: 16, 17, 18, 19, 20.
Specification