Three dimensional user interface effects on a display by using properties of motion
Abstract
The techniques disclosed herein use a compass, a MEMS accelerometer, a GPS module, and a MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame for the display, i.e., X- and Y-vectors lying in the plane of the display and a Z-vector pointing perpendicularly out of it. With the various inertial clues reported in real time by the accelerometer, gyrometer, and other instruments, the Frenet frame of the device may be tracked in real time to provide a continuous 3D frame of reference. Once this continuous frame of reference is known, the position of the user's eyes may either be inferred or calculated directly by using the device's front-facing camera. With the position of the user's eyes and a continuous 3D frame of reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
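The Frenet frame described above can be sketched in code. The following is a minimal illustration (not the patent's implementation): it assumes some sensor-fusion step has already produced a unit orientation quaternion from the accelerometer, gyrometer, and compass readings, and extracts the display's X-, Y-, and Z-vectors from it. The function name and quaternion convention are assumptions for this sketch.

```python
import numpy as np

def frenet_frame_from_quaternion(q):
    """Return the device's X-, Y-, and Z-vectors in world coordinates.

    q is a unit quaternion (w, x, y, z), such as one produced by fusing
    accelerometer, gyrometer, and compass readings (the fusion step
    itself is outside this sketch).
    """
    w, x, y, z = q
    # Standard quaternion-to-rotation-matrix conversion; the columns of R
    # are the device's body axes expressed in world coordinates.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    # Display X- and Y-vectors span the screen plane; Z is its normal.
    x_vec, y_vec, z_vec = R[:, 0], R[:, 1], R[:, 2]
    return x_vec, y_vec, z_vec
```

Re-running this on every sensor update yields the continuous 3D frame of reference the abstract refers to.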
22 Citations
20 Claims
1. A method, comprising:
generating a 2D depiction of a graphical user interface on a display of a device, wherein the graphical user interface comprises at least one graphical user interface object;
receiving positional data from one or more positional sensors disposed within the device;
determining a 3D frame of reference for the device based, at least in part, on the received positional data;
receiving optical data from one or more optical sensors disposed within the device;
determining a position of the eyes of a user of the device based, at least in part, on the received optical data;
detecting an activating gesture based, at least in part, on the received positional data; and
generating a virtual 3D depiction of the graphical user interface on the display of the device in response to the detection of the activating gesture, wherein the act of generating the virtual 3D depiction is based, at least in part, on the determined 3D frame of reference and the determined position of the user's eyes.
Dependent claims: 2, 3, 4, 5, 6, 7.
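The final claim element, generating a virtual 3D depiction from the frame of reference and the eye position, is commonly realized with an off-axis (asymmetric) perspective frustum. The sketch below shows that standard technique under stated assumptions; it is one plausible realization, not necessarily the patent's. All names are hypothetical; `x_vec`/`y_vec` span the display plane, `z_vec` is its outward normal, and `half_w`/`half_h` are the display's half-extents in world units.

```python
import numpy as np

def off_axis_frustum(eye, center, x_vec, y_vec, z_vec, half_w, half_h, near):
    """Compute left/right/bottom/top frustum extents at the near plane for
    an asymmetric perspective projection, so that rendered objects appear
    fixed in space as the tracked eye moves relative to the display.

    eye, center: 3D points (eye position, display center) in world space.
    x_vec, y_vec, z_vec: the display's Frenet frame (unit vectors).
    """
    offset = eye - center
    d = np.dot(offset, z_vec)    # perpendicular eye-to-screen distance
    sx = np.dot(offset, x_vec)   # lateral eye offset, screen coordinates
    sy = np.dot(offset, y_vec)   # vertical eye offset, screen coordinates
    # Project the screen edges onto the near plane as seen from the eye.
    left = (-half_w - sx) * near / d
    right = (half_w - sx) * near / d
    bottom = (-half_h - sy) * near / d
    top = (half_h - sy) * near / d
    return left, right, bottom, top
```

With the eye centered in front of the display the frustum is symmetric; as the eye (or the device, per its tracked frame of reference) moves, the frustum skews and the 2D interface objects acquire a convincing virtual 3D appearance.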
8. An apparatus, comprising:
a display;
one or more optical sensors;
one or more positional sensors;
a memory; and
one or more programmable control devices communicatively coupled to the display, the optical sensors, the positional sensors, and the memory, wherein the memory includes instructions for causing the one or more programmable control devices to:
generate a 2D depiction of a graphical user interface on the display, wherein the graphical user interface comprises at least one graphical user interface object;
receive positional data from the one or more positional sensors;
determine a 3D frame of reference for the apparatus based, at least in part, on the received positional data;
receive optical data from the one or more optical sensors;
determine a position of the eyes of a user of the apparatus based, at least in part, on the received optical data;
detect an activating gesture based, at least in part, on the received positional data; and
generate a virtual 3D depiction of the graphical user interface on the display in response to the detection of the activating gesture, wherein the instructions for causing the one or more programmable control devices to generate the virtual 3D depiction are based, at least in part, on the determined 3D frame of reference and the determined position of the user's eyes.
Dependent claims: 9, 10, 11, 12, 13, 14.
15. A non-transitory program storage device, readable by a programmable control device and comprising instructions stored thereon to cause one or more processing units to:
generate a 2D depiction of a graphical user interface on a display of a device, wherein the graphical user interface comprises at least one graphical user interface object;
receive positional data from one or more positional sensors disposed within the device;
determine a 3D frame of reference for the device based, at least in part, on the received positional data;
receive optical data from one or more optical sensors disposed within the device;
determine a position of the eyes of a user of the device based, at least in part, on the received optical data;
detect an activating gesture based, at least in part, on the received positional data; and
generate a virtual 3D depiction of the graphical user interface on the display of the device in response to the detection of the activating gesture, wherein the instructions to cause the one or more processing units to generate the virtual 3D depiction are based, at least in part, on the determined 3D frame of reference and the determined position of the user's eyes.
Dependent claims: 16, 17, 18, 19, 20.
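The claims recite "detecting an activating gesture based, at least in part, on the received positional data" without fixing a particular gesture. As a purely hypothetical illustration of one such detector, the sketch below flags a shake gesture from raw accelerometer samples; the function name, threshold, and peak count are all assumptions for this sketch, not values from the patent.

```python
def detect_shake(accel_samples, threshold=2.5, min_peaks=3):
    """Flag an activating 'shake' gesture when the acceleration magnitude
    (in g) exceeds `threshold` at least `min_peaks` times in the window.

    accel_samples is an iterable of (ax, ay, az) tuples.
    """
    peaks = 0
    above = False
    for ax, ay, az in accel_samples:
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        if mag > threshold and not above:
            peaks += 1       # count each rising crossing of the threshold
            above = True
        elif mag <= threshold:
            above = False
    return peaks >= min_peaks
```

A device at rest reports roughly 1 g (gravity only), so a quiet window never crosses the threshold, while a vigorous shake produces several crossings and activates the virtual 3D mode.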
Specification