Three Dimensional User Interface Effects on a Display by Using Properties of Motion
First Claim
1. A graphical user interface method, comprising:
receiving positional data from one or more position sensors disposed within a device;
determining a 3D frame of reference for the device based at least in part on the received positional data;
receiving optical data from one or more optical sensors disposed within the device;
determining a position of a user's eyes based at least in part on the received optical data; and
generating a virtual 3D depiction of at least one graphical user interface object on a display of the device, wherein the at least one graphical user interface object is represented in a virtual 3D operating system environment, and wherein the act of generating is based at least in part on the determined 3D frame of reference and the position of the user's eyes.
Abstract
The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a 3D frame of reference for a hand-held device. This can provide a true Frenet frame for the display, i.e., X- and Y-vectors lying in the plane of the display and a Z-vector pointing perpendicularly out of it. Because the accelerometer, gyrometer, and other instruments report their states in real time, the device's Frenet frame can be tracked continuously, providing a continuous 3D frame of reference. Once this continuous frame of reference is known, the position of the user's eyes may be either inferred or calculated directly using the device's front-facing camera. With the position of the user's eyes and a continuous 3D frame of reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
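The geometry the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical illustration and not the patent's implementation: given a rotation matrix describing the device's Frenet frame and an eye position reported by a front-facing camera, it computes a simple parallax shift for an on-screen object at a given virtual depth. The function names and the pinhole model are assumptions made for illustration.

```python
import numpy as np

def device_frame(R):
    # Columns of the device's 3x3 rotation matrix form a Frenet-style frame:
    # X and Y span the display plane, Z points perpendicularly out of it.
    return R[:, 0], R[:, 1], R[:, 2]

def parallax_offset(eye_world, device_origin, R, object_depth):
    # Express the eye position in device coordinates, then shift an
    # on-screen object in proportion to its virtual depth so that it
    # appears to sit behind the display glass (simple pinhole model).
    eye_local = R.T @ (np.asarray(eye_world, dtype=float)
                       - np.asarray(device_origin, dtype=float))
    ex, ey, ez = eye_local
    return (-ex * object_depth / ez, -ey * object_depth / ez)
```

Under this model, with the device lying flat (identity rotation) and the eye 0.5 m above it, moving the head 0.1 m to the right shifts an object 0.02 m "deep" by 4 mm in the opposite direction, producing the motion-parallax cue the abstract describes.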
149 Citations
43 Claims
1. A graphical user interface method, comprising:

receiving positional data from one or more position sensors disposed within a device;
determining a 3D frame of reference for the device based at least in part on the received positional data;
receiving optical data from one or more optical sensors disposed within the device;
determining a position of a user's eyes based at least in part on the received optical data; and
generating a virtual 3D depiction of at least one graphical user interface object on a display of the device, wherein the at least one graphical user interface object is represented in a virtual 3D operating system environment, and wherein the act of generating is based at least in part on the determined 3D frame of reference and the position of the user's eyes.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)
14. A graphical user interface, comprising:

a viewing surface;
a virtual 3D operating system environment; and
one or more graphical user interface objects, wherein the one or more graphical user interface objects are represented in the virtual 3D operating system environment and depicted on the viewing surface, and wherein the depiction of the one or more graphical user interface objects on the viewing surface is determined at least in part by a determined 3D frame of reference of the viewing surface with respect to a user of the viewing surface and a determined location of eyes of the user with respect to the viewing surface.

- View Dependent Claims (15, 16, 17, 18, 19)
20. A graphical user interface method, comprising:

receiving positional data from one or more position sensors disposed within a device;
determining a 3D frame of reference for the device based at least in part on the received positional data;
receiving optical data from one or more optical sensors disposed within the device;
generating a depiction of a graphical user interface on a display of the device, wherein the graphical user interface comprises at least one graphical user interface object; and
applying one or more visual effects to at least one of the at least one graphical user interface objects, wherein the one or more applied visual effects are based at least in part on the determined 3D frame of reference and the received optical data.

- View Dependent Claims (21, 22, 23, 24, 25, 26, 27, 28, 29, 30)
31. A method, comprising:

generating a 2D depiction of a graphical user interface on a display of a device, wherein the graphical user interface comprises at least one graphical user interface object;
receiving positional data from one or more position sensors disposed within the device;
determining a 3D frame of reference for the device based at least in part on the received positional data;
receiving optical data from one or more optical sensors disposed within the device;
determining a position of the eyes of a user of the device based at least in part on the received optical data;
detecting an activating gesture based at least in part on the received positional data; and
generating a virtual 3D depiction of the graphical user interface on the display of the device in response to the detection of the activating gesture, wherein the act of generating the virtual 3D depiction is based on the determined 3D frame of reference and the determined position of the user's eyes.

- View Dependent Claims (32, 33, 34, 35, 36, 37)
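Claim 31's activating gesture might, for example, be a shake detected in the positional data. The sketch below is a hypothetical illustration of that one step, not the claimed detection method: it flags a gesture when the accelerometer magnitude deviates from 1 g by more than a threshold, with the function name and threshold chosen here for illustration only.

```python
import math

def detect_activating_gesture(accel_samples, threshold_g=2.0):
    # accel_samples: iterable of (ax, ay, az) accelerometer readings in g.
    # A stationary device reads about 1 g (gravity alone); a sharp shake
    # produces a large transient deviation, treated here as the gesture.
    for ax, ay, az in accel_samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - 1.0) > threshold_g:
            return True
    return False
```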
38. An apparatus, comprising:

a display;
one or more optical sensors;
one or more positional sensors;
a memory; and
one or more programmable control devices communicatively coupled to the display, the optical sensors, the positional sensors, and the memory, wherein the memory includes instructions for causing the one or more programmable control devices to:
receive positional data from the one or more position sensors;
determine a 3D frame of reference for the apparatus based at least in part on the received positional data;
receive optical data from the one or more optical sensors;
determine a position of the eyes of a user of the apparatus based at least in part on the received optical data; and
render a virtual 3D depiction of at least one graphical user interface object on the display, wherein the instructions to render are based on the determined 3D frame of reference and the determined position of the user's eyes.

- View Dependent Claims (39, 40, 41, 42, 43)
Specification