Systems and methods for gesture based interaction with viewpoint dependent user interfaces
First Claim
1. A real-time gesture based interactive system configured to enable gesture based interaction with user interfaces rendered from a 3D object model in a viewpoint dependent manner, comprising:
a processor;
a camera system configured to capture image data;
memory containing:
an operating system including a 3D object model that describes three dimensional spatial relationships between a set of user interface objects comprising a first user interface object and a second user interface object;
a head tracking application; and
an object tracking application;
wherein the operating system configures the processor to:
capture image data using the camera system;
detect first physical coordinates of a user's head by processing at least a portion of the image data using the head tracking application;
determine a user viewpoint from which to render a user interface display based on the first physical coordinates of the user's head such that a portion of the first user interface object is occluded by the second user interface object in the rendered user interface display;
determine an object location by processing at least a portion of the captured image data using the object tracking application;
map the object location to a cursor location comprising three dimensional coordinates;
render a user interface display from the 3D object model and the cursor location based upon the user viewpoint determined based on the first physical coordinates of the user's head;
capture additional image data using the camera system;
detect second physical coordinates of the user's head by processing at least a portion of the additional image data using the head tracking application, the second physical coordinates being different from the first physical coordinates;
determine an updated user viewpoint from which to render a user interface display based on the second physical coordinates of the user's head, the updated user viewpoint being different from the user viewpoint such that the portion of the first user interface object is not occluded by the second user interface object in the updated user interface display;
determine an updated object location by processing at least a portion of the additional captured image data using the object tracking application;
map the updated object location to an updated cursor location comprising three dimensional coordinates; and
render an updated user interface display from the 3D object model and the updated cursor location based upon the updated user viewpoint determined based on the second physical coordinates of the user's head and the updated object location, where the updated user interface display is rendered to simulate motion parallax based upon depth of the user interface objects and the updated cursor location in the 3D object model.
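To make the sequence of claim elements above easier to follow, here is a minimal, non-authoritative sketch of one iteration of such a render loop. All names (the camera, head_tracker, object_tracker, scene, and display objects, and the Viewpoint class) are hypothetical stand-ins introduced for illustration; only the ordering of steps mirrors the claim.

# Minimal sketch of one frame of the claimed interaction loop; all object and
# method names are hypothetical stand-ins, not part of the patent.
from dataclasses import dataclass

@dataclass
class Viewpoint:
    x: float
    y: float
    z: float

def run_frame(camera, head_tracker, object_tracker, scene, display):
    """Capture image data, track the head and a gesturing object, and render."""
    frame = camera.capture()                      # capture image data

    head_xyz = head_tracker.detect(frame)         # physical coordinates of the user's head
    viewpoint = Viewpoint(*head_xyz)              # user viewpoint from which to render

    object_xyz = object_tracker.detect(frame)     # tracked object location (e.g. a hand)
    cursor_xyz = scene.map_to_cursor(object_xyz)  # 3D cursor coordinates in the model

    # Render the 3D object model from the user's viewpoint; a nearer object can
    # occlude part of a farther one, and a change in head position changes
    # which parts are occluded.
    image = scene.render(viewpoint, cursor_xyz)
    display.show(image)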
Abstract
Systems and methods are disclosed for performing three-dimensional (3D) gesture based interaction with a user interface rendered from a 3D object model based upon the viewpoint of a user. Gesture based interactive systems in accordance with many embodiments of the invention utilize a spatial model of how user interface objects, such as icons and a cursor, spatially relate to one another. The user interface model can be constructed as a 3D object model. With a 3D object model, the operating system can use depth information to render a user interface display appropriate to the requirements of a specific display technology. In many embodiments, head tracking is used to determine a viewpoint from which to render a user interface display from a 3D object model maintained by the operating system and the user interface can be updated in response to the detection of 3D gestures.
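As a rough illustration of the kind of 3D object model the abstract describes, the sketch below keeps a set of user interface objects (icons and a cursor) with three dimensional positions so a renderer can use their depth. The class and field names are assumptions made for this example, not terminology from the specification.

# Illustrative sketch of a 3D object model of user interface objects and a
# cursor; names and fields are assumptions, not terminology from the patent.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UIObject:
    name: str
    position: Tuple[float, float, float]   # (x, y, z) in model space; z is depth

@dataclass
class UIModel3D:
    objects: List[UIObject] = field(default_factory=list)
    cursor: UIObject = field(default_factory=lambda: UIObject("cursor", (0.0, 0.0, 0.0)))

    def draw_order(self) -> List[UIObject]:
        # Painter's-algorithm ordering: assuming z grows away from the viewer,
        # draw the deepest objects first so nearer objects occlude them.
        return sorted(self.objects + [self.cursor], key=lambda o: o.position[2], reverse=True)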
19 Claims
1. (Independent claim; full text set out above under First Claim.) View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18)
19. A method for gesture based interaction with a user interface rendered from a 3D object model that describes three dimensional spatial relationships between a set of user interface objects comprising a first user interface object and a second user interface object in a viewpoint dependent manner, comprising:
capturing image data using a camera system;
detecting first physical coordinates of a user's head by processing at least a portion of the image data using the processor configured by a head tracking application;
determining a user viewpoint from which to render a user interface display based on the first physical coordinates of the user's head such that a portion of the first user interface object is occluded by the second user interface object;
determining an object location by processing at least a portion of the captured image data using a processor configured by an object tracking application;
mapping the object location to a cursor location using the processor configured by an operating system, the cursor location comprising three dimensional coordinates;
rendering a user interface display from the 3D object model and the cursor location based upon the user viewpoint determined based on the first physical coordinates of the user's head using the processor configured by the operating system;
capturing additional image data using the camera system;
detecting second physical coordinates of the user's head by processing at least a portion of the additional image data using the head tracking application, the second physical coordinates being different from the first physical coordinates;
determining an updated user viewpoint from which to render a user interface display based on the second physical coordinates of the user's head, the updated user viewpoint being different from the user viewpoint such that the portion of the first user interface object is not occluded by the second user interface object;
determining an updated object location by processing at least a portion of the additional captured image data using the processor configured by the object tracking application;
mapping the updated object location to an updated cursor location using the processor configured by the operating system, the updated cursor location comprising three dimensional coordinates; and
rendering an updated user interface display from the 3D object model and the updated cursor location based upon the updated user viewpoint determined based on the second physical coordinates of the user's head and the updated object location using the processor configured by the operating system, where the updated user interface display is rendered to simulate motion parallax based upon depth of the user interface objects and the updated cursor location in the 3D object model.
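The final element of claim 19 recites rendering that simulates motion parallax based on the depth of the user interface objects. The sketch below shows one simple way such an effect could be computed, by projecting each model point toward a screen plane from the tracked viewpoint so that points at different depths shift by different amounts as the head moves; this projection model is an assumption for illustration only, not the method specified in the patent.

# Illustrative sketch of motion parallax: project a model point toward a screen
# plane at z = 0 from the tracked viewpoint. The simple perspective model used
# here is an assumption for the example, not the patent's specified method.

def project_with_parallax(obj_xyz, viewpoint_xyz):
    """Project a 3D model point to 2D screen coordinates for a given viewpoint.

    obj_xyz:       (x, y, z) of a user interface object; z > 0 lies behind the screen plane.
    viewpoint_xyz: (x, y, z) of the user's head; z < 0 lies in front of the screen plane.
    """
    ox, oy, oz = obj_xyz
    vx, vy, vz = viewpoint_xyz
    # Intersect the ray from the viewpoint through the object with the plane z = 0.
    t = -vz / (oz - vz)
    screen_x = vx + t * (ox - vx)
    screen_y = vy + t * (oy - vy)
    return screen_x, screen_y

# Example: with the head at (0, 0, -0.5), objects at depths 1.0 and 0.2 both
# project to x = 0; after the head moves to (0.2, 0, -0.5), the deeper object's
# projection shifts by about 0.13 while the shallower one's shifts by about
# 0.06, which is the depth-dependent shift that produces the parallax cue.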
Specification