Three-dimensional user interface
Abstract
A user interface method includes defining an interaction surface containing an interaction region in space. A sequence of depth maps is captured over time of at least a part of a body of a human subject. The depth maps are processed in order to detect a direction and speed of movement of the part of the body as the part of the body passes through the interaction surface. A computer application is controlled responsively to the detected direction and speed.
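The abstract does not commit to a particular algorithm for detecting direction and speed. As one illustration only, the minimal sketch below estimates both from the centroid of a depth-thresholded region in two consecutive depth maps; the function names, the thresholding heuristic, and the use of NumPy are assumptions, not taken from the patent.

```python
import numpy as np

def region_centroid(depth_map, depth_min, depth_max):
    """Return the 3D centroid (x, y, z) of pixels whose depth falls in
    [depth_min, depth_max] -- a stand-in for locating the tracked body part."""
    ys, xs = np.where((depth_map >= depth_min) & (depth_map <= depth_max))
    if xs.size == 0:
        return None
    zs = depth_map[ys, xs]
    return np.array([xs.mean(), ys.mean(), zs.mean()])

def direction_and_speed(prev_centroid, curr_centroid, dt):
    """Finite-difference estimate of movement between two depth frames:
    a unit direction vector and a scalar speed (distance units per second)."""
    delta = curr_centroid - prev_centroid
    speed = np.linalg.norm(delta) / dt
    direction = delta / np.linalg.norm(delta) if speed > 0 else np.zeros(3)
    return direction, speed
```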
19 Claims
1. A user interface method, comprising:
displaying an object on a display screen;
defining an interaction surface containing an interaction region in space, and mapping the interaction surface to the display screen;
capturing a sequence of depth maps over time of at least a part of a body of a human subject;
processing the depth maps in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface;
controlling a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point; and
wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.
Dependent claims: 2, 3, 4, 5, 6, 7, 15.
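Claim 1 ties the predicted touch point to the detected direction and speed and to a collision with a predefined three-dimensional shape. The sketch below illustrates one way this could be computed, assuming (purely for illustration) that the interaction surface is the plane z = surface_z and that the predefined shape is an axis-aligned box; neither assumption comes from the claim itself.

```python
import numpy as np

def predict_touch_point(position, direction, surface_z):
    """Extrapolate the body part's motion along `direction` until it reaches
    the interaction surface, modelled here as the plane z = surface_z."""
    if direction[2] == 0:
        return None  # moving parallel to the surface; no predicted penetration
    t = (surface_z - position[2]) / direction[2]
    if t < 0:
        return None  # moving away from the surface
    return position + t * direction

def collides_with_box(point, box_min, box_max):
    """Check whether a point lies inside a predefined 3D shape,
    represented in this sketch as an axis-aligned box."""
    return bool(np.all(point >= box_min) and np.all(point <= box_max))
```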
8. User interface apparatus, comprising:
a display screen, which is configured to display an object;
a sensing device, which is configured to capture a sequence of depth maps over time of at least a part of a body of a human subject;
a processor, which is configured to define an interaction surface, which contains an interaction region in space, and to map the interaction surface to the display screen, and to process the depth maps in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface, and to control a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point; and
wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.
Dependent claims: 9, 10, 11, 12, 13.
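Claim 8 recites the same processing in apparatus form, including the mapping of the interaction surface to the display screen. A plausible minimal mapping, assuming a rectangular interaction surface and a pixel-addressed screen (the function name, rectangle convention, and clamping behavior are all hypothetical), is a linear rescaling:

```python
def surface_to_screen(touch_xy, surface_rect, screen_size):
    """Map a 2D touch point on the interaction surface to screen pixels.

    surface_rect: (x0, y0, width, height) of the interaction surface in space
    screen_size:  (width_px, height_px) of the display screen
    """
    x0, y0, w, h = surface_rect
    sx, sy = screen_size
    u = (touch_xy[0] - x0) / w   # normalized 0..1 across the surface
    v = (touch_xy[1] - y0) / h
    # Clamp so points at the surface edge stay on screen
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (sx - 1)), int(v * (sy - 1))
```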
14. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer,
cause the computer to display an object on a display screen,
to define an interaction surface, which contains an interaction region in space, and to map the interaction surface to the display screen,
to process a sequence of depth maps created over time of at least a part of a body of a human subject in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface,
to control a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point, and
wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.
16. A user interface method, comprising:
displaying an object on a display screen;
defining, responsively to an input received from a user of the computer application, an interaction surface containing an interaction region in space for a computer application while specifying, based on the input received from the user, dimensions in space of the interaction region that correspond to an area of the display screen;
capturing a sequence of depth maps over time of at least a part of a body of a human subject;
processing the depth maps in order to detect a movement of the part of the body as the part of the body passes through the interaction surface;
controlling the computer application so as to change the displayed object on the screen responsively to the movement of the part of the body within the specified dimensions of the interaction region; and
wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.
Dependent claims: 17, 18, 19.
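Claim 16 differs in that the user specifies the dimensions in space of the interaction region that correspond to an area of the display screen, and only movement within those dimensions drives the application. A small configuration sketch under that reading is shown below; the class, field names, and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class InteractionRegion:
    """User-configured region in space tied to an area of the display screen."""
    origin: tuple        # (x, y, z) corner of the region in sensor coordinates
    size: tuple          # (width, height, depth) of the region in space
    screen_area: tuple   # (x_px, y_px, width_px, height_px) area it controls

    def contains(self, point):
        """True if a detected movement lies within the specified dimensions,
        so the application should respond to it."""
        return all(o <= p <= o + s
                   for p, o, s in zip(point, self.origin, self.size))

# Example: a 40 cm x 30 cm x 20 cm region driving a full 1920x1080 screen
region = InteractionRegion(origin=(-0.2, -0.15, 0.6),
                           size=(0.4, 0.3, 0.2),
                           screen_area=(0, 0, 1920, 1080))
```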
Specification