Velocity field interaction for free space gesture interface and control
First Claim
1. A method of automatically interpreting a gesture of a control object, in a three-dimensional (3D) sensor space using a 3D sensor, as a first gesture or a second gesture, the method including:
processing an output of a video camera of the 3D sensor thereby sensing a trajectory of movement of the control object in any direction in the 3D sensor space;
further processing the output of the video camera of the 3D sensor thereby sensing an orientation of the control object;
determining by processing the output of the video camera of the 3D sensor a surface of the control object;
defining a control plane that remains tangent to the surface of the control object throughout the movement of the control object in any direction in the 3D sensor space, the control plane being defined by processing the output of the video camera of the 3D sensor and the sensed orientation of the control object;
interpreting the gesture of the control object as the first gesture or the second gesture by comparing a direction of the trajectory of the movement of the control object to (i) a normal vector that is normal to the defined control plane and (ii) a surface of the defined control plane, wherein:
the gesture is the first gesture when the direction of the trajectory of the movement of the control object is within a pre-determined range of the normal vector of the control plane; and
the gesture is the second gesture when the direction of the trajectory of the movement of the control object is parallel to the surface of the control plane, within a pre-determined range, such that the gesture is the first gesture when the direction of the trajectory of the movement of the control object is more normal to the surface of the control plane than parallel to the surface of the control plane and the gesture is the second gesture when the direction of the trajectory of the movement of the control object is more parallel to the surface of the control plane than normal to the surface of the control plane.
Abstract
The technology disclosed relates to automatically interpreting a gesture of a control object in a three-dimensional sensor space by sensing a movement of the control object in the three-dimensional sensor space, sensing an orientation of the control object, defining a control plane tangential to a surface of the control object, and interpreting the gesture based on whether the movement of the control object is more normal to the control plane or more parallel to the control plane.
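The classification the abstract describes can be illustrated with a minimal sketch (not the patented implementation): a movement is treated as a "first" gesture (e.g., a push) when its direction lies close to the control-plane normal, and as a "second" gesture (e.g., a swipe) when it lies closer to the plane itself. The function name and the 45-degree split are illustrative assumptions, not taken from the claims.

```python
import math

def classify_gesture(direction, plane_normal, threshold_deg=45.0):
    """Return 'first' if the motion is more normal to the control plane
    than parallel to it, else 'second'."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    mag_d = math.sqrt(sum(d * d for d in direction))
    mag_n = math.sqrt(sum(n * n for n in plane_normal))
    # Angle between the trajectory direction and the plane normal, in degrees.
    angle = math.degrees(math.acos(abs(dot) / (mag_d * mag_n)))
    return "first" if angle < threshold_deg else "second"

# A motion straight along the normal reads as a push ("first" gesture):
print(classify_gesture((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # first
# A motion within the plane reads as a swipe ("second" gesture):
print(classify_gesture((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # second
```

The claims express the same split as a "pre-determined range" around the normal; the angular threshold here is one way such a range might be parameterized.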
42 Citations
26 Claims
1. A method of automatically interpreting a gesture of a control object, in a three-dimensional (3D) sensor space using a 3D sensor, as a first gesture or a second gesture, the method including:
processing an output of a video camera of the 3D sensor thereby sensing a trajectory of movement of the control object in any direction in the 3D sensor space;
further processing the output of the video camera of the 3D sensor thereby sensing an orientation of the control object;
determining by processing the output of the video camera of the 3D sensor a surface of the control object;
defining a control plane that remains tangent to the surface of the control object throughout the movement of the control object in any direction in the 3D sensor space, the control plane being defined by processing the output of the video camera of the 3D sensor and the sensed orientation of the control object;
interpreting the gesture of the control object as the first gesture or the second gesture by comparing a direction of the trajectory of the movement of the control object to (i) a normal vector that is normal to the defined control plane and (ii) a surface of the defined control plane, wherein:
the gesture is the first gesture when the direction of the trajectory of the movement of the control object is within a pre-determined range of the normal vector of the control plane; and
the gesture is the second gesture when the direction of the trajectory of the movement of the control object is parallel to the surface of the control plane, within a pre-determined range, such that the gesture is the first gesture when the direction of the trajectory of the movement of the control object is more normal to the surface of the control plane than parallel to the surface of the control plane and the gesture is the second gesture when the direction of the trajectory of the movement of the control object is more parallel to the surface of the control plane than normal to the surface of the control plane.
- View Dependent Claims (2, 3, 20, 26)
4. A method of automatically interpreting a gesture of a control object, in a three-dimensional (3D) sensor space relative to a flow depicted in a display, as a first gesture or a second gesture, the method including:
processing an output of a video camera of a 3D sensor thereby sensing a movement of the control object in any direction in the 3D sensor space using the 3D sensor;
further processing the output of the video camera of the 3D sensor thereby sensing an orientation of the control object and determining a surface of the control object;
defining a control plane that remains tangent to the surface of the control object throughout the movement of the control object in any direction in the 3D sensor space, the control plane being defined by processing the output of the video camera of the 3D sensor and the sensed orientation of the control object; and
interpreting the gesture of the control object as the first gesture or the second gesture by comparing a direction of the flow to (i) a normal vector that is normal to the defined control plane and (ii) a surface of the defined control plane, wherein:
the gesture is the first gesture when the direction of the flow is within a pre-determined range of the normal vector of the control plane; and
the gesture is the second gesture when the direction of the flow is parallel to the surface of the control plane, within a pre-determined range, such that the gesture is the first gesture when the direction of the flow is more normal to the surface of the control plane than parallel to the surface of the control plane, and the gesture is the second gesture when the direction of the flow is more parallel to the surface of the control plane than normal to the surface of the control plane.
- View Dependent Claims (5, 6, 21)
7. A method of navigating a multi-layer presentation tree using gestures of a control object in a three-dimensional (3D) sensor space, using a 3D sensor, by distinguishing between the control object and a sub-object of the control object, the method including:
processing an output of a video camera of the 3D sensor thereby sensing a movement of the control object in any direction in the 3D sensor space using a control plane that remains tangent to a surface of the control object throughout the movement of the control object in any direction in the 3D sensor space, the control plane being defined by processing the output of the video camera of the 3D sensor and a sensed orientation of the control object;
interpreting by a computing device a direction of the movement of the control object as scrolling through a particular level of the multi-layer presentation tree when the direction of the movement of the control object is more normal with respect to the surface of the tangent control plane than parallel with respect to the surface of the control plane; and
further processing the output of the video camera of the 3D sensor thereby sensing a movement of the sub-object in the 3D sensor space, and interpreting the movement of the sub-object as selecting a different level in the multi-layer presentation tree, the different level being either a deeper or higher level from the particular level, and subsequently interpreting by the computing device the movement of the control object as scrolling through the different level of the multi-layer presentation tree.
- View Dependent Claims (8, 9, 22)
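The navigation scheme of claim 7 can be sketched as a small state machine: motion of the control object (e.g., a palm) scrolls within the current level of the presentation tree, while motion of a sub-object (e.g., a finger) selects a deeper or higher level. This is a hypothetical illustration; the class and method names are assumptions, not terms from the claims.

```python
class TreeNavigator:
    """Tracks the current level of a multi-layer presentation tree and the
    scroll position within that level."""

    def __init__(self):
        self.level = 0     # current level of the presentation tree
        self.position = 0  # scroll offset within that level

    def on_control_object_move(self, steps):
        # Control-object movement (more normal to the tangent control plane
        # than parallel to it) scrolls through the current level.
        self.position += steps

    def on_sub_object_move(self, delta_level):
        # Sub-object movement selects a different (deeper or higher) level;
        # subsequent control-object movement scrolls the new level.
        self.level = max(0, self.level + delta_level)
        self.position = 0

nav = TreeNavigator()
nav.on_control_object_move(3)   # scroll within level 0
nav.on_sub_object_move(+1)      # descend one level
print(nav.level, nav.position)  # 1 0
```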
10. A method of navigating a multi-layer presentation tree using gestures of a control object in a three-dimensional (3D) sensor space, using a 3D sensor, by distinguishing between the control object and one or more sub-objects of the control object, the method including:
processing an output of a video camera of the 3D sensor thereby sensing a movement of the control object in any direction in the 3D sensor space using a control plane that remains tangent to a surface of the control object throughout the movement of the control object in any direction in the 3D sensor space, the control plane being defined by processing the output of the video camera of the 3D sensor and a sensed orientation of the control object determined by processing the output of the video camera of the 3D sensor;
interpreting a direction of the movement of the control object as traversing through a particular level of the presentation tree when the direction of the movement of the control object is one of (i) more normal with respect to the surface of the tangent control plane than parallel with respect to the surface of the control plane and (ii) more parallel with respect to the surface of the tangent control plane than normal with respect to the surface of the control plane;
further processing the output of the video camera of the 3D sensor thereby sensing a movement of a first sub-object of the control object in the 3D sensor space and interpreting the movement of the first sub-object as selecting a different level in the presentation tree, the different level being either a deeper or higher level from the particular level, and subsequently interpreting the movement of the control object as traversing through the different level of the presentation tree; and
further processing the output of the video camera of the 3D sensor thereby sensing a movement of a second sub-object of the control object in the 3D sensor space and interpreting the movement of the second sub-object as selecting a different presentation layout from a current presentation layout of the presentation tree, and subsequently presenting the presentation tree in the different presentation layout.
- View Dependent Claims (11)
12. A method of automatically determining by a computing device a control to a virtual control by a control object in a three-dimensional (3D) sensor space, using a 3D sensor, by distinguishing between the control object and a sub-object of the control object, the method including:
processing an output of a video camera of a 3D sensor thereby sensing a location of the control object in a 3D sensor space using a control plane that remains tangent to a surface of the control object throughout a movement of the control object in any direction in the 3D space, the control plane being defined by processing the output of the video camera of the 3D sensor and a sensed orientation of the control object determined by processing the output of the video camera of the 3D sensor;
determining by a computing device whether the control object engages the virtual control based on the location of the control object;
further processing the output of the video camera of the 3D sensor thereby sensing a direction of a movement of the sub-object of the control object in any direction in the 3D sensor space; and
interpreting by a computing device the direction of the movement of the sub-object as a gesture controlling the virtual control if the control object engages the virtual control and when the direction of the movement of the control object is one of (i) more normal with respect to the surface of the tangent control plane than parallel with respect to the surface of the control plane and (ii) more parallel with respect to the surface of the tangent control plane than normal with respect to the surface of the control plane.
- View Dependent Claims (13, 14, 23)
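The engagement gating in claim 12 can be illustrated with a brief sketch: the control object must first engage the virtual control (here approximated as coming within an assumed proximity radius of it) before sub-object movement is interpreted as operating that control. The function names, the radius, and the dot-product threshold are all illustrative assumptions; direction vectors are assumed to be unit length.

```python
import math

def engages(control_pos, control_center, radius=0.05):
    """True when the control object is close enough to the virtual control
    to be considered engaging it (proximity test is an assumption)."""
    return math.dist(control_pos, control_center) <= radius

def interpret_sub_object(engaged, sub_direction, plane_normal):
    """Interpret sub-object movement as a gesture on the virtual control
    only while the control object engages it; otherwise ignore it."""
    if not engaged:
        return None
    # Compare the movement direction to the control-plane normal
    # (both assumed unit vectors): more normal -> press, more parallel -> slide.
    dot = abs(sum(a * b for a, b in zip(sub_direction, plane_normal)))
    return "press" if dot > 0.707 else "slide"  # cos(45 deg) split, assumed

# The control object hovers near the virtual control, so it engages:
hit = engages((0.0, 0.0, 0.0), (0.0, 0.0, 0.01))
print(interpret_sub_object(hit, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # press
```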
15. A method of automatically determining, by a computing device, a control to a virtual control by a control object in a three-dimensional (3D) sensor space, the method including:
processing an output of a video camera of the 3D sensor thereby sensing a location of the control object in the 3D sensor space using a 3D sensor;
determining by a computing device whether the control object engages the virtual control based on the location of the control object;
further processing the output of the video camera of the 3D sensor thereby sensing an orientation of the control object and determining a surface of the control object;
defining a control plane that remains tangent to the surface of the control object throughout a movement of the control object in any direction in the 3D sensor space, the control plane being determined by processing the output of the video camera of the 3D sensor and the sensed orientation of the control object; and
interpreting by a computing device a direction of a normal vector that is normal to a surface of the control plane that is tangent to the determined surface of the control object, the direction of the normal vector being interpreted with respect to a direction of a trajectory of a movement of the control object that creates a gesture controlling the virtual control, if the control object engages the virtual control.
- View Dependent Claims (16, 17, 24)
18. A method of automatically interpreting by a computing device a gesture of a control object in a 3D sensor space relative to one or more objects depicted in a display, the method including:
processing an output of a video camera of a 3D sensor thereby sensing a speed of a movement of the control object moving in any direction through the 3D sensor space using the 3D sensor;
interpreting by a computing device the movement as a path on the display if the speed of the movement exceeds a pre-determined threshold; and
duplicating one or more of the objects in the display that intersect the interpreted path.
- View Dependent Claims (19, 25)
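Claim 18 describes a speed-gated interaction: only when the control object moves faster than a pre-determined threshold is its motion treated as a path on the display, and any displayed objects intersecting that path are duplicated. A minimal sketch follows; the threshold value and the point-membership intersection test are simplified assumptions.

```python
def interpret_fast_movement(speed, path_points, objects, threshold=1.0):
    """Return the display-object list with every object that intersects the
    drawn path duplicated, but only when the movement exceeds the speed
    threshold; slower movements leave the objects unchanged."""
    if speed <= threshold:
        return list(objects)  # too slow: not interpreted as a path
    path = set(path_points)
    result = []
    for obj in objects:
        result.append(obj)
        if obj["pos"] in path:        # object lies on the interpreted path
            result.append(dict(obj))  # duplicate it
    return result

objs = [{"name": "a", "pos": (1, 1)}, {"name": "b", "pos": (5, 5)}]
out = interpret_fast_movement(2.0, [(1, 1), (2, 2)], objs)
print(len(out))  # 3: object "a" intersected the path and was duplicated
```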
Specification