Surface UI for gesture-based interaction
First Claim
1. A system comprising:
one or more processors;
one or more memories;
at least one sensing plane positioned in space to receive input from one or more users that are interacting with the at least one sensing plane;
a detection component, maintained on the one or more memories and executable by the one or more processors, to detect one or more dimensions of a first input image and a second input image received from a first imaging component and a second imaging component, respectively, and to render a touch image, the touch image comprising a combination of the at least first and second input images, wherein each of the first and second input images include at least part of the received input from the one or more users;
an edge-detection filter that is applied to at least the first and second input images to highlight one or more edge contours of the first and second input images, respectively, to thereby yield a first and a second edge image; and
a pixel-wise comparison component, maintained on the one or more memories and executable by the one or more processors, to perform pixel-wise multiplication of the first and second edge images to render the touch image by identifying where the one or more edge contours of the first and second edge images overlap while excluding background objects that fail to align in the first and second edge images, wherein:
the detection component is further executable by the one or more processors to identify, using the touch image, the one or more users that are interacting with the at least one sensing plane; and
the first input image or the second input image includes data from one or more other users and the detection component is further executable by the one or more processors to determine that the one or more other users are not interacting with the at least one sensing plane and thereby the one or more other users are not in the touch image.
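The claim's core filtering step, an edge-detection filter followed by pixel-wise multiplication of the two edge images, can be sketched with synthetic NumPy arrays. This is a minimal illustration, not the patent's implementation: the gradient-threshold edge detector, the image sizes, and the object positions are all assumptions standing in for the claimed components.

```python
import numpy as np

def edge_image(img, thresh=0.4):
    """Binary edge map from gradient magnitude (a stand-in for the
    claim's edge-detection filter; a Sobel or Canny filter would
    play the same role)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

def touch_image(img_a, img_b):
    """Pixel-wise multiplication of the two edge images: only edge
    contours present at the same pixel in BOTH views survive, which
    is what excludes misaligned background objects."""
    return edge_image(img_a) * edge_image(img_b)

# Synthetic example: an object touching the sensing plane appears at
# the same location in both rectified views; a background object is
# shifted by parallax between the views.
view_a = np.zeros((10, 10)); view_b = np.zeros((10, 10))
view_a[4:6, 4:6] = 1.0; view_b[4:6, 4:6] = 1.0   # on-plane object, aligned
view_a[1, 1] = 1.0;      view_b[1, 7] = 1.0      # background, misaligned

touch = touch_image(view_a, view_b)
assert touch[3:7, 3:7].any()     # on-plane contour survives
assert not touch[0:3, :].any()   # background edges cancel out
```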
2 Assignments
0 Petitions
Abstract
Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object-sensing arrangement configured to include a sensing plane located, vertically or horizontally, between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects which are on the sensing plane and/or in the background scene, as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned which shows the user the objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond that depth can be "removed" and do not appear in the output image.
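The depth selection described above follows from stereo geometry: in a simple rectified two-camera setup, an object at depth Z appears with disparity d = f·B/Z between the views, so after remapping both views to the sensing plane, only points at the plane's depth coincide. A small numeric illustration (the focal length, baseline, and depths are assumed values, not figures from the patent):

```python
# Why only objects at the sensing plane's depth line up between the
# two views: disparity d = f * B / Z for a rectified stereo pair.
f, B = 500.0, 0.10          # assumed focal length (px) and baseline (m)
Z_plane = 0.50              # assumed depth of the sensing plane (m)

def disparity(Z):
    return f * B / Z

d_plane = disparity(Z_plane)   # 100 px
d_hand  = disparity(0.50)      # object touching the plane
d_bg    = disparity(2.00)      # background object, farther away

# Residual shift after remapping to the plane: zero only on the plane.
assert d_hand - d_plane == 0.0
assert abs(d_bg - d_plane) == 75.0   # background stays misaligned
```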
146 Citations
36 Claims
1. A system comprising:
one or more processors;
one or more memories;
at least one sensing plane positioned in space to receive input from one or more users that are interacting with the at least one sensing plane;
a detection component, maintained on the one or more memories and executable by the one or more processors, to detect one or more dimensions of a first input image and a second input image received from a first imaging component and a second imaging component, respectively, and to render a touch image, the touch image comprising a combination of the at least first and second input images, wherein each of the first and second input images include at least part of the received input from the one or more users;
an edge-detection filter that is applied to at least the first and second input images to highlight one or more edge contours of the first and second input images, respectively, to thereby yield a first and a second edge image; and
a pixel-wise comparison component, maintained on the one or more memories and executable by the one or more processors, to perform pixel-wise multiplication of the first and second edge images to render the touch image by identifying where the one or more edge contours of the first and second edge images overlap while excluding background objects that fail to align in the first and second edge images, wherein:
the detection component is further executable by the one or more processors to identify, using the touch image, the one or more users that are interacting with the at least one sensing plane; and
the first input image or the second input image includes data from one or more other users and the detection component is further executable by the one or more processors to determine that the one or more other users are not interacting with the at least one sensing plane and thereby the one or more other users are not in the touch image.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
18. A method comprising:
acquiring first and second input images of one or more users interacting with a sensing plane from first and second imaging components, respectively;
determining edge contours of the first and second input images to yield first and second highlighted edge images based at least in part on features of the one or more users interacting with the sensing plane;
performing pixel-wise multiplication of the first and second highlighted edge images to render a touch image by identifying where the edge contours of the first and second highlighted edge images overlap while excluding background objects that fail to overlap in the first and second highlighted edge images; and
recognizing each of the one or more users interacting with the sensing plane based on the first and second highlighted edge images, wherein the first input image or the second input image includes data from one or more other users not interacting with the sensing plane and thereby the data from the one or more other users is not part of the touch image.
View Dependent Claims (19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32)
33. A system comprising:
a processor;
a memory, coupled to the processor, storing:
a first component, operable by the processor, to acquire at least first and second input images from first and second imaging components, respectively;
a second component, operable by the processor, to remap the first and second input images with respect to a sensing plane;
a third component, operable by the processor, to determine and highlight edge contours of the first and second input images to yield first and second highlighted images; and
a fourth component, operable by the processor, to combine the first and second highlighted images to obtain a touch image that includes first data associated with edge contours that align after the combining and that excludes second data associated with edge contours that do not align after the combining, the first data being associated with one or more users interacting with the sensing plane and the second data being associated with one or more other users not interacting with the sensing plane.
View Dependent Claims (34, 35, 36)
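Claim 33's second component remaps the input images with respect to the sensing plane. For a planar surface viewed by a camera, such a remap is a homography (projective warp). A minimal nearest-neighbor sketch follows; the 3×3 matrix `H`, the image size, and the pure-translation example are assumptions for illustration, since in practice H would come from a camera-to-plane calibration:

```python
import numpy as np

def remap_to_plane(img, H):
    """Warp img by homography H via inverse mapping with
    nearest-neighbor sampling (a stand-in for the claim's 'remap
    with respect to a sensing plane')."""
    h, w = img.shape
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                         # source coords per output pixel
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

# A pure translation is the simplest homography: shift right by 2 px.
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
img = np.zeros((8, 8)); img[3, 3] = 1.0
warped = remap_to_plane(img, H)
assert warped[3, 5] == 1.0   # the bright pixel moved 2 columns right
```

After both views are remapped this way, points on the sensing plane coincide pixel-for-pixel, which is what lets the fourth component's combination step keep aligned contours and discard the rest.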
Specification