Enhanced camera-based input
Abstract
Enhanced camera-based input, in which a detection region surrounding a user is defined in an image of the user within a scene, and a position of an object (such as a hand) within the detection region is detected. Additionally, a control (such as a key of a virtual keyboard) in a user interface is interacted with based on the detected position of the object.
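The abstract's pipeline — detect an object's position inside a detection region, then drive a control such as a virtual-keyboard key — can be sketched in Python. The rectangular region, the band-per-key mapping, and all names below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned detection region in image coordinates (pixels)."""
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def select_key(detection, hand, keys):
    """Map a detected hand position to a key of a virtual keyboard.

    The detection region is divided into equal-width vertical bands,
    one per key; the band containing the hand selects the key."""
    px, py = hand
    if not detection.contains(px, py):
        return None  # hand outside the detection region: no interaction
    band = (px - detection.x) * len(keys) // detection.w
    return keys[band]

region = Rect(x=100, y=50, w=300, h=120)
print(select_key(region, hand=(260, 90), keys=list("ABCDE")))  # → C
```

A real system would detect the hand in each camera frame; here the position is supplied directly to keep the sketch self-contained.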
33 Claims
1. A non-transitory computer readable medium encoded with a computer program product, the computer program product comprising instructions that, when executed, operate to cause a computer to perform operations comprising:
generating an image of a user within a scene;
defining, in the image, a detection region surrounding the user, further comprising:
    determining a position of a torso and a reach of an arm of the user;
    defining the detection region to exclude the torso and at least a portion of a region of the image unreachable by the arm;
    determining a portion of the detection region in which a second user could be detected; and
    defining the detection region to exclude the portion of the detection region in which the second user could be detected;
detecting a position of a hand of the user within the detection region; and
interacting with a control in a user interface based on the detected position of the hand, the control comprising items aligned with a guide line defined relative to an avatar representation of the user on the user interface.
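Claim 1's final limitation aligns interface items with a guide line defined relative to the avatar. A minimal sketch, assuming the guide line is a circular arc above the avatar's on-screen position (the arc angles, radius, and function names are illustrative assumptions, not taken from the patent):

```python
import math

def layout_items(items, avatar_pos, radius=1.0, start_deg=150.0, end_deg=30.0):
    """Place interface items along a guide line: here, an arc defined
    relative to the avatar's on-screen position."""
    ax, ay = avatar_pos
    n = len(items)
    positions = {}
    for i, item in enumerate(items):
        t = i / (n - 1) if n > 1 else 0.5  # fraction of the way along the arc
        ang = math.radians(start_deg + t * (end_deg - start_deg))
        # Screen y grows downward, so subtracting sin places the arc above the avatar.
        positions[item] = (ax + radius * math.cos(ang), ay - radius * math.sin(ang))
    return positions
```

With three items and the avatar at the origin, the middle item lands at the top of the arc, directly above the avatar.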
2. A computer-implemented method comprising:
defining, in an image of a user within a scene, a detection region surrounding the user, the defining comprising:
    determining an unreachable region of the image not reachable by an object associated with the user;
    defining the detection region to exclude at least a portion of the unreachable region;
    determining a portion of the detection region in which a second user could be detected; and
    defining the detection region to exclude the portion of the detection region in which the second user could be detected;
detecting a position of the object within the detection region; and
interacting with a control in a user interface based on the detected position of the object.

Dependent claims (not shown): 3-27, 30-32.
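The defining steps recited in claim 2 amount to constructing the set of reachable pixels and subtracting excluded areas. A pure-Python sketch over pixel coordinates, where the circular reach model and box-shaped exclusions (the torso, an area where a second user could appear) are simplifying assumptions:

```python
def detection_region(width, height, center, reach, exclusions):
    """Set of (x, y) pixels in the detection region: within `reach` of
    `center`, minus excluded boxes given as (x0, y0, x1, y1)."""
    cx, cy = center

    def excluded(x, y):
        return any(x0 <= x < x1 and y0 <= y < y1 for x0, y0, x1, y1 in exclusions)

    return {
        (x, y)
        for y in range(height)
        for x in range(width)
        if (x - cx) ** 2 + (y - cy) ** 2 <= reach ** 2 and not excluded(x, y)
    }
```

For example, with the torso at (10, 10), a reach of 6 pixels, a torso box, and a second-user box along the left edge, a pixel at arm's length above the torso is in the region while pixels inside either box are not.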
28. A non-transitory computer readable medium encoded with a computer program product, the computer program product comprising instructions that, when executed, operate to cause a computer to perform operations comprising:
defining, in an image of a user within a scene, a detection region surrounding the user, the defining comprising:
    determining an unreachable region of the image not reachable by an object associated with the user;
    defining the detection region to exclude at least a portion of the unreachable region;
    determining a portion of the detection region in which a second user could be detected; and
    defining the detection region to exclude the portion of the detection region in which the second user could be detected;
detecting a position of the object within the detection region; and
interacting with a control in a user interface based on the detected position of the object.
29. A device comprising a processor configured to:
define, in an image of a user within a scene, a detection region surrounding the user, at least in part by:
    determining an unreachable region of the image not reachable by an object associated with the user;
    defining the detection region to exclude at least a portion of the unreachable region;
    determining a portion of the detection region in which a second user could be detected; and
    defining the detection region to exclude the portion of the detection region in which the second user could be detected;
detect a position of the object within the detection region; and
interact with a control in a user interface based on the detected position of the object.
33. An apparatus comprising:
means for defining, in an image of a user within a scene, a detection region surrounding the user, the means for defining comprising:
    means for determining an unreachable region of the image not reachable by an object associated with the user;
    means for defining the detection region to exclude at least a portion of the unreachable region;
    means for determining a portion of the detection region in which a second user could be detected; and
    means for defining the detection region to exclude the portion of the detection region in which the second user could be detected;
means for detecting a position of the object within the detection region; and
means for interacting with a control in a user interface based on the detected position of the object.
Specification