Enhanced camera-based input
First Claim
1. A computer-implemented method comprising:
obtaining an image of a control object in a scene, the image captured by a camera;
determining a location of a first object in a user interface on a display, based on the location of the control object in the scene, wherein the first object comprises a control portion and a non-control portion;
determining a range of motion of the control portion of the first object based on a dimension of the non-control portion of the first object;
determining a first set of items; and
dynamically positioning one or more guide lines in the user interface on the display based on the determined location of the first object, the determined range of motion of the control portion of the first object, and a quantity of the items in the first set of items, wherein:
the items are shown on the display and aligned with the one or more guide lines in the user interface; and
the one or more guide lines are positioned such that the items do not overlap with the non-control portion of the first object.
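The positioning step of claim 1 can be illustrated with a minimal sketch: given an on-screen avatar (the first object) whose torso is the non-control portion and whose hand is the control portion, a guide line is sized from a torso dimension (a stand-in for the hand's range of motion) and the items are spread along it so they clear the torso. All function names, proportions, and geometry below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the guide-line positioning described in claim 1.
# Names and proportions (e.g. reach = 0.9 * torso_height) are assumptions.
import math

def position_guide_line(avatar_center, torso_width, torso_height, num_items):
    """Place `num_items` item positions along an arc-shaped guide line
    above an avatar, sized so no item overlaps the avatar's torso
    (the non-control portion)."""
    cx, cy = avatar_center
    # Range of motion of the control portion (e.g. a hand) estimated
    # from a dimension of the non-control portion: here, assume the
    # reachable radius is proportional to torso height.
    reach = 0.9 * torso_height
    # The radius must also clear the torso horizontally.
    radius = max(reach, (torso_width / 2) * 1.2)
    # Spread the items over an arc above the avatar, left to right.
    positions = []
    for i in range(num_items):
        t = i / (num_items - 1) if num_items > 1 else 0.5
        angle = math.pi * (1 - t)          # pi = leftmost, 0 = rightmost
        x = cx + radius * math.cos(angle)
        y = cy - radius * math.sin(angle)  # screen y grows downward
        positions.append((x, y))
    return positions
```

The item count enters through the arc subdivision, the avatar's location through the arc's center, and the range of motion through the radius, matching the three inputs the claim enumerates.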
Abstract
Enhanced camera-based input, in which a detection region surrounding a user is defined in an image of the user within a scene, and a position of an object (such as a hand) within the detection region is detected. Additionally, a control (such as a key of a virtual keyboard) in a user interface is interacted with based on the detected position of the object.
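The abstract's pipeline, detecting an object's position inside a detection region and mapping it to a control such as a virtual-keyboard key, can be sketched as follows. The rectangular region, the key grid, and the function name are assumptions for illustration only.

```python
# Illustrative sketch of the abstract: a detection region is defined in
# the image, and a detected object position inside it selects a control
# (here, a key in a virtual-keyboard grid). Layout is assumed.

def select_key(hand_pos, region, keys, columns):
    """Map a hand position inside a rectangular detection region
    (left, top, width, height) to a key in a grid of `columns` columns;
    return None if the hand is outside the region."""
    hx, hy = hand_pos
    left, top, width, height = region
    if not (left <= hx < left + width and top <= hy < top + height):
        return None                        # object outside the detection region
    rows = -(-len(keys) // columns)        # ceiling division
    col = int((hx - left) / width * columns)
    row = int((hy - top) / height * rows)
    index = row * columns + col
    return keys[index] if index < len(keys) else None
```

For example, with a 3-column grid of six keys over a 300x200 region, a hand detected near the top-left selects the first key, and a position outside the region selects nothing.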
31 Claims
1. A computer-implemented method comprising:
obtaining an image of a control object in a scene, the image captured by a camera;
determining a location of a first object in a user interface on a display, based on the location of the control object in the scene, wherein the first object comprises a control portion and a non-control portion;
determining a range of motion of the control portion of the first object based on a dimension of the non-control portion of the first object;
determining a first set of items; and
dynamically positioning one or more guide lines in the user interface on the display based on the determined location of the first object, the determined range of motion of the control portion of the first object, and a quantity of the items in the first set of items, wherein:
the items are shown on the display and aligned with the one or more guide lines in the user interface; and
the one or more guide lines are positioned such that the items do not overlap with the non-control portion of the first object.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
12. A device comprising a processor configured to:
obtain an image of a control object in a scene, the image captured by a camera;
determine a location of a first object in a user interface on a display, based on the location of the control object in the scene, wherein the first object comprises a control portion and a non-control portion;
determine a range of motion of the control portion of the first object based on a dimension of the non-control portion of the first object;
determine a first set of items; and
dynamically position one or more guide lines in the user interface on the display based on the determined location of the first object, the determined range of motion of the control portion of the first object, and a quantity of the items in the first set of items, wherein:
the items are shown on the display and aligned with the one or more guide lines in the user interface; and
the one or more guide lines are positioned such that the items do not overlap with the non-control portion of the first object.
Dependent claims: 13, 14, 15, 16, 17, 18, 19, 20.
21. An apparatus comprising:
means for obtaining an image of a control object in a scene, the image captured by image capturing means;
means for determining a location of a first object in a user interface on a display, based on the location of the control object in the scene, wherein the first object comprises a control portion and a non-control portion;
means for determining a range of motion of the control portion of the first object based on a dimension of the non-control portion of the first object;
means for determining a first set of items; and
means for dynamically positioning one or more guide lines in the user interface on the display based on the determined location of the first object, the determined range of motion of the control portion of the first object, and a quantity of the items in the first set of items, wherein:
the items are shown on the display and aligned with the one or more guide lines in the user interface; and
the one or more guide lines are positioned such that the items do not overlap with the non-control portion of the first object.
Dependent claims: 22, 23, 24, 25, 26, 27, 28.
29. A non-transitory computer readable medium encoded with instructions that, when executed, operate to cause a computer to perform operations comprising:
obtaining an image of a control object in a scene, the image captured by a camera;
determining a location of a first object in a user interface on a display, based on the location of the control object in the scene, wherein the first object comprises a control portion and a non-control portion;
determining a range of motion of the control portion of the first object based on a dimension of the non-control portion of the first object;
determining a first set of items; and
dynamically positioning one or more guide lines in the user interface on the display based on the determined location of the first object, the determined range of motion of the control portion of the first object, and a quantity of the items in the first set of items, wherein:
the items are shown on the display and aligned with the one or more guide lines in the user interface; and
the one or more guide lines are positioned such that the items do not overlap with the non-control portion of the first object.
Dependent claims: 30, 31.
Specification