Utilizing real world objects for user input
First Claim
1. A method of a device, comprising:
processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
determining at least two regions of the interactive surface area;
tracking each of the at least two regions of the interactive surface area;
mapping commands of a user interface to the at least two regions of the interactive surface area, wherein the mapping commands of a user interface to regions of the interactive surface area further comprises:
selecting a number of commands of the user interface from a plurality of user commands based on how many tracked regions comprise the tracked regions of the interactive surface area, wherein a number of commands of the plurality of commands exceeds how many tracked regions comprise the tracked regions; and
correlating each tracked region with a different command based in part on priorities associated with each command, the priorities having been determined based in part on an application currently active on the device;
determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
performing a mapped command of the user interface, on the device, wherein the mapped command is determined based on the selected region.
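
The two "wherein" clauses of claim 1 amount to a concrete selection rule: choose as many commands as there are tracked regions, drawing from a larger candidate pool, and hand out the highest-priority commands first. A minimal Python sketch of that rule follows; the function name, the (command, priority) pairs, and the region labels are assumptions made for illustration, not anything the claim prescribes.

    def map_commands(tracked_regions, candidate_commands):
        """Correlate each tracked region with a different command.
        candidate_commands holds (command, priority) pairs; per the claim,
        the pool is larger than the number of tracked regions, and the
        priorities come from the application currently active on the device.
        """
        assert len(candidate_commands) > len(tracked_regions)
        ranked = sorted(candidate_commands, key=lambda c: c[1], reverse=True)
        chosen = ranked[:len(tracked_regions)]  # one command per tracked region
        return {region: cmd for region, (cmd, _p) in zip(tracked_regions, chosen)}

    # Example: a media player exposes three commands, but only two regions
    # of the surface (say, two halves of a notebook cover) are tracked.
    mapping = map_commands(["left_half", "right_half"],
                           [("play_pause", 3), ("next_track", 2), ("mute", 1)])
    print(mapping)  # {'left_half': 'play_pause', 'right_half': 'next_track'}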
Abstract
Methods, systems, apparatuses and computer-readable media for utilizing real world objects to interact with a user interface are presented. The method may comprise a device processing image data to identify an interactive surface area and an interacting object. Subsequently, the device may determine at least two regions of the interactive surface area. In addition, the device may map commands of a user interface to the at least two regions of the interactive surface area. Subsequently, the device may determine a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area. In addition, the device may perform a mapped command of the user interface, wherein the mapped command is determined based on the selected region.
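
The abstract's final two steps, selecting a region by the interacting object's proximity and executing the command mapped to that region, reduce to a nearest-region test. A minimal sketch, under the assumptions (not stated in the abstract) that each tracked region is summarized by a centroid and that straight-line distance is the proximity measure:

    import math

    def select_region(region_centroids, object_position):
        """Return the region whose centroid lies closest to the interacting
        object (e.g., a tracked fingertip); one possible proximity test."""
        return min(region_centroids,
                   key=lambda r: math.dist(region_centroids[r], object_position))

    regions = {"region_a": (40, 120), "region_b": (200, 120)}
    selected = select_region(regions, object_position=(190, 115))
    print(selected)  # 'region_b'; the command mapped to it would then run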
27 Claims
1. A method of a device, comprising:
processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
determining at least two regions of the interactive surface area;
tracking each of the at least two regions of the interactive surface area;
mapping commands of a user interface to the at least two regions of the interactive surface area, wherein the mapping commands of a user interface to regions of the interactive surface area further comprises:
selecting a number of commands of the user interface from a plurality of user commands based on how many tracked regions comprise the tracked regions of the interactive surface area, wherein a number of commands of the plurality of commands exceeds how many tracked regions comprise the tracked regions; and
correlating each tracked region with a different command based in part on priorities associated with each command, the priorities having been determined based in part on an application currently active on the device;
determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
performing a mapped command of the user interface, on the device, wherein the mapped command is determined based on the selected region.
Dependent claims: 2–14.
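
The identification step is notable for what it omits: neither object is recognized as a hand, notebook, or anything else before the two are told apart; only their positions relative to one another are used. One way to picture such a rule, as a hedged sketch with hypothetical inputs (two segmented blobs given as bounding boxes) and a simple containment test standing in for whatever relative-position test an actual implementation would use:

    def differentiate(blob_a, blob_b):
        """Tell the interactive surface apart from the interacting object
        using only relative position; neither blob is recognized first.
        Assumed rule for this sketch: the blob whose bounding box contains
        the other blob's centroid is the surface, since the interacting
        object appears within the surface's extent. Blobs are given as
        axis-aligned boxes (x, y, width, height)."""
        def centroid(b):
            return (b[0] + b[2] / 2.0, b[1] + b[3] / 2.0)

        def contains(box, point):
            return (box[0] <= point[0] <= box[0] + box[2]
                    and box[1] <= point[1] <= box[1] + box[3])

        if contains(blob_a, centroid(blob_b)):
            return {"surface": blob_a, "interactor": blob_b}
        return {"surface": blob_b, "interactor": blob_a}

    # A fingertip blob overlapping a notebook-cover blob:
    print(differentiate((20, 30, 300, 200), (120, 40, 30, 50)))
    # {'surface': (20, 30, 300, 200), 'interactor': (120, 40, 30, 50)}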
15. A method of a device, comprising:
processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
determining at least two regions of the interactive surface area;
tracking each of the at least two regions of the interactive surface area;
mapping commands of a user interface to the at least two regions of the interactive surface area;
determining visibility of each tracked region of the interactive surface area;
determining priority of commands mapped on a hidden tracked region and on a visible tracked region responsive to the hidden tracked region being hidden;
remapping commands from the hidden tracked region to the visible tracked region based on the priority of commands, wherein remapping the commands comprises comparing a first priority associated with a first command mapped to a hidden tracked region and a second priority associated with a second command mapped to a visible tracked region, and remapping the first command to the second tracked region responsive to the first priority exceeding the second priority;
determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
performing a mapped command of the user interface, on the device, wherein the mapped command is determined based on the selected region.
Dependent claim: 16.
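
The remapping limitation of claim 15 is essentially a priority comparison triggered by occlusion: when a tracked region becomes hidden, its command displaces a lower-priority command on a still-visible region. A minimal sketch, with the dictionaries, region labels, and priority values all being illustrative assumptions:

    def remap_on_occlusion(mapping, priorities, hidden_region, visible_region):
        """If the command stranded on a newly hidden region outranks the
        command on a visible region, move it there; either way the hidden
        region stops being selectable. All structures are illustrative."""
        first_command = mapping.pop(hidden_region)
        second_command = mapping[visible_region]
        if priorities[first_command] > priorities[second_command]:
            mapping[visible_region] = first_command  # higher priority wins
        return mapping

    mapping = {"region_a": "answer_call", "region_b": "volume_up"}
    priorities = {"answer_call": 5, "volume_up": 1}
    # region_a becomes occluded, e.g. covered by the user's other hand:
    print(remap_on_occlusion(mapping, priorities, "region_a", "region_b"))
    # {'region_b': 'answer_call'}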
17. An apparatus, comprising:
at least one processor configured to:
process image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
determine at least two regions of the interactive surface area;
track each of the at least two regions of the interactive surface area;
map commands of a user interface to the at least two regions of the interactive surface area, wherein the at least one processor configured to map commands of a user interface to the at least two regions of the interactive surface area is further configured to:
select a number of commands of the user interface from a plurality of user commands based on how many tracked regions comprise the tracked regions of the interactive surface area, wherein a number of commands of the plurality of commands exceeds how many tracked regions comprise the tracked regions; and
correlate each tracked region with a different command based in part on priorities associated with each command, the priorities having been determined based in part on an application currently active on the apparatus;
determine a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area;
perform a mapped command, wherein the mapped command is determined based on the selected region; and
a memory coupled to the at least one processor.
Dependent claims: 18–19.
20. An apparatus, comprising:
at least one processor configured to:
process image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
determine at least two regions of the interactive surface area;
map commands of a user interface to the at least two regions of the interactive surface area;
determine visibility of each tracked region of the interactive surface area;
determine priority of commands mapped on a hidden tracked region and on a visible tracked region responsive to the hidden tracked region being hidden;
remap commands from the hidden tracked region to the visible tracked region based on the priority of commands, wherein the at least one processor is configured to compare a first priority associated with a first command mapped to a hidden tracked region and a second priority associated with a second command mapped to a visible tracked region and to remap the first command to the second tracked region responsive to the first priority exceeding the second priority;
determine a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area;
perform a mapped command, wherein the mapped command is determined based on the selected region; and
a memory coupled to the at least one processor.
21. An apparatus, comprising:
means for processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
means for determining at least two regions of the interactive surface area;
means for mapping commands of a user interface to the at least two regions of the interactive surface area, wherein the means for mapping commands of a user interface to regions of the interactive surface area further comprises:
means for tracking each of the at least two regions of the interactive surface area;
means for selecting a number of commands of the user interface from a plurality of user commands based on how many tracked regions comprise the tracked regions of the interactive surface area, wherein a number of commands of the plurality of commands exceeds how many tracked regions comprise the tracked regions; and
means for correlating each tracked region with a different command based in part on priorities associated with each command, the priorities having been determined based in part on an application currently active on the apparatus;
means for determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
means for performing a mapped command of the user interface, wherein the mapped command is determined based on the selected region.
Dependent claims: 22–24.
25. A computer program product, comprising:
a non-transitory computer-readable storage medium comprising:
code for processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
code for determining at least two regions of the interactive surface area;
code for mapping commands of a user interface to the at least two regions of the interactive surface area, wherein the code for mapping commands further comprises code for tracking each of the at least two regions of the interactive surface area, code for selecting a number of commands of the user interface from a plurality of user commands based on how many tracked regions comprise the tracked regions of the interactive surface area, wherein a number of commands of the plurality of commands exceeds how many tracked regions comprise the tracked regions, and code for correlating each tracked region with a different command based in part on priorities associated with each command, the priorities having been determined based in part on an active application;
code for determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
code for performing a mapped command of the user interface, wherein the mapped command is determined based on the selected region.
26. An apparatus, comprising:
means for processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
means for determining at least two regions of the interactive surface area;
means for tracking each of the at least two regions of the interactive surface area;
means for mapping commands of a user interface to the at least two regions of the interactive surface area;
means for determining visibility of each tracked region of the interactive surface area;
means for determining priority of commands mapped on a hidden tracked region and on a visible tracked region responsive to the hidden tracked region being hidden;
means for remapping commands from the hidden tracked region to the visible tracked region based on the priority of commands, wherein remapping the commands comprises comparing a first priority associated with a first command mapped to a hidden tracked region and a second priority associated with a second command mapped to a visible tracked region, and remapping the first command to the second tracked region responsive to the first priority exceeding the second priority;
means for determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
means for performing a mapped command of the user interface, on the device, wherein the mapped command is determined based on the selected region.
27. A computer program product, comprising:
a non-transitory computer-readable storage medium comprising:
code for processing image data to identify an interactive surface area and an interacting object using an object identification technique that differentiates between objects in the image data based at least in part on relative positions of the interactive surface area and the interacting object to one another without first recognizing an object comprising the interactive surface area or the interacting object;
code for determining at least two regions of the interactive surface area;
code for tracking each of the at least two regions of the interactive surface area;
code for mapping commands of a user interface to the at least two regions of the interactive surface area;
code for determining visibility of each tracked region of the interactive surface area;
code for determining priority of commands mapped on a hidden tracked region and on a visible tracked region responsive to the hidden tracked region being hidden;
code for remapping commands from the hidden tracked region to the visible tracked region based on the priority of commands, wherein remapping the commands comprises comparing a first priority associated with a first command mapped to a hidden tracked region and a second priority associated with a second command mapped to a visible tracked region, and remapping the first command to the second tracked region responsive to the first priority exceeding the second priority;
code for determining a selected region of the interactive surface area based on a proximity of the interacting object to the interactive surface area; and
code for performing a mapped command of the user interface, on the device, wherein the mapped command is determined based on the selected region.
Specification