ENHANCED VIRTUAL TOUCHPAD
Abstract
A method, including receiving, by a computer, a two-dimensional (2D) image containing at least a physical surface and segmenting the physical surface into one or more physical regions. A functionality is assigned to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, and a sequence of three-dimensional (3D) maps is received, the sequence of 3D maps containing at least a hand of a user of the computer, the hand positioned on one of the physical regions. The 3D maps are analyzed to detect a gesture performed by the user, and based on the gesture, an input is simulated for the tactile input device corresponding to the one of the physical regions.
25 Claims
1. A method, comprising:
receiving, by a computer, a two-dimensional (2D) image containing at least a physical surface;
segmenting the physical surface into one or more physical regions;
assigning a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device;
receiving a sequence of three-dimensional (3D) maps containing at least a hand of a user of the computer, the hand positioned on one of the physical regions;
analyzing the 3D maps to detect a gesture performed by the user; and
simulating, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
(Dependent claims: 2, 3, 4, 5, 6, 7)
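The method of claim 1 can be illustrated with a minimal sketch: segment the surface into regions, give each region a functionality standing in for a tactile input device, locate the hand over a region, and map a detected gesture to a simulated input for that region's device. All names (`Region`, `region_at`, `simulate_input`) and the coordinate model are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

# Hypothetical region model: each physical region on the surface is
# assigned a functionality that stands in for a tactile input device.
@dataclass
class Region:
    name: str
    bounds: tuple  # (x0, y0, x1, y1) in surface coordinates
    device: str    # e.g. "keyboard", "touchpad"

def region_at(regions, point):
    """Return the region whose bounds contain the (x, y) point, if any."""
    x, y = point
    for r in regions:
        x0, y0, x1, y1 = r.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return r
    return None

def simulate_input(gesture, region):
    """Map a detected gesture to an input event for the region's device."""
    return {"device": region.device, "event": gesture}

# Stand-in for segmenting the 2D image: two fixed regions on the surface.
regions = [
    Region("left", (0, 0, 50, 100), "keyboard"),
    Region("right", (50, 0, 100, 100), "touchpad"),
]

# A "tap" gesture detected at a hand position taken from the 3D maps.
hand_xy = (70, 40)
event = simulate_input("tap", region_at(regions, hand_xy))
print(event)  # {'device': 'touchpad', 'event': 'tap'}
```

In the claimed method the regions would come from segmenting the received 2D image and the hand position from the sequence of 3D maps; the fixed values here only stand in for those inputs.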
8. An apparatus, comprising:
a sensing device configured to receive a two-dimensional (2D) image containing at least a physical surface, and to receive a sequence of three-dimensional (3D) maps containing at least a hand of a user, the hand positioned on the physical surface;
a display; and
a computer coupled to the sensing device and the display, and configured to segment the physical surface into one or more physical regions, to assign a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, to analyze the 3D maps to detect a gesture performed by the user, and to simulate, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
(Dependent claims: 9, 10, 11, 12, 13, 14)
15. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a two-dimensional (2D) image containing at least a physical surface, to segment the physical surface into one or more physical regions, to assign a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, to receive a sequence of three-dimensional (3D) maps containing at least a hand of a user of the computer, the hand positioned on one of the physical regions, to analyze the 3D maps to detect a gesture performed by the user, and to simulate, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
16. A method, comprising:
receiving a sequence of three-dimensional (3D) maps containing at least a physical surface, one or more physical objects positioned on the physical surface, and a hand of a user of the computer, the hand positioned in proximity to the physical surface;
analyzing the 3D maps to detect a gesture performed by the user;
projecting, onto the physical surface, an animation in response to the gesture; and
incorporating the one or more physical objects into the animation.
(Dependent claim: 17)
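The projection step of claim 16 can be sketched as composing an animation frame in which the detected physical objects become part of the rendered content. The text grid, the `O`/`*` markers, and the function name are assumptions for illustration only; a real system would render to a projector using object positions extracted from the 3D maps.

```python
# Sketch of claim 16's projection mapping: render a frame of the animation
# onto the surface, incorporating each physical object (drawn as 'O') and
# responding to the user's gesture (a '*' ripple at the gesture position).
def render_frame(width, height, objects, gesture):
    """Build a text 'frame' of the projected animation."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for (x, y) in objects:          # incorporate the physical objects
        grid[y][x] = "O"
    gx, gy = gesture["pos"]
    if grid[gy][gx] == ".":         # animate in response to the gesture
        grid[gy][gx] = "*"
    return ["".join(row) for row in grid]

frame = render_frame(5, 3, objects=[(1, 1), (3, 2)], gesture={"pos": (0, 0)})
print("\n".join(frame))
# *....
# .O...
# ...O.
```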
18. An apparatus, comprising:
a sensing device configured to receive a sequence of three-dimensional (3D) maps containing at least a physical surface, one or more physical objects positioned on the physical surface, and a hand of a user, the hand positioned in proximity to the physical surface;
a projector; and
a computer coupled to the sensing device and the projector, and configured to analyze the 3D maps to detect a gesture performed by the user, to present, using the projector, an animation onto the physical surface in response to the gesture, and to incorporate the one or more physical objects into the animation.
(Dependent claim: 19)
20. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a sequence of three-dimensional (3D) maps containing at least a physical surface, one or more physical objects positioned on the physical surface, and a hand of a user of the computer, the hand positioned in proximity to the physical surface, to analyze the 3D maps to detect a gesture performed by the user, to project, onto the physical surface, an animation in response to the gesture, and to incorporate the one or more physical objects into the animation.
21. A method, comprising:
receiving, by a computer, a two-dimensional (2D) image containing at least a physical surface;
segmenting the physical surface into one or more physical regions;
assigning a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device;
receiving a sequence of three-dimensional (3D) maps containing at least an object held by a hand of a user of the computer, the object positioned on one of the physical regions;
analyzing the 3D maps to detect a gesture performed using the object; and
simulating, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
(Dependent claims: 22, 24)
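Claim 21 extends the method to gestures performed with a held object such as a stylus. A minimal sketch of one way to detect such a gesture: classify a tap from the object tip's distance to the surface across the sequence of 3D maps. The threshold, the single-frame tap simplification, and the function name are assumptions, not the patent's algorithm.

```python
# Hypothetical tap detector for a held object: the tip's depth (distance to
# the surface, in mm) is sampled from each 3D map in the sequence.
TOUCH_MM = 5  # tip-to-surface distance treated as contact (assumed threshold)

def detect_tap(tip_depths_mm):
    """Return True if the tip contacts the surface and lifts again.

    For simplicity a tap is a single contact frame bracketed by
    non-contact frames on both sides.
    """
    touching = [d <= TOUCH_MM for d in tip_depths_mm]
    return any(
        not before and hit and not after
        for before, hit, after in zip(touching, touching[1:], touching[2:])
    )

print(detect_tap([20, 12, 3, 15]))   # approach, touch, lift -> True
print(detect_tap([20, 18, 17, 16]))  # hover only -> False
```

A detected tap would then feed the same `simulate_input` step as a hand gesture, addressed to the device of the region under the object.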
23. An apparatus, comprising:
a sensing device configured to receive a two-dimensional (2D) image containing at least a physical surface, and to receive a sequence of three-dimensional (3D) maps containing at least an object held by a hand of a user, the object positioned on the physical surface;
a display; and
a computer coupled to the sensing device and the display, and configured to segment the physical surface into one or more physical regions, to assign a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, to analyze the 3D maps to detect a gesture performed using the object, and to simulate, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
25. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a two-dimensional (2D) image containing at least a physical surface, to segment the physical surface into one or more physical regions, to assign a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, to receive a sequence of three-dimensional (3D) maps containing at least an object held by a hand of a user of the computer, the object positioned on one of the physical regions, to analyze the 3D maps to detect a gesture performed using the object, and to simulate, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
Specification