Intelligent robotic interface input device
First Claim
1. A computer program product comprising a non-transitory computer usable medium having control logic stored therein for causing an input device to track and transform hand gestures into input commands for controlling a computer or electrically operated machine, said control logic comprising:
- First computer readable program code for processing X, Y and Z coordinates dimension data by recording a width and a height of a user to determine the X and Y coordinates and a distance of the user to determine the Z coordinates;
- Second computer readable program code for providing input data and commands for executing various operations of the computer or electrically operated machine;
- Third computer readable program code for establishing a virtual working area by the user's width and height, wherein a center point of the virtual working area relative to the user is established;
- Fourth computer readable program code for establishing a virtual space keyboard zone, wherein keys in the virtual space keyboard zone are positioned relative to the center point of the virtual working area; and
- Fifth computer readable program code for establishing a virtual hand-signal language zone, wherein X, Y, Z finger coordinates are established relative to the center point of the virtual working area to establish hand gestures.
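The coordinate processing the claim describes can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the midpoint-based center derivation, and the key-offset scheme are all assumptions made for demonstration.

```python
# Illustrative sketch of claim 1's coordinate processing: the user's
# width and height give X and Y, the user's distance gives Z, and a
# center point anchors the virtual working area and keyboard zone.
# All names and the center/offset conventions are assumptions.

def working_area_center(user_width, user_height, user_distance):
    """Derive the virtual working area's center point (X, Y, Z)
    from the user's measured width, height, and distance."""
    x = user_width / 2.0   # horizontal midpoint of the user's span
    y = user_height / 2.0  # vertical midpoint
    z = user_distance      # depth of the working plane
    return (x, y, z)

def key_position(center, dx, dy):
    """Place a virtual keyboard key at an offset (dx, dy) relative
    to the working-area center, on the same depth plane."""
    cx, cy, cz = center
    return (cx + dx, cy + dy, cz)

center = working_area_center(user_width=60.0, user_height=170.0,
                             user_distance=120.0)
# A hypothetical key one unit left and two units up from center:
a_key = key_position(center, dx=-1.0, dy=2.0)
```

Positioning every key as an offset from the center point, rather than in absolute coordinates, mirrors the claim's requirement that the keyboard zone be established relative to the virtual working area's center.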
Abstract
An intelligent object tracking and gesture sensing input device translates a user's hand gestures into data and commands for operating a computer or various machines. The device is equipped with web cameras, a video vision camera, and sensors. A logical vision sensor program in the device measures the movements of the user's hand gestures in X, Y and Z dimensions. It defines a working space, spaced from the device, divided into a virtual space mouse zone, a space keyboard zone, and a hand sign language zone, and automatically translates changes in the coordinates of the user's hand across puzzle cell positions in the virtual working space into data and commands. Objects bearing enhanced symbols, colors, shapes, and illuminated lights can be attached to the user's hand to provide precision input.
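The abstract's "puzzle cell" translation can be sketched as a quantization step: the tracked hand coordinate is snapped to a grid cell, and each cell maps to a command. The cell size and the command table below are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of translating a hand position in the virtual
# working space into a command via "puzzle cell" positions.
# CELL and the COMMANDS table are assumptions for demonstration.

CELL = 10.0  # side length of one puzzle cell, in the same units as X/Y

def puzzle_cell(x, y):
    """Quantize a hand coordinate to its (column, row) cell index."""
    return (int(x // CELL), int(y // CELL))

# Hypothetical cell-to-command mapping:
COMMANDS = {
    (0, 0): "mouse_click",
    (1, 0): "key_enter",
}

def command_for(x, y):
    """Look up the command assigned to the cell the hand occupies."""
    return COMMANDS.get(puzzle_cell(x, y), "no_op")
```

Quantizing to cells before dispatching makes the mapping tolerant of small hand jitter: any position inside a cell triggers the same command.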
104 Citations
39 Claims
1. (Claim 1, set forth above.) - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28)
-
-
29. A system for tracking and transforming a user's hand gestures into input commands for controlling a computer or electrically operated machine, comprising:
- An input device;
- At least one video vision camera, in operable communication with said input device;
- At least one vision sensor, in operable communication with said input device;
- At least one web camera, in operable communication with said input device, wherein said at least one web camera, along with said at least one vision sensor, is configured to automatically scan a selected work space for tracking an object;
- An automatic motor in operable communication with said video vision camera, wherein said motor is controlled by said input device to enable said video vision camera to track and follow said user continuously;
- A central control computer, in operable communication with said input device, said video vision cameras and vision sensors;
- Computer readable program code for processing X, Y and Z coordinates dimension readings by recording a width and a height of a user to determine the X and Y coordinates and a distance of the user to determine the Z coordinates, wherein a center point is determined based on the X, Y, and Z coordinates and the hand gestures are based on distances between the X, Y, and Z coordinates and the center point;
- A microphone, in operable communication with said input device; and
- At least one sound sensor, in operable communication with said input device. - View Dependent Claims (30, 31, 32, 33, 34, 35, 36, 37, 38, 39)
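Claim 29's final program-code element, deriving gestures from the distances between tracked coordinates and the center point, can be sketched as below. The spread threshold, the "open"/"closed" labels, and the mean-distance rule are illustrative assumptions, not the patent's classifier.

```python
# Sketch of gesture derivation per claim 29: a gesture is inferred
# from distances between tracked finger coordinates and the
# working-area center point. Threshold and labels are assumptions.
import math

def classify_gesture(finger_points, center, spread_threshold=20.0):
    """Label the hand 'open' when the fingers are, on average, far
    from the center point, and 'closed' when they cluster near it."""
    mean_d = sum(math.dist(p, center) for p in finger_points) / len(finger_points)
    return "open" if mean_d > spread_threshold else "closed"

# Hypothetical fingertip coordinates relative to a center at the origin:
spread = [(30.0, 0.0, 0.0), (0.0, 30.0, 0.0)]
curled = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

A distance-to-center rule of this kind is translation-invariant: the same hand shape classifies identically anywhere in the working area, which is consistent with the claim anchoring finger coordinates to the center point rather than to absolute positions.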
-
Specification