Intelligent robotic interface input device
First Claim
1. A vision space mouse-keyboard control panel robot having a computer system that uses video camera sensors and logical vision-sensor programming as trainable computer vision, observing object movements with X, Y, Z dimension definitions to recognize user commands from hand gestures and/or enhanced symbol and color object combination actions, so as to virtually input data and commands to operate computers and machines.
Abstract
The Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot has a computer system that uses video camera sensors and logical vision-sensor programming as trainable computer vision, observing object movements with X, Y, Z dimension definitions to recognize user commands from hand gestures and/or enhanced symbol and color object combination actions, so as to virtually input data and commands to operate computers and machines. The robot automatically calibrates the working space between the user and itself into a Space Mouse zone, a Space Keyboard zone, and a Hand-Sign Languages zone. The robot automatically translates the received coordinates of the user's hand-gesture action combinations on the customizable puzzle-cell positions of the working space, maps them against its software mapping lists for each puzzle-cell position definition, and converts these virtual-space hand and/or body gesture actions into data entry and commands for meaningful computer, machine, and home-appliance operations.
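The puzzle-cell mapping idea in the abstract can be illustrated with a small sketch. The patent specifies only that each calibrated cell of the working space maps to an input action; the cell size, the `PUZZLE_MAP` table, and the function names below are hypothetical assumptions.

```python
# Hypothetical sketch of the puzzle-cell mapping: the working space is
# divided into a grid of cells, and each cell maps to an input action.
# Cell size and the mapping table are illustrative assumptions.
CELL_SIZE = 0.1  # metres per puzzle cell (assumed)

# Mapping list: (column, row) cell -> action, as the abstract describes.
PUZZLE_MAP = {
    (0, 0): "mouse_left_click",
    (1, 0): "mouse_right_click",
    (0, 1): "key_H",
}

def cell_for(x: float, y: float) -> tuple:
    """Quantize a tracked hand position (in metres) to a puzzle cell."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def action_for(x: float, y: float):
    """Translate a hand position into its mapped command, if any."""
    return PUZZLE_MAP.get(cell_for(x, y))
```

A position that falls outside every mapped cell simply yields no command, which matches the claim language of only "meaningful" puzzle cells triggering actions.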
60 Claims
- 1. A vision space mouse-keyboard control panel robot having a computer system that uses video camera sensors and logical vision-sensor programming as trainable computer vision, observing object movements with X, Y, Z dimension definitions to recognize user commands from hand gestures and/or enhanced symbol and color object combination actions, so as to virtually input data and commands to operate computers and machines.
- 18. A vision space mouse-keyboard control panel robot according to claim 18, wherein said robot has trainable computer vision to recognize the user's hand-gesture commands with a specific symbol shape, size, and color, and/or optional embedded wireless sensors, LED lights, or laser beam lights for reliable vision tracking, in order to remotely control all of the appliances at the user's property.
- 20. A vision space mouse-keyboard control panel robot according to claim 20, wherein, when the robot's sensor detects a user, the robot uses its video web cameras and video vision camera sensor to measure the user's height and width, automatically calibrates the virtual working space, and adjusts the distance between the user and itself to project the Virtual Space Mouse zone, the Virtual Space Keyboard zone, and the Hand-Sign Languages zone.
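The automatic zone calibration in this claim can be sketched as a function that derives zone boundaries from the measured user dimensions. The proportions, the function name `calibrate_zones`, and the idea of splitting the working plane into three equal vertical bands are illustrative assumptions, not the patent's method.

```python
# Sketch of the automatic zone calibration: given the user's measured
# height and width, split a working plane in front of the user into
# three horizontal bands, one per zone. Proportions are assumptions;
# the measured height could similarly bound the vertical extent
# (omitted here for brevity).
def calibrate_zones(user_height: float, user_width: float) -> dict:
    """Return (left, right) boundaries in metres for each zone."""
    span = max(user_width * 2.0, 1.0)  # working-space width (assumed)
    third = span / 3.0
    return {
        "space_mouse":    (0.0, third),
        "space_keyboard": (third, 2 * third),
        "hand_sign":      (2 * third, span),
    }
```

Either a single zone or all three side by side can then be activated, as the following claims describe.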
- 21. A vision space mouse-keyboard control panel robot according to claim 21, wherein said working space can be selected to operate as any one of these three space function zones, or the whole working space can be divided into the Space Mouse, Space Keyboard, and Hand-Sign Languages zones together.
- 23. A vision space mouse-keyboard control panel robot according to claim 23, wherein said logical vision tracking program converts changes in the X, Y, and Z surface directions into virtual-space XYZ values expressed as relative distances from the robot's Vision-G-Point center.
- 24. A vision space mouse-keyboard control panel robot according to claim 24, wherein said Position Translate Program converts each newly tracked space XYZ value into its mapped computer operation action and automatically executes the command indicated by the user's hand-gesture actions.
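The Position Translate Program described above can be sketched as a lookup-and-dispatch step: a tracked XYZ offset from the Vision-G-Point is quantized, matched against a mapping list, and the mapped command is executed. The registry, the `translate_and_execute` name, and rounding to whole cells are assumptions for illustration.

```python
# Sketch of a "Position Translate Program": convert a tracked position,
# expressed relative to the Vision-G-Point center, into a mapped action
# and execute it. The command registry and quantization are assumptions.
COMMANDS = {}

def register(name):
    """Decorator that registers a command handler under a name."""
    def wrap(fn):
        COMMANDS[name] = fn
        return fn
    return wrap

@register("click")
def do_click():
    return "clicked"

def translate_and_execute(dx, dy, dz, mapping):
    """Quantize (dx, dy, dz), look it up in the mapping, run the command."""
    key = (round(dx), round(dy), round(dz))
    name = mapping.get(key)
    return COMMANDS[name]() if name in COMMANDS else None
```

Positions that do not land on a mapped cell produce no action, mirroring the claim's requirement that only meaningful puzzle cells trigger commands.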
- 26. A vision space mouse-keyboard control panel robot according to claim 26, wherein the user can mimic regular physical mouse operating actions with one hand in the Virtual Space Mouse zone, and the robot is able to precisely track the fingers' X, Y, Z gesture movements and perform the Virtual Space Mouse functions.
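A minimal sketch of the Virtual Space Mouse behaviour this claim describes: lateral finger movement drives the cursor, and a forward punch along Z registers a click. The gain, the click threshold, and the class itself are hypothetical; the patent does not specify these values.

```python
# Sketch of mimicking physical mouse actions in the Virtual Space Mouse
# zone: finger X/Y movement moves the cursor, a forward Z "punch" clicks.
# The gain and Z click threshold are illustrative assumptions.
class VirtualSpaceMouse:
    def __init__(self, gain=1000.0, click_z=0.05):
        self.x, self.y = 0.0, 0.0
        self.gain = gain        # metres -> pixels scale (assumed)
        self.click_z = click_z  # forward motion that counts as a click

    def update(self, dx, dy, dz):
        """Apply one frame of tracked finger movement (in metres)."""
        self.x += dx * self.gain
        self.y += dy * self.gain
        return "click" if dz > self.click_z else "move"
```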
- 28. A vision space mouse-keyboard control panel robot according to claim 28, wherein, when the robot's sensor detects a user, the robot uses a video web camera to measure the user's height and width and automatically calibrates the working space; the robot then virtually projects the dimensional-axis G-Point representing the center point of the whole working space across the relative 3D user working space of the X, Y, and Z dimension surfaces. The X, Y, Z space positions of the user's hand gestures are then based on their relative distance from the G-Point.
- 29. A vision space mouse-keyboard control panel robot according to claim 29, wherein said robot projects the alignment angles of a mimicked physical keyboard and arranges the puzzle-cell positions in keyboard style as the Virtual Space Keyboard.
- 31. A vision space mouse-keyboard control panel robot according to claim 31, wherein the X, Y dimension change values received by the robot's logical vision tracking program are automatically translated by its Position Translate Program against the keyboard mapping list; when the new X tracking value matches the “H” key position, the “H” character is displayed on the monitor.
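The keyboard mapping lookup in this claim can be sketched as a grid of puzzle cells arranged in keyboard style, where a tracked column/row lands on a key such as “H”. The simplified three-row layout and the `key_at` helper are illustrative assumptions.

```python
# Sketch of the keyboard mapping list: an X (column) and Y (row)
# tracking value is matched against a keyboard-style grid of puzzle
# cells. The simplified grid layout below is an assumption.
KEY_ROWS = [
    "QWERTYUIOP",
    "ASDFGHJKL",
    "ZXCVBNM",
]

def key_at(col: int, row: int):
    """Return the key character at a (column, row) cell, if any."""
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return None
```

With this layout, a tracked position that quantizes to column 5 of the home row resolves to “H”, matching the claim's example.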
- 32. A vision space mouse-keyboard control panel robot according to claim 32, wherein said two-step Z-value selection method works as follows. Example: using the space “Shift” key or any special function key, two steps are required to accept the Z surface direction. The user punches out the left hand on the “Shift” key position; the Z dimension value is incremented by −1, and the robot's Position Translate Program maps the value into the keyboard mapping list, recognizes it as the meaningful puzzle cell for the “Shift” key position, and waits for the second selection. The user then moves the right hand to the “A” key position and punches the left hand out toward the robot again to confirm the key selection; the robot's logical vision tracking program accepts the Z surface direction, and the Z dimension value changes from −1 to −2, which the Position Translate Program recognizes in the keyboard mapping list as the “Shift” key pressed twice, confirming the selected key. The new X surface direction gives an X dimension value of −5 relative to the robot's Vision-G-Point center, and the new Y surface direction gives a Y dimension value of 0 relative to the Vision-G-Point center, which the Position Translate Program recognizes in the keyboard mapping list as the meaningful puzzle cell for the capital “A” key.
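The two-step selection in claim 32 can be sketched as a tiny state machine: the first punch on “Shift” arms the modifier (Z goes to −1), and the second punch while pointing at a letter confirms it (Z goes to −2) and emits the capital letter. The key names and Z increments follow the claim's own example, but the class and its logic are an assumed illustration.

```python
# Sketch of the two-step "Shift" selection as a state machine. Each
# punch-out gesture adds -1 to the Z value, per the claim's example;
# the implementation details are assumptions.
class TwoStepSelector:
    def __init__(self):
        self.z = 0
        self.armed = False

    def punch(self, key: str):
        """Process one punch-out gesture on the named key position."""
        self.z -= 1                  # each punch adds -1 to Z
        if key == "Shift" and not self.armed:
            self.armed = True
            return None              # wait for the second selection
        if self.armed:
            self.armed = False
            return key.upper()       # second punch confirms, e.g. "A"
        return key
```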
- 33. A vision space mouse-keyboard control panel robot according to claim 33, wherein the same two-step special-function-key selection method can be applied when using “Ctrl”, “Alt”, other special function keys, and “!”, “@”, “#”, “$”, “%”, “^”, “&”, “*”, “(”, “)”, “{”, “}”, “_”, “+”, that is, all keys that require the two-step selection method.
- 35. A vision space mouse-keyboard control panel robot according to claim 35, wherein said Hand-Sign 360-degree XYZ Position Translate Program matches the series of tracking values to obtain the specific hand-sign word expressed by the user's Hand Sign Language gesture.
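Matching a series of tracking values to a hand-sign word, as this claim describes, can be sketched by quantizing the tracked XYZ series into coarse cells and looking the resulting signature up in a vocabulary. The vocabulary entries, the cell size, and the exact-match strategy are illustrative assumptions; a real matcher would likely tolerate timing and position variation.

```python
# Sketch of matching a series of tracked XYZ values to a hand-sign word.
# Signatures are quantized to coarse cells so slightly different
# executions of the same sign still match; the vocabulary and cell
# size are illustrative assumptions.
SIGN_VOCABULARY = {
    ((0, 0, 0), (1, 0, 0), (1, 1, 0)): "hello",
    ((0, 0, 0), (0, 1, 0), (0, 2, 0)): "yes",
}

def quantize(track, cell=0.1):
    """Reduce a tracked XYZ series to a tuple of coarse grid cells."""
    return tuple((int(x // cell), int(y // cell), int(z // cell))
                 for x, y, z in track)

def match_sign(track):
    """Return the hand-sign word for a tracked gesture, if known."""
    return SIGN_VOCABULARY.get(quantize(track))
```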
- 36. A vision space mouse-keyboard control panel robot according to claim 36, wherein said robot's logical vision tracking program is trained to recognize a special object such as the sharp point of a pen. The user holds the pen's sharp point facing the robot and moves the pen as if writing a word or drawing a picture in the air; the robot watches each video frame, marks the XYZ value of the pen's sharp point, and updates the value to the monitor or a painting program, so that the series of frame-signal XYZ values composes a meaningful symbolic character or a unique drawing from the user.
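The air-writing idea in claim 36 reduces to accumulating the pen tip's per-frame XYZ marks into a stroke path that a display or painting program could render. A minimal sketch, assuming frames are represented as a plain list of per-frame tip positions (or `None` when the tip is not visible) rather than real camera input:

```python
# Sketch of air writing: the pen tip's XYZ is marked in each video
# frame and the series of points is accumulated into one stroke.
# A frame here is just the tip's (x, y, z) or None if the tip was
# not detected in that frame (an assumed stand-in for camera data).
def trace_pen(frames):
    """Collect the pen-tip XYZ from each frame into one stroke path."""
    stroke = []
    for tip_xyz in frames:
        if tip_xyz is not None:  # tip visible in this frame
            stroke.append(tip_xyz)
    return stroke
```

Frames where the tip is occluded are simply skipped, so the stroke remains a clean sequence of points for the painting software to connect.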
Specification