Dynamic Multi-Sensor and Multi-Robot Interface System
First Claim
10. A method for controlling a robot arm through gaze, composed of at least one sensor acquiring images of one or more end-users' eyes, one or more processing units or controllers, and one or more robotic devices;

receiving at least one sequence of a plurality of images imaging a human performing at least one gesture by moving at least one eye;
performing an analysis of said sequence of a plurality of images to identify said at least one gesture;
identifying an association between said at least one gesture and at least one command to be executed; and
instructing a robotic machine with said at least one command.
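The claimed steps can be sketched as a minimal pipeline: classify an eye-movement gesture from a sequence of pupil positions, then look up the associated robot command. The gesture names, the movement threshold, and the command table below are illustrative assumptions, not taken from the patent.

```python
def identify_gesture(pupil_positions, threshold=20):
    """Classify the dominant eye movement across a sequence of
    (x, y) pupil centers as left/right/up/down, or None.
    `threshold` (pixels) is an assumed noise floor."""
    dx = pupil_positions[-1][0] - pupil_positions[0][0]
    dy = pupil_positions[-1][1] - pupil_positions[0][1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return None  # movement too small to count as a gesture
    if abs(dx) >= abs(dy):
        return "look_right" if dx > 0 else "look_left"
    return "look_down" if dy > 0 else "look_up"

# Assumed association between gestures and robot-arm commands
# (the claim only requires that some such association exist).
COMMANDS = {
    "look_left":  "arm_move_left",
    "look_right": "arm_move_right",
    "look_up":    "arm_raise",
    "look_down":  "arm_lower",
}

def instruct_robot(pupil_positions):
    """Identify the gesture and return the command to send, or None."""
    gesture = identify_gesture(pupil_positions)
    return COMMANDS.get(gesture)
```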
Abstract
An adaptive learning interface system for end-users for controlling one or more machines or robots to perform a given task, combining identification of gaze patterns, EEG channel signal patterns, voice commands and/or touch commands. The output streams of these sensors are analysed by the processing unit in order to detect one or more patterns that are translated into one or more commands to the robot, to the processing unit or to other devices. A pattern learning mechanism is implemented by keeping an immediate history of outputs collected from those sensors, analysing their individual behaviour and analysing time correlation between patterns recognized from each of the sensors. Prediction of patterns or combinations of patterns is enabled by analysing partial history of sensors' outputs. A method for defining a common coordinate system between robots and sensors in a given environment, and therefore dynamically calibrating these sensors and devices, is used to share characteristics and positions of each object detected on the scene.
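The history-keeping and time-correlation step described in the abstract can be sketched as follows. Each sensor stream reports `(timestamp, pattern)` detections; a bounded history is kept per sensor, and patterns from different sensors occurring within a short window are treated as correlated. The class name, window length, and history depth are assumptions for illustration, not details from the patent.

```python
from collections import deque

class PatternFuser:
    """Keep an immediate history of per-sensor pattern detections and
    report time-correlated patterns across sensors (a sketch of the
    fusion step described in the abstract, under assumed parameters)."""

    def __init__(self, window=0.5, history=50):
        self.window = window        # seconds; correlation window (assumed)
        self.history_len = history  # bounded history depth per sensor
        self.histories = {}         # sensor name -> deque of (t, pattern)

    def add(self, sensor, t, pattern):
        """Record a pattern detected on one sensor stream at time t."""
        q = self.histories.setdefault(sensor, deque(maxlen=self.history_len))
        q.append((t, pattern))

    def correlated(self, t_now):
        """Return, per sensor, the latest pattern seen within `window`
        seconds of t_now; sensors with no recent pattern are omitted."""
        out = {}
        for sensor, q in self.histories.items():
            recent = [p for (t, p) in q if t_now - t <= self.window]
            if recent:
                out[sensor] = recent[-1]
        return out
```

A combined pattern (e.g. a gaze gesture plus an EEG pattern in the same window) would then be mapped to a command, as the abstract describes.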
Citations
13 Claims
10. A method for controlling a robot arm through gaze, composed of at least one sensor acquiring images of one or more end-users' eyes, one or more processing units or controllers, and one or more robotic devices;
receiving at least one sequence of a plurality of images imaging a human performing at least one gesture by moving at least one eye;
performing an analysis of said sequence of a plurality of images to identify said at least one gesture;
identifying an association between said at least one gesture and at least one command to be executed; and
instructing a robotic machine with said at least one command.
11-1. The method of claim 11, wherein said at least one external sensor is at least one depth map sensor.
13. An apparatus associated with at least one processing unit comprising:
at least one sensor for collecting a sequence of images of at least one human eye;
at least one processing unit configured for analyzing a sequence of images from at least one of said sensors to identify gestures performed by moving eyes in certain directions, determining the center of the eye in each frame, determining at least one command to perform based on the moving pattern, and generating at least one processing unit command;
at least one storage unit configured for saving said sequence of a plurality of images, said detected at least one command, said at least one command of said processing unit, at least one processing algorithm and processed motion of said robotic device;
at least one digital communication means configured for communicating said at least one command to said at least one associated processing unit; and
a housing configured for containing said at least one sensor, said at least one processing unit, said storage unit and said communication means.
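Claim 13 requires determining the center of the eye in each frame. A common baseline for this step (an assumption here, not the patent's disclosed algorithm) is to threshold the grayscale frame and take the centroid of the dark pupil pixels:

```python
def pupil_center(frame, threshold=50):
    """Estimate the pupil center in one grayscale frame.

    frame: 2D list of intensity values (0-255).
    Returns the (x, y) centroid of pixels darker than `threshold`,
    or None if no pixel qualifies. The threshold is an assumed value."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value < threshold:  # dark pixel: candidate pupil pixel
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return (xs / n, ys / n)
```

Tracking this center across frames yields the moving pattern from which the claim's gesture and command are determined.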
Specification