Dynamic multi-sensor and multi-robot interface system
First Claim
1. A method for generating a common coordinate system between robotic devices and sensors in a given environment, comprising:
- providing at least one processing unit, a first sensor, a second sensor, and at least one robotic device;
- collecting a sequence of images from said first sensor showing said second sensor and said robotic device;
- analyzing said sequence of images to uniquely identify said second sensor and said robotic device and their relative location and pose; and
- generating a set of conversion parameters for converting said relative location to location data relative to at least one of said second sensor and said robotic device, wherein said second sensor is a gaze tracker.
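The conversion-parameter step in the claim can be sketched as a rigid-transform composition: if the first sensor (a camera) yields the poses of the second sensor and of the robotic device in its own frame, the parameters mapping points between the two devices follow directly. This is a minimal illustration, assuming 4x4 homogeneous poses already estimated from the image sequence (e.g. via fiducial markers); the function names are hypothetical, not from the patent.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def conversion_parameters(T_cam_sensor2, T_cam_robot):
    """Transform mapping points in the second sensor's frame into the robot's frame.

    Both inputs are device poses expressed in the first (camera) sensor's frame,
    as would be estimated from the collected image sequence.
    """
    T_robot_cam = np.linalg.inv(T_cam_robot)
    return T_robot_cam @ T_cam_sensor2  # T_robot_sensor2

# Illustrative scene: second sensor 1 m in front of the camera,
# robot 0.5 m to the camera's right at the same depth.
T_cam_sensor2 = pose(np.eye(3), [0.0, 0.0, 1.0])
T_cam_robot = pose(np.eye(3), [0.5, 0.0, 1.0])
T_robot_sensor2 = conversion_parameters(T_cam_sensor2, T_cam_robot)

# The second sensor's origin, expressed in the robot's frame:
p = T_robot_sensor2 @ np.array([0.0, 0.0, 0.0, 1.0])
# p[:3] → [-0.5, 0.0, 0.0]
```

Once computed, the same 4x4 matrix converts any location detected by the gaze tracker into the robot's coordinate system, which is what makes the coordinate system "common".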
Abstract
An adaptive learning interface system that lets end-users control one or more machines or robots to perform a given task, combining identification of gaze patterns, EEG channel signal patterns, voice commands, and/or touch commands. The output streams of these sensors are analyzed by the processing unit to detect one or more patterns, which are translated into one or more commands to the robot, to the processing unit, or to other devices. A pattern-learning mechanism is implemented by keeping an immediate history of outputs collected from those sensors, analyzing their individual behavior, and analyzing the time correlation between patterns recognized from each of the sensors. Prediction of patterns, or of combinations of patterns, is enabled by analyzing the partial history of the sensors' outputs. A method for defining a common coordinate system between robots and sensors in a given environment, and thereby dynamically calibrating these sensors and devices, is used to share the characteristics and positions of each object detected in the scene.
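The time-correlation mechanism described in the abstract can be sketched as a short history buffer of recognized per-sensor patterns, with a fused command emitted when patterns from different sensors co-occur within a time window. This is an illustrative sketch only: the sensor names, the "fixation"/"pick" patterns, and the 0.5 s window are assumptions, not details from the patent.

```python
from collections import deque

class PatternCorrelator:
    """Keep an immediate history of recognized patterns per sensor and emit a
    combined robot command when patterns from different sensors co-occur."""

    def __init__(self, window_s=0.5, history=32):
        self.window_s = window_s                 # co-occurrence window (seconds)
        self.events = deque(maxlen=history)      # (timestamp, sensor, pattern)

    def feed(self, t, sensor, pattern):
        """Record one recognized pattern and check for a fused command."""
        self.events.append((t, sensor, pattern))
        return self._correlate(t)

    def _correlate(self, now):
        # Only patterns inside the window count as co-occurring.
        recent = [e for e in self.events if now - e[0] <= self.window_s]
        latest = {sensor: pattern for _, sensor, pattern in recent}
        # Example fused pattern: gaze fixation + voice "pick" -> robot command.
        if latest.get("gaze") == "fixation" and latest.get("voice") == "pick":
            return ("robot", "pick_at_gaze_target")
        return None

c = PatternCorrelator()
c.feed(0.00, "gaze", "fixation")          # no command yet
cmd = c.feed(0.20, "voice", "pick")       # within the window -> fused command
# cmd → ("robot", "pick_at_gaze_target")
stale = c.feed(5.00, "voice", "pick")     # gaze fixation expired -> None
```

Keeping only a bounded history (the `deque` with `maxlen`) matches the abstract's "immediate history" of sensor outputs, and checking partial histories in `_correlate` is the hook where pattern prediction could be added.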
24 Citations
8 Claims
1. Independent claim, recited in full under "First Claim" above. View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
Specification