Adaptive vision-based controller
Abstract
An adaptive vision based controller for controlling a robot arm comprises a camera, a segmenter for analyzing images from the camera, a tracker, sketcher and ranger responsive to information from the segmenter for creating a three-dimensional segmented data list, a recognizer for receiving the data list and comparing data in the list against a database of plausible objects, and a planner interactive with the recognizer and responsive to task definitions for developing control outputs. The recognizer uses scenic information, such as feature maps produced by the segmenter, in conjunction with a knowledge base to construct a world model. The planner uses the world model and the task definitions to construct a plan in the form of a set of actions for accomplishing the defined task. By way of the control system, information about how the robot arm is actually performing a task can be compared with the desired task, and the task can be updated if necessary. Thus the controller provides visual feedback control of the task performed by the robot arm.
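The abstract's pipeline (segmenter → recognizer → planner, with a knowledge base of plausible objects) can be sketched in miniature. Everything below is illustrative: the function names, the toy "features", and the data shapes are assumptions, not the patent's implementation.

```python
# Illustrative sketch of the abstract's pipeline (all names hypothetical):
# camera images -> segmenter -> recognizer (against a knowledge base)
# -> planner -> control outputs.

def segment(image):
    """Reduce an image to a feature list (stand-in for the segmenter)."""
    return sorted(set(image))  # toy "features": unique pixel values

def recognize(features, knowledge_base):
    """Match extracted features against known object models."""
    return [name for name, model in knowledge_base.items()
            if model <= set(features)]

def plan(task, world_model):
    """Decompose a task into discrete actions for each recognized object."""
    return [(task, obj) for obj in world_model]

knowledge_base = {"block": {1, 2}, "ball": {7}}
world = recognize(segment([1, 2, 2, 3]), knowledge_base)
actions = plan("pick", world)
```

The point of the sketch is the data flow: the planner never sees raw images, only the recognizer's world model plus the task definition.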
Claims (20)
1. A vision based controller for use with an effector for controlling movement of the effector in the execution of a task having a predetermined task definition, the controller comprising:

at least one electronic camera arranged for providing a plurality of images relating to different views of objects or features in a defined workspace;

image processing means for processing images received from said at least one camera and corresponding to different views of said workspace to extract information relating to features in the images, said image processing means comprising an image segmenting means for segmenting images received from said at least one camera into regions of substantial uniformity and reducing the segmented images into a two-dimensional contour map representing edges of objects or features detected in the images;

information comparison means for comparing information extracted from at least two processed images corresponding to different views of the workspace with information held in a knowledge base to derive a three-dimensional internal model of the workspace;

planning means for planning a sequence of actions to be performed by said effector in the execution of said task, the sequence being derived from said predetermined task definition and from the derived three-dimensional internal model of the workspace;

monitoring means for monitoring actions performed by said effector; and

dynamic comparing means for dynamically comparing said performed actions with planned actions of said sequence, and for interrupting the sequence if the performed action deviates to a predetermined extent from the planned action and for requesting amendment to the sequence.
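Claim 1's "dynamic comparing means" amounts to a supervisory loop: execute planned actions, compare each against the monitored result, and interrupt for amendment when the deviation passes a predetermined threshold. A minimal 1-D sketch, with all names and the return convention invented for illustration:

```python
# Hedged sketch of claim 1's dynamic comparing means: compare each
# performed action with the planned one and interrupt the sequence when
# the deviation exceeds a predetermined threshold.

def execute_sequence(planned, perform, threshold):
    """Run planned positions; return ('done', ...) or ('amend', index, ...)."""
    performed = []
    for i, target in enumerate(planned):
        actual = perform(target)            # monitoring means: sensed result
        performed.append(actual)
        if abs(actual - target) > threshold:
            return ("amend", i, performed)  # interrupt, request amendment
    return ("done", len(planned), performed)

# A perfect effector completes; a drifting one triggers an amendment request.
status_ok = execute_sequence([1.0, 2.0], lambda t: t, threshold=0.1)
status_bad = execute_sequence([1.0, 2.0], lambda t: t + 0.5, threshold=0.1)
```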
2. A controller according to claim 1, in which the image segmenting means provides a vertex list which describes the contour map in terms of the connecting relationship between vertices in the contour map.
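The vertex list of claim 2 describes the contour map through the connecting relationship between vertices, i.e. essentially an adjacency structure. A possible shape for such a structure (the dict layout and the closure check are assumptions, not the patent's representation):

```python
# Claim 2's vertex list, sketched as an adjacency mapping: each vertex of
# the 2-D contour map records which vertices it connects to.

contour_vertices = {
    "A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3),
}
# Connecting relationship between vertices (edges of a closed contour).
vertex_list = {
    "A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"],
}

def is_closed(adjacency):
    """A closed contour: every vertex has exactly two neighbours,
    and every edge is recorded in both directions."""
    return all(len(nbrs) == 2 and all(v in adjacency[n] for n in nbrs)
               for v, nbrs in adjacency.items())
```

Such a list lets later stages (the sketching and conversion means of claims 3 and 7) walk the contour without re-touching pixel data.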
3. A controller according to claim 2, in which the image processing means comprises conversion means for converting contour maps and/or vertex lists from a plurality of images into a three-dimensional model of the workspace for comparison with information in the knowledge base by the information comparison means.
4. A controller according to claim 3 in which the conversion means comprises feature tracking means for tracking features found in at least a portion of one image to a corresponding feature in another image.
5. A controller according to claim 4 in which the conversion means comprises range finding means for finding the range of objects in the workspace by examining corresponding features in at least two images and deriving therefrom three-dimensional range information.
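Deriving range from corresponding features in at least two images, as in claim 5, is classically done by triangulation. The textbook rectified-stereo relation depth = focal_length x baseline / disparity is shown below as one plausible reading; the patent does not commit to this exact formula.

```python
# Claim 5's range finding from two views, sketched with the standard
# stereo triangulation relation (a textbook model, not necessarily the
# patent's exact method).

def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a feature seen at x_left / x_right in rectified images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity

# A feature with 20 px of disparity, 700 px focal length, 0.1 m baseline:
depth = stereo_depth(320.0, 300.0, focal_px=700.0, baseline_m=0.1)
```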
6. A controller according to claim 5 in which the range finding means comprises self-calibrating means for calibrating the camera by analysing images received by the camera of a known calibration object in the workspace.
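Claim 6's self-calibration from a known calibration object can be illustrated with the pinhole relation x_px = f · X / Z: observing an object of known width at known depth fixes the focal length. This single-parameter model is a deliberate simplification for illustration; a full calibration would recover more intrinsics.

```python
# Claim 6's self-calibration, sketched via the pinhole camera relation:
# a calibration object of known width at known depth yields focal length.

def calibrate_focal_length(image_width_px, object_width_m, depth_m):
    """Estimate focal length (pixels) from one known calibration object."""
    return image_width_px * depth_m / object_width_m

# A 0.2 m wide target at 1 m spanning 140 px implies a 700 px focal length.
focal = calibrate_focal_length(image_width_px=140.0,
                               object_width_m=0.2, depth_m=1.0)
```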
7. A controller according to any one of claims 3 to 6 in which the conversion means comprises sketching means for sketching an image in terms of curves interconnecting the vertices identified in the contour map by the segmenting means.
8. A controller according to claim 7 further comprising means for bypassing the information comparison means once the internal model of the workspace has been derived.
9. A controller according to claim 7 in which the electronic camera provides color images which are converted into a monochrome scalar representation thereof by the image processing means prior to extraction of feature information.
10. A controller according to any one of claims 3 to 6 further comprising means for bypassing the information comparison means once the internal model of the workspace has been derived.
11. A controller according to claim 10 in which the electronic camera provides color images which are converted into a monochrome scalar representation thereof by the image processing means prior to extraction of feature information.
12. A controller according to any one of claims 3 to 6 in which the electronic camera provides color images which are converted into a monochrome scalar representation thereof by the image processing means prior to extraction of feature information.
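The color-to-monochrome-scalar conversion of claims 9, 11 and 12 is commonly realized as a weighted sum of the RGB channels; the BT.601 luma weights below are an assumed choice, not one mandated by the claims.

```python
# Converting a colour image to a monochrome scalar representation, as in
# claims 9, 11 and 12, using the common BT.601 luma weighting (assumed).

def to_monochrome(rgb_image):
    """Map each (R, G, B) pixel to a single scalar intensity."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

# White maps to full intensity, black to zero.
mono = to_monochrome([[(255, 255, 255), (0, 0, 0)]])
```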
13. A vision based controller for controlling movement of a robot arm in a defined workspace, the controller comprising:

task decomposition means for decomposing a desired task input by a user into discrete actions to be performed by the robot arm;

image reducing means for reducing images of the workspace derived from one or more electronic cameras or other electronic imaging devices to reduced images containing only pertinent features;

workspace modelling means for deriving a three-dimensional model of the workspace from said reduced images;

storage means for storing a knowledge base of feature models known to the controller;

identifying means for identifying objects and the relative positions thereof in the workspace by comparing said three-dimensional model of the workspace derived from said reduced images with models of features stored in said knowledge base;

calculating means for calculating the robot arm movement required to perform the desired task from information associated with the discrete actions and the relative positions of the identified objects;

servo means for effecting movement of the robot arm in accordance with said calculations;

sensor means for indicating actual movements of the robot arm; and

comparing means for comparing actual performance of the task as indicated by said sensor means with the required performance as determined by said calculating means and for stimulating recalculation by the calculating means in the event of a predetermined deviation from the required performance; and

wherein said image reducing means comprises edge detecting means for detecting edges of objects or other features in the images, mapping means for mapping the detected edges into a topographical representation thereof, vertex detecting means for detecting vertices in the topographical representation and for producing descriptions of the detected vertices, and line detecting means for detecting lines in the topographical representation and for producing descriptions of the detected lines.
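The first stage of claim 13's image reducing means, the edge detecting means, can be sketched as a finite-difference detector that keeps pixels whose intensity gradient exceeds a threshold. This is an illustrative stand-in, not the patent's specific detector, and it stops before the mapping, vertex and line detection stages.

```python
# Sketch of edge detection for claim 13's image reducing means: keep
# pixels where the horizontal or vertical intensity difference to the
# next pixel exceeds a threshold (illustrative only).

def detect_edges(image, threshold):
    """Return (row, col) positions lying on intensity discontinuities."""
    edges = []
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            dx = abs(row[c + 1] - v) if c + 1 < len(row) else 0
            dy = abs(image[r + 1][c] - v) if r + 1 < len(image) else 0
            if max(dx, dy) > threshold:
                edges.append((r, c))
    return edges

# A bright square on a dark background produces edges at the boundary.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
edges = detect_edges(img, threshold=5)
```

The later stages would then map these edge pixels into the topographical representation from which vertices and lines are described.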
14. A controller according to claim 13 in which the task decomposition means comprise servo actuating means for actuating servos to drive the robot arm and the calculating means comprise converting means for converting calculated movements into signals to drive the servo actuating means.
15. A controller according to claim 14 in which the comparing means comprise means for requesting further images from the image reducing means to assist in the recalculation.
16. A controller according to claim 13 in which the comparing means comprise means for requesting further images from the image reducing means to assist in the recalculation.
17. A vision based method of controlling movement of a robot arm in a defined workspace, said method comprising:

decomposing a desired task into discrete actions to be performed by the robot arm;

reducing images of the workspace derived from one or more electronic cameras or other electronic imaging devices to images containing only pertinent features;

deriving a three-dimensional model of the workspace from said reduced images;

storing a knowledge base of known feature models;

identifying objects and their relative positions in the workspace by comparing said three-dimensional model of the workspace derived from the reduced images with features stored in said knowledge base;

determining the robot arm movements required to perform the desired task from information associated with said discrete actions and the relative positions of the identified objects; and

moving the robot arm and comparing sensed movements of the robot arm with the required movements and recalculating the required movements in the event of a predetermined deviation therefrom;

said images being reduced to images containing pertinent features by detecting edges of objects or other features in the images and producing a topographical representation thereof, said topographical representation comprising a closed contour map and a corresponding vertex list providing connecting information relating to vertices in the contour map and further comprising a curve list providing connecting information relating to curves connecting the vertices in the contour map.
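The final step of the method of claim 17, moving the arm while comparing sensed against required movements and recalculating on deviation, can be sketched as a 1-D servo loop. The proportional effector, the tolerance, and the recalculation cap are all invented for illustration.

```python
# Sketch of claim 17's closing step: drive toward each target, compare
# sensed position with the required one, and recalculate the movement
# when the deviation exceeds a predetermined bound.

def run_task(targets, move, tolerance, max_recalcs=5):
    """Drive each target; recalculate on deviation. Returns final positions."""
    position, trace = 0.0, []
    for target in targets:
        for _ in range(max_recalcs):
            position = move(position, target)    # servo step
            if abs(position - target) <= tolerance:
                break                            # within predetermined bound
            # otherwise: recalculate the required movement and try again
        trace.append(position)
    return trace

# A proportional effector halves the remaining error on every step, so a
# few recalculations bring each target within tolerance.
halver = lambda pos, tgt: pos + 0.5 * (tgt - pos)
trace = run_task([1.0, 2.0], halver, tolerance=0.1)
```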
18. A method according to claim 17 in which the features known to the controller are held in a knowledge base of object features.
19. A method according to claim 18 in which further images are requested from the sensory system to assist in the recalculation.
20. A method according to claim 18 in which the sensory system is a vision system.
Specification