Method and system for providing autonomous control of a platform
First Claim
1. A method for operating an autonomous vehicle that includes a manipulator and one or more sets of cameras on the autonomous vehicle, the method comprising:
calibrating the manipulator with the one or more sets of cameras to establish calibration parameters describing a relationship between a location of features of the manipulator in a two-dimensional image acquired by the one or more sets of cameras and a three-dimensional position of the features of the manipulator, wherein the one or more sets of cameras comprises a first set of cameras and a second set of cameras;
determining a first camera-space target projection based on a relationship between a three-dimensional location of a target and a location of the target in a given two-dimensional image acquired by the first set of cameras;
using the calibration parameters and the first camera-space target projection to estimate a location of the target relative to the manipulator;
creating a trajectory for the autonomous vehicle and the manipulator to follow to position the autonomous vehicle and the manipulator such that the manipulator can engage the target based on the three-dimensional location of the target due to the first camera-space target projection in the relationship between the three-dimensional location of the target and the location of the target in the given two-dimensional image acquired by the first set of cameras;
updating the first camera-space target projection based on subsequent two-dimensional images acquired by the first set of cameras as the autonomous vehicle traverses the trajectory;
based on a distance of the autonomous vehicle to the target, transitioning the target from the first set of cameras to the second set of cameras;
determining a second camera-space target projection based on a relationship between a three-dimensional location of the target and a location of the target in a given two-dimensional image acquired by the second set of cameras; and
updating the trajectory for the autonomous vehicle and the manipulator to follow based on the second camera-space target projection, wherein transitioning the target from the first set of cameras to the second set of cameras comprises:
using the given two-dimensional image acquired by the first set of cameras, providing a laser onto the target;
receiving the given two-dimensional image of the target acquired by at least one of the second set of cameras; and
determining a camera-space location of the laser in the given two-dimensional image acquired by the second set of cameras based on a location of the laser in the image.
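As a rough illustration of the camera-space estimation the claim describes, the sketch below triangulates a target's three-dimensional position from its two-dimensional projections in two calibrated cameras via a direct linear transform (least-squares) solve. The projection matrices, focal length, and baseline are hypothetical placeholders, not values from the patent; the patent's "calibration parameters" need not take this exact matrix form.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Estimate a 3D point from its 2D projections in two calibrated
    cameras using the direct linear transform (DLT) method.

    P1, P2   : 3x4 camera projection matrices (the "calibration
               parameters" relating 3D positions to image locations).
    uv1, uv2 : (u, v) pixel coordinates of the target in each image.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic cameras: identity pose, and a 1 m baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

target = np.array([0.2, -0.1, 5.0])  # ground-truth 3D target location
proj = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
est = triangulate(P1, P2, proj(P1, target), proj(P2, target))
```

With noiseless projections, the estimate recovers the ground-truth point to numerical precision; in practice the quality of the recovered position depends on camera baseline and calibration accuracy, which is why the patent updates the projection as the vehicle traverses the trajectory.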
Abstract
The present application provides a system that enables instrument placement from distances on the order of five meters, for example, and increases the accuracy of instrument placement relative to visually specified targets. The system provides precision control of a rover's mobile base and onboard manipulators (e.g., robotic arms) relative to a visually specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy, ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system also enables autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which placement can be completed to nearly five meters, target information may be transferred from the navigation cameras (used at long range) to the front hazard cameras (used for positioning the manipulator).
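The transfer from navigation cameras to hazard cameras is accomplished in claim 1 by projecting a laser onto the target and locating the laser spot in an image from the second camera set. A minimal sketch of such spot localization, assuming a grayscale image in which the laser is the brightest feature (function name, image size, and spot position are illustrative, not from the patent):

```python
import numpy as np

def locate_laser_spot(image):
    """Return the camera-space (row, col) location of a bright laser spot
    in a grayscale image by smoothing and taking the peak response.
    Illustrative only: a real detector would band-pass filter on the
    laser wavelength and reject spurious highlights."""
    # 3x3 box blur to suppress single-pixel noise before peak-picking.
    padded = np.pad(image.astype(float), 1, mode="edge")
    smooth = sum(
        padded[dr:dr + image.shape[0], dc:dc + image.shape[1]]
        for dr in range(3) for dc in range(3)
    ) / 9.0
    r, c = np.unravel_index(np.argmax(smooth), smooth.shape)
    return int(r), int(c)

# Synthetic second-camera image: dark background, bright spot centered at (40, 60).
img = np.zeros((100, 120), dtype=np.uint8)
img[39:42, 59:62] = 255
print(locate_laser_spot(img))  # (40, 60)
```

The returned pixel location plays the role of the camera-space target location in the second camera set, from which the second camera-space target projection can be determined.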
41 Citations
11 Claims
(Claims 2, 3, 4, 5, 6, 7, 8, 9, and 10 depend from claim 1.)
11. A method for operating an autonomous vehicle that includes a manipulator and one or more sets of cameras on the autonomous vehicle, the method comprising:
calibrating the manipulator with the one or more sets of cameras to establish calibration parameters describing a relationship between a location of features of the manipulator in a two-dimensional image acquired by the one or more sets of cameras and a three-dimensional position of the features of the manipulator, wherein the one or more sets of cameras comprises a first set of cameras and a second set of cameras;
determining a first camera-space target projection based on a relationship between a three-dimensional location of a target and a location of the target in a given two-dimensional image acquired by the first set of cameras;
using the calibration parameters and the first camera-space target projection to estimate a location of the target relative to the manipulator;
creating a trajectory for the autonomous vehicle and the manipulator to follow to position the autonomous vehicle and the manipulator such that the manipulator can engage the target based on the three-dimensional location of the target due to the first camera-space target projection in the relationship between the three-dimensional location of the target and the location of the target in the given two-dimensional image acquired by the first set of cameras;
updating the first camera-space target projection based on subsequent two-dimensional images acquired by the first set of cameras as the autonomous vehicle traverses the trajectory;
based on a distance of the autonomous vehicle to the target, transitioning the target from the first set of cameras to the second set of cameras;
determining a second camera-space target projection based on a relationship between a three-dimensional location of the target and a location of the target in a given two-dimensional image acquired by the second set of cameras; and
updating the trajectory for the autonomous vehicle and the manipulator to follow based on the second camera-space target projection, wherein creating the trajectory for the autonomous vehicle and the manipulator to follow comprises:
using the first camera-space target projection when the distance between the autonomous vehicle and the target is above a threshold distance; and
using the second camera-space target projection when the distance between the autonomous vehicle and the target is below the threshold distance.
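Claim 11's trajectory-creation step reduces to selecting which camera-space target projection drives the controller based on a distance threshold. A minimal sketch, with the 1.5 m threshold and all names being made-up placeholders rather than values from the patent:

```python
def select_projection(distance_m, first_projection, second_projection,
                      threshold_m=1.5):
    """Choose which camera-space target projection guides the trajectory:
    the first (long-range) camera set's projection above the handoff
    threshold, the second (close-range) set's projection below it.
    The 1.5 m default threshold is a placeholder, not from the patent."""
    return first_projection if distance_m > threshold_m else second_projection

print(select_projection(4.0, "navcam projection", "hazcam projection"))
print(select_projection(0.8, "navcam projection", "hazcam projection"))
```

In a full system the selected projection would feed the trajectory update on each control cycle, so the handoff happens automatically as the vehicle closes on the target.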
Specification