Context independent fusion of range and intensity imagery
First Claim
1. A calibration process with a self-scanned active three dimensional optical triangulation-based sensor, an independent TV camera, and a planar target to provide a fused sensory source for subsequent complete data fusion of a scene imaged by both its constituent sensory systems, which is not constrained by the absence of spatial or spectral structures in the scene, said calibration process comprising the steps of:
aligning two sensors so that their view volumes partially overlap;
imaging a laser-projected stripe of a self-scanned 3D sensor at every step by both sensory systems as the stripe is being traversed at steps commensurate with a desired lateral resolution across a target comprising a planar rectangular slab;
processing the imaged laser stripes to yield a set of corresponding least squares fitted line segments in a first sensory space and a second sensory space, arising from least squares line fitting of preprocessed and Hough-like transformed points between leading and trailing edge elements of the imaged stripe at every step, and projection of said edge elements onto said lines in both said spaces;
sampling corresponding line segments at equal numbers of equidistant intervals to a desired vertical resolution to yield two ordered point sets that are a mapping of one another;
recording coordinates of 3D points against those of corresponding points in said second space;
repeating all said steps as the target is being depressed through the overlapping view volume; and
least squares line fitting, in the three space coordinate measurement system of said first space, of the three space points corresponding to each resolution cell of said second space;
viewing line equations as those of rays connecting said second space resolution cells to their corresponding three space points; and
storing thereafter said equations along with their corresponding second space resolution cells in a data fusion table for subsequent data fusion during run-time.
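The line-fitting and equidistant-sampling steps recited above can be sketched in code. The following is a minimal illustration in Python/NumPy under assumed inputs (stripe edge points already extracted as 2D coordinates); it is not the patented implementation, and the function names and demo data are hypothetical:

```python
import numpy as np

def fit_line_segment(points):
    """Least squares line fit through 2D stripe points (total least squares
    via SVD), then project the points onto the fitted line."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # First right singular vector = principal direction of the point cloud,
    # i.e. the direction of the best-fit line.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # Project every point onto the line through the centroid.
    t = (pts - centroid) @ direction
    return centroid + np.outer(t, direction)

def sample_segment(endpoint_a, endpoint_b, n):
    """Sample a line segment at n equidistant points, endpoints inclusive,
    yielding an ordered point set as in the claimed sampling step."""
    s = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - s) * np.asarray(endpoint_a) + s * np.asarray(endpoint_b)

# Hypothetical noisy stripe points roughly along y = 2x + 1.
pts = np.array([[0, 1.02], [1, 2.98], [2, 5.01], [3, 6.99]])
proj = fit_line_segment(pts)                     # points projected onto fitted line
samples = sample_segment(proj[0], proj[-1], 5)   # 5 equidistant samples
```

Running the same sampling count on the corresponding segment in the other sensory space yields two ordered point sets that are a mapping of one another, as the claim requires.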
Abstract
A method and apparatus for fusion of three dimensional data obtained from an active optical triangulation-based vision sensor with two-dimensional data obtained from an independent TV camera. The two sensors are aligned so that their view volumes partially overlap. In the course of a calibration process, a planar target is placed in and depressed through the common view volume of the two sensors. The illuminant of the vision sensor projects a stripe onto the target. As the stripe is traversed across the target incrementally, it is imaged at every position not only by the camera of the vision sensor but also by the independent TV camera. The calibration process yields a table which connects the resolution cells of the TV camera to a set of rays whose equations are established in the coordinate measurement system of the three dimensional vision sensor. The calibration table is subsequently used to inter-relate points in either sensory space via their connecting rays.
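The calibration table described in the abstract, and its run-time use, can be illustrated with a small sketch. The table layout, function names, and nearest-ray query below are assumptions for illustration, not the patent's specified data structures:

```python
import numpy as np

# Hypothetical fusion table: each TV-camera resolution cell (u, v) maps to a
# ray (origin, unit direction) expressed in the 3D vision sensor's frame.
fusion_table = {}

def add_entry(cell, three_space_points):
    """Calibration: fit a ray (least squares line) through the 3D points
    recorded against one TV-camera resolution cell."""
    pts = np.asarray(three_space_points, dtype=float)
    origin = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - origin)       # vt[0] is the ray direction
    fusion_table[cell] = (origin, vt[0])

def cell_for_point(point):
    """Run-time fusion: find the TV-camera cell whose calibrated ray passes
    closest to a 3D point measured by the range sensor."""
    p = np.asarray(point, dtype=float)
    def dist(entry):
        origin, d = entry[1]
        v = p - origin
        return np.linalg.norm(v - (v @ d) * d)   # point-to-line distance
    return min(fusion_table.items(), key=dist)[0]

# Two hypothetical calibrated cells with known rays.
add_entry((10, 20), [[0, 0, 0], [1, 0, 0], [2, 0, 0]])   # ray along the x axis
add_entry((30, 40), [[0, 0, 5], [0, 1, 5], [0, 2, 5]])   # ray along y, at z = 5
```

At run time a 3D measurement is then inter-related to the TV-camera space by finding its nearest stored ray, or conversely a camera cell is related to 3D space by following its stored ray equation.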
46 Citations
13 Claims
Specification