GROUP OPTIMIZATION DEPTH INFORMATION METHOD AND SYSTEM FOR CONSTRUCTING A 3D FEATURE MAP
1 Assignment
0 Petitions
Abstract
A group optimization method for constructing a 3D feature map is disclosed. In one embodiment, the method comprises determining correspondence information for a plurality of environmental features for each image in a group of images in which a respective environmental feature is present, based on a relative position and alignment of each camera. Depth information is then determined for each environmental feature, for each image in which that feature is present, based on the correspondence information. Group optimized depth information is determined for each environmental feature using the determined depth information of each respective environmental feature.
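As a rough illustration of the "group optimized depth" step described in the abstract, the sketch below fuses the per-image depth estimates of one environmental feature. Inverse-variance weighting is an assumed fusion rule chosen for illustration; the patent does not specify how the per-image depths are combined, and all numeric values are hypothetical.

```python
import numpy as np

def group_optimize_depth(estimates, variances):
    """Fuse per-image depth estimates for one environmental feature.

    `estimates` and `variances` are hypothetical inputs: one depth
    estimate (and its uncertainty) per image in which the feature
    appears. Inverse-variance weighting is an assumption, not the
    patent's stated method; the claims only require combining the
    per-image depths into a group optimized value.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(weights * estimates) / np.sum(weights))

# A feature seen in three images; the third camera pair is less noisy,
# so its estimate dominates the fused result.
depth = group_optimize_depth([10.2, 9.8, 10.0], [0.04, 0.04, 0.01])
# -> exactly 10.0 for these values
```

This weighting makes the fused depth favor the cameras whose geometry (e.g., longer baseline) yields lower-variance estimates, which is one plausible reading of "group optimized".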
11 Citations
23 Claims
1. A computer vision system, comprising:

a processor;

a plurality of cameras coupled to the processor, wherein the plurality of cameras are located about a host object, the plurality of cameras comprising at least two front facing cameras having a front facing view of an environment in which the object is located, at least one left side facing camera having a left side facing view of the environment in which the object is located, at least one right side facing camera having a right side facing view of the environment in which the object is located, and at least one rear facing camera having a rear facing view of the environment in which the object is located;

a memory coupled to the processor, the memory tangibly storing thereon executable instructions that, when executed by the processor, cause the computer vision system to:

select a group of images from images captured by the plurality of cameras, the group of images including one captured image for each camera in the plurality of cameras, which were captured at a common time;

perform image distortion compensation on each image in the group of images based on intrinsic parameters of the respective camera that captured the image;

rectify each image in the group of images so that all images in the group of images have a common image plane;

determine correspondence information for a plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on a relative position and alignment of each camera in the plurality of cameras;

determine depth information for each environmental feature in the plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on the correspondence information of the environmental features; and

determine group optimized depth information for each environmental feature in the plurality of environmental features using the determined depth information of each respective environmental feature.

Dependent claims: 2-13
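Once the images are rectified onto a common image plane (as claim 1 recites), correspondence between two cameras reduces to a horizontal disparity, and per-feature depth follows from the standard relation Z = f * B / d. A minimal sketch; the focal length, baseline, and pixel coordinates are illustrative values, not parameters taken from the patent.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of one feature from a rectified camera pair.

    x_left, x_right: the feature's horizontal pixel coordinate in each
    rectified image (the correspondence information for that pair).
    focal_px: focal length in pixels; baseline_m: camera separation in
    meters. All values here are hypothetical.
    """
    d = x_left - x_right            # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: feature at infinity or mismatched")
    return focal_px * baseline_m / d

# 40 px of disparity with an 800 px focal length and 12 cm baseline:
z = depth_from_disparity(x_left=640.0, x_right=600.0,
                         focal_px=800.0, baseline_m=0.12)
# 800 * 0.12 / 40 = 2.4 m
```

Repeating this for every camera pair that sees a feature yields the per-image depth estimates that the group optimization step then combines.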
14. A method of generating a three-dimensional (3D) feature map, comprising:

capturing images from each of a plurality of cameras, wherein the cameras are located about a host object, the plurality of cameras comprising at least two front facing cameras having a front facing view of an environment in which the object is located, at least one left side facing camera having a left side facing view of the environment in which the object is located, at least one right side facing camera having a right side facing view of the environment in which the object is located, and at least one rear facing camera having a rear facing view of the environment in which the object is located;

selecting a group of images from the captured images, one captured image for each camera in the plurality of cameras, which were captured at a common time;

performing image distortion compensation on each image in the group of images based on intrinsic parameters of the respective camera that captured the image;

rectifying each image in the group of images so that all images in the group of images have a common image plane;

determining correspondence information for a plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on a relative position and alignment of each camera in the plurality of cameras;

determining depth information for each environmental feature in the plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on the correspondence information of the environmental features; and

determining group optimized depth information for each environmental feature in the plurality of environmental features using the determined depth information of each respective environmental feature.

Dependent claims: 15-22
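The distortion-compensation step recited above uses each camera's intrinsic parameters. A minimal sketch of one common approach, assuming a Brown-Conrady radial model inverted by fixed-point iteration; the coefficient values are illustrative, and the patent does not mandate this particular model.

```python
def undistort_point(xd, yd, k1, k2, iters=10):
    """Compensate radial lens distortion for one normalized image point.

    Assumes the Brown-Conrady radial model x_d = x * (1 + k1*r^2 + k2*r^4),
    inverted here by fixed-point iteration. k1 and k2 stand in for the
    intrinsic parameters of the capturing camera; the values used below
    are hypothetical.
    """
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale   # refine the undistorted estimate
    return x, y

# A point near the image corner of a mildly barrel-distorted lens:
x, y = undistort_point(0.21, -0.105, k1=-0.2, k2=0.05)
```

Re-applying the forward model to the result reproduces the observed distorted coordinates, which is a convenient self-check when calibrating.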
23. A non-transitory machine readable medium having tangibly stored thereon executable instructions for execution by at least one processor of a computer vision system, the computer vision system comprising a processor, a memory coupled to the processor, and a plurality of cameras coupled to the processor, wherein the cameras are located about a host object, the plurality of cameras comprising at least two front facing cameras having a front facing view of an environment in which the object is located, at least one left side facing camera having a left side facing view of the environment in which the object is located, at least one right side facing camera having a right side facing view of the environment in which the object is located, and at least one rear facing camera having a rear facing view of the environment in which the object is located, wherein the executable instructions, when executed by the at least one processor, cause the computer vision system to:

select a group of images from images captured by the plurality of cameras, the group of images including one captured image for each camera in the plurality of cameras, which were captured at a common time;

perform image distortion compensation on each image in the group of images based on intrinsic parameters of the respective camera that captured the image;

rectify each image in the group of images so that all images in the group of images have a common image plane;

determine correspondence information for a plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on a relative position and alignment of each camera in the plurality of cameras;

determine depth information for each environmental feature in the plurality of environmental features for each image in the group of images in which a respective environmental feature is present based on the correspondence information of the environmental features; and

determine group optimized depth information for each environmental feature in the plurality of environmental features using the determined depth information of each respective environmental feature.
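The rectification step that all three independent claims share warps each image onto a common image plane: each pixel's viewing ray is back-projected with the camera intrinsics, rotated by a rectifying rotation derived from the cameras' relative position and alignment, and reprojected. A minimal per-pixel sketch; the intrinsic matrix K, the 5-degree yaw, and the rotation R are illustrative values, not parameters from the patent.

```python
import numpy as np

theta = np.deg2rad(5.0)  # hypothetical yaw between a camera and the common plane
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])          # illustrative intrinsics
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])  # rectifying rotation

def rectify_pixel(u, v):
    """Map one pixel of a camera's image onto the common image plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project to a viewing ray
    ray = R @ ray                                   # rotate into the common plane
    uvw = K @ ray                                   # reproject to pixel coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# The principal point shifts horizontally by f * tan(theta):
u_r, v_r = rectify_pixel(320.0, 240.0)
```

In practice the full images are resampled with such a mapping (built once per camera), after which corresponding features lie on the same scanline across views.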
Specification