3D position estimation of objects from a monocular camera using a set of known 3D points on an underlying surface
First Claim
1. A computer-implemented method, comprising:
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
retrieving previously stored three-dimensional (3D) locations of a plurality of reference points of the environment, wherein the previously stored 3D locations of the plurality of reference points comprise locations on a road in the environment;
projecting the plurality of reference points into the 2D image based on the previously stored 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting reference points from the plurality of projected reference points such that the selected reference points form a polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the previously stored 3D locations of the selected reference points;
determining a location of the object in the environment based on the determined intersection point, wherein the object is a portion of a different vehicle, wherein the method further comprises determining a location of the different vehicle based on the location of the object in the environment; and
providing instructions to control the autonomous vehicle based on the determined location of the object in the environment.
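The projection step recited above can be sketched with a standard pinhole camera model: each stored 3D reference point is transformed into the camera frame and divided by depth to get a pixel coordinate. The intrinsic matrix `K` and the camera pose below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project stored 3D reference points (world frame) into the 2D image.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns an (N, 2) array of pixel coordinates.
    """
    cam = (R @ points_3d.T).T + t      # world frame -> camera frame
    pix = (K @ cam.T).T                # camera frame -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]    # perspective divide by depth

# Hypothetical intrinsics and pose for illustration (not from the patent).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world axes
t = np.array([0.0, 0.0, 0.0])

# A road reference point 2 m to the right of and 10 m in front of the camera.
pts = np.array([[2.0, 0.0, 10.0]])
uv = project_points(pts, K, R, t)      # -> pixel (480, 240)
```

With a focal length of 800 px and principal point (320, 240), the point lands at u = 320 + 800·(2/10) = 480, v = 240, as expected for a point on the optical axis height.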
Abstract
Disclosed herein are methods and systems for determining a location of an object within an environment. An example method may include determining a three-dimensional (3D) location of a plurality of reference points in an environment, receiving a two-dimensional (2D) image of a portion of the environment that contains an object, selecting, from the plurality of reference points, reference points that, when projected into the 2D image, form a polygon containing at least a portion of the object, determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points, and, based on that intersection point, determining a 3D location of the object in the environment.
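The selection step in the abstract requires checking that the projected reference points form a polygon containing the object's 2D position. A standard way to verify containment is the ray-casting point-in-polygon test, sketched below; the vertex values are hypothetical.

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: is pixel (px, py) inside the 2D polygon?

    polygon: list of (x, y) vertices, e.g. projected reference points.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (px, py) cross edge i -> i+1?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Hypothetical quadrilateral of projected reference points (pixels).
quad = [(100, 100), (400, 100), (400, 300), (100, 300)]
```

An object detection centered at (250, 200) would pass the test, while one at (50, 50) would fail, prompting the selection of a different set of reference points.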
19 Claims
1. A computer-implemented method, comprising:
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
retrieving previously stored three-dimensional (3D) locations of a plurality of reference points of the environment, wherein the previously stored 3D locations of the plurality of reference points comprise locations on a road in the environment;
projecting the plurality of reference points into the 2D image based on the previously stored 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting reference points from the plurality of projected reference points such that the selected reference points form a polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the previously stored 3D locations of the selected reference points;
determining a location of the object in the environment based on the determined intersection point, wherein the object is a portion of a different vehicle, wherein the method further comprises determining a location of the different vehicle based on the location of the object in the environment; and
providing instructions to control the autonomous vehicle based on the determined location of the object in the environment.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
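The intersection step of claim 1 amounts to intersecting the camera ray toward the object with the plane of the 3D polygon formed by the selected reference points. A minimal sketch, assuming a triangle of reference points on a flat road plane (the coordinates below are hypothetical, not from the patent):

```python
import numpy as np

def intersect_ray_with_plane(origin, direction, p0, p1, p2):
    """Intersect a camera ray with the plane of the 3D polygon (here a
    triangle p0, p1, p2 of reference points); returns the 3D hit point.
    """
    normal = np.cross(p1 - p0, p2 - p0)    # plane normal of the polygon
    denom = normal @ direction
    if abs(denom) < 1e-12:
        return None                        # ray parallel to the plane
    s = (normal @ (p0 - origin)) / denom   # distance along the ray
    if s < 0:
        return None                        # intersection behind the camera
    return origin + s * direction

# Camera at the origin; ray angled down toward an object on the road.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)

# Three reference points on a road plane at y = -1.5 m (hypothetical).
p0 = np.array([-5.0, -1.5,  0.0])
p1 = np.array([ 5.0, -1.5,  0.0])
p2 = np.array([ 0.0, -1.5, 20.0])
hit = intersect_ray_with_plane(origin, direction, p0, p1, p2)
```

Because the three reference points lie on the road surface, the returned point is where the object (e.g. a tire of the different vehicle) touches the road, which is what lets a single camera recover depth.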
10. A vehicle, comprising:
a camera configured to capture a two-dimensional (2D) image of a portion of an environment, wherein the image contains an object; and
a computing system configured to:
retrieve previously stored three-dimensional (3D) locations of a plurality of reference points of the environment, wherein the previously stored 3D locations of the plurality of reference points comprise locations on a road in the environment;
project the plurality of reference points into the 2D image based on the previously stored 3D locations of the plurality of reference points and a location of the camera on the vehicle;
select reference points from the plurality of projected reference points such that the selected reference points form a polygon that surrounds the object in the 2D image;
determine an intersection point of a ray directed toward the object and a 3D polygon formed by the previously stored 3D locations of the selected reference points;
determine a location of the object in the environment based on the determined intersection point, wherein the object is a portion of a different vehicle, wherein the computing system is further configured to determine a location of the different vehicle based on the location of the object in the environment; and
provide instructions to control the vehicle based on the determined location of the object in the environment.
- View Dependent Claims (11, 12, 13, 14)
15. A non-transitory computer readable medium having stored therein instructions, that when executed by a computing system, cause the computing system to perform functions comprising:
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
retrieving previously stored three-dimensional (3D) locations of a plurality of reference points of the environment, wherein the previously stored 3D locations of the plurality of reference points comprise locations on a road in the environment;
projecting the plurality of reference points into the 2D image based on the previously stored 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting reference points from the plurality of projected reference points such that the selected reference points form a polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the previously stored 3D locations of the selected reference points;
determining a location of the object in the environment based on the determined intersection point, wherein the object is a portion of a different vehicle, wherein the functions further comprise determining a location of the different vehicle based on the location of the object in the environment; and
providing instructions to control the autonomous vehicle based on the determined location of the object in the environment.
- View Dependent Claims (16, 17, 18, 19)
Specification