Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
First Claim
1. A method of distinguishing a three dimensional object from a two dimensional object using a vehicular vision system, said method comprising:
(a) disposing a camera at a vehicle, the camera having a field of view external of the vehicle;
(b) providing a control having a processor;
(c) providing to the control height of the camera at the vehicle and angular orientation of the camera at the vehicle;
(d) capturing image frames of image data via the camera while the vehicle is in motion, each of the captured image frames defining an image plane having a vertical aspect and a horizontal aspect;
(e) via processing of captured image frames by the processor, detecting by edge detection a first object present in multiple captured image frames;
(f) via processing of captured image frames by the processor, detecting by edge detection a second object present in the multiple captured image frames;
(g) for the first detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, first and second feature points from the first detected object that are spaced apart in a first captured image frame of the multiple captured image frames,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the first and second feature points in at least a second captured image frame of the multiple captured image frames, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the first and second feature points over the multiple captured image frames;
(h) for the second detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, third and fourth feature points from the second detected object that are spaced apart in the first captured image frame,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the third and fourth feature points in at least the second captured image frame, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the third and fourth feature points over the multiple captured image frames;
(i) comparing, via the control, movement of the first and second feature points over the multiple captured image frames to movement of the third and fourth feature points over the multiple captured image frames; and
(j) distinguishing, via the control, between the first detected object being a three dimensional object and the second detected object being a two dimensional object by determining, via the control, that movement of the first feature point over the multiple captured image frames is dissimilar to that of the second feature point and by determining, via the control, that movement of the third feature point over the multiple captured image frames is similar to that of the fourth feature point.
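The per-object tracking and comparison recited in steps (g) through (j) can be sketched in Python. This is a hedged illustration over synthetic point tracks, assuming feature detection and tracking (steps (e) and (f)) have already been performed; the function names, threshold, and coordinates below are invented for illustration and do not come from the patent.

```python
# Minimal sketch of the claimed comparison (steps (g)-(j)). A "track" is a
# list of (x, y) image-plane positions of one feature point, one per frame.

def displacement(track):
    """Overall (dx, dy) movement of one feature point across its track."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return (x1 - x0, y1 - y0)

def motion_dissimilarity(track_a, track_b):
    """Euclidean distance between the two points' displacement vectors."""
    (ax, ay), (bx, by) = displacement(track_a), displacement(track_b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def classify(track_a, track_b, threshold=2.0):
    """Label an object '3d' when its two spaced-apart feature points move
    dissimilarly (parallax between different heights), else '2d'."""
    return "3d" if motion_dissimilarity(track_a, track_b) > threshold else "2d"

# First object: its upper and lower points move by different amounts.
print(classify([(100, 50), (104, 52)], [(100, 90), (110, 99)]))    # 3d
# Second object: both points lie on the road plane and move alike.
print(classify([(200, 80), (206, 85)], [(200, 110), (206, 115)]))  # 2d
```

In practice the tracks would come from a sparse tracker such as pyramidal Lucas-Kanade, and the threshold would be scaled to the expected ego-motion rather than fixed.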
Abstract
A method of distinguishing a three dimensional object from a two dimensional object using a vehicular system includes acquiring image frames captured by a vehicle camera while the vehicle is in motion. First and second feature points are selected from a first detected object in a first captured image frame and tracked in at least a second captured image frame. Third and fourth feature points are selected from a second detected object in the first captured image frame and tracked over at least the second captured image frame. Movements of the first and second feature points over the multiple captured image frames are compared to movements of the third and fourth feature points over the multiple captured image frames to distinguish the first object as a three dimensional object and the second object as a two dimensional object.
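The distinction rests on parallax: with the camera height and angular orientation known (claim step (c)), two points of an upright object sit at different heights above the road and therefore shift through the image plane by different amounts as the vehicle advances, while road-surface points obey the ground-plane projection. The following is a simplified pinhole-model illustration; the focal length, camera height, and distances are assumed for the example, not taken from the patent.

```python
# Idealized pinhole camera looking along a flat road; all numbers assumed.
f = 800.0  # focal length in pixels
h = 1.2    # camera height above the road surface, metres

def row_of_ground_point(dist):
    """Image row (pixels below the principal point) of a road-surface point
    `dist` metres ahead of the camera."""
    return f * h / dist

def row_of_elevated_point(dist, height):
    """Image row of a point `height` metres above the road surface."""
    return f * (h - height) / dist

# The vehicle advances 1 m toward an object 10 m ahead:
ground_shift = row_of_ground_point(9) - row_of_ground_point(10)             # base point
top_shift = row_of_elevated_point(9, 0.8) - row_of_elevated_point(10, 0.8)  # upper point
print(round(ground_shift, 2), round(top_shift, 2))  # 10.67 3.56
```

The base of the upright object shifts roughly three times as far as a point 0.8 m up it, which is the dissimilarity the method exploits; two points of a flat road marking, both at road height, shift consistently with the ground-plane model.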
341 Citations
20 Claims
1. (Set forth in full above under First Claim.) - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)
14. A method of distinguishing a three dimensional object from a two dimensional object using a vehicular vision system, said method comprising:
(a) disposing a camera at a vehicle, the camera having a field of view external of the vehicle;
(b) providing a control having a processor;
(c) providing to the control height of the camera at the vehicle and angular orientation of the camera at the vehicle;
(d) providing to the control a focal length of a lens of the camera;
(e) providing to the control vehicle data of the vehicle the camera is disposed at, the vehicle data comprising speed and steering angle while the vehicle is in motion;
(f) capturing image frames of image data via the camera while the vehicle is in motion, each of the captured image frames defining an image plane having a vertical aspect and a horizontal aspect;
(g) via processing of captured image frames by the processor, detecting by edge detection a first object present in multiple captured image frames;
(h) via processing of captured image frames by the processor, detecting by edge detection a second object present in the multiple captured image frames;
(i) for the first detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, first and second feature points from the first detected object that are spaced apart in a first captured image frame of the multiple captured image frames,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the first and second feature points in at least a second captured image frame of the multiple captured image frames, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the first and second feature points over the multiple captured image frames;
(j) for the second detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, third and fourth feature points from the second detected object that are spaced apart in the first captured image frame,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the third and fourth feature points in at least the second captured image frame, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the third and fourth feature points over the multiple captured image frames;
(k) comparing, via the control, movement of the first and second feature points over the multiple captured image frames to movement of the third and fourth feature points over the multiple captured image frames; and
(l) distinguishing, via the control, between the first detected object being a three dimensional object and the second detected object being a two dimensional object by determining, via the control, that movement of the first feature point over the multiple captured image frames is dissimilar to that of the second feature point and by determining, via the control, that movement of the third feature point over the multiple captured image frames is similar to that of the fourth feature point.
- View Dependent Claims (15, 16)
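Claim 14 additionally supplies the lens focal length (step (d)) and the vehicle's speed and steering angle (step (e)) to the control. One plausible use, sketched below under the same idealized pinhole assumptions, is predicting how far a road-surface point should move between frames, so that tracked points deviating from the prediction indicate elevation; the patent does not recite this particular formula, and every parameter value here is assumed.

```python
# All parameter values are assumed for illustration; none come from the patent.
f = 800.0     # focal length in pixels (claim 14, step (d))
h = 1.2       # camera height in metres (step (c))
speed = 10.0  # vehicle speed in m/s (step (e))
dt = 1 / 30   # frame interval, assuming a 30 fps camera

def predicted_ground_shift(row):
    """Expected change in image row for a road-surface point currently at
    `row` pixels below the principal point, after the vehicle advances
    speed * dt metres straight ahead (zero steering angle)."""
    dist = f * h / row  # invert the ground-plane projection to get distance
    return f * h / (dist - speed * dt) - row

print(round(predicted_ground_shift(96.0), 2))  # 3.31
```

A nonzero steering angle would rotate the predicted flow field as well; this sketch handles only the straight-ahead case.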
17. A method of distinguishing a three dimensional object from a two dimensional object using a vehicular vision system, said method comprising:
(a) disposing a camera at a vehicle, the camera having a field of view external of the vehicle;
(b) providing a control having a processor;
(c) providing to the control height of the camera at the vehicle and angular orientation of the camera at the vehicle;
(d) providing to the control vehicle data of the vehicle the camera is disposed at, the vehicle data comprising speed and steering angle while the vehicle is in motion, wherein the vehicle data is provided to the control via a controller area network;
(e) capturing image frames of image data via the camera while the vehicle is in motion, each of the captured image frames defining an image plane having a vertical aspect and a horizontal aspect;
(f) via processing of captured image frames by the processor, detecting by edge detection a first object present in multiple captured image frames;
(g) via processing of captured image frames by the processor, detecting by edge detection a second object present in the multiple captured image frames;
(h) for the first detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, first and second feature points from the first detected object that are spaced apart in a first captured image frame of the multiple captured image frames, wherein the first and second feature points selected from the first detected object are spaced vertically apart in the first captured image frame,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the first and second feature points in at least a second captured image frame of the multiple captured image frames, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the first and second feature points over the multiple captured image frames;
(i) for the second detected object,
(1) selecting, via the control, and responsive to processing of captured image frames by the processor, third and fourth feature points from the second detected object that are spaced apart in the first captured image frame,
(2) tracking, via the control, and responsive to processing of captured image frames by the processor, positions of the third and fourth feature points in at least the second captured image frame, and
(3) determining, via the control, and responsive to processing of captured image frames by the processor, movement of the third and fourth feature points over the multiple captured image frames;
(j) comparing, via the control, movement of the first and second feature points over the multiple captured image frames to movement of the third and fourth feature points over the multiple captured image frames; and
(k) distinguishing, via the control, between the first detected object being a three dimensional object and the second detected object being a two dimensional object by determining, via the control, that movement of the first feature point over the multiple captured image frames is dissimilar to that of the second feature point and by determining, via the control, that movement of the third feature point over the multiple captured image frames is similar to that of the fourth feature point.
- View Dependent Claims (18, 19, 20)
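Claim 17 requires the speed and steering angle to reach the control over a controller area network. CAN payload layouts are manufacturer-specific, so the frame ID, byte order, field widths, and scale factors in this decoding sketch are purely hypothetical and are not drawn from the patent or any real vehicle's message database.

```python
import struct

HYPOTHETICAL_MOTION_ID = 0x123  # invented frame ID, not from any real DBC

def decode_motion_frame(can_id, payload):
    """Decode a hypothetical 4-byte payload: big-endian uint16 speed in
    units of 0.01 m/s, then big-endian int16 steering angle in units of
    0.1 degrees. Returns (speed_mps, angle_deg), or None for other frames."""
    if can_id != HYPOTHETICAL_MOTION_ID or len(payload) != 4:
        return None
    raw_speed, raw_angle = struct.unpack(">Hh", payload)
    return raw_speed * 0.01, raw_angle * 0.1

speed_mps, angle_deg = decode_motion_frame(0x123, b"\x03\xe8\xff\x9c")
print(speed_mps, angle_deg)  # 10.0 -10.0
```

A production control would subscribe to the relevant frames through a CAN driver and feed the decoded speed and steering angle into the motion-prediction step each frame interval.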
Specification