Blind-spot monitoring using machine vision and precise FOV information
First Claim
1. An apparatus comprising:
a first sensor configured to generate a video signal based on a targeted view of a driver;
an interface configured to receive status information about one or more components of a vehicle;
a second sensor configured to generate a proximity signal in response to detecting an object within a predetermined radius of said second sensor; and
a processor configured to (A) determine a location of said object with respect to said vehicle in response to said proximity signal, (B) calculate current location coordinates of eyes of said driver detected in said video signal, (C) determine a field of view of said driver at a time when said proximity signal is received based on (i) said status information and (ii) said current location coordinates of said eyes, (D) perform a cross reference between (i) said field of view of said driver and (ii) said location of said object from said proximity signal, and (E) generate a control signal, wherein (i) said cross reference is configured to determine whether said object is within said field of view using said current location coordinates of said eyes at said time said proximity signal is received, (ii) said current location coordinates of said eyes are determined based on an analysis of one or more video frames of said video signal to determine (a) one or more of a first location coordinate and a second location coordinate of said eyes and (b) a depth coordinate representing a distance of said eyes from said first sensor, (iii) said depth coordinate is determined based on a comparison of a reference number of pixels of a vehicle component in a reference video frame to a current number of pixels of said vehicle component in said video frames, (iv) said vehicle component is capable of relative movement with respect to said first sensor, and (v) said control signal is used to alert said driver when said location of said object is not in said field of view of said driver.
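Clause (iii) of the claim estimates the eye depth coordinate from how large a tracked vehicle component appears in the video frame relative to a reference frame. Under a simple pinhole-camera assumption, the component's pixel area scales with the inverse square of its distance, so depth can be recovered from the area ratio. A minimal sketch of that idea (the function name, the reference values, and the area-based scaling are illustrative assumptions, not the patent's specified method):

```python
import math

def estimate_eye_depth(ref_pixel_count: int, ref_depth_m: float,
                       current_pixel_count: int) -> float:
    """Estimate the distance of the driver's eyes from the camera.

    Assumes (a) the tracked component's pixel *area* scales with the
    inverse square of its distance (pinhole model) and (b) the
    component moves together with the driver's head, so its depth
    tracks the eye depth.
    """
    if current_pixel_count <= 0:
        raise ValueError("component not detected in current frame")
    # area ratio -> linear scale ratio -> depth ratio
    scale = math.sqrt(ref_pixel_count / current_pixel_count)
    return ref_depth_m * scale
```

For example, if the component covers one quarter of its reference pixel area, the linear scale has halved, so the estimated depth is twice the reference depth.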
Abstract
An apparatus comprising a first sensor, an interface, a second sensor and a processor. The first sensor may be configured to generate a video signal based on a targeted view of a driver. The interface may be configured to receive status information about one or more components of a vehicle. The second sensor may be configured to detect an object within a predetermined radius of the second sensor. The processor may be configured to determine a field of view of the driver based on the status information. The processor may be configured to generate a control signal in response to a cross reference between (i) a field of view of the driver and (ii) the detected object. The control signal may be used to alert the driver if the detected object is not in the field of view of the driver.
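The cross-reference step described in the abstract reduces to an angular containment test: from the current eye position, derive the driver's field of view as a bearing range, then check whether the detected object's bearing falls inside it and alert only when it does not. A hedged 2-D sketch under those assumptions (the flat geometry, the fixed field-of-view limits, and all names are simplifications for illustration, not the patent's method):

```python
import math

def object_in_fov(eye_xy, fov_center_deg, fov_half_angle_deg, object_xy):
    """Return True if the object's bearing, seen from the driver's
    eyes, lies within the field of view (2-D simplification)."""
    dx = object_xy[0] - eye_xy[0]
    dy = object_xy[1] - eye_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed angular difference, wrapped into (-180, 180]
    diff = (bearing - fov_center_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_half_angle_deg

def blind_spot_alert(eye_xy, fov_center_deg, fov_half_angle_deg, object_xy):
    # Assert the control signal only when the object is NOT visible.
    return not object_in_fov(eye_xy, fov_center_deg, fov_half_angle_deg,
                             object_xy)
```

An object directly ahead of the eyes falls inside a forward-facing field of view and raises no alert, while one off to the side, beyond the half angle, triggers the alert.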
19 Claims
Dependent claims: 2-19.
Specification