Imaging surface modeling for camera modeling and virtual view synthesis
Abstract
A method for displaying a captured image on a display device. A real image is captured by a vision-based imaging device. A virtual image is generated from the captured real image based on a mapping by a processor. The mapping utilizes a virtual camera model with a non-planar imaging surface. The virtual image formed on the non-planar imaging surface of the virtual camera model is projected to the display device.
34 Claims
1. A method for displaying a captured image on a display device comprising the steps of:
capturing a real image by a vision-based imaging device;
generating a virtual image from the captured real image based on a mapping by a processor, wherein the mapping utilizes a virtual camera model with a non-planar imaging surface, wherein generating the virtual image comprises the steps of:
providing a pre-calibrated real camera model by the processor, the real camera model representative of the vision-based imaging device capturing the scene;
determining real incident ray angles of each pixel in the captured image based on the pre-calibrated real camera model;
identifying an arbitrary shape of the non-planar imaging surface;
identifying a pose of the virtual camera model;
determining virtual incident ray angles of each pixel in the virtual image based on the virtual camera model and the non-planar imaging surface;
mapping a virtual incident ray to a correlated real incident ray of the real image capture device, wherein rotational compensation is applied to the virtual incident ray angles for correlating the virtual incident ray and the real incident ray if a pose of the virtual camera model is different from a pose of the real image capture device;
mapping pixels in the virtual image corresponding to coordinates on the non-planar virtual imaging surface to correlated pixels on the real captured image as a function of the correlation mapping between the real incident ray and the virtual incident ray; and
projecting the virtual image formed on the non-planar imaging surface of the virtual camera model to the display device.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29)
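The mapping chain of claim 1 (virtual pixel → virtual incident ray → correlated real incident ray → real pixel) can be sketched as follows. The pinhole virtual model and equidistant fisheye real model used here are illustrative assumptions, not the patent's calibrated camera models:

```python
import math

def virtual_pixel_to_ray(u, v, fx, fy, cx, cy):
    # Illustrative virtual camera (pinhole): returns incident ray angles
    # (theta, phi) for virtual pixel (u, v).
    x, y = (u - cx) / fx, (v - cy) / fy
    r = math.hypot(x, y)
    theta = math.atan(r)        # angle from the optical (z) axis
    phi = math.atan2(y, x)      # azimuth in the x-y plane
    return theta, phi

def ray_to_real_pixel(theta, phi, f, cx, cy):
    # Illustrative real fisheye camera: equidistant model r_d = f * theta.
    rd = f * theta
    return cx + rd * math.cos(phi), cy + rd * math.sin(phi)

def synthesize(virtual_size, vcam, rcam):
    # Build the virtual->real pixel correspondence for every virtual pixel.
    w, h = virtual_size
    mapping = {}
    for v in range(h):
        for u in range(w):
            theta, phi = virtual_pixel_to_ray(u, v, *vcam)
            mapping[(u, v)] = ray_to_real_pixel(theta, phi, *rcam)
    return mapping
```

A real implementation would substitute the pre-calibrated real camera model and the chosen non-planar virtual surface for these two helper functions.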
5. The method of claim 4 wherein a radial distortion correction is applied to the fisheye camera model, wherein a radial distortion is determined by a distortion effect of a lens of the vision-based imaging device, wherein the radial distortion is represented by a function of the distorted radial distance with respect to an undistorted variable, wherein a distorted radial distance is measured from a respective image pixel to an image center on an image sensor of the real image capture device using the lens with the distortion effect, wherein the undistorted variable includes an incident ray angle of the incident chief ray which is imaged to the distorted image pixel, and wherein the radial distortion for a vision-based imaging device having a field-of-view greater than 135 degrees is represented by the following formula:
r_d = p_1·θ_0 + p_2·θ_0^3 + p_3·θ_0^5 + …, where p_1, p_2, p_3, … are distortion parameters.
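The odd-order angular polynomial above can be evaluated directly; the parameter values used in testing are illustrative, not calibrated values:

```python
def angular_distortion(theta0, p):
    # Wide-FOV (fisheye) distortion: r_d = p1*theta0 + p2*theta0**3 + ...
    # theta0: incident chief-ray angle in radians; p: distortion parameters.
    return sum(pk * theta0 ** (2 * k + 1) for k, pk in enumerate(p))
```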
6. The method of claim 3 wherein a radial distortion correction is applied to a real camera model when radial distortion is present, wherein the radial distortion is represented by a function of the distorted radial distance with respect to an undistorted variable, wherein the undistorted variable includes an undistorted radial distance of an image pixel based on a pin-hole camera model, and wherein the radial distortion for a vision-based imaging device having a field-of-view less than 135 degrees is represented by the following formula:
r_d = r_0·(1 + k_1·r_0^2 + k_2·r_0^4 + k_3·r_0^6 + …), where r_0 is the undistorted radial distance, wherein r_0 is determined using a pinhole model and includes intrinsic and extrinsic parameters, and wherein k_1, k_2, k_3, … are radial distortion parameters.
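The narrow-FOV radial polynomial above can likewise be evaluated directly; the coefficient values in the test are illustrative:

```python
def radial_distortion(r0, k):
    # Narrow-FOV distortion: r_d = r0 * (1 + k1*r0**2 + k2*r0**4 + ...)
    # r0: undistorted radial distance from the pinhole model; k: parameters.
    return r0 * (1 + sum(kk * r0 ** (2 * (i + 1)) for i, kk in enumerate(k)))
```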
7. The method of claim 3 wherein a real camera is modeled as a pinhole model with a non-planar imaging surface, wherein the pinhole model represents a lens without radial distortion, and wherein a radial distortion effect of a real camera lens is modeled into the non-planar imaging surface.
8. The method of claim 1 wherein if the pose of the virtual camera model is different than the pose of the real camera model, then θ_virt is an angle between the virtual incident ray and the virtual optical axis represented by a virtual z axis, and φ_virt is an angle between a virtual camera x axis and a projection of the virtual incident ray on the virtual camera model x-y plane, wherein any point on the virtual incident ray is represented by the following matrix:
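The matrix recited in claim 8 is not reproduced in this text. As a hedged illustration only, a point at distance ρ along an incident ray with the stated angle definitions can be written in standard spherical coordinates:

```python
import math

def point_on_ray(theta, phi, rho):
    # A point at distance rho along an incident ray with polar angle theta
    # (measured from the optical z axis) and azimuth phi (measured from the
    # x axis in the x-y plane), in the camera coordinate frame.
    # This parameterization is an assumption consistent with the claim's
    # angle definitions, not the claim's actual matrix.
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))
```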
9. The method of claim 8 wherein the difference between the pose of the virtual camera model and the pose of the real camera model is represented by a rotation matrix.
10. The method of claim 9 wherein the coordinates of a same point on the correlated real incident ray and the virtual incident ray for the identified pose difference between the virtual camera model and the real camera model is represented by the following matrix:
11. The method of claim 10 wherein the real incident ray angles (θ_real, φ_real) are determined by the following formula:
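Claims 9 through 11 correlate the virtual and real rays through a rotation matrix. A minimal sketch, assuming the pose difference is expressed by rotating a unit point on the virtual incident ray into the real camera frame and recovering the real angles from the rotated point (the z-axis rotation used in testing is an arbitrary example):

```python
import math

def rotate_z(deg):
    # Example rotation matrix (about the z axis) for a pose difference.
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a),  math.cos(a), 0.0],
            [0.0, 0.0, 1.0]]

def ray_angles_after_rotation(theta_virt, phi_virt, R):
    # Rotate a unit point on the virtual incident ray into the real camera
    # frame, then recover (theta_real, phi_real) from the rotated point.
    p = (math.sin(theta_virt) * math.cos(phi_virt),
         math.sin(theta_virt) * math.sin(phi_virt),
         math.cos(theta_virt))
    x, y, z = (sum(R[i][j] * p[j] for j in range(3)) for i in range(3))
    theta_real = math.acos(max(-1.0, min(1.0, z)))
    phi_real = math.atan2(y, x)
    return theta_real, phi_real
```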
12. The method of claim 1 wherein the virtual incident ray angles (θ_virt, φ_virt) are equal to the real incident ray angles (θ_real, φ_real) if the pose of the virtual camera model and the pose of the real camera model are the same.
13. The method of claim 1 wherein the virtual camera model includes a pin-hole camera model.
14. The method of claim 1 wherein the non-planar imaging surface includes an elliptical imaging surface.
15. The method of claim 1 wherein the non-planar imaging surface includes a cylindrical imaging surface.
16. The method of claim 1 wherein the non-planar imaging surface includes a spherical imaging surface.
17. The method of claim 1 wherein the non-planar imaging surface is an arbitrary shape that satisfies a homographic mapping from respective image pixels projected on the non-planar image surface to respective incident rays.
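For the non-planar surfaces of claims 14 through 17, each virtual pixel's incident ray follows from the surface parameterization. A sketch for a cylindrical imaging surface, with an assumed (illustrative) pixel-to-surface mapping:

```python
import math

def cylindrical_pixel_to_ray(u, v, f, cx, cy):
    # Illustrative cylindrical imaging surface: column u maps linearly to
    # azimuth on the cylinder, row v to height; the incident ray runs from
    # the camera center through the surface point. The parameterization is
    # an assumption, not the patent's surface definition.
    alpha = (u - cx) / f            # horizontal angle on the cylinder
    h = (v - cy) / f                # vertical coordinate on the cylinder
    x, y, z = math.sin(alpha), h, math.cos(alpha)
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))
    phi = math.atan2(y, x)
    return theta, phi
```

An elliptical or spherical surface would only change the pixel-to-surface-point step; the ray-angle recovery is unchanged.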
18. The method of claim 17 wherein the non-planar imaging surface is dynamically switched to another arbitrary shape based on a driving scenario for a respective vehicle operation, wherein the selected arbitrary shape enhances the virtual image projected on the display device for the respective vehicle operation.
19. The method of claim 1 wherein the vision-based imaging device is a rear back-up camera.
20. The method of claim 1 wherein the vision-based imaging device is a side-view camera.
21. The method of claim 1 wherein the vision-based imaging device is a front-view camera.
22. The method of claim 1 wherein generating the virtual camera model includes defining a virtual camera pose, wherein a difference between a pose of a virtual camera model and a pose of the vision-based imaging device is modeled as a rotation matrix.
23. The method of claim 22 wherein the difference between the pose of the virtual camera model and the pose of the vision-based imaging device is modeled by rotating the image surface, wherein rotating the image surface maintains the same pose between the virtual camera model and the vision-based imaging device.
24. The method of claim 1 wherein a dynamic view synthesis for generating the virtual image is enabled based on a driving scenario of a vehicle operation, wherein the dynamic view synthesis generates a directional zoom to a region of the image for enhancing a driver's visual awareness of the respective region.
25. The method of claim 24 wherein the driving scenario of a vehicle operation for enabling the dynamic view synthesis includes determining whether the vehicle is driving in a parking lot.
26. The method of claim 24 wherein the driving scenario of a vehicle operation for enabling the dynamic view synthesis includes determining whether the vehicle is driving on a highway.
27. The method of claim 24 wherein the driving scenario of a vehicle operation for enabling the dynamic view synthesis includes actuating a turn signal.
28. The method of claim 24 wherein the driving scenario of a vehicle operation for enabling the dynamic view synthesis is based on a steering wheel angle.
29. The method of claim 24 wherein the driving scenario of a vehicle operation for enabling the dynamic view synthesis is based on a speed of the vehicle.
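The scenario-driven switching of claims 18 and 24 through 29 amounts to a dispatch from the current driving scenario to a virtual-view configuration. The scenario names and surface assignments below are hypothetical examples, not taken from the patent:

```python
# Hypothetical scenario -> virtual-view configuration table; the surface
# shapes and trigger names are illustrative assumptions.
VIEW_CONFIG = {
    "parking_lot": {"surface": "spherical", "zoom_region": "rear_wide"},
    "highway": {"surface": "cylindrical", "zoom_region": "rear_narrow"},
    "turn_signal": {"surface": "elliptical", "zoom_region": "signal_side"},
}

def select_view(scenario, default="planar"):
    # Dynamically switch the imaging-surface shape for the current scenario,
    # falling back to a default surface when no scenario matches.
    cfg = VIEW_CONFIG.get(scenario)
    return cfg["surface"] if cfg else default
```

In a vehicle, `scenario` would be derived from signals such as gear position, turn-signal state, steering wheel angle, and speed, as recited in claims 25 through 29.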
30. A method for displaying a captured image on a display device comprising the steps of:
capturing a real image by a vision-based imaging device;
generating a virtual image from the captured real image based on a mapping by a processor, wherein the mapping utilizes a virtual camera model with a non-planar imaging surface, wherein generating the virtual image further comprises the steps of:
providing a pre-calibrated real camera model by the processor, the real camera model representative of the vision-based imaging device capturing the scene;
determining real incident ray angles of each pixel in the captured image based on the pre-calibrated real camera model;
identifying an arbitrary shape of the non-planar imaging surface;
identifying a pose of the virtual camera model;
determining virtual incident ray angles of each pixel in the virtual image based on the virtual camera model and the non-planar imaging surface;
mapping a virtual incident ray to a correlated real incident ray of the real image capture device, wherein rotational compensation is applied to the virtual incident ray angles for correlating the virtual incident ray and the real incident ray if a pose of the virtual camera model is different from a pose of the real image capture device;
mapping pixels in the virtual image corresponding to coordinates on the non-planar virtual imaging surface to correlated pixels on the real captured image as a function of the correlation mapping between the real incident ray and the virtual incident ray; and
projecting the virtual image formed on the non-planar imaging surface of the virtual camera model to the display device;
wherein a horizontal projection of a virtual incident angle θ_virt is represented by the angle α, and wherein the angle α is represented by the following formula:
- View Dependent Claims (31, 32)
33. A method for displaying a captured image on a display device comprising the steps of:
capturing a real image by a vision-based imaging device;
generating a virtual image from the captured real image based on a mapping by a processor, wherein the mapping utilizes a virtual camera model with a non-planar imaging surface, wherein generating the virtual image further comprises the steps of:
constructing a coordinate system for a respective camera model that includes an x-axis, a y-axis, and a z-axis, the z-axis being aligned with a camera optical axis extending outward from the respective camera model, the x-axis and the y-axis forming a plane perpendicular to the z-axis, the x-axis and the z-axis intersecting at a camera aperture location;
defining the camera pose as the location of the camera coordinates and the orientation of the camera z-axis;
generating a real camera model that is representative of the vision-based imaging device, the real image being a scene captured by the vision-based imaging device;
generating a virtual camera model including simulated camera model parameters, a simulated imaging surface, and a simulated camera pose;
identifying a virtual image that is a synthesized image of a scene using the virtual camera model;
selecting a shape of the non-planar imaging surface;
selecting a pose of the virtual camera model;
determining a virtual incident ray angle for each pixel in the virtual image based on the virtual camera model using the non-planar imaging surface;
determining whether the pose of the virtual camera model is the same as a pose of the real camera model in response to comparing the virtual incident ray angle and the real incident ray angle;
applying a rotational matrix for correlating the virtual incident angle and the real incident angle in response to a difference in the poses of the virtual camera model and the real camera model, the rotational matrix representing a transformation for correlating the virtual camera pose to the real camera pose;
determining a corresponding pixel location by applying the real camera model with angular distortion parameters, wherein the real camera model with angular distortion represents an imaging process of the vision-based imaging device capturing a scene, and wherein parameters of the real camera model with angular distortion are determined by a camera calibration process;
generating a mapping between a respective virtual image pixel location and a respective real image pixel location as a function of a respective virtual image pixel coordinate, the virtual incident ray angle, the real incident ray angle, and the real image pixel coordinate;
synthesizing the virtual image of the scene from the real image, wherein a pixel value of the real image is utilized as the pixel value for a corresponding pixel in the virtual image; and
projecting the virtual image formed on the non-planar imaging surface of the virtual camera model to the display device.
- View Dependent Claims (34)
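The mapping-then-synthesis steps at the end of claim 33 can be sketched as a precomputed lookup table followed by direct pixel-value transfer. Here `map_fn` stands in for the full incident-ray correlation (calibrated real model, rotation compensation, distortion) and is an assumption:

```python
def build_lut(virtual_size, map_fn):
    # Precompute, for each virtual pixel, the correlated real pixel
    # coordinate; map_fn(u, v) -> (u_real, v_real) is assumed to wrap the
    # calibrated real/virtual camera models and pose compensation.
    w, h = virtual_size
    return {(u, v): map_fn(u, v) for v in range(h) for u in range(w)}

def synthesize_image(real_image, lut, virtual_size):
    # The real image's pixel value is used directly as the virtual pixel
    # value, per claim 33's final synthesis step.
    w, h = virtual_size
    return [[real_image[lut[(u, v)][1]][lut[(u, v)][0]]
             for u in range(w)] for v in range(h)]
```

Because the table depends only on the two camera models and the surface shape, it can be rebuilt once per view switch and reused for every frame.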
Specification