Systems and methods for dewarping images
First Claim
1. A method comprising:
extracting, by a processor, a setting from a file of a wide-angle camera, wherein the file contains a minimum viewing angle value, a maximum viewing angle value, and a border viewing angle value between the minimum viewing angle value and the maximum viewing angle value, wherein the wide-angle camera captures a first image;
determining, by the processor, based on the extracting, a distortion removal coefficient value for a second image based on a search of a balance between a viewing angle value of the wide-angle camera and a degree of a removal of a distortion introduced by the wide-angle camera, wherein the second image has the same distortion as the first image based on the viewing angle value of the wide-angle camera being the maximum viewing angle value and the distortion removal coefficient value being zero, wherein the second image has less distortion than the first image based on the viewing angle value of the wide-angle camera being between the maximum viewing angle value and the border viewing angle value and the distortion removal coefficient value being between zero and one, wherein the second image has no distortion based on the viewing angle value of the wide-angle camera being between the border viewing angle value and the minimum viewing angle value and the distortion removal coefficient value being one;
determining, by the processor, based on the distortion removal coefficient value, a first set of coordinates for a pixel of the first image for each cell of a sparse conversion map, wherein the first set of coordinates is represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of the second image;
determining, by the processor, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the first image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the second image; and
instructing, by the processor, based on the interpolating, a display to present the second image such that the second image can be modified via an input controlling whether a distortion on the second image can be left without a change, removed partially, or removed fully based on the distortion removal coefficient value.
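The piecewise relationship between the viewing angle and the distortion removal coefficient described above can be sketched as follows. This is a minimal illustrative reading, not the patented implementation: the function name is hypothetical, and the linear blend between the border and maximum angles is an assumption — the claim requires only a value strictly between zero and one in that range.

```python
def distortion_removal_coefficient(view_angle, min_angle, border_angle, max_angle):
    """Map the current viewing angle to a coefficient in [0, 1].

    Piecewise rule from the claim:
      - at the maximum viewing angle       -> 0.0 (distortion left unchanged)
      - between border and maximum angles  -> strictly between 0 and 1 (partial removal)
      - between minimum and border angles  -> 1.0 (distortion fully removed)

    min_angle only bounds the zoom range; everywhere at or below the
    border angle the coefficient is simply 1.0.
    """
    if view_angle >= max_angle:
        return 0.0
    if view_angle <= border_angle:
        return 1.0
    # Linear blend between the border and maximum angles (one possible
    # choice; the claim fixes only the endpoints of this range).
    return (max_angle - view_angle) / (max_angle - border_angle)
```

With hypothetical settings of 30°/90°/180° for the minimum, border, and maximum angles, the coefficient is 0 at full wide-angle view, 1 once zoomed in past the border angle, and blends in between.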
Abstract
A computer-implemented method comprises: extracting a setting from a description file of a virtual pan-tilt-zoom (PTZ) camera used to capture an original image through a wide-angle lens; determining a first set of coordinates of a pixel of the original image for each cell of a sparse conversion map represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of an output image; determining, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the original image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the output image; instructing a display to present the output image, wherein the original image is less rectilinear than the output image.
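The sparse-to-full conversion-map step in the abstract — a coarse look-up table of source coordinates expanded to a per-pixel table by interpolation — can be sketched as below. The function name is hypothetical, bilinear interpolation is one common choice, and the grid cells are assumed to be evenly spaced over the output image; the abstract does not fix any of these details.

```python
import numpy as np

def build_full_map(sparse_map, out_h, out_w):
    """Expand a sparse conversion map into a full per-pixel map.

    sparse_map: array of shape (gh, gw, 2) holding (x, y) source
    coordinates on a coarse grid of output pixels (the first look-up
    table). Returns an (out_h, out_w, 2) array (the second look-up
    table) produced by bilinear interpolation between grid cells.
    """
    gh, gw, _ = sparse_map.shape
    # Fractional grid position of every output row/column.
    ys = np.linspace(0, gh - 1, out_h)
    xs = np.linspace(0, gw - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]   # vertical blend weights
    wx = (xs - x0)[None, :, None]   # horizontal blend weights
    # Weighted sum of the four surrounding sparse-grid entries.
    return ((1 - wy) * (1 - wx) * sparse_map[y0][:, x0]
            + (1 - wy) * wx * sparse_map[y0][:, x1]
            + wy * (1 - wx) * sparse_map[y1][:, x0]
            + wy * wx * sparse_map[y1][:, x1])
```

The output image is then produced by sampling the original image at each full-map coordinate; computing the expensive lens model only on the sparse grid and interpolating the rest is the usual motivation for the two-table split.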
11 Claims
1. A method comprising:
extracting, by a processor, a setting from a file of a wide-angle camera, wherein the file contains a minimum viewing angle value, a maximum viewing angle value, and a border viewing angle value between the minimum viewing angle value and the maximum viewing angle value, wherein the wide-angle camera captures a first image;
determining, by the processor, based on the extracting, a distortion removal coefficient value for a second image based on a search of a balance between a viewing angle value of the wide-angle camera and a degree of a removal of a distortion introduced by the wide-angle camera, wherein the second image has the same distortion as the first image based on the viewing angle value of the wide-angle camera being the maximum viewing angle value and the distortion removal coefficient value being zero, wherein the second image has less distortion than the first image based on the viewing angle value of the wide-angle camera being between the maximum viewing angle value and the border viewing angle value and the distortion removal coefficient value being between zero and one, wherein the second image has no distortion based on the viewing angle value of the wide-angle camera being between the border viewing angle value and the minimum viewing angle value and the distortion removal coefficient value being one;
determining, by the processor, based on the distortion removal coefficient value, a first set of coordinates for a pixel of the first image for each cell of a sparse conversion map, wherein the first set of coordinates is represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of the second image;
determining, by the processor, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the first image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the second image; and
instructing, by the processor, based on the interpolating, a display to present the second image such that the second image can be modified via an input controlling whether a distortion on the second image can be left without a change, removed partially, or removed fully based on the distortion removal coefficient value.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
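The "removed partially" behaviour controlled by the coefficient can be pictured as a blend between two projection models. A minimal sketch, assuming an equidistant fisheye model (r = f·θ), a rectilinear target (r = f·tan θ), and a linear blend — all three are illustrative assumptions, since the claims fix only the coefficient's endpoints:

```python
import math

def blended_radius(theta, f, alpha):
    """Image-plane radius for a ray at angle theta (radians) off the
    optical axis, with focal length f (pixels) and distortion removal
    coefficient alpha.

    alpha = 0 -> pure fisheye radius (distortion left without a change)
    alpha = 1 -> pure rectilinear radius (distortion removed fully)
    0 < alpha < 1 -> partial removal, between the two models
    """
    r_fisheye = f * theta          # equidistant fisheye projection
    r_rect = f * math.tan(theta)   # rectilinear (pinhole) projection
    return (1 - alpha) * r_fisheye + alpha * r_rect
```

Evaluating this blended radius over the output grid is one way the sparse conversion map's source coordinates could be populated for a given coefficient value.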
9. A method comprising:
extracting, by a processor, a setting from a file of a probe camera, wherein the file contains a minimum viewing angle value, a maximum viewing angle value, and a border viewing angle value between the minimum viewing angle value and the maximum viewing angle value, wherein the probe camera captures a first image;
determining, by the processor, based on the extracting, a distortion removal coefficient value for a second image based on a search of a balance between a viewing angle value of the probe camera and a degree of a removal of a distortion introduced by the probe camera, wherein the second image has the same distortion as the first image based on the viewing angle value of the probe camera being the maximum viewing angle value and the distortion removal coefficient value being zero, wherein the second image has less distortion than the first image based on the viewing angle value of the probe camera being between the maximum viewing angle value and the border viewing angle value and the distortion removal coefficient value being between zero and one, wherein the second image has no distortion based on the viewing angle value of the probe camera being between the border viewing angle value and the minimum viewing angle value and the distortion removal coefficient value being one;
determining, by the processor, based on the distortion removal coefficient value, a first set of coordinates for a pixel of the first image for each cell of a sparse conversion map, wherein the first set of coordinates is represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of the second image;
determining, by the processor, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the first image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the second image; and
instructing, by the processor, based on the interpolating, a display to present the second image such that the second image can be modified via an input controlling whether a distortion on the second image can be left without a change, removed partially, or removed fully based on the distortion removal coefficient value.
10. A method comprising:
receiving, by a processor, a first image from a pan-tilt-zoom (PTZ) camera, wherein the first image captures a plurality of zones;
for each of the zones depicted in the first image:
extracting, by the processor, a setting from a file of the PTZ camera, wherein the file contains a minimum viewing angle value, a maximum viewing angle value, and a border viewing angle value between the minimum viewing angle value and the maximum viewing angle value;
determining, by the processor, a distortion removal coefficient value for a second image based on a search of a balance between a viewing angle value of the PTZ camera and a degree of a removal of a distortion introduced by the PTZ camera, wherein the second image has the same distortion as the first image based on the viewing angle value of the PTZ camera being the maximum viewing angle value and the distortion removal coefficient value being zero, wherein the second image has less distortion than the first image based on the viewing angle value of the PTZ camera being between the maximum viewing angle value and the border viewing angle value and the distortion removal coefficient value being between zero and one, wherein the second image has no distortion based on the viewing angle value of the PTZ camera being between the border viewing angle value and the minimum viewing angle value and the distortion removal coefficient value being one;
determining, by the processor, a first set of coordinates for a pixel of the first image for each cell of a sparse conversion map, wherein the first set of coordinates is represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of the second image;
determining, by the processor, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the first image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the second image; and
instructing, by the processor, based on the interpolating, a display to present the second image such that the second image can be modified via an input controlling whether a distortion on the second image can be left without a change, removed partially, or removed fully based on the distortion removal coefficient value.
11. A device comprising:
a vehicle including a processor, a memory, and a camera, wherein the processor is in communication with the memory and the camera, wherein the memory stores a set of instructions executable via the processor to perform a method, wherein the method comprises:
instructing, by the processor, the camera to capture a first image;
extracting, by the processor, a setting from a file of the camera, wherein the file contains a minimum viewing angle value, a maximum viewing angle value, and a border viewing angle value between the minimum viewing angle value and the maximum viewing angle value;
determining, by the processor, a distortion removal coefficient value for a second image based on a search of a balance between a viewing angle value of the camera and a degree of a removal of a distortion introduced by the camera, wherein the second image has the same distortion as the first image based on the viewing angle value of the camera being the maximum viewing angle value and the distortion removal coefficient value being zero, wherein the second image has less distortion than the first image based on the viewing angle value of the camera being between the maximum viewing angle value and the border viewing angle value and the distortion removal coefficient value being between zero and one, wherein the second image has no distortion based on the viewing angle value of the camera being between the border viewing angle value and the minimum viewing angle value and the distortion removal coefficient value being one;
determining, by the processor, based on the distortion removal coefficient value, a first set of coordinates for a pixel of the first image for each cell of a sparse conversion map, wherein the first set of coordinates is represented as a first look-up table, wherein the sparse conversion map corresponds to a sparse grid of pixels of the second image;
determining, by the processor, via interpolating the first set of coordinates, a second set of coordinates of a pixel of the first image for each cell of a full conversion map, wherein the second set of coordinates is represented as a second look-up table, wherein the full conversion map corresponds to a full grid of pixels of the second image; and
instructing, by the processor, based on the interpolating, an output device to output the second image such that the second image can be modified via an input controlling whether a distortion on the second image can be left without a change, removed partially, or removed fully based on the distortion removal coefficient value.
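The final step shared by all the claims — producing the second image by sampling the first image at the coordinates stored in the full conversion map — can be sketched as below. The function name is hypothetical, and nearest-neighbour sampling is used for brevity; bilinear sampling of the source pixels is the more common choice in practice.

```python
import numpy as np

def remap(first_image, full_map):
    """Produce the second image by sampling the first image at the
    (x, y) source coordinates stored in the full conversion map.

    first_image: 2-D array (grayscale) of source pixels.
    full_map: (h, w, 2) array of source coordinates per output pixel.
    """
    h, w, _ = full_map.shape
    # Round each source coordinate to the nearest pixel and clamp to
    # the source image bounds (nearest-neighbour sampling).
    xs = np.clip(np.rint(full_map[..., 0]).astype(int),
                 0, first_image.shape[1] - 1)
    ys = np.clip(np.rint(full_map[..., 1]).astype(int),
                 0, first_image.shape[0] - 1)
    return first_image[ys, xs]
```

An identity map returns the input unchanged, which corresponds to the coefficient-zero case where the distortion is left without a change.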
Specification