Fast digital pan tilt zoom video
First Claim
1. A method for generating an image from an arbitrary direction and with an arbitrary zoom from a scene by combining the images of first and second cameras, said method comprising the steps of:
computing a transform that maps corresponding pairs of image subregions lying in overlapping portions of first and second images derivable from said first and second cameras, respectively, to a substantially identical subregion of a composite image;
acquiring a first image from said first camera and a second image from said second camera;
transforming and merging at least one of said first and second images to form a composite image combining data from said first and second images;
spatially blending at least one of intensity and color properties of said images to reduce abrupt transitions due to differences in said properties in said first and second images; and
forming an image from a selected portion of said composite image, wherein said image forming step comprises:
linearly transforming the selected portion of said composite image to a new plane coinciding with neither a plane of said first image nor a plane of said second image.
Abstract
A virtual PTZ camera is described which forms a virtual image using multiple cameras whose fields of view overlap. Images from the cameras are merged by transforming to a common surface and property-blending overlapping regions to smooth transitions due to differences in image formation of common portions of a scene. To achieve high speed, the images may be merged to a common planar surface or set of surfaces so that transforms can be linear. Image information alone may be used to calculate the transforms from common feature points located in the images so that there is no need for three-dimensional geometric information about the cameras.
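Merging to a common planar surface means each source pixel can be mapped by a single 3×3 linear (homography) transform followed by a perspective divide, which is what makes the approach fast. A minimal sketch of applying such a transform (the matrix values below are hypothetical, not taken from the patent):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # perspective divide back onto the target plane

# Hypothetical transform: a pure translation by (5, -3) on the composite plane.
T = [[1, 0, 5],
     [0, 1, -3],
     [0, 0, 1]]
print(apply_homography(T, 2, 2))  # -> (7.0, -1.0)
```

Because the last row of T is (0, 0, 1), the divide is by 1 and the map stays affine; a general homography has a nontrivial last row and models the plane-to-plane perspective warp between camera image planes.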
14 Claims
1. A method for generating an image from an arbitrary direction and with an arbitrary zoom from a scene by combining the images of first and second cameras, said method comprising the steps of:
computing a transform that maps corresponding pairs of image subregions lying in overlapping portions of first and second images derivable from said first and second cameras, respectively, to a substantially identical subregion of a composite image;
acquiring a first image from said first camera and a second image from said second camera;
transforming and merging at least one of said first and second images to form a composite image combining data from said first and second images;
spatially blending at least one of intensity and color properties of said images to reduce abrupt transitions due to differences in said properties in said first and second images; and
forming an image from a selected portion of said composite image, wherein said image forming step comprises: linearly transforming the selected portion of said composite image to a new plane coinciding with neither a plane of said first image nor a plane of said second image.
2. The method as claimed in claim 1, further comprising: identifying feature points in said first and second images and computing said transform responsively to said feature points such that information about orientations of said cameras is not required to compute said transform.
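The image-only calibration of claim 2 is conventionally done with a direct linear transform (DLT): four matched feature points give eight linear equations in the eight unknown homography entries (fixing the bottom-right entry to 1), so no camera orientation is needed. A hedged pure-Python sketch, with hypothetical point data (this illustrates the standard DLT technique, not the patent's specific code):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_points(src, dst):
    """Estimate a 3x3 homography (h33 fixed to 1) from four (x, y) -> (u, v) pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

# Hypothetical correspondences: feature points in image 1 vs. the composite plane.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]
H = homography_from_points(src, dst)  # recovers a pure translation by (2, 3)
```

In practice more than four correspondences would be used with a least-squares or robust (RANSAC-style) fit, but the four-point case shows why feature points alone determine the transform.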
5. The method as claimed in claim 1, wherein said step of blending includes a weighted average, where a weight is computed responsively to a distance from a boundary line separating said first and second images.
6. The method as claimed in claim 1, wherein said step of blending includes a weighted average, where a weight is proportional to a distance from a boundary line separating said first and second images.
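The blending of claims 5 and 6 can be illustrated on a single aligned scanline: outside the overlap each output pixel comes from one image, and inside it the two are averaged with a weight proportional to distance from the seam boundary. A sketch with hypothetical names and values, not the patented implementation:

```python
def blend_row(row_a, row_b, overlap_start, overlap_end):
    """Blend two aligned scanlines; the weight of row_b grows linearly with
    distance from the boundary at overlap_start across the overlap region."""
    out = []
    for x, (a, b) in enumerate(zip(row_a, row_b)):
        if x < overlap_start:
            out.append(a)                  # left of overlap: image A only
        elif x >= overlap_end:
            out.append(b)                  # right of overlap: image B only
        else:
            w = (x - overlap_start) / (overlap_end - overlap_start)
            out.append((1 - w) * a + w * b)  # distance-proportional weighted average
    return out

# Hypothetical intensities: a bright and a dark strip, overlapping on pixels 2..5.
print(blend_row([100] * 8, [200] * 8, 2, 6))  # ramps 100 -> 200 across the overlap
```

The linear ramp removes the abrupt intensity step at the seam that would otherwise appear where the two cameras' exposures differ.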
7. The method as claimed in claim 1, wherein said step of forming further comprises:
interpolating property values of pixels to generate a zoom effect.
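Interpolating pixel property values to zoom, as in claim 7, is classically done bilinearly: each output pixel samples the source at a fractional coordinate and mixes the four surrounding pixels. A minimal grayscale sketch using nested lists (function and data names are illustrative, not from the patent):

```python
def bilinear(img, fx, fy):
    """Sample a grayscale image (list of rows) at fractional (fx, fy)."""
    x0, y0 = int(fx), int(fy)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom edge
    y1 = min(y0 + 1, len(img) - 1)
    dx, dy = fx - x0, fy - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def zoom2x(img):
    """Generate a 2x zoom by sampling the source at half-pixel steps."""
    h, w = len(img), len(img[0])
    return [[bilinear(img, x / 2, y / 2) for x in range(2 * w - 1)]
            for y in range(2 * h - 1)]

# Hypothetical 2x2 source image; the 2x zoom interpolates the in-between values.
print(zoom2x([[0, 2], [4, 6]]))  # -> [[0.0, 1.0, 2.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
```

The same sampling routine serves the arbitrary-zoom requirement of the claims: any zoom factor just changes the fractional step between output samples.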
8. A device for generating an image from an arbitrary direction and with an arbitrary zoom from a scene by combining the images of first and second cameras, comprising:
an image processor connectable to receive image data from two cameras;
said image processor having a memory;
said image processor being programmed to compute a transform that maps corresponding pairs of image subregions lying in overlapping portions of first and second images derivable from said first and second cameras, respectively, to a substantially identical subregion of a composite image and storing a definition of said transform in said memory;
said image processor being further programmed to receive first and second images from said first and second cameras, respectively, and transform and merge at least one of said first and second images to form a composite image combining data from said first and second images;
said image processor being further programmed to spatially blend at least one of intensity and color properties of said images to reduce abrupt transitions due to differences in said properties in said first and second images; and
said image processor being still further programmed to generate a selected image from a selected portion of said composite image, wherein said image processor is programmed such that said selected image is generated from said composite image by transforming a portion of said composite image to a new plane coinciding with neither a plane of said first image nor a plane of said second image.
Specification