Mapping images from one or more sources into an image for display
Abstract
The present invention provides systems and methods that provide images of an environment to the viewpoint of a display. The systems and methods define a mapping surface at a distance from the image source and display that approximates the environment within the field of view of the image source. The systems and methods define a model that relates the different geometries of the image source, display, and mapping surface to each other. Using the model and the mapping surface, the systems and methods tile images from the image source, correlate the images to the display, and display the images. In instances where two image sources have overlapping fields of view on the mapping surface, the systems and methods overlap and stitch the images to form a mosaic image. If two overlapping image sources each have images with unique characteristics, the systems and methods fuse the images into a composite image.
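The spherical mapping surface described above can be sketched as a tessellated sphere of vertex vectors, each a three-dimensional coordinate at a radius equal to the selected distance. This is an illustrative sketch only: the function name, the tessellation density (`az_steps`, `el_steps`), and the axis convention are assumptions, not taken from the document.

```python
import math

def mapping_surface_vertices(selected_distance, az_steps=8, el_steps=4):
    """Build vertex vectors for a spherical mapping surface.

    Each vertex is a 3-D coordinate lying at radius = selected_distance,
    approximating the environment within the sources' fields of view.
    """
    vertices = []
    for i in range(az_steps):
        az = 2.0 * math.pi * i / az_steps                 # azimuth angle
        for j in range(el_steps + 1):
            el = math.pi * j / el_steps - math.pi / 2.0   # elevation angle
            x = selected_distance * math.cos(el) * math.cos(az)
            y = selected_distance * math.cos(el) * math.sin(az)
            z = selected_distance * math.sin(el)
            vertices.append((x, y, z))
    return vertices
```

Every vertex lies exactly at the selected distance from the origin, which is what lets a single scalar distance stand in for the whole environment.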
32 Claims
1. A system for providing images of an environment to a display, said system comprising:

at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment; and

a processor in communication with each image source and said display, wherein said processor:

receives a selected distance;

defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the mapping surface comprises a plurality of vertex vectors each representing a three-dimensional coordinate of a mapping space;

for a selected vertex of the mapping surface within the field of view of said image sources, determines a texture vector of the image provided by said image sources that corresponds to the selected vertex of the mapping surface, and provides a collection of vectors comprising the selected vertex of the mapping surface, the texture vector of the image, and a color vector;

defines a model that relates a geometry of said image sources, a geometry of said display, and a geometry of the mapping surface to each other, wherein said image sources, display and mapping surface all have different coordinate systems, and wherein said processor is configured to define the model so as to provide for transforming said image sources, said display, and the mapping surface to a different coordinate system; and

maps different types of images provided by said image sources to said display using the model, wherein said image sources including image source A have respective fields of view that overlap each other on the mapping space such that said image sources provide respective images having texture vectors that correspond to the selected vertex of the mapping space, wherein respective images provided by each of said image sources have a unique characteristic, wherein said processor, for each image source, provides the selected vertex of the mapping surface and the texture vector of the respective image source such that the respective images from said image sources overlap on said display, and wherein said processor combines the respective images into a resultant image containing the unique characteristic of each respective image utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth.

Dependent claims: 2-12.
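The fusion expression in claim 1 is a per-pixel blend whose coefficient is the source pixel's own normalized intensity: ImageA / 2^N lies in [0, 1), so bright source pixels dominate the result while dark ones defer to the initial display value. A minimal sketch assuming scalar pixel values (the function name `fuse_pixel` is illustrative, not from the source):

```python
def fuse_pixel(image_a, display_0, bit_depth):
    """Content-based fusion of one pixel:
    Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0.

    The blending coefficient is derived from the source pixel's own
    intensity, so bright ImageA pixels dominate and dark ones let the
    initial display value Display0 show through.
    """
    coeff = image_a / float(2 ** bit_depth)   # blending coefficient in [0, 1)
    return coeff * image_a + (1.0 - coeff) * display_0
```

For an 8-bit source (N = 8), a black source pixel leaves the display value untouched, and a mid-gray pixel (128) contributes with a coefficient of one half.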
13. A method for providing images of an environment to a display, said method comprising:

providing at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment;

receiving a selected distance;

defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of the image sources, wherein at least two image sources including image source A have respective fields of view that overlap each other on the mapping surface;

defining a model that relates a geometry of the image sources, a geometry of the display, and a geometry of the mapping surface to each other, wherein the image sources, the display and the mapping surface all have different coordinate systems, and wherein defining the model comprises transforming the image sources, the display, and the mapping surface to a different coordinate system;

mapping different types of images provided by the image sources to the display using the model, wherein mapping comprises combining the respective images having fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and

displaying the resultant image upon the display in accordance with the model.

Dependent claims: 14-18.
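The model that relates the source, display, and mapping-surface geometries amounts to expressing all three, each with its own coordinate system, in one common frame. A minimal rigid-transform sketch; the rotation-plus-translation parameterization and the function name are assumptions, as the document does not prescribe a specific formulation:

```python
def to_common_frame(point, rotation, translation):
    """Transform a 3-D point from a local frame (image source, display,
    or mapping surface) into a common coordinate system: p' = R*p + t.

    `rotation` is a 3x3 matrix given as three row tuples;
    `translation` is an (x, y, z) offset.
    """
    x, y, z = point
    return tuple(r[0] * x + r[1] * y + r[2] * z + t
                 for r, t in zip(rotation, translation))
```

Once every geometry is mapped through its own (R, t) into the shared frame, vertex and texture correspondences can be computed directly.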
19. A system for providing images of an environment to a display, said system comprising:

at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment; and

a processor in communication with said image sources and the display, wherein said processor:

receives a selected distance;

defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the field of view of said image sources defines an image projected on the mapping surface at the selected distance, and wherein said processor defines a tile that encompasses only a subset of an area covered by the image projected on the mapping surface by said image sources such that other portions of the image projected on the mapping surface lie outside the tile, wherein said image sources including image source A have respective fields of view that overlap each other on the mapping surface, wherein said processor defines respective tiles for each image such that the tiles have overlapping regions, wherein said image sources provide respective images that each have at least one unique characteristic, and wherein said processor combines the respective images into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth.

Dependent claims: 20.
21. A method for providing images of an environment to a display, said method comprising:

providing at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment, wherein said image sources include image source A and provide respective images that each have at least one unique characteristic;

receiving a selected distance;

defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the field of view of said image sources defines an image projected on the mapping surface at the selected distance, wherein said at least two image sources have respective fields of view that overlap each other on the mapping surface, wherein said defining step defines respective tiles for each respective image such that the tiles have overlapping regions;

defining a tile that encompasses only a subset of an area covered by the image projected on the mapping surface by said at least two image sources such that other portions of the image projected on the mapping surface lie outside the tile;

combining the respective images into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and

displaying the respective image within the tile on the display.

Dependent claims: 22.
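The tiling step in claims 19 and 21, where a tile encompasses only a subset of the projected image, can be sketched as a simple bounds test in texture coordinates. The rectangular tile shape and the names below are illustrative assumptions:

```python
def clip_to_tile(texture_coords, tile):
    """Keep only the projected-image points that fall inside the tile;
    the other portions of the projection lie outside the tile and are
    not drawn. A tile is ((u_min, v_min), (u_max, v_max)).
    """
    (u0, v0), (u1, v1) = tile
    return [(u, v) for (u, v) in texture_coords
            if u0 <= u <= u1 and v0 <= v <= v1]
```

Defining overlapping tiles for adjacent sources is then just choosing tile rectangles whose bounds intersect.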
23. A system for providing images of an environment to a display, said system comprising:

at least two image sources having respective fields of view and providing different types of images of the environment having unique characteristics; and

a processor in communication with said image sources and the display, wherein said processor receives a selected distance and defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources including image source A define respective images that project onto the mapping surface at the selected distance and have adjacent regions that overlap, and wherein said processor defines blend zones on the mapping surface within the overlap regions and modulates the intensity of the respective images in the blend zones to hide seams between the respective images, wherein said processor is configured to compare, for each of a plurality of pixels within a blend zone, an intensity of a pixel of one respective image to a predefined maximum intensity to determine an intensity percentage based thereupon, and to blend the corresponding pixels of the respective images based upon the intensity percentage, wherein said processor is configured to map different types of respective images provided by said image sources to the display by combining the respective images having respective fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth.

Dependent claims: 24.
25. A method for providing images of an environment to a display, said method comprising:

providing at least two image sources having respective fields of view and providing different types of images of the environment having unique characteristics;

receiving a selected distance;

defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources including image source A define respective images that project onto the mapping surface at the selected distance and have adjacent regions that overlap;

defining blend zones on the mapping surface within the overlap regions;

mapping different types of respective images provided by said image sources to the display by combining the respective images having respective fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth;

displaying the resultant image on the display; and

modulating the intensity of the respective images in the blend zones to hide seams between the respective images, wherein modulating the intensity of the respective images comprises comparing, for each of a plurality of pixels within a blend zone, an intensity of a pixel of one respective image to a predefined maximum intensity, determining an intensity percentage based thereupon, and blending the corresponding pixels of the respective images based upon the intensity percentage.

Dependent claims: 26.
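The blend-zone modulation in claims 23 and 25 compares each pixel's intensity to a predefined maximum to obtain an intensity percentage, then blends the corresponding pixels of the two overlapping images by that percentage. A scalar-pixel sketch with assumed names:

```python
def blend_zone_pixel(pixel_a, pixel_b, max_intensity):
    """Within a blend zone, hide the seam between two overlapping images:
    compare pixel_a to the predefined maximum intensity, derive an
    intensity percentage, and blend the corresponding pixels by it.
    """
    pct = pixel_a / float(max_intensity)      # intensity percentage
    return pct * pixel_a + (1.0 - pct) * pixel_b
```

A saturated pixel from the first image fully wins, a black one fully yields, and intermediate intensities ramp smoothly between the two sources, which is what suppresses a visible seam.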
27. A system for providing images of an environment to a display, said system comprising:

at least two image sources including image source A having respective fields of view that at least partially overlap, wherein said image sources are of different types and provide respective images that each have at least one unique characteristic, wherein said image sources include a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system; and

a processor in communication with said image sources and the display, wherein said processor receives a selected distance and defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources define respective images that project onto the mapping surface at the selected distance and have regions that overlap, and wherein said processor combines the respective images from the different types of image sources into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth.

Dependent claims: 28, 29.
30. A method for providing images of an environment to a display, said method comprising:

providing at least two image sources including image source A having respective fields of view that at least partially overlap, wherein said image sources are of different types and provide respective images that each have at least one unique characteristic, wherein said image sources include a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system;

receiving a selected distance;

defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources define respective images that project onto the mapping surface at the selected distance and have regions that overlap;

combining the respective images from the different types of image sources into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows:

Display1 = (ImageA / 2^N) * ImageA + (1 - ImageA / 2^N) * Display0,

wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and

displaying the resultant image on the display.

Dependent claims: 31, 32.
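Applied over a whole overlap region, the per-pixel fusion rule of claim 30 yields the resultant image directly. A sketch assuming images are row-major lists of scalar pixels (names and representation are illustrative, not from the source):

```python
def fuse_images(image_a, display, bit_depth):
    """Content-based fusion across an overlap region: each output pixel
    is (a/2^N)*a + (1 - a/2^N)*d, so bright detail from image A survives
    while dark areas let the existing display content show through.
    """
    scale = float(2 ** bit_depth)             # 2^N for an N-bit source
    return [[(a / scale) * a + (1.0 - a / scale) * d
             for a, d in zip(row_a, row_d)]
            for row_a, row_d in zip(image_a, display)]
```

Because the coefficient is recomputed per pixel from image A's own intensity, the fusion is content-based rather than a fixed alpha blend.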