Image processing system
Abstract
An image processing system is provided which receives image data of a user and generates a set of appearance parameters representative of the appearance of the user in the received images. These appearance parameters may then be transformed either to change the identity of the user in the images or to change the resolution of the image of the user. Synthesised images of the user can then be generated from the transformed parameters. The system can be used as a stand-alone image processing system or may form part of an image transmission and reception system. It may also be used for streaming data over limited-bandwidth communication channels such as the Internet. A system is also provided for animating a single image from a set of image deviations obtained from a source video sequence. A system for changing the lighting conditions in the single image is also provided.
144 Claims
1. An image processing apparatus comprising:
means for storing source model data defining a function which relates a set of source parameters to image data defining an appearance of a source object;
means for storing target model data defining a function which relates a set of target parameters to image data defining an appearance of a target object;
means for storing transformation data defining a transformation which relates source parameters to target parameters;
means for receiving image data defining an appearance of the source object;
means for determining a set of source parameters for the source object in the received image using the received image data and the source model data;
means for determining a set of target parameters corresponding to the determined set of source parameters using the transformation data and the determined set of source parameters; and
means for determining image data defining an appearance of the target object using said determined set of target parameters and said target model data. (Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 99, 113, 123, 143, 144)
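The apparatus of claim 1 amounts to three steps: fit a source model to the incoming image, map the recovered parameters into the target model's parameter space, and synthesise from the target model. The sketch below assumes both appearance models are linear (a mean plus an orthonormal basis) and that the stored transformation is a linear map between parameter spaces; every name and dimension here is illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear appearance models: image = mean + basis @ params.
N_PIXELS, N_PARAMS = 64, 4

src_mean = rng.normal(size=N_PIXELS)
src_basis = np.linalg.qr(rng.normal(size=(N_PIXELS, N_PARAMS)))[0]  # orthonormal columns

tgt_mean = rng.normal(size=N_PIXELS)
tgt_basis = np.linalg.qr(rng.normal(size=(N_PIXELS, N_PARAMS)))[0]

# Transformation data: here simply a linear map between parameter spaces.
transform = rng.normal(size=(N_PARAMS, N_PARAMS))

def source_params(image):
    """Project a received image onto the source model (least squares)."""
    return src_basis.T @ (image - src_mean)

def target_image(p_src):
    """Map source parameters to target parameters, then synthesise."""
    p_tgt = transform @ p_src
    return tgt_mean + tgt_basis @ p_tgt

# Round trip: an image of the source object drives synthesis of the target.
img = src_mean + src_basis @ np.array([1.0, -0.5, 0.25, 0.0])
p = source_params(img)
out = target_image(p)
```

Because `src_basis` has orthonormal columns, the projection in `source_params` recovers the parameters of a model-generated image exactly; a practical system would instead fit the model iteratively to real images.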
28. An image processing apparatus comprising:
means for storing source model data defining a function which relates a set of source parameters to a source set of locations which identify the relative positions of a plurality of predetermined points on a source object;
means for storing target model data defining a function which relates a set of target parameters to a target set of locations which identify the relative positions of a plurality of predetermined points on a target object;
means for storing transformation data defining a transformation which relates source parameters to target parameters;
means for receiving image data for an image of the source object;
means for determining said source set of locations for the source object in the received image;
means for determining a set of source parameters for the source object in the received image using the determined source set of locations and the source model data;
means for determining a set of target parameters corresponding to the determined set of source parameters using the transformation data and the determined set of source parameters;
means for determining a target set of locations using said determined set of target parameters and said target model data; and
means for determining image data of said target object using said determined target set of locations. (Dependent claims: 29, 30, 31, 32)
33. A camera comprising:
means for storing first model data defining a function which relates a set of first parameters to image data of said object at a first resolution;
means for storing second model data defining a function which relates a set of second parameters to image data of said object at a second resolution;
means for storing transformation data defining a transformation which relates said first parameters to said second parameters;
means for sensing light from the object and for generating image data at said first resolution therefor;
means for determining a set of first parameters for the object using the sensed image data and the first model data;
means for determining a set of second parameters corresponding to the determined set of first parameters using the transformation data and the determined set of first parameters; and
means for determining image data of the object at said second resolution using said determined set of second parameters and said second model data. (Dependent claims: 34, 35, 36)
37. An apparatus for generating an appearance model for an object, the appearance model defining a function which relates a set of parameters to pixel values which define an appearance of the object, the apparatus comprising:
means for receiving plural training pixel images of the object having different appearances;
means for sampling pixel values at predetermined points over each of the training images to generate a respective plurality of sets of corresponding pixel values for the training images; and
means for processing the sets of corresponding pixel values to determine said appearance model;
wherein said object includes one or more features of interest and wherein said sampling means is operable to take more samples of pixel values over said one or more features of interest than over other parts of the object. (Dependent claims: 38, 39, 40)
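The sampling means of claim 37 concentrates samples on features of interest. A minimal sketch, assuming the features are given as a boolean mask and sampling is done on two regular grids of different density; the image, mask, and step sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W = 32, 32
image = rng.random((H, W))

# Hypothetical feature mask: a region of interest (e.g. around the eyes).
feature_mask = np.zeros((H, W), dtype=bool)
feature_mask[8:16, 8:24] = True

def sample_points(mask, step_feature=1, step_other=4):
    """Return sample coordinates, denser inside the feature mask."""
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    dense = mask & (ys % step_feature == 0) & (xs % step_feature == 0)
    sparse = ~mask & (ys % step_other == 0) & (xs % step_other == 0)
    return np.argwhere(dense | sparse)

pts = sample_points(feature_mask)
values = image[pts[:, 0], pts[:, 1]]  # one set of corresponding pixel values

in_feature = int(feature_mask[pts[:, 0], pts[:, 1]].sum())
outside = len(pts) - in_feature
```

Here the feature region covers 12.5% of the image but receives 128 of the 184 samples, so the model spends most of its pixel budget on the features of interest.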
41. An apparatus for generating an appearance model for an object, the appearance model defining a function which relates a set of parameters to pixel values which define an appearance of the object, the apparatus comprising:
means for receiving plural training pixel images of the object having different appearances;
means for identifying the location within each training image of a plurality of predetermined points on the object, the predetermined points identifying the outline of the object and the locations of one or more features of interest on the object;
means for warping each training image so that the determined locations of said predetermined points are warped to the locations of the corresponding points in a reference image of the object;
means for sampling pixel values of each of the warped training images to generate a respective plurality of sets of corresponding pixel values for the training images; and
means for processing the sets of corresponding pixel values to determine said appearance model;
wherein said one or more features of interest on the object are enlarged in said reference image of the object relative to other parts of the object. (Dependent claims: 42, 43, 44)
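Claim 41 warps each training image so that its landmarks coincide with those of a reference image before sampling. The sketch below substitutes the simplest such alignment, a similarity (Procrustes) transform of the landmark set itself, for the full piecewise image warp described in the claim; the landmark coordinates are invented for illustration.

```python
import numpy as np

def procrustes_align(points, reference):
    """Least-squares similarity transform (scale, rotation, translation)
    taking `points` onto `reference`. A stand-in for the full piecewise
    image warp in the claim."""
    p = points - points.mean(axis=0)
    r = reference - reference.mean(axis=0)
    # Optimal rotation via SVD of the cross-covariance matrix.
    u, s, vt = np.linalg.svd(p.T @ r)
    rotation = u @ vt
    scale = s.sum() / (p ** 2).sum()
    return scale * p @ rotation + reference.mean(axis=0)

# Reference landmark shape and a rotated, scaled, shifted copy of it.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.deg2rad(30)
rot2d = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
pts = 2.0 * ref @ rot2d.T + np.array([3.0, -1.0])

aligned = procrustes_align(pts, ref)
err = np.abs(aligned - ref).max()  # ~0: the similarity transform is undone
```

With every training shape mapped into the reference frame, pixel samples taken at corresponding points line up across the training set, which is what the subsequent model-building step requires.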
45. A video communication system comprising:
a transmitter and a receiver;
wherein the transmitter comprises;
means for storing source model data defining a function which relates a set of source parameters to image data defining an appearance of a source object;
means for storing target model data defining a function which relates a set of target parameters to image data defining an appearance of a target object;
means for storing transformation data defining a transformation which relates source parameters to target parameters;
means for receiving current image data defining a current appearance of the source object;
means for determining a set of source parameters for the source object in the current image data using the current image data and the source model data;
means for determining a set of target parameters corresponding to the determined set of source parameters using the transformation data and the determined set of source parameters; and
means for transmitting the target model data and the determined target parameters; and
wherein the receiver comprises;
means for receiving the transmitted target model data and said target parameters; and
means for determining image data defining an appearance of the target object using the determined set of target parameters and said target model data. (Dependent claims: 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 77, 78)
58. An image communication system comprising:
a transmitter and a receiver;
wherein the transmitter comprises;
means for storing transmitter model data defining a function which relates a set of parameters to image data defining an appearance of an object;
means for receiving image data defining an appearance of the object;
means for determining a set of parameters for the object in the received image using the received image data and the model data; and
means for transmitting the determined set of parameter values to said receiver;
wherein the receiver comprises;
means for storing receiver model data defining a function which relates a set of parameters to image data defining an appearance of the object;
means for receiving transmitted sets of parameters; and
means for generating image data defining an appearance of the object using said received set of parameters and said model data. (Dependent claims: 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76)
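The point of the transmission schemes in claims 45 and 58 is bandwidth: once both ends hold the model data, each frame needs only its parameter set, which is orders of magnitude smaller than the pixels it reconstructs. A back-of-envelope sketch with assumed sizes (the frame dimensions and parameter count are illustrative, not from the patent):

```python
import numpy as np

# Assumed sizes: a 100x100 8-bit greyscale frame versus
# 30 float32 appearance parameters per frame.
frame_bytes = 100 * 100 * 1           # raw pixels per frame
params = np.zeros(30, dtype=np.float32)
param_bytes = params.nbytes           # parameters per frame

payload = params.tobytes()            # what the transmitter sends
received = np.frombuffer(payload, dtype=np.float32)  # receiver side

ratio = frame_bytes / param_bytes     # ~83x fewer bytes per frame
```

Under these assumptions each frame shrinks from 10,000 bytes to 120, which is what makes streaming over limited-bandwidth channels such as the Internet plausible.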
79. A data stream for driving an appearance model which relates a set of parameters to image data, to generate a synthesised image sequence, the data stream comprising:
a sequence of data packets including information packets which include data concerning the animated video sequence to be generated; and
image data packets including parameter data for driving said appearance model to generate said synthesised image sequence. (Dependent claims: 80, 82, 83, 84, 85, 86, 87, 88)
81. A data stream according to claim 79 or 80, wherein one or more of said image data packets include parameter data for plural images of the synthesised image sequence.
89. A video camera comprising:
means for storing model data defining a function which relates a set of parameters to image data defining an appearance of an object;
a wide angled lens and a light sensor for imaging a scene including the object and generating a sequence of images of the scene including the object;
means for processing said sequence of images to extract the image data corresponding to the object within each image of the sequence;
means for scaling the extracted image data to generate scaled image data for each image in the sequence;
means for generating sets of parameters for each scaled image using said model data; and
means for outputting said sets of parameters. (Dependent claims: 90, 91, 92, 93)
94. An apparatus for encoding an appearance model which relates a set of parameters to image data representative of the appearance of an object, for transmission over a communication link, the apparatus comprising:
means for applying predetermined sets of parameters to said appearance model to derive corresponding image data for each of the predetermined sets of parameters and means for compressing said determined image data generated from the predetermined sets of parameters. (Dependent claims: 95, 96, 97, 98)
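Claim 94 encodes the appearance model itself by rendering images from predetermined parameter sets and compressing the result. A sketch assuming a linear model and `zlib` as a stand-in compressor; the model, parameter grid, and sizes are all invented for illustration.

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical appearance model (mean + basis) and a predetermined
# grid of parameter sets used to encode it as images.
mean = np.full(256, 128.0)
basis = rng.normal(size=(256, 3))
param_sets = [np.array(p, dtype=float) for p in
              [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]]

def render(p):
    img = mean + basis @ p
    return np.clip(img, 0, 255).astype(np.uint8)

# Derive image data for each predetermined parameter set, then compress.
images = np.stack([render(p) for p in param_sets])
compressed = zlib.compress(images.tobytes())

# The other end of the link decompresses to recover the image data.
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint8)
ok = np.array_equal(restored.reshape(images.shape), images)
```

Because each rendered image is a smooth function of the mean and a few basis vectors, it compresses well with an off-the-shelf codec, which is presumably the motivation for transmitting the model in this form.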
100. An image processing method comprising the steps of:
receiving image data defining an appearance of a source object;
determining a set of source parameters for the source object in the received image using the received image data and stored source model data which defines a function which relates a set of source parameters to image data defining an appearance of the source object;
determining a set of target parameters corresponding to the determined set of source parameters using stored transformation data and the determined set of source parameters; and
determining image data defining an appearance of the target object using the determined set of target parameters and stored target model data which defines a function that relates a set of target parameters to image data defining an appearance of the target object.
101. An image processing method comprising the steps of:
receiving image data for an image of a source object;
determining a source set of locations which identify the relative positions of a plurality of predetermined points on the source object;
determining a set of source parameters for the source object in the received image using the determined source set of locations and stored source model data which defines a function which relates a set of source parameters to a source set of locations which identify the relative positions of a plurality of predetermined points on the source object;
determining a set of target parameters corresponding to the determined set of source parameters using stored transformation data and the determined set of source parameters;
determining a target set of locations using said determined set of target parameters and stored target model data which defines a function which relates a set of target parameters to a target set of locations which identify the relative positions of a plurality of predetermined points on the target object; and
determining image data of the target object using the determined target set of locations.
102. A method of generating an appearance model for an object, the appearance model defining a function which relates a set of parameters to pixel values which define an appearance of the object, the method comprising the steps of:
receiving plural training images of the object having different appearances;
sampling pixel values at predetermined points over each of the training images to generate a respective plurality of sets of corresponding pixel values for the training images; and
processing the sets of corresponding pixel values to determine said appearance model;
wherein the object includes one or more features of interest and wherein the sampling step takes more samples of pixel values over said one or more features of interest than over other parts of the object.
103. An image communication method comprising:
at a transmitter;
receiving current image data defining a current appearance of a source object;
determining a set of parameters for the object in the received image using the received image data and stored model data that defines a function which relates a set of parameters to image data defining an appearance of the object; and
transmitting the determined set of parameter values to a receiver; and
at the receiver;
receiving transmitted sets of parameters; and
generating image data defining an appearance of the object using the received set of parameters and stored receiver model data that defines a function which relates a set of parameters to image data defining an appearance of the object.
104. Apparatus for providing a user interface for image processing apparatus, the apparatus comprising:
means for causing a display to display on a display screen a first image on which specific positions are marked by landmark points;
means for causing the display to display on the display screen with the first image a second image on which landmark points are provided so as to be positionable by a user;
means for determining when a landmark point on the second image is selected by a user; and
means for visually identifying to the user the landmark point in the first image corresponding to the selected landmark point to assist the user in positioning the selected landmark point. (Dependent claims: 105, 106, 107, 108, 109, 110, 111, 112, 114, 124)
115. Apparatus for providing a user interface for image processing apparatus, the apparatus comprising:
means for causing a display to display on a display screen different images of a first image sequence with each image having specific positions therein marked by landmark points;
means for causing the display to provide on the display screen an error display area for displaying to the user a visual representation of an error value relating to a difference between that image and a reconstructed version of that image so as to show to the user how the error varies for different images; and
means for causing a visual representation of an error for an image to be added to the error display area when that image is displayed. (Dependent claims: 116, 117, 118, 119, 120, 121, 122)
125. A method of operating a processor to provide a user interface for image processing apparatus, the method comprising causing the processor to:
cause the display to display on a display screen a first image on which specific positions are marked by landmark points;
cause the display to display on the display screen with the first image a second image on which landmark points are provided so as to be positionable by the user;
determine when a landmark point on the second image has been selected by a user; and
visually identify to the user the landmark point in the first image corresponding to the selected landmark point to assist the user in positioning the selected landmark point.
126. A method of operating a processor, which method comprises causing the processor to:
cause a display to display in succession on a display screen different images of an image sequence with each image having specific positions therein marked by landmark points;
cause the display to provide on the display screen an error display area for displaying to the user a visual representation of an error value relating to a difference between that image and a reconstructed version of that image so as to show to the user how the error varies for different images; and
cause a visual representation of the error value for the image to be added to the error display area when that image is displayed.
127. An apparatus for generating an animated sequence of images of an object from an image of the object, the apparatus comprising:
means for storing model data defining a relationship between a set of parameter values and a set of shape and texture deviations from a set of shape and texture values;
means for receiving the image of the object to be animated;
means for processing the image of the object to obtain shape data defining the shape of the object in the image and texture data defining the texture of the object in the image;
means for receiving a plurality of sets of parameter values representative of an image sequence;
means for determining sets of shape and texture deviations from the received plurality of sets of parameter values and said stored model data;
means for applying each set of determined shape and texture deviations to said obtained shape data and texture data to generate respective modified shape data and texture data for each set; and
means for generating a modified image of the object for each set of deviations from the corresponding modified shape data and texture data to generate said animated sequence of images. (Dependent claims: 128, 129, 130, 131, 132, 133, 134, 135)
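Claim 127 animates a single image by adding model-predicted shape and texture deviations to the shape and texture extracted from that image, one set of deviations per frame. A linear sketch with invented dimensions and bases, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(3)

# Model data: hypothetical linear maps from parameter values to shape
# and texture *deviations* (not absolute values, as the claim specifies).
N_SHAPE, N_TEX, N_PARAMS = 10, 50, 3
shape_basis = rng.normal(size=(N_SHAPE, N_PARAMS))
tex_basis = rng.normal(size=(N_TEX, N_PARAMS))

# Shape and texture data obtained from the single image to be animated.
shape0 = rng.normal(size=N_SHAPE)
tex0 = rng.normal(size=N_TEX)

def animate(param_sequence):
    """Apply each parameter set's deviations to the base shape/texture."""
    frames = []
    for p in param_sequence:
        shape = shape0 + shape_basis @ p
        tex = tex0 + tex_basis @ p
        frames.append((shape, tex))
    return frames

# A two-frame parameter sequence; an all-zero parameter set leaves the
# original shape and texture unchanged.
seq = [np.zeros(N_PARAMS), np.array([1.0, 0.0, 0.0])]
frames = animate(seq)
```

The parameter sequence can come from a source video (the "image deviations obtained from a source video sequence" of the abstract), so one still image is driven through the motions observed in another sequence.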
136. An apparatus for determining the lighting conditions of an object in an image, comprising:
means for receiving data defining a lighting model for an object under a plurality of different lighting conditions, the lighting model relating lighting conditions to a corresponding image of the object under those lighting conditions;
means for determining an inverse of said lighting model; and
means for applying an image of an object with unknown lighting conditions to said inverse lighting model to determine said unknown lighting conditions.
138. An apparatus for changing the lighting conditions of a first object in a first image to correspond to the lighting conditions of a second object in a second image, the apparatus comprising:
means for receiving data defining a lighting model for a third object under a plurality of different lighting conditions, the lighting model relating lighting conditions to a corresponding image of the third object under those lighting conditions;
means for determining an inverse of said lighting model;
means for applying said first image of said first object to said inverse lighting model to determine the lighting conditions of said first object in said first image;
means for applying said second image of said second object to said inverse lighting model to determine the lighting conditions of said second object in said second image;
means for applying the lighting conditions determined for said first object to said lighting model to generate an image of said third object under the lighting conditions of said first object;
means for determining a ratio image of said first image of said first object and said image of said third object under the lighting conditions of said first object;
means for applying the lighting conditions of said second image to said lighting model to generate an image of said third object under the lighting conditions of said second object; and
means for generating an image of said first object under the lighting conditions of said second object from said ratio image and said image of the third object under the lighting conditions of said second object.
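Claims 136 and 138 rest on two operations: inverting a lighting model to estimate lighting conditions from an image, and forming a ratio image that carries the first object's appearance across lighting conditions. The sketch below assumes a linear lighting model (one basis image per light source) whose inverse is the least-squares pseudo-inverse; the per-pixel albedos standing in for the different objects are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

N_PIXELS, N_LIGHTS = 40, 3

# Hypothetical linear lighting model for the "third" object: an image is
# a combination of basis images, one per lighting condition.
basis3 = np.abs(rng.normal(size=(N_PIXELS, N_LIGHTS))) + 0.1

def lighting_model(light):
    return basis3 @ light

# Inverse lighting model (claims 136/139): least-squares pseudo-inverse.
inv_model = np.linalg.pinv(basis3)

# Images of the first and second objects under unknown lighting; a mild
# per-pixel albedo stands in for the first object differing from object 3.
albedo1 = 1.0 + 0.2 * rng.random(N_PIXELS)
img1 = albedo1 * lighting_model(np.array([1.0, 0.2, 0.0]))
img2 = lighting_model(np.array([0.0, 0.5, 1.0]))

light1 = inv_model @ img1               # lighting of object 1 in image 1
light2 = inv_model @ img2               # lighting of object 2 in image 2

ratio = img1 / lighting_model(light1)   # ratio image (claim 138)
relit = ratio * lighting_model(light2)  # object 1 under image 2's lighting

# Sanity check on the inverse model itself:
probe = np.array([0.3, 0.7, 0.1])
recovered = inv_model @ lighting_model(probe)
```

The ratio image factors out the third object's shading, leaving the first object's reflectance, so multiplying it by the third object rendered under the second image's lighting transfers those lighting conditions to the first object.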
139. A method of determining the lighting conditions of an object in an image, the method comprising the steps of:
receiving data defining a lighting model for an object under a plurality of different lighting conditions, the lighting model relating lighting conditions to a corresponding image of the object under those lighting conditions;
determining an inverse of said lighting model; and
applying an image of an object with unknown lighting conditions to said inverse lighting model to determine said unknown lighting conditions. (Dependent claims: 137)
140. A method of changing the lighting conditions of a first object in a first image to correspond to the lighting conditions of a second object in a second image, the method comprising the steps of:
receiving data defining a lighting model for a third object under a plurality of different lighting conditions, the lighting model relating lighting conditions to a corresponding image of the third object under those lighting conditions;
determining an inverse of said lighting model;
applying said first image of said first object to said inverse lighting model to determine the lighting conditions of said first object in said first image;
applying said second image of said second object to said inverse lighting model to determine the lighting conditions of said second object in said second image;
applying the lighting conditions determined for said first object to said lighting model to generate an image of said third object under the lighting conditions of said first object;
determining a ratio image of said first image of said first object and said image of said third object under the lighting conditions of said first object;
applying the lighting conditions of said second image to said lighting model to generate an image of said third object under the lighting conditions of said second object; and
generating an image of said first object under the lighting conditions of said second object from said ratio image and said image of the third object under the lighting conditions of said second object.
141. A method of animating a sequence of images of an object from an image of the object, the method comprising the steps of:
storing model data defining a relationship between a set of parameter values and a set of shape and texture deviations from a set of shape and texture values;
receiving the image of the object to be animated;
processing the image of the object to obtain shape data defining the shape of the object in the image and texture data defining the texture of the object in the image;
receiving a plurality of sets of parameter values representative of an image sequence;
determining sets of shape and texture deviations from the received plurality of sets of parameter values and said stored model data;
applying each set of determined shape and texture deviations to said obtained shape data and texture data to generate respective modified shape data and texture data for each set; and
generating a modified image of the object for each set of deviations from the corresponding modified shape data and texture data to generate said animated sequence of images.
142. A method of adapting a model used for modelling the appearance of a first object to generate an adapted model for modelling the appearance of a second different object, the method comprising the steps of:
receiving an image of the second object;
processing the image of the second object to obtain shape data defining the shape of the object in the image and texture data defining the texture of the object in the image; and
modifying said model using said shape and texture data.
Specification