Artificially rendering images using interpolation of tracked control points
First Claim
1. A method comprising:

tracking a set of control points between a first frame and a second frame, wherein the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location, the first and second locations corresponding to real world location positions; and

generating an artificially rendered image as a third frame corresponding to a third location, the third location being a real world location position on a trajectory between the first location and the second location, wherein generating the artificially rendered image includes:

interpolating a transformation using at least one of homography, affine, similarity, translation, rotation, and scale, including interpolating individual control points for the third location using IMU data and the set of control points, the IMU data corresponding to the first and second locations, and interpolating pixel locations using the individual control points, wherein the individual control points are used to transform image data, wherein interpolating the transformation includes using depth information to reduce occurrence of artifacts resulting from mismatched pixels;

gathering weighted image information by transferring first image information from the first frame to the third frame based on the interpolated transformation and transferring second image information from the second frame to the third frame, wherein the image information is weighted by 1-x for the first image information and x for the second image information; and

combining the first image information and the second image information to form the artificially rendered image.
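The weighting scheme the claim recites — 1-x for the first frame's image information and x for the second frame's — amounts to a cross-fade of the two frames after each has been warped into the third frame's coordinate system. A minimal sketch, assuming the warping step has already been applied (the function and array names here are hypothetical, not from the patent):

```python
import numpy as np

def blend_frames(warped_first, warped_second, x):
    """Combine image information transferred from two frames.

    warped_first:  first image already warped to the third frame
    warped_second: second image already warped to the third frame
    x: position of the third location along the trajectory, in [0, 1]
       (x = 0 reproduces the first frame, x = 1 the second).
    """
    # Weight the first image by 1-x and the second by x, then combine.
    return (1.0 - x) * warped_first + x * warped_second

# Example: a third frame one quarter of the way along the trajectory.
a = np.full((2, 2), 100.0)  # stand-in for the warped first image
b = np.full((2, 2), 200.0)  # stand-in for the warped second image
print(blend_frames(a, b, 0.25))  # every pixel is 0.75*100 + 0.25*200 = 125
```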
Abstract
Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
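The abstract's "interpolating individual control points for the third location" can be read, in its simplest form, as linearly interpolating each tracked point's coordinates between the two frames. A sketch under that assumption (names are illustrative, not from the patent):

```python
import numpy as np

def interpolate_control_points(points_first, points_second, x):
    """Interpolate individual control points for the third location.

    points_first, points_second: (N, 2) arrays of corresponding control
    point coordinates tracked between the first and second frames.
    x: fractional position of the third location between the two
       capture locations, in [0, 1].
    """
    return (1.0 - x) * points_first + x * points_second

p1 = np.array([[0.0, 0.0], [10.0, 4.0]])  # points in the first frame
p2 = np.array([[2.0, 2.0], [14.0, 8.0]])  # same points in the second frame
print(interpolate_control_points(p1, p2, 0.5))  # midpoints: [[1, 1], [12, 6]]
```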
68 Citations
20 Claims
1. A method comprising:

tracking a set of control points between a first frame and a second frame, wherein the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location, the first and second locations corresponding to real world location positions; and

generating an artificially rendered image as a third frame corresponding to a third location, the third location being a real world location position on a trajectory between the first location and the second location, wherein generating the artificially rendered image includes:

interpolating a transformation using at least one of homography, affine, similarity, translation, rotation, and scale, including interpolating individual control points for the third location using IMU data and the set of control points, the IMU data corresponding to the first and second locations, and interpolating pixel locations using the individual control points, wherein the individual control points are used to transform image data, wherein interpolating the transformation includes using depth information to reduce occurrence of artifacts resulting from mismatched pixels;

gathering weighted image information by transferring first image information from the first frame to the third frame based on the interpolated transformation and transferring second image information from the second frame to the third frame, wherein the image information is weighted by 1-x for the first image information and x for the second image information; and

combining the first image information and the second image information to form the artificially rendered image.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
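One simple reading of the claimed "interpolating a transformation" is to estimate a transformation between the two frames from the tracked control points and then blend its parameters. The sketch below fits an affine transform by least squares and interpolates it toward the identity; this is only an illustration of that reading (the claim also names homography, similarity, IMU data, and depth information, none of which are modeled here):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # rows of [x, y, 1]
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
    return X.T                                   # (2, 3) affine matrix

def interpolate_affine(M_full, x):
    """Blend between the identity (first frame) and M_full (second frame)."""
    identity = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
    return (1.0 - x) * identity + x * M_full

# Control points: the second frame is a pure translation of the first.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([4.0, 2.0])
half = interpolate_affine(fit_affine(src, dst), 0.5)
print(half @ np.array([0.0, 0.0, 1.0]))  # the origin moved halfway: [2. 1.]
```

Naively blending matrix entries is only well behaved for translation-dominated motion; interpolating rotations or homographies this way distorts the intermediate frame, which is one reason a real system would use the claim's IMU and depth information.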
9. A non-transitory computer readable medium comprising:

computer code for tracking a set of control points between a first frame and a second frame, wherein the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location, the first and second locations corresponding to real world location positions; and

computer code for generating an artificially rendered image as a third frame corresponding to a third location, the third location being a real world location position on a trajectory between the first location and the second location, wherein generating the artificially rendered image includes:

interpolating a transformation using at least one of homography, affine, similarity, translation, rotation, and scale, including interpolating individual control points for the third location using IMU data and the set of control points, the IMU data corresponding to the first and second locations, and interpolating pixel locations using the individual control points, wherein the individual control points are used to transform image data, wherein interpolating the transformation includes using depth information to reduce occurrence of artifacts resulting from mismatched pixels;

gathering weighted image information by transferring first image information from the first frame to the third frame based on the interpolated transformation and transferring second image information from the second frame to the third frame, wherein the image information is weighted by 1-x for the first image information and x for the second image information; and

combining the first image information and the second image information to form the artificially rendered image.

View Dependent Claims (10, 11)
12. A method comprising:

tracking a set of control points between a plurality of frames and generating a panoramic representation from the plurality of frames, wherein the plurality of frames includes a first frame corresponding to a first image captured from a first location and a second frame corresponding to a second image captured from a second location, wherein the plurality of frames is associated with a first layer, the first and second locations corresponding to real world location positions; and

generating an artificially rendered image as a third frame corresponding to a third location, the third location being a real world location position on a trajectory between the first location and the second location, wherein generating the artificially rendered image includes:

interpolating a transformation using at least one of homography, affine, similarity, translation, rotation, and scale, including interpolating individual control points for the third location using IMU data and the set of control points, the IMU data corresponding to the first and second locations, and interpolating pixel locations using the individual control points, wherein the individual control points are used to transform image data, wherein interpolating the transformation includes using depth information to reduce occurrence of artifacts resulting from mismatched pixels;

gathering weighted image information by transferring first image information from the first frame to the third frame based on the interpolated transformation and transferring second image information from the second frame to the third frame, wherein the image information is weighted by 1-x for the first image information and x for the second image information; and

combining the first image information and the second image information to form the artificially rendered image.

View Dependent Claims (13, 14, 15, 16, 17, 18, 19, 20)
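The "transferring image information ... based on the interpolated transformation" step in these claims amounts to warping each source frame into the third frame's coordinate system before the weighted blend. A minimal inverse-mapping warp with nearest-neighbour sampling, as one possible illustration (function and variable names are hypothetical):

```python
import numpy as np

def warp_nearest(image, M_inv, out_shape):
    """Warp a 2-D image into a new frame by inverse mapping.

    M_inv: 2x3 affine taking output pixel coordinates (x, y, 1)
           back to source-image coordinates.
    """
    h, w = out_shape
    out = np.zeros((h, w), dtype=image.dtype)
    for y in range(h):
        for x in range(w):
            sx, sy = M_inv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            # Copy the nearest source pixel if it lies inside the image.
            if 0 <= sy < image.shape[0] and 0 <= sx < image.shape[1]:
                out[y, x] = image[sy, sx]
    return out

img = np.arange(16.0).reshape(4, 4)
shift = np.array([[1.0, 0.0, 1.0],   # output (x, y) samples source (x+1, y)
                  [0.0, 1.0, 0.0]])
print(warp_nearest(img, shift, (4, 4)))  # first row becomes [1, 2, 3, 0]
```

Each source frame is warped this way (with its own interpolated transformation) and the two results are then combined with the 1-x and x weights from the claims.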
Specification