System and method of image rendering
First Claim
1. A method of rendering an image based upon a first stereoscopic image comprising a pair of images, the method comprising the steps of:
generating a virtual three-dimensional model of a scene depicted in the first stereoscopic image responsive to distances derived from the first stereoscopic image;
detecting one or more free edges in the virtual three-dimensional model;
generating, by one or more processors, one or more textures for the virtual three-dimensional model from at least one of the pair of images of the first stereoscopic image;
applying, by the one or more processors, at least one of the one or more textures to a respective part of the virtual three-dimensional model; and
rendering, by the one or more processors, the virtual three-dimensional model from a different viewpoint to that of the first stereoscopic image;
wherein rendering the virtual three-dimensional model comprises modifying a transparency of rendered pixels of an applied texture as a function of each pixel's distance from a given one of the free edges; and
wherein the rendering further includes generation of a texture for at least one polygon comprising color information from foreground and background elements of the scene;
in which generating a virtual three-dimensional model of the scene depicted in the first stereoscopic image comprises:
generating a disparity map from the pair of images of the first stereoscopic image, the disparity map being indicative of distances in the first stereoscopic image;
defining a series of value ranges corresponding to disparity values of the disparity map, each value range in the series having an end point corresponding to a greater disparity than an end point of preceding value ranges in the series;
for each value range in the series, selecting points in the disparity map falling within the respective value range;
generating a respective mesh responsive to those selected points; and
merging generated meshes to form the 3D model of the scene.
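The disparity-slicing steps above can be sketched in code. This is a minimal illustration, not the patented implementation: it assumes NumPy, treats each per-range "mesh" as a simple vertex set of (x, y, disparity) points rather than a triangulated surface, and uses evenly spaced ranges (the claim only requires each range to end at a greater disparity than its predecessors). The function names are illustrative.

```python
import numpy as np

def build_layer_meshes(disparity, num_ranges=4):
    """Define a series of disparity value ranges, each ending at a greater
    disparity than the ranges before it, and build one point set per range."""
    edges = np.linspace(disparity.min(), disparity.max(), num_ranges + 1)
    meshes = []
    for i in range(num_ranges):
        lo, hi = edges[i], edges[i + 1]
        mask = (disparity >= lo) & (disparity < hi)
        if i == num_ranges - 1:
            mask |= disparity == hi  # include the top edge in the final range
        ys, xs = np.nonzero(mask)
        # One vertex per selected disparity sample: (x, y, disparity).
        meshes.append(np.column_stack([xs, ys, disparity[ys, xs]]))
    return meshes

def merge_meshes(meshes):
    """Merge the per-range meshes into a single vertex set for the 3D model."""
    return np.vstack([m for m in meshes if len(m)])
```

A real embodiment would triangulate each layer's points into polygons before merging; the slicing logic is the part the claim recites.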
Abstract
A method of rendering an image based upon a first stereoscopic image comprising a pair of images is provided. The method includes generating a virtual three-dimensional model of the scene depicted in the first stereoscopic image responsive to distances derived from the first stereoscopic image, detecting one or more free edges in the three-dimensional model, and generating one or more textures for the virtual three-dimensional model from at least one of the pair of images of the first stereoscopic image. The method also includes applying at least one texture to a respective part of the three-dimensional model, and rendering the virtual three-dimensional model from a different viewpoint to that of the first stereoscopic image. Rendering the virtual three-dimensional model comprises modifying a transparency of rendered pixels of an applied texture as a function of each pixel's distance from a given one of the free edges.
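One way to realize the transparency modification described in the abstract is a linear alpha ramp: pixels on a free edge render fully transparent and fade to fully opaque over a fixed distance. This is a hedged sketch, assuming NumPy; the fade width and the linear profile are illustrative choices, not specified by the claims.

```python
import numpy as np

def edge_fade_alpha(dist_to_free_edge, fade_width=8.0):
    """Map each rendered pixel's distance from the nearest free edge to an
    alpha value: 0 (transparent) at the edge, 1 (opaque) beyond fade_width."""
    d = np.asarray(dist_to_free_edge, dtype=float)
    return np.clip(d / fade_width, 0.0, 1.0)
```

In practice this would run per-fragment in a shader, with the edge distance supplied as a vertex attribute or texture channel.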
13 Claims
1. A method of rendering an image based upon a first stereoscopic image comprising a pair of images, the method comprising the steps of:

generating a virtual three-dimensional model of a scene depicted in the first stereoscopic image responsive to distances derived from the first stereoscopic image;
detecting one or more free edges in the virtual three-dimensional model;
generating, by one or more processors, one or more textures for the virtual three-dimensional model from at least one of the pair of images of the first stereoscopic image;
applying, by the one or more processors, at least one of the one or more textures to a respective part of the virtual three-dimensional model; and
rendering, by the one or more processors, the virtual three-dimensional model from a different viewpoint to that of the first stereoscopic image;
wherein rendering the virtual three-dimensional model comprises modifying a transparency of rendered pixels of an applied texture as a function of each pixel's distance from a given one of the free edges; and
wherein the rendering further includes generation of a texture for at least one polygon comprising color information from foreground and background elements of the scene;
in which generating a virtual three-dimensional model of the scene depicted in the first stereoscopic image comprises:
generating a disparity map from the pair of images of the first stereoscopic image, the disparity map being indicative of distances in the first stereoscopic image;
defining a series of value ranges corresponding to disparity values of the disparity map, each value range in the series having an end point corresponding to a greater disparity than an end point of preceding value ranges in the series;
for each value range in the series, selecting points in the disparity map falling within the respective value range;
generating a respective mesh responsive to those selected points; and
merging generated meshes to form the 3D model of the scene.

Dependent claims: 2, 3, 4, 5, 6, 7.
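The claim does not specify how free edges are detected, but a standard approach for triangle meshes is to count edge usage: an edge shared by two triangles is interior, while an edge bounding exactly one triangle is free. A minimal sketch of that technique, with illustrative names:

```python
from collections import Counter

def free_edges(triangles):
    """Return the undirected edges that bound exactly one triangle.

    Each triangle is a tuple of three vertex indices; interior edges are
    shared by two triangles and are therefore not 'free'."""
    counts = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(edge))] += 1
    return sorted(edge for edge, n in counts.items() if n == 1)
```

For the layered meshes of the claim, these free edges mark the silhouette boundaries where a foreground layer ends and background may be revealed by the new viewpoint.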
8. A non-transitory computer program product comprising computer implementable instructions that when run cause a computer to implement a method of rendering an image based upon a first stereoscopic image comprising a pair of images, the method comprising the steps of:

generating a virtual three-dimensional model of a scene depicted in the first stereoscopic image responsive to distances derived from the first stereoscopic image;
detecting one or more free edges in the virtual three-dimensional model;
generating one or more textures for the virtual three-dimensional model from at least one of the pair of images of the first stereoscopic image;
applying at least one of the one or more textures to a respective part of the virtual three-dimensional model; and
rendering the virtual three-dimensional model from a different viewpoint to that of the first stereoscopic image;
wherein rendering the virtual three-dimensional model comprises modifying a transparency of rendered pixels of an applied texture as a function of each pixel's distance from a given one of the free edges; and
wherein the rendering further includes generation of a texture for at least one polygon comprising color information from foreground and background elements of the scene;
in which generating a virtual three-dimensional model of the scene depicted in the first stereoscopic image comprises:
generating a disparity map from the pair of images of the first stereoscopic image, the disparity map being indicative of distances in the first stereoscopic image;
defining a series of value ranges corresponding to disparity values of the disparity map, each value range in the series having an end point corresponding to a greater disparity than an end point of preceding value ranges in the series;
for each value range in the series, selecting points in the disparity map falling within the respective value range;
generating a respective mesh responsive to those selected points; and
merging generated meshes to form the 3D model of the scene.
9. An entertainment device for rendering an image based upon a first stereoscopic image comprising a pair of images, the entertainment device comprising:

virtual modelling means for generating a virtual three-dimensional model of a scene depicted in the first stereoscopic image, responsive to distances derived from the first stereoscopic image;
model edge detection means for detecting one or more free edges in the virtual three-dimensional model;
texture generation means for generating one or more textures for the virtual three-dimensional model from at least one of the pair of images of the first stereoscopic image;
texture application means for applying at least one of the textures to a respective part of the virtual three-dimensional model; and
rendering means for rendering the virtual three-dimensional model from a different viewpoint to that of the first stereoscopic image;
wherein the rendering means is operable to modify a transparency of rendered pixels of an applied texture as a function of each pixel's distance from a given one of the free edges; and
wherein the rendering means is further operable to generate a texture for at least one polygon comprising color information from foreground and background elements of the scene;
in which the virtual modelling means comprises:
disparity map generating means for generating a disparity map from the pair of images of the first stereoscopic image, the disparity map being indicative of distances in the first stereoscopic image;
range setting means for defining a series of value ranges corresponding to disparity values of the disparity map, each value range in the series having an end point corresponding to a greater disparity than an end point of preceding value ranges in the series;
selection means for selecting, for each value range in the series, points in the disparity map falling within the respective value range;
mesh generating means for generating a respective mesh responsive to those selected points; and
mesh merging means for merging generated meshes to form the 3D model of the scene.

Dependent claims: 10, 11, 12, 13.
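The claims recite generating a texture for a polygon from "color information from foreground and background elements" of the scene, which is needed when a viewpoint shift exposes surfaces the original image never captured. A minimal, assumption-laden sketch of one such synthesis, a weighted blend of sampled foreground and background colors (the weight and the simple linear mix are illustrative, not claimed):

```python
import numpy as np

def fill_polygon_texture(foreground_rgb, background_rgb, weight=0.5):
    """Synthesize a fill color for a revealed polygon by blending color
    information sampled from foreground and background scene elements.

    weight is the background's share of the mix, in [0, 1]."""
    fg = np.asarray(foreground_rgb, dtype=float)
    bg = np.asarray(background_rgb, dtype=float)
    return (1.0 - weight) * fg + weight * bg
```

A production renderer would more likely inpaint per-texel from neighboring samples, but the blend conveys the claimed combination of foreground and background color information.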
Specification