Method for interactively viewing full-surround image data and apparatus therefor
Abstract
A method of modeling the visible world using full-surround image data includes steps for selecting a view point within a p-surface, selecting a direction of view within the p-surface, texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting the full-surround image data onto the p-surface from the view point, thereby generating a texture mapped p-surface, and displaying a predetermined portion of the texture mapped p-surface. An apparatus for implementing the method is also described.
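The texture mapping the abstract describes can be sketched in code. This is a minimal illustration, assuming an equirectangular full-surround image and a unit-sphere p-surface centered on the point of projection; the function name and axis conventions are illustrative, not taken from the patent:

```python
import math

def equirect_uv(x, y, z):
    """Map a unit direction vector on the p-surface to (u, v) texture
    coordinates in an equirectangular full-surround image.

    Assigning each vertex its (u, v) this way makes the resultant texture
    map substantially equivalent to projecting the image data onto the
    surface from the sphere's center (the point of projection)."""
    # Longitude (yaw) spans the full horizontal extent of the image.
    lon = math.atan2(x, -z)                       # in [-pi, pi]
    # Latitude (pitch) spans top to bottom of the image.
    lat = math.asin(max(-1.0, min(1.0, y)))       # in [-pi/2, pi/2]
    u = (lon / math.pi + 1.0) / 2.0               # in [0, 1]
    v = (lat / (math.pi / 2) + 1.0) / 2.0         # in [0, 1]
    return u, v
```

For example, the direction straight ahead along -z lands at the center of the texture, and straight up lands on its top edge.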
28 Claims
1. A method for modeling the visible world, comprising:

texture mapping full-surround image data onto a p-surface to generate a model of the visible world substantially equivalent to projecting the image data onto the p-surface from a point of projection;

allowing a user to select a direction of view from a view point on the model; and

allowing a portion of the model mapped on the p-surface based on the view point to be displayed;

wherein the p-surface comprises polygons approximating at least a portion of a sphere.

View Dependent Claims (2, 3, 4, 5)
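A p-surface of "polygons approximating at least a portion of a sphere" can be built, for example, as a latitude/longitude triangulation. A minimal sketch, with illustrative parameter names not drawn from the patent:

```python
import math

def sphere_polygons(stacks=8, slices=16):
    """Return a list of triangles (each a tuple of three (x, y, z)
    vertices) approximating a unit sphere -- one possible p-surface.
    Triangles touching the poles are degenerate slivers; a renderer
    typically tolerates or skips them."""
    def vert(i, j):
        lat = math.pi * i / stacks - math.pi / 2   # -pi/2 .. pi/2
        lon = 2 * math.pi * j / slices             # 0 .. 2*pi
        return (math.cos(lat) * math.cos(lon),
                math.sin(lat),
                math.cos(lat) * math.sin(lon))

    tris = []
    for i in range(stacks):
        for j in range(slices):
            # Split each latitude/longitude quad into two triangles.
            a, b = vert(i, j), vert(i + 1, j)
            c, d = vert(i + 1, j + 1), vert(i, j + 1)
            tris.append((a, b, c))
            tris.append((a, c, d))
    return tris
```

Restricting the `stacks` range would yield polygons approximating only a portion of the sphere, as the claim also allows.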
6. A method of modeling the visible world using full-surround image data, the method comprising:

allowing selection of a view point within a p-surface, wherein the p-surface comprises polygons approximating at least a partial sphere;

allowing selection of a direction of view within the p-surface;

texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-surface; and

allowing a selected portion of the texture mapped p-surface to be displayed.

View Dependent Claims (7, 8, 9, 10, 11)
12. A method of modeling the visible world using full-surround image data, the method comprising:

texture mapping full-surround image data onto a p-surface such that the resultant texture map is substantially equivalent to projecting the full-surround image data onto the p-surface from a point of projection to thereby generate a texture mapped p-surface;

allowing a direction of view from a view point to be selected; and

allowing a portion of the texture mapped p-surface based on the selecting to be displayed;

wherein the p-surface comprises polygons approximating at least a partial sphere.

View Dependent Claims (13, 14, 15, 16, 17, 18, 19, 20, 21)
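The displayed portion selected by a direction of view can be approximated with a simple angular field-of-view test against each polygon of the texture mapped p-surface. A sketch under the assumption that the view point sits at the sphere's center; names and the test itself are illustrative, not the patent's method:

```python
import math

def visible(tri_center, view_dir, fov_deg=90.0):
    """Decide whether a triangle of the texture mapped p-surface,
    given by its centroid, falls inside the displayed portion for a
    chosen direction of view from the central view point."""
    cx, cy, cz = tri_center
    vx, vy, vz = view_dir
    norm = (math.sqrt(cx * cx + cy * cy + cz * cz)
            * math.sqrt(vx * vx + vy * vy + vz * vz))
    # Angle between the triangle's direction and the view direction.
    cos_angle = (cx * vx + cy * vy + cz * vz) / norm
    # Inside the cone whose half-angle is half the field of view.
    return cos_angle >= math.cos(math.radians(fov_deg / 2))
```

A real renderer would instead clip against a view frustum, but the cone test captures the idea that only the portion of the model facing the selected direction is displayed.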
22. A method for generating a 3D world simulator environment, comprising the steps of:

creating a full-surround simulator environment with a texture mapped p-surface that includes portions generated at least in part from a full-surround camera data set;

maneuvering within the simulator environment; and

generating imagery at least in part from the full-surround data set as a viewer space created in a 3D modeling and/or rendering system.

View Dependent Claims (23, 24, 25, 26, 27, 28)
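Maneuvering within the simulator environment amounts to steering the direction of view. One possible sketch, converting hypothetical yaw/pitch user input into a view vector; the input names and axis conventions are assumptions, not specified by the patent:

```python
import math

def maneuver(yaw_deg, pitch_deg):
    """Convert yaw/pitch input (degrees) into a unit direction-of-view
    vector, with yaw 0 / pitch 0 looking along -z."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))
```

Feeding the returned vector back into the display step each frame lets a viewer look around the full-surround environment interactively.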