Image-based rendering of real spaces
First Claim
1. A method, comprising:
receiving image data of a plurality of spaces in a property, the image data including a plurality of images captured from a plurality of viewpoints;
creating a plurality of panoramas of the plurality of spaces using the image data;
rendering a virtual model of a selected space among the plurality of spaces using the plurality of panoramas;
causing a device to display the virtual model with a first label indicating a location of the selected space; and
defining spatial boundaries of the plurality of spaces in the property using the image data, the plurality of spaces including a plurality of rooms in the property,
wherein the image data includes metadata associated with the plurality of images, the metadata indicating capture locations of the images,
wherein rendering the virtual model of the selected space includes rendering a 3D scene of the selected space using the plurality of panoramas, and
wherein rendering the 3D scene includes:
determining camera geometry using the plurality of images;
receiving a selected viewpoint in the selected space;
generating a point cloud of the 3D model;
determining a geometric proxy for the selected viewpoint using the determined camera geometry and the point cloud; and
generating the 3D model using the geometric proxy.
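The rendering steps above (camera geometry, point cloud, geometric proxy for a selected viewpoint) can be illustrated with a minimal sketch. The patent does not specify how the geometric proxy is computed; the plane-fit-to-nearest-neighbors approach and the `fit_geometric_proxy` name below are assumptions for illustration only, not the claimed method.

```python
import numpy as np

def fit_geometric_proxy(point_cloud, viewpoint, k=8):
    """Fit a planar geometric proxy to the k cloud points nearest the
    selected viewpoint. Returns (centroid, unit normal) of the best-fit
    plane, found via SVD of the centered neighbor coordinates.
    NOTE: an illustrative stand-in, not the patent's actual algorithm."""
    dists = np.linalg.norm(point_cloud - viewpoint, axis=1)
    neighbors = point_cloud[np.argsort(dists)[:k]]
    centroid = neighbors.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(neighbors - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Toy point cloud lying near the z = 0 plane (e.g. a floor surface).
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         rng.normal(0, 0.01, 50)])
center, n = fit_geometric_proxy(cloud, viewpoint=np.array([0.0, 0.0, 2.0]))
print(abs(n[2]))  # close to 1: the recovered normal points along z
```

A renderer would then reproject the nearby panorama pixels onto this proxy surface to synthesize the view from the selected viewpoint.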
Abstract
Under an embodiment of the invention, an image capturing and processing system creates 3D image-based rendering (IBR) for real estate. The system provides image-based rendering of real property, the computer system including a user interface for visually presenting an image-based rendering of a real property to a user; and a processor to obtain two or more photorealistic viewpoints from ground truth image data capture locations; combine and process two or more instances of ground truth image data to create a plurality of synthesized viewpoints; and visually present a viewpoint in a virtual model of the real property on the user interface, the virtual model including photorealistic viewpoints and synthesized viewpoints.
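The abstract's core idea, combining ground-truth captures into synthesized viewpoints, can be sketched minimally. Inverse-distance blending of two captures is a deliberately simple stand-in for true image-based rendering, and `synthesize_viewpoint` is a hypothetical name, not an API from the patent.

```python
import numpy as np

def synthesize_viewpoint(img_a, pos_a, img_b, pos_b, target_pos):
    """Blend two ground-truth captures into a synthesized view using
    inverse-distance weights: the capture closer to the target viewpoint
    contributes more. An illustrative simplification of IBR."""
    da = np.linalg.norm(target_pos - pos_a)
    db = np.linalg.norm(target_pos - pos_b)
    wa = db / (da + db)  # closer capture (smaller da) gets the larger weight
    return wa * img_a + (1 - wa) * img_b

a = np.full((2, 2, 3), 0.0)   # dark capture taken at x = 0
b = np.full((2, 2, 3), 1.0)   # bright capture taken at x = 4
mid = synthesize_viewpoint(a, np.array([0.0, 0.0, 0.0]),
                           b, np.array([4.0, 0.0, 0.0]),
                           np.array([1.0, 0.0, 0.0]))
print(mid[0, 0, 0])  # 0.25: three parts near capture, one part far capture
```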
34 Citations
19 Claims
1. A method, comprising:

receiving image data of a plurality of spaces in a property, the image data including a plurality of images captured from a plurality of viewpoints;
creating a plurality of panoramas of the plurality of spaces using the image data;
rendering a virtual model of a selected space among the plurality of spaces using the plurality of panoramas;
causing a device to display the virtual model with a first label indicating a location of the selected space; and
defining spatial boundaries of the plurality of spaces in the property using the image data, the plurality of spaces including a plurality of rooms in the property,
wherein the image data includes metadata associated with the plurality of images, the metadata indicating capture locations of the images,
wherein rendering the virtual model of the selected space includes rendering a 3D scene of the selected space using the plurality of panoramas, and
wherein rendering the 3D scene includes:
determining camera geometry using the plurality of images;
receiving a selected viewpoint in the selected space;
generating a point cloud of the 3D model;
determining a geometric proxy for the selected viewpoint using the determined camera geometry and the point cloud; and
generating the 3D model using the geometric proxy.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. A system, comprising:

a processor;
a memory storing non-transitory program commands, which, when executed by the processor, cause the processor to:
receive image data of a plurality of spaces in a property, the image data including a plurality of images captured from a plurality of viewpoints;
create a plurality of panoramas of the plurality of spaces using the image data;
render a virtual model of a selected space among the plurality of spaces using the plurality of panoramas;
cause a device to display the virtual model with a first label indicating a location of the selected space,
wherein the image data includes metadata associated with the plurality of images, the metadata indicating capture locations of the images,
wherein the program commands cause the processor to render the virtual model of the selected space by rendering a 3D scene of the selected space using the plurality of panoramas, and
wherein the program commands cause the processor to render the 3D scene by:
determining camera geometry using the plurality of images;
receiving a selected viewpoint in the selected space;
generating a point cloud of the 3D model;
determining a geometric proxy for the selected viewpoint using the determined camera geometry and the point cloud; and
generating the 3D model using the geometric proxy.

View Dependent Claims (11, 12)
13. A method, comprising:

receiving image data of a plurality of spaces in a property, the image data including a plurality of images captured from a plurality of viewpoints;
creating a plurality of panoramas of the plurality of spaces using the image data;
rendering a virtual model of a selected space among the plurality of spaces using the plurality of panoramas;
causing a device to display the virtual model with a first label indicating a location of the selected space; and
obtaining depth data of the plurality of spaces,
wherein creating the plurality of panoramas of the plurality of spaces includes generating a plurality of stitched panoramas from the plurality of images, the metadata, and the depth data, and
wherein generating the plurality of stitched panoramas includes:
generating a panorama neighborhood graph of the property;
generating a panorama spanning tree of the panorama neighborhood graph; and
performing panorama bundle adjustment on the panorama spanning tree.

View Dependent Claims (14, 15, 16, 17, 18, 19)
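The stitching pipeline in claim 13 (neighborhood graph, then spanning tree, then bundle adjustment) can be sketched for its first two stages. The patent does not say how the graph or tree is built; linking captures within a distance threshold and extracting a minimum spanning tree via Kruskal's algorithm are plausible assumptions for illustration, and the bundle adjustment stage is omitted here.

```python
import numpy as np

def panorama_spanning_tree(positions, max_link=5.0):
    """Build a panorama neighborhood graph (edges between capture
    locations closer than max_link) and return a minimum spanning tree
    via Kruskal's algorithm with union-find. The tree gives an order in
    which panorama pairs could be aligned before bundle adjustment.
    NOTE: an illustrative sketch, not the patent's actual construction."""
    n = len(positions)
    edges = [(np.linalg.norm(positions[i] - positions[j]), i, j)
             for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(positions[i] - positions[j]) < max_link]
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in sorted(edges):  # cheapest links first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Four capture locations along a hallway: the tree links consecutive rooms.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]])
print(panorama_spanning_tree(pts))  # [(0, 1), (1, 2), (2, 3)]
```

A bundle adjustment step would then jointly refine the pairwise alignments along these tree edges to minimize global stitching error.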
Specification