Image-based rendering of real spaces
First Claim
1. A computer-readable memory medium containing program commands for controlling a computer processor, when executed, to provide image-based rendering of a real property, by performing a method comprising:
obtaining two or more views from capture locations in a plurality of spaces of the real property using ground truth image data, the ground truth image data including images of the real property captured from the capture locations;
defining a plurality of spatial boundaries of the plurality of spaces of the real property, each spatial boundary delineating a volume of one or more of the plurality of spaces, at least one of the plurality of spatial boundaries defining a parcel outline of the real property, and at least one of the plurality of spatial boundaries defining a space located within the parcel outline;
annotating the ground truth image data by annotating at least some of the captured images of the real property with information indicating a capture location of a corresponding image relative to a corresponding spatial boundary, each capture location identifying a location within the corresponding spatial boundary defined space or identifying the space delineated by the corresponding spatial boundary;
combining and processing two or more instances of the ground truth image data to create a plurality of synthesized views, wherein the synthesized views comprise one or more of composites, transitions, or projections derived from processing ground truth image data, and wherein the synthesized views are associated with corresponding spaces of the plurality of spaces of the real property;
generating and rendering a virtual model of a current space within the plurality of spaces of the real property from a perspective of a current view, the current view being one of the plurality of views obtained using ground truth image data or one of the plurality of synthesized views derived from processing ground truth data and associated with the current space;
identifying the current space in the real property where the current view is located using the capture locations indicated by the annotated ground truth image data;
visually presenting, on the user interface, at least a portion of the virtual model, a map user interface element, and a text user interface element, the map user interface element indicating a position of the current view in the current space, the text user interface element including a label identifying the current space in the real property within which the current view is located;
wherein the portion of the virtual model, the map user interface element, and the text user interface element are functionally linked based on the position of the current view in the real property.
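The claim's space-identification step, resolving an annotated capture location against nested spatial boundaries such as a parcel outline and a space within it, can be sketched in a few lines. This is a minimal illustration, not the patent's method: all names are hypothetical, and the axis-aligned rectangular boundaries and innermost-boundary rule are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpatialBoundary:
    label: str
    bounds: tuple  # axis-aligned footprint (min_x, min_y, max_x, max_y); a simplification

    def contains(self, x, y):
        min_x, min_y, max_x, max_y = self.bounds
        return min_x <= x <= max_x and min_y <= y <= max_y

    def area(self):
        min_x, min_y, max_x, max_y = self.bounds
        return (max_x - min_x) * (max_y - min_y)

@dataclass
class GroundTruthImage:
    image_id: str
    capture_xy: tuple  # annotated capture location of this image

def identify_current_space(image, boundaries):
    """Return the label of the smallest boundary containing the capture location."""
    x, y = image.capture_xy
    hits = [b for b in boundaries if b.contains(x, y)]
    if not hits:
        return None
    # innermost (smallest-area) boundary wins, so a room inside the
    # parcel outline resolves to the room rather than the parcel
    return min(hits, key=lambda b: b.area()).label

# a parcel outline plus a space located within it, as in the claim
parcel = SpatialBoundary("parcel", (0, 0, 50, 50))
kitchen = SpatialBoundary("kitchen", (10, 10, 20, 20))
img = GroundTruthImage("IMG_0001", (12, 15))
print(identify_current_space(img, [parcel, kitchen]))  # kitchen
```

A real implementation would use polygonal or volumetric boundaries and the full annotation schema; the sketch only shows how a capture location can identify the space delineated by a spatial boundary.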
1 Assignment
1 Petition
Abstract
In an embodiment of the invention, an image capturing and processing system creates 3D image-based renderings (IBR) for real estate. The system provides image-based rendering of real property and includes a user interface for visually presenting an image-based rendering of a real property to a user, and a processor to: obtain two or more photorealistic viewpoints from ground truth image data capture locations; combine and process two or more instances of ground truth image data to create a plurality of synthesized viewpoints; and visually present a viewpoint in a virtual model of the real property on the user interface, the virtual model including photorealistic viewpoints and synthesized viewpoints.
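The abstract's combining of two or more instances of ground truth image data into synthesized viewpoints can be illustrated with the simplest possible synthesis, a linear cross-fade between two captured views used as a transition frame. Real IBR systems use warping and projection rather than per-pixel blending; this sketch and its names are hypothetical:

```python
def synthesize_transition(view_a, view_b, t):
    """Linear blend of two ground-truth views as a trivial synthesized frame.

    Views are flat lists of pixel intensities; t in [0, 1] moves the
    synthesized viewpoint from view_a (t=0) toward view_b (t=1).
    """
    return [(1 - t) * a + t * b for a, b in zip(view_a, view_b)]

frame = synthesize_transition([0, 100, 200], [100, 100, 0], 0.5)
print(frame)  # [50.0, 100.0, 100.0]
```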
81 Citations
23 Claims
1. A computer-readable memory medium containing program commands for controlling a computer processor, when executed, to provide image-based rendering of a real property, by performing a method comprising:
obtaining two or more views from capture locations in a plurality of spaces of the real property using ground truth image data, the ground truth image data including images of the real property captured from the capture locations;
defining a plurality of spatial boundaries of the plurality of spaces of the real property, each spatial boundary delineating a volume of one or more of the plurality of spaces, at least one of the plurality of spatial boundaries defining a parcel outline of the real property, and at least one of the plurality of spatial boundaries defining a space located within the parcel outline;
annotating the ground truth image data by annotating at least some of the captured images of the real property with information indicating a capture location of a corresponding image relative to a corresponding spatial boundary, each capture location identifying a location within the corresponding spatial boundary defined space or identifying the space delineated by the corresponding spatial boundary;
combining and processing two or more instances of the ground truth image data to create a plurality of synthesized views, wherein the synthesized views comprise one or more of composites, transitions, or projections derived from processing ground truth image data, and wherein the synthesized views are associated with corresponding spaces of the plurality of spaces of the real property;
generating and rendering a virtual model of a current space within the plurality of spaces of the real property from a perspective of a current view, the current view being one of the plurality of views obtained using ground truth image data or one of the plurality of synthesized views derived from processing ground truth data and associated with the current space;
identifying the current space in the real property where the current view is located using the capture locations indicated by the annotated ground truth image data;
visually presenting, on the user interface, at least a portion of the virtual model, a map user interface element, and a text user interface element, the map user interface element indicating a position of the current view in the current space, the text user interface element including a label identifying the current space in the real property within which the current view is located;
wherein the portion of the virtual model, the map user interface element, and the text user interface element are functionally linked based on the position of the current view in the real property.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
18. A computer-readable memory medium containing program commands for controlling a computer processor, when executed, to provide image-based rendering of a real property, by performing a method comprising:
obtaining two or more views from capture locations in a plurality of spaces of the real property using ground truth image data, the ground truth image data including images of the real property captured from the capture locations;
defining a plurality of spatial boundaries of the plurality of spaces of the real property, each spatial boundary delineating a volume of one or more of the plurality of spaces, at least one of the plurality of spatial boundaries defining a parcel outline of the real property, and at least one of the plurality of spatial boundaries defining a space located within the parcel outline;
annotating the ground truth image data by annotating at least some of the captured images of the real property with information indicating a capture location of a corresponding image relative to a corresponding spatial boundary, each capture location identifying a location within the corresponding spatial boundary defined space or identifying the space delineated by the corresponding spatial boundary;
combining and processing two or more instances of the ground truth image data to create a plurality of synthesized views, wherein the synthesized views comprise one or more composites, transitions, or projections derived from processing ground truth image data, and wherein the synthesized views are associated with corresponding spaces of the plurality of spaces of the real property, by:
constructing one or more point clouds using the ground truth image data; and
generating one or more geometric proxies from the constructed point clouds, each geometric proxy comprising a model of geometric data with registered captured images or texture maps at associated points in the model of geometric data; and
generating and rendering a virtual model of a current space within the plurality of spaces of the real property from a perspective of a current view, the current view associated with the current space and being one of the plurality of views obtained using ground truth image data or one of the plurality of synthesized views derived from processing ground truth data;
identifying the current space in the real property where the current view is located using the capture locations indicated by the annotated ground truth image data;
visually presenting a portion of the rendered virtual model of the current space in a user interface as a navigable first panorama; and
in response to receiving a user indication to navigate to a second location within the plurality of spaces of the real property not contiguous to the current space, determining a second panorama located nearby the second location using a point in the geometric proxy that corresponds to the second location; and
visually presenting a transition to the determined second panorama resulting in visual presentation of the second panorama.
View Dependent Claims (19, 20, 21, 22, 23)
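Claim 18's navigation step, determining a second panorama located nearby the second location using a point in the geometric proxy, amounts to a nearest-neighbor lookup over proxy points that have panoramas registered to them. A minimal sketch with entirely hypothetical data and names; a practical system would index the proxy (e.g., with a k-d tree) instead of scanning linearly:

```python
import math

# geometric proxy reduced to sampled points, each registered to a panorama id
proxy_points = [
    ((0.0, 0.0, 0.0), "pano_entry"),
    ((8.0, 1.0, 0.0), "pano_kitchen"),
    ((15.0, -2.0, 0.0), "pano_garage"),
]

def nearest_panorama(target_xyz, points):
    """Pick the panorama registered at the proxy point closest to the target."""
    return min(points, key=lambda p: math.dist(p[0], target_xyz))[1]

# user asks to jump to a non-contiguous location near (14, 0, 0)
print(nearest_panorama((14.0, 0.0, 0.0), proxy_points))  # pano_garage
```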
Specification