Mobile augmented reality system
First Claim
1. A method comprising:
detecting an object in a live view image captured by an image sensor;
generating a three-dimensional (3D) model for the object;
determining a position and orientation of the image sensor by identifying segments of the object in the live view image;
extracting visual features of the object using the position and orientation of the image sensor;
generating a first two-dimensional (2D) projection mask corresponding to a first user-selected point of interest (POI) included in the 3D model;
generating a second 2D projection mask corresponding to a second user-selected POI included in the 3D model; and
augmenting the live view image with the extracted visual features of the object and the first and second projection masks to display the first and second POIs within the object.
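The claimed 2D projection masks can be illustrated with a pinhole-camera sketch: given the image sensor's pose (rotation R, translation t) and intrinsics K, a user-selected 3D POI from the model projects to a pixel, around which a binary mask is built. All names and values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def project_poi(poi_3d, R, t, K):
    """Project a 3D point (object coordinates) to pixel coordinates."""
    cam = R @ poi_3d + t        # object frame -> camera frame
    uvw = K @ cam               # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]     # perspective divide

def make_projection_mask(poi_2d, shape, radius=5):
    """Binary 2D mask marking pixels within `radius` of the projected POI."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    return (xs - poi_2d[0]) ** 2 + (ys - poi_2d[1]) ** 2 <= radius ** 2

# Example intrinsics and pose (hypothetical): camera 2 units in front of the object.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
poi = np.array([0.0, 0.0, 0.0])     # POI at the object origin

uv = project_poi(poi, R, t, K)
mask = make_projection_mask(uv, (480, 640))
print(uv)   # -> [320. 240.], the image center
```

A second POI would get its own mask the same way; the augmentation step then composites both masks over the live view frame.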
Abstract
Embodiments of the invention relate to systems, apparatuses, and methods that provide image data, augmented with related data, for display on a mobile computing device. Embodiments of the invention display a live view augmented with information identifying an object amongst other objects, and may utilize related data, such as 3D point cloud data, image data, and location data for the object, to obtain the specific location of the object within the live view. Embodiments of the invention may further display a live view with augmented data that is three-dimensionally consistent with the position and orientation of the image sensor of the mobile computing device.
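The abstract's pose-determination step (finding the image sensor's position and orientation from known object geometry) can be sketched with the Direct Linear Transform (DLT), a standard way to recover a camera projection matrix from 3D-to-2D correspondences. The patent does not specify this method; it is shown only to make the step concrete, and all data below is synthetic.

```python
import numpy as np

def dlt_projection_matrix(pts_3d, pts_2d):
    """Estimate the 3x4 projection matrix P such that x ~ P @ [X, Y, Z, 1]."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts_3d, pts_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 4)   # null-space vector = flattened P (up to scale)

def reproject(P, pts_3d):
    h = P @ np.c_[pts_3d, np.ones(len(pts_3d))].T
    return (h[:2] / h[2]).T

# Synthetic ground truth: camera 4 units back along the Z axis.
P_true = np.array([[500.0, 0.0, 320.0, 0.0],
                   [0.0, 500.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 4.0]])
rng = np.random.default_rng(0)
pts_3d = rng.uniform(-1, 1, (8, 3))
pts_2d = reproject(P_true, pts_3d)

P_est = dlt_projection_matrix(pts_3d, pts_2d)
err = np.abs(reproject(P_est, pts_3d) - pts_2d).max()
print(err)   # near-zero reprojection error on noise-free data
```

In practice the correspondences would come from the identified object segments, and a robust PnP solver with outlier rejection would replace the bare least-squares fit.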
26 Claims
1. A method comprising:
detecting an object in a live view image captured by an image sensor;
generating a three-dimensional (3D) model for the object;
determining a position and orientation of the image sensor by identifying segments of the object in the live view image;
extracting visual features of the object using the position and orientation of the image sensor;
generating a first two-dimensional (2D) projection mask corresponding to a first user-selected point of interest (POI) included in the 3D model;
generating a second 2D projection mask corresponding to a second user-selected POI included in the 3D model; and
augmenting the live view image with the extracted visual features of the object and the first and second projection masks to display the first and second POIs within the object. (Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9)
10. An apparatus comprising:
a processor;
a memory; and
an augmentation module included in the memory and executed via the processor to:
detect an object in a live view image captured by an image sensor;
generate a three-dimensional (3D) model for the object;
determine a position and orientation of the image sensor by identifying segments of the object in the live view image;
extract visual features of the object using the position and orientation of the image sensor;
generate a first two-dimensional (2D) projection mask corresponding to a first user-selected point of interest (POI) included in the 3D model;
generate a second 2D projection mask corresponding to a second user-selected POI included in the 3D model; and
augment the live view image with the extracted visual features of the object and the first and second projection masks to display the first and second POIs within the object. (Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19)
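The apparatus of claim 10 can be read as a software module that chains the claimed operations in order. The skeleton below is a hypothetical sketch: every method body is a placeholder, since the claim defines the sequence of operations, not their implementation.

```python
import numpy as np

class AugmentationModule:
    """Placeholder module mirroring the operation sequence of claim 10."""

    def detect_object(self, frame):
        # Stub: report the whole frame as the detected object region.
        return {"bbox": (0, 0, frame.shape[1], frame.shape[0])}

    def estimate_pose(self, frame, model):
        # Stub: identity rotation, zero translation.
        return np.eye(3), np.zeros(3)

    def project_poi_mask(self, poi, pose, shape):
        # Stub: mark only the center pixel as the POI's 2D projection mask.
        mask = np.zeros(shape, dtype=bool)
        mask[shape[0] // 2, shape[1] // 2] = True
        return mask

    def augment(self, frame, pois, model):
        self.detect_object(frame)
        pose = self.estimate_pose(frame, model)
        masks = [self.project_poi_mask(p, pose, frame.shape[:2]) for p in pois]
        out = frame.copy()
        for m in masks:
            out[m] = 255   # paint POI pixels white onto the live view
        return out, masks

frame = np.zeros((480, 640), dtype=np.uint8)
module = AugmentationModule()
augmented, masks = module.augment(frame, pois=[0, 1], model=None)
print(augmented[240, 320])   # -> 255, the painted center pixel
```

Real implementations would replace each stub with the corresponding detection, pose-estimation, and projection routines.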
20. An article of manufacture comprising a non-transitory machine-readable storage medium that provides instructions that, if executed by a machine, will cause the machine to perform operations comprising:
detecting an object in a live view image captured by an image sensor;
generating a three-dimensional (3D) model for the object;
determining a position and orientation of the image sensor by identifying segments of the object in the live view image;
extracting visual features of the object using the position and orientation of the image sensor;
generating a first two-dimensional (2D) projection mask corresponding to a first user-selected point of interest (POI) included in the 3D model;
generating a second 2D projection mask corresponding to a second user-selected POI included in the 3D model; and
augmenting the live view image with the extracted visual features of the object and the first and second projection masks to display the first and second POIs within the object. (Dependent claims: 21, 22, 23, 24, 25, 26)
Specification