Systems and methods for computer assisted operation
Abstract
Systems and methods are disclosed for rendering reality for an object by performing motion tracking and area learning of an environment; selecting a pattern or color from a plurality of product variations; blending the pattern or color of the object and the environment; and displaying the color on the object as an augmented reality view.
18 Claims
1. A method for rendering reality, comprising:
performing tracking and area learning of an environment by motion-tracking visual features of the environment using concurrent odometry and mapping (COM), including feature points and planes, to estimate a pose of the camera relative to the environment, understanding the environment with points or planes using an accelerometer sensor, and estimating light or color in the environment;
capturing images from a plurality of angles of the environment;
acquiring sensor data from sensors and optimizing features extracted from each image and sensor data, where a feature conveys data unique to the image at a specific pixel location;
selecting a pattern or color from a plurality of product variations;
blending the pattern or color of the object and the environment;
displaying the color on the object as an augmented reality view;
scaling the 3D model of the product based on dimensions of the environment and the product;
projecting the product in the environment;
selecting a best fit from jewelry variations, shoe variations, clothing variations, apparel variations or footwear variations; and
determining content to be presented based on previous activities or selections while generating an augmented or virtual reality display of the new product in the environment.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
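The scaling step of claim 1 (fitting the product's 3D model to the measured dimensions of the environment) can be sketched in a few lines. This is an illustrative reading, not the patent's implementation; the `BoundingBox` type, function names, and the uniform-scale strategy are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Axis-aligned extents in metres: width, height, depth.
    # (Hypothetical representation; the patent does not specify one.)
    width: float
    height: float
    depth: float

def scale_factor(product: BoundingBox, region: BoundingBox) -> float:
    """Uniform scale that fits the product's bounding box inside the
    target region of the environment while preserving aspect ratio."""
    return min(region.width / product.width,
               region.height / product.height,
               region.depth / product.depth)

def scale_model(vertices, s):
    """Apply the uniform scale to a list of (x, y, z) model vertices."""
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]
```

Taking the minimum ratio across all three axes guarantees the scaled model never exceeds the region in any dimension, which is one common way to realize "scaling the 3D model ... based on dimensions of the environment and the product."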
16. A method for selecting a product, comprising:
selecting one or more 3D models from a product database;
motion-tracking visual features of the environment using concurrent odometry and mapping (COM), including feature points and planes, to estimate a pose of the camera relative to the environment, understanding the environment with points or planes using an accelerometer sensor, and estimating light or color in the environment;
capturing images from a plurality of angles of the environment;
acquiring sensor data from sensors and optimizing features extracted from each image and sensor data, where a feature conveys data unique to the image at a specific pixel location;
selecting a pattern or color from a plurality of product variations;
blending the pattern or color of the product and the environment;
displaying the color on the product as an augmented reality view;
scaling the 3D model of the product based on dimensions of the environment and the product;
determining content to be presented based on previous activities or selections while projecting the product in the environment;
selecting a best fit from jewelry variations, shoe variations, clothing variations, apparel variations or footwear variations; and
generating an augmented or virtual reality display of the new product in the environment.
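The blending step ("blending the pattern or color of the product and the environment" using the estimated light) can be sketched as a per-pixel alpha blend with a light-modulated product color. The function name, the scalar `light` estimate, and 8-bit RGB tuples are assumptions for illustration only.

```python
def blend(product_rgb, env_rgb, alpha, light=1.0):
    """Alpha-blend the product colour over an environment pixel.

    product_rgb, env_rgb: (r, g, b) tuples with 0-255 channels.
    alpha: product opacity in [0, 1].
    light: estimated scene light intensity (hypothetical scalar model),
           applied to the product colour before compositing.
    """
    lit = tuple(min(255, int(c * light)) for c in product_rgb)
    return tuple(int(alpha * p + (1 - alpha) * e)
                 for p, e in zip(lit, env_rgb))
```

At `alpha=1.0` the product color fully replaces the environment pixel; intermediate values let the environment show through, which is what makes the composited view read as "augmented" rather than pasted-on.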
17. A method for fitting, comprising:
selecting one or more 3D models from a product database;
capturing images of a user view relative to a reference frame using a mobile camera, including:
motion-tracking visual features of the environment using concurrent odometry and mapping (COM), including feature points and planes, to estimate a pose of the camera relative to the environment, understanding the environment with points or planes using an accelerometer sensor, and estimating light or color in the environment;
capturing images from a plurality of angles of the environment;
acquiring sensor data from sensors and optimizing features extracted from each image and sensor data, where a feature conveys data unique to the image at a specific pixel location;
selecting a pattern or color from a plurality of product variations;
blending the pattern or color of the object and the environment; and
displaying the color on the object as an augmented reality view;
generating an updated model of the user view by selecting a closest 3D model to the user view and matching points on the closest 3D product model to one or more predetermined points on the user view;
scaling the 3D model of the product based on dimensions of the environment and the product;
determining content to be presented based on previous activities or selections while projecting the product in the environment; and
selecting a best fit from jewelry variations, shoe variations, clothing variations, apparel variations or footwear variations.
View Dependent Claims (18)
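Claim 17's "selecting a closest 3D model ... and matching points ... to one or more predetermined points on the user view" can be read as a nearest-fit search over landmark points. The sketch below scores each candidate model by the summed distance between its landmarks and the observed points; the dictionary layout, point format, and error metric are assumptions, not the patent's method.

```python
import math

def fit_error(model_points, view_points):
    """Sum of Euclidean distances between corresponding landmark points
    on a candidate 3D model and the predetermined points in the view."""
    return sum(math.dist(m, v) for m, v in zip(model_points, view_points))

def select_best_fit(models, view_points):
    """Pick the candidate model whose landmarks lie closest to the
    points observed in the user view (hypothetical data layout:
    models maps a name to its list of landmark coordinates)."""
    return min(models, key=lambda name: fit_error(models[name], view_points))
```

A production system would first align the landmark sets (e.g. estimate a rigid transform) before comparing distances; the sketch assumes the points are already expressed in a common reference frame, as claim 17's "relative to a reference frame" language suggests.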
Specification