Augmented reality viewing of printer image processing stages
First Claim
1. A method of generating an Augmented Reality (AR) display environment, comprising:
establishing a data connection at a mobile device, wherein the data connection connects to a printer image processing pipeline comprising printer image data, and the mobile device comprises an imaging sensor and a display;
receiving the printer image data on the mobile device from the printer image processing pipeline, wherein the printer image data comprises pixels;
generating an augmentation object of a virtual image based on the received printer image data, comprising:
calculating two-dimensional (2D) coordinates for the four corners of the virtual image, wherein the virtual image is rectangular in shape and includes four corners, comprising:
determining the distance from each corner of the virtual image to an adjacent corner of the virtual image, wherein the distance is the visible dimension in pixels of the display of the mobile device used to display the virtual image;
calculating apparent 2D coordinates of each corner of the virtual image, wherein the apparent 2D coordinates are percentages relative to the longest visible dimension of the virtual image in the AR display environment on the display; and
calculating a pixel resolution from the apparent 2D coordinates, wherein the pixel resolution includes the apparent 2D coordinates of each corner of the virtual image;
receiving live video data of the physical environment from the imaging sensor on the mobile device, wherein the physical environment includes physical objects with detectable features, and the content of the live video data collected by the imaging sensor is at least partially directed by a user input;
generating a local three-dimensional (3D) model of the physical environment utilizing the live video data;
receiving mobile device tracking data from the mobile device;
adapting the local 3D model based on the mobile device tracking data, thereby creating an adapted local 3D model;
combining the augmentation object with the adapted local 3D model to create the AR display environment; and
configuring the mobile device to display the AR display environment.
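The corner-coordinate steps recited above can be illustrated with a short sketch. This is one possible reading of the claim language, not the claimed implementation; the function name, the corner ordering, and the use of Euclidean edge length are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: the claim does not prescribe this code.
# apparent_corner_coordinates is a hypothetical name.

def apparent_corner_coordinates(corners_px):
    """Given the four corner positions of a rectangular virtual image,
    in display pixels, return apparent 2D coordinates expressed as
    percentages relative to the longest visible dimension."""
    def dist(a, b):
        # Visible dimension, in display pixels, between two adjacent corners.
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # Distance from each corner to an adjacent corner (the four edges).
    edges = [dist(corners_px[i], corners_px[(i + 1) % 4]) for i in range(4)]
    longest = max(edges)

    # Apparent 2D coordinates: each corner as a percentage of the longest
    # visible dimension of the virtual image on the display.
    return [(100.0 * x / longest, 100.0 * y / longest) for (x, y) in corners_px]

# A 400 x 300 px virtual image with its top-left corner at the display origin:
corners = [(0, 0), (400, 0), (400, 300), (0, 300)]
print(apparent_corner_coordinates(corners))
# → [(0.0, 0.0), (100.0, 0.0), (100.0, 75.0), (0.0, 75.0)]
```

Here the longest visible edge is 400 px, so every coordinate is scaled into the 0-100% range of that dimension, which matches the claim's "percentages relative to the longest visible dimension."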
Abstract
A method of generating an Augmented Reality (AR) display environment includes establishing a data connection at a mobile device to a printer image processing pipeline and generating an augmentation object of a virtual image based on the received printer image data. Live video data of the physical environment is received from an imaging sensor on the mobile device, and a local 3D model of the physical environment is generated utilizing the live video data. Device tracking data from the mobile device is used to adapt the local 3D model. The augmentation object is combined with the adapted local 3D model to create an AR display environment, and the mobile device is configured to display the AR display environment. A mobile device includes a processor and memory with instructions to configure the device to generate an AR display environment.
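The end-to-end flow the abstract describes can be sketched as a single pass through stub stages. Every function and key name below is hypothetical; the stubs stand in for the real printer pipeline, imaging sensor, and tracking APIs, which the abstract does not specify.

```python
# Hypothetical data-flow sketch of the abstract's pipeline; all names
# are illustrative stubs, not from the patent.

def fetch_printer_image():
    return {"pixels": [[0] * 4 for _ in range(3)]}   # stub: printer image data

def make_augmentation(image):
    return {"virtual_image": image["pixels"]}        # augmentation object

def capture_live_video():
    return {"frame": "physical scene"}               # stub: imaging-sensor frame

def build_local_3d_model(video):
    return {"model": video["frame"]}                 # local 3D model from video

def read_tracking_data():
    return {"pose": (0.0, 0.0, 0.0)}                 # stub: device tracking data

def adapt_model(model, tracking):
    return {**model, "pose": tracking["pose"]}       # adapted local 3D model

def combine(augmentation, model):
    return {**model, **augmentation}                 # AR display environment

def run_once():
    augmentation = make_augmentation(fetch_printer_image())
    model = build_local_3d_model(capture_live_video())
    adapted = adapt_model(model, read_tracking_data())
    return combine(augmentation, adapted)

env = run_once()
print(sorted(env))
# → ['model', 'pose', 'virtual_image']
```

The point of the sketch is the ordering: the augmentation object and the adapted local 3D model are built independently and only merged in the final combine step, mirroring the claim structure.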
11 Citations
18 Claims
1. A method of generating an Augmented Reality (AR) display environment, comprising: (set out in full under First Claim, above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
12. A system, comprising:
a processor; an imaging sensor; a display; and a memory storing instructions that, when executed by the processor, configure the system to:
establish a data connection at a mobile device, wherein the data connection connects to a printer image processing pipeline comprising printer image data;
receive printer image data on the mobile device from the printer image processing pipeline, wherein the printer image data comprises pixels;
generate an augmentation object of a virtual image based on the received printer image data, comprising:
calculating two-dimensional (2D) coordinates for the four corners of the virtual image, wherein the virtual image is rectangular in shape and includes four corners, comprising:
determining the distance from each corner of the virtual image to an adjacent corner of the virtual image, wherein the distance is the visible dimension in pixels of the display of the mobile device used to display the virtual image;
calculating apparent 2D coordinates of each corner of the virtual image, wherein the apparent 2D coordinates are percentages relative to the longest visible dimension of the virtual image in the AR display environment on the display; and
calculating a pixel resolution from the apparent 2D coordinates, wherein the pixel resolution includes the apparent 2D coordinates of each corner of the virtual image;
receive live video data of the physical environment from the imaging sensor on the mobile device, wherein the physical environment includes physical objects with detectable features, and the content of the live video data collected by the imaging sensor is at least partially directed by a user input;
generate a local three-dimensional (3D) model of the physical environment utilizing the live video data;
receive mobile device tracking data from the mobile device;
adapt the local 3D model based on the mobile device tracking data, thereby creating an adapted local 3D model;
combine the augmentation object with the adapted local 3D model to create an AR environment; and
configure the mobile device to display the AR environment. - View Dependent Claims (13, 14, 15, 16, 17, 18)
Specification