SYSTEM AND METHOD FOR AUTOMATIC ALIGNMENT AND PROJECTION MAPPING
Abstract
A system and method for automatic alignment and projection mapping are provided. A projector and at least two cameras are mounted with fields of view that overlap a projection area on a three-dimensional environment. A computing device: controls the projector to project structured light patterns that uniquely illuminate portions of the environment; acquires images of the patterns from the cameras; generates a two-dimensional mapping of the portions between projector space and camera space by processing the images and correlated patterns; generates a cloud of points representing the environment using the mapping and camera positions; determines a projector location, orientation and lens characteristics from the cloud; positions a virtual camera relative to a virtual three-dimensional environment corresponding to the environment, parameters of the virtual camera respectively matching parameters of the projector; and controls the projector to project based on the virtual location, orientation and lens characteristics of the virtual camera.
20 Claims
1. A system comprising:
a computing device;
a projector; and
at least two cameras, each of the projector and the at least two cameras mounted relative to a three-dimensional environment with respective fields of view at least partially overlapping a projection area of the projector on the three-dimensional environment;
the computing device configured to:
control the projector to sequentially project one or more structured light patterns configured to uniquely illuminate different portions of the three-dimensional environment;
acquire one or more respective images from each of the at least two cameras while the projector is projecting the one or more structured light patterns, each of the one or more respective images correlated with a given respective structured light pattern;
generate a two-dimensional mapping of the different portions of the three-dimensional environment between a projector space and a camera space by processing the respective images and correlated given respective structured light patterns;
generate a cloud of points representing the three-dimensional environment using the two-dimensional mapping and given positions of the at least two cameras relative to the three-dimensional environment;
determine a location, an orientation and lens characteristics of the projector relative to the three-dimensional environment, from the cloud of points;
position a virtual camera relative to a virtual three-dimensional environment, corresponding to the three-dimensional environment, a virtual location, a virtual orientation and virtual lens characteristics of the virtual camera respectively matching the location, the orientation and the lens characteristics of the projector; and
control the projector to project based on the virtual location, the virtual orientation and the virtual lens characteristics of the virtual camera.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
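Claim 1's "structured light patterns configured to uniquely illuminate different portions" of the environment are, in practice, commonly realized as a binary Gray-code stripe sequence, in which every projector column receives a distinct on/off code across the projected frames, so each camera pixel can be decoded back to the projector column that lit it. The patent does not disclose a specific pattern; the sketch below, with invented function names, is one conventional way to implement this step.

```python
def gray_code_patterns(width, bits=None):
    """Generate vertical Gray-code stripe patterns: patterns[k][x] is the
    on/off state of projector column x in the k-th projected frame."""
    if bits is None:
        bits = max(1, (width - 1).bit_length())
    patterns = []
    for k in range(bits):
        # Gray code of column x is x ^ (x >> 1); frame k shows bit
        # position (bits - 1 - k), most significant bit first.
        patterns.append([((x ^ (x >> 1)) >> (bits - 1 - k)) & 1
                         for x in range(width)])
    return patterns

def decode_column(observed_bits):
    """Recover the projector column from the on/off sequence a camera
    pixel observed across the frames (inverse Gray code)."""
    g = 0
    for b in observed_bits:
        g = (g << 1) | b
    x = 0
    while g:        # Gray-to-binary: XOR of all right shifts of g
        x ^= g
        g >>= 1
    return x
```

Decoding every camera pixel this way yields the per-pixel projector-column correspondences from which the claimed two-dimensional mapping between projector space and camera space can be assembled (a second, horizontal stripe sequence recovers the row in the same manner).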
12. A method comprising:
in a system comprising:
a computing device;
a projector; and
at least two cameras, each of the projector and the at least two cameras mounted relative to a three-dimensional environment with respective fields of view at least partially overlapping a projection area of the projector on the three-dimensional environment,
controlling the projector, using the computing device, to sequentially project one or more structured light patterns configured to uniquely illuminate different portions of the three-dimensional environment;
acquiring one or more respective images from each of the at least two cameras, using the computing device, while the projector is projecting the one or more structured light patterns, each of the one or more respective images correlated with a given respective structured light pattern;
generating, using the computing device, a two-dimensional mapping of the different portions of the three-dimensional environment between a projector space and a camera space by processing the respective images and correlated given respective structured light patterns;
generating, using the computing device, a cloud of points representing the three-dimensional environment using the two-dimensional mapping and given positions of the at least two cameras relative to the three-dimensional environment;
determining, using the computing device, a location, an orientation and lens characteristics of the projector relative to the three-dimensional environment, from the cloud of points;
positioning, using the computing device, a virtual camera relative to a virtual three-dimensional environment, corresponding to the three-dimensional environment, a virtual location, a virtual orientation and virtual lens characteristics of the virtual camera respectively matching the location, the orientation and the lens characteristics of the projector; and
controlling the projector, using the computing device, to project based on the virtual location, the virtual orientation and the virtual lens characteristics of the virtual camera.
Dependent claims: 13, 14, 15, 16, 17, 18, 19.
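Generating "a cloud of points representing the three-dimensional environment using the two-dimensional mapping and given positions of the at least two cameras" amounts to triangulation: each decoded correspondence gives one viewing ray per camera, and the rays' near-intersection is one 3-D point of the cloud. A minimal midpoint-triangulation sketch, illustrative only and not taken from the patent (function and parameter names are invented):

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Given each camera's centre o and the unit viewing ray d through
    the matched pixel, return the 3-D point closest to both rays
    (the midpoint of their common perpendicular)."""
    def dot(a, b):   return sum(x * y for x, y in zip(a, b))
    def sub(a, b):   return [x - y for x, y in zip(a, b)]
    def add(a, b):   return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # approaches 0 when the rays are parallel
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    p1 = add(o1, scale(d1, t1))
    p2 = add(o2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)
```

Run once per decoded correspondence, this produces the claimed cloud of points; the "given positions of the at least two cameras" supply the ray origins, and the per-pixel mapping supplies matched ray directions.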
20. A non-transitory computer-readable medium storing a computer program, wherein execution of the computer program is for:
in a system comprising:
a computing device;
a projector; and
at least two cameras, each of the projector and the at least two cameras mounted relative to a three-dimensional environment with respective fields of view at least partially overlapping a projection area of the projector on the three-dimensional environment,
controlling the projector, using the computing device, to sequentially project one or more structured light patterns configured to uniquely illuminate different portions of the three-dimensional environment;
acquiring one or more respective images from each of the at least two cameras, using the computing device, while the projector is projecting the one or more structured light patterns, each of the one or more respective images correlated with a given respective structured light pattern;
generating, using the computing device, a two-dimensional mapping of the different portions of the three-dimensional environment between a projector space and a camera space by processing the respective images and correlated given respective structured light patterns;
generating, using the computing device, a cloud of points representing the three-dimensional environment using the two-dimensional mapping and given positions of the at least two cameras relative to the three-dimensional environment;
determining, using the computing device, a location, an orientation and lens characteristics of the projector relative to the three-dimensional environment, from the cloud of points;
positioning, using the computing device, a virtual camera relative to a virtual three-dimensional environment, corresponding to the three-dimensional environment, a virtual location, a virtual orientation and virtual lens characteristics of the virtual camera respectively matching the location, the orientation and the lens characteristics of the projector; and
controlling the projector, using the computing device, to project based on the virtual location, the virtual orientation and the virtual lens characteristics of the virtual camera.
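The final steps, positioning a virtual camera whose location, orientation and lens characteristics match the recovered projector parameters, correspond in rendering terms to building the virtual camera's view basis and field of view. A hypothetical sketch, assuming the projector pose and a focal length in pixels have already been estimated from the point cloud (the helper names are invented for illustration):

```python
import math

def perspective_fov(focal_px, image_h_px):
    """Vertical field of view (radians) implied by a recovered focal
    length in pixels: the 'lens characteristic' the virtual camera must
    match so its frustum equals the projector's."""
    return 2.0 * math.atan(image_h_px / (2.0 * focal_px))

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed look-at basis: returns (right, up, forward) unit
    axes so a virtual camera at `eye` matches a projector aimed at
    `target`."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    f = norm(sub(target, eye))      # forward: the viewing direction
    r = norm(cross(f, list(up)))    # right: perpendicular to forward and up
    u = cross(r, f)                 # true up: orthogonalized against f and r
    return r, u, f
```

Rendering the virtual environment through a camera configured this way, and sending the result to the projector, realizes the claim's final limitation: content projected onto the physical environment lands where the corresponding virtual geometry sits.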
Specification