Indoor navigation via multi-beam laser projection

Abstract
An indoor navigation system is based on a multi-beam laser projector, a set of calibrated cameras, and a processor that uses knowledge of the projector design and data on laser spot locations observed by the cameras to solve the space resection problem to find the location and orientation of the projector.
20 Citations
Optical laser guidance system and method  
Patent #
US 7,899,618 B2
Filed 04/09/2009

Current Assignee
The Boeing Co.

Sponsoring Entity
The Boeing Co.

Methods and systems for position sensing  
Patent #
US 7,774,083 B2
Filed 10/24/2007

Current Assignee
The Boeing Co.

Sponsoring Entity
The Boeing Co.

Methods and apparatus for position estimation using reflected light sources  
Patent #
US 7,720,554 B2
Filed 03/25/2005

Current Assignee
iRobot Corporation

Sponsoring Entity
Evolution Robotics Inc.

Method and System for Determining the Position of a Receiver Unit  
Patent #
US 20080204699A1
Filed 05/16/2006

Current Assignee
Leica Geosystems Holdings AG

Sponsoring Entity
Leica Geosystems Holdings AG

Methods and systems for position sensing of components in a manufacturing operation  
Patent #
US 7,305,277 B2
Filed 03/31/2005

Current Assignee
The Boeing Co.

Sponsoring Entity
The Boeing Co.

Laser measuring method and laser measuring system  
Patent #
US 20050211882A1
Filed 03/03/2005

Current Assignee
Topcon KK

Sponsoring Entity
Topcon KK

POSITION MEASUREMENT APPARATUS AND METHOD USING LASER  
Patent #
US 20040001197A1
Filed 11/07/2002

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Robotic manufacturing and assembly with relative radio positioning using radio based location determination  
Patent #
US 20030208302A1
Filed 05/01/2002

Current Assignee
Tracy D. Blake, Robert D. Pedersen, Jerome H. Lemelson, Dorothy Lemelson

Sponsoring Entity
Tracy D. Blake, Robert D. Pedersen, Jerome H. Lemelson, Dorothy Lemelson

VERSATILE STEREOTACTIC DEVICE AND METHODS OF USE  
Patent #
US 20010020127A1
Filed 06/04/1997

Current Assignee
Brigham and Women's Hospital Incorporated

Sponsoring Entity
Brigham and Women's Hospital Incorporated

Plotter for construction sites and method  
Patent #
US 6,064,940 A
Filed 05/15/1996

Current Assignee
The Appalos Corporation

Sponsoring Entity
The Appalos Corporation

Position measuring system for vehicle  
Patent #
US 5,255,195 A
Filed 01/23/1992

Current Assignee
Yamaha Hatsudoki Kabushiki Kaisha

Sponsoring Entity
Yamaha Hatsudoki Kabushiki Kaisha

Computer aided three dimensional positioning sensing system and method  
Patent #
US 5,137,354 A
Filed 08/01/1991

Current Assignee
Trimble Navigation Limited

Sponsoring Entity
Spectra-Physics Incorporated

Method for laser-based two-dimensional navigation system in a structured environment  
Patent #
US 4,796,198 A
Filed 10/17/1986

Current Assignee
United States Of America As Represented By The Department Of Energy

Sponsoring Entity
United States Of America As Represented By The Department Of Energy

DIMENSIONAL MEASUREMENT THROUGH A COMBINATION OF PHOTOGRAMMETRY AND OPTICAL SCATTERING  
Patent #
US 20120050528A1
Filed 05/31/2011

Current Assignee
University of North Carolina At Charlotte

Sponsoring Entity
University of North Carolina At Charlotte

Multi-Level Digital Modulation for Time of Flight Method and System  
Patent #
US 20110299059A1
Filed 04/07/2011

Current Assignee
Ams Sensors Singapore Pte. Ltd.

Sponsoring Entity
MESA IMAGING AG

SYSTEM AND METHOD FOR FINDING CORRESPONDENCE BETWEEN CAMERAS IN A THREE-DIMENSIONAL VISION SYSTEM  
Patent #
US 20120148145A1
Filed 12/08/2010

Current Assignee
Cognex Corporation

Sponsoring Entity
Cognex Corporation

Position measuring method and position measuring instrument  
Patent #
US 8,224,030 B2
Filed 06/29/2010

Current Assignee
Kabushiki Kaisha Topcon

Sponsoring Entity
Kabushiki Kaisha Topcon

OBJECT TRACKING WITH PROJECTED REFERENCE PATTERNS  
Patent #
US 20120262365A1
Filed 04/12/2011

Current Assignee
Sony Interactive Entertainment Inc.

Sponsoring Entity
Sony Computer Entertainment Incorporated

MANAGEMENT OF RESOURCES FOR SLAM IN LARGE ENVIRONMENTS  
Patent #
US 20130138246A1
Filed 11/09/2012

Current Assignee
iRobot Corporation

Sponsoring Entity
Dhiraj Goel, Mario E. Munich, Jens-Steffen Gutmann

SCANNING OPTICAL POSITIONING SYSTEM WITH SPATIALLY TRIANGULATING RECEIVERS  
Patent #
US 20140098379A1
Filed 10/04/2013

Current Assignee
Gerard Dirk Smits

Sponsoring Entity
Gerard Dirk Smits

14 Claims
1. An indoor navigation system comprising:
a laser projector configured to emit four or more laser beams in four or more different predetermined directions, no more than two laser beams of the four or more laser beams extending along a same plane, and an angle between each pair of the four or more laser beams being different from an angle between any other pair of the four or more laser beams, wherein the four or more different predetermined directions and the angle between each pair of the four or more laser beams corresponds to a reference coordinate system associated with the laser projector;
two or more cameras configured to capture images of at least four spots formed by the four or more laser beams on surfaces of an indoor space, the two or more cameras each disposed in a fixed position and having a known location and pose in a world coordinate system; and
one or more processors in communication with the two or more cameras, the one or more processors configured to:
estimate three-dimensional locations of the at least four spots using the images captured by at least two of the two or more cameras, wherein each of the three-dimensional locations of the at least four spots corresponds to the world coordinate system, and wherein the world coordinate system is associated with the indoor space; and
estimate a position and an orientation of the laser projector in the world coordinate system in the indoor space by space resection using the four or more different directions corresponding to the reference coordinate system and the three-dimensional locations corresponding to the world coordinate system of the at least four spots.
Dependent claims: 2, 3, 4, 5, 6.
7. A method for indoor navigation comprising:
emitting, using a laser projector, four or more laser beams in four or more different predetermined directions, the four or more laser beams being non-coplanar, and an angle between each pair of the four or more laser beams being different from an angle between any other pair of the four or more laser beams, wherein the four or more different predetermined directions and the angle between each pair of the four or more laser beams corresponds to a reference coordinate system associated with the laser projector;
capturing, with two or more cameras each disposed in a fixed position and at a known location and pose in a world coordinate system associated with an indoor space, images of at least four spots formed by the four or more laser beams on surfaces of the indoor space;
thereafter estimating three-dimensional locations of the at least four spots in the world coordinate system using the images and the known location and pose of each of the two or more cameras; and
estimating a position and an orientation of the laser projector in the world coordinate system by space resection using the four or more different directions corresponding to the reference coordinate system and the three-dimensional locations of the at least four spots corresponding to the world coordinate system.
Dependent claims: 8, 9, 10, 11, 12, 13, 14.
Specification
The disclosure is generally related to indoor navigation.
Conventional indoor navigation techniques include ultrasonic or laser ranging, tracking marked objects with cameras, and interpreting video scenes as captured by a camera. This last method, navigating by interpreting one's visual surroundings as a person would, is an outstanding problem in computer vision research.
A variety of challenges are associated with these and other indoor navigation techniques. Occlusion, for example, occurs when a camera or detector's view is blocked. Lack of sensitivity can be an issue when object-tracking cameras are located too close to one another, leading to small angle measurements. Some vision-based navigation systems depend on surface texture which may not always be available in an image. Finally, incremental positioning methods may accumulate errors which degrade positioning accuracy.
Building construction is one scenario in which indoor navigation is a valuable capability. Robots that lay out construction plans or install fixtures need accurate position and orientation information to do their jobs. Assembly of large aircraft parts offers another example. Precisely mating airplane fuselage or wing sections is helped by keeping track of the position and orientation of each component. In scenarios like these, as a practical matter, it is helpful for indoor navigation solutions to be expressed in the same coordinates as locations of building structures such as walls, floors, ceilings, doorways and the like.
Many visionbased indoor navigation systems cannot run in real time because the computational requirements are too great. Finally, a navigation system for a small robot is impractical if it consumes too much power or weighs or costs too much. What is needed is an indoor navigation system that permits accurate tracking of the location and orientation of objects in an indoor space while overcoming the challenges mentioned above and without requiring excessive computational capacity, electrical power or weight.
The indoor navigation systems and methods described below involve solving a problem known in computer vision as “perspective pose estimation” and in photogrammetry as “space resection”, namely: Determine the position of each of the vertices of a known triangle in three-dimensional space given a perspective projection of the triangle. Haralick, et al. show how this problem was first solved by the German mathematician Grunert in 1841 and solved again by others later (“Review and Analysis of Solutions of the Three Point Perspective Pose Estimation Problem,” International Journal of Computer Vision, 13, 3, 331-356 (1994), incorporated herein by reference).
Space resection has been used in the past to find the position and orientation of a camera based on the appearance of known landmarks in a camera image. Here, however, space resection is used to find the position and orientation of a laser projector that creates landmarks on the walls of an indoor space. In contrast to traditional space resection, in the present case angles to the landmarks are set by the laser projector rather than measured. When the projector is attached to an object, the position and orientation of the object may be estimated and tracked.
Navigation based on this new technique is well suited to indoor spaces such as office buildings, aircraft hangars, underground railway stations, etc. Briefly, a laser projector is attached to a robot, machine tool or other item whose position and orientation are to be estimated in an indoor space. The projector emits laser beams in four or more different directions and these beams are seen as spots on the walls of the indoor space. (“Walls” is defined to include walls, ceiling, floor and other surfaces in an indoor space upon which laser spots may be formed.) Multiple fixed cameras view the spots and provide data used to estimate their positions in three dimensions. Finally, the space resection problem is solved to estimate the position and orientation of the laser projector given the location of the spots on the walls and the relative directions of the laser beams transmitted from the object.
Indoor navigation based on multi-beam laser projection minimizes occlusion and sensitivity concerns through the use of a set of several laser beams spread out over a large solid angle. Multiple beams provide redundancy in cases such as a beam striking a wall or other surface at such an oblique angle that the center of the resulting spot is hard to determine, or half the beam landing on one surface and half landing on another. Having several beams pointed in various directions spread out over a half-sphere or greater solid angle, for example, largely eliminates sensitivity to unlucky geometries, since small angles may be avoided. Each new measurement of laser projector position and orientation is directly referenced to building coordinates so measurement errors do not accumulate over time. Finally, the computational requirements are modest and computations may be performed in a fixed unit separate from a tracked object.
The major components of a multi-beam laser projection indoor navigation system are: a laser projector, a set of observation cameras and a processor that solves space resection and other geometrical tasks.
In
When properly calibrated, cameras 130 and 131 may be used to estimate the three-dimensional position of any point that both can see. For example, if both cameras can see spots 1, 2, 3 and 4, then the three-dimensional coordinates of each spot can be estimated in a coordinate system used to locate the walls and other features of the room. Meanwhile, the laser projector emits laser beams 121-124 at known azimuths and elevations as measured with respect to the robot. The angle between each pair of laser beams is therefore also known. As discussed in detail below, this provides enough information to estimate the position and orientation of the laser projector, and the object to which it is attached, in room coordinates.
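The two-camera estimate of a spot's three-dimensional position can be sketched as standard linear (DLT) triangulation. The camera intrinsics, poses and spot coordinates below are hypothetical stand-ins invented for the sketch, not values from the system described here:

```python
import numpy as np

def triangulate(P0, P1, uv0, uv1):
    """Linear (DLT) triangulation of one laser spot seen by two calibrated
    cameras.  P0, P1 are 3x4 projection matrices; uv0, uv1 are the spot's
    pixel coordinates in each image."""
    A = np.vstack([
        uv0[0] * P0[2] - P0[0],
        uv0[1] * P0[2] - P0[1],
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous coordinates
    return X[:3] / X[3]        # nonhomogeneous world coordinates

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical setup: camera 0 at the origin, camera 1 shifted 1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([2.0, 3.0, 4.0])   # a spot on a wall (invented)
X_est = triangulate(P0, P1, project(P0, X_true), project(P1, X_true))
print(X_est)  # close to [2. 3. 4.]
```

With noiseless observations the linear solution is exact; in practice the cameras report noisy spot centers and the same least-squares machinery gives the best linear estimate.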
Cameras, such as cameras 130 and 131 in
If only one camera is available, but it is aimed at a scene with known geometry (e.g. a flat wall at a known location), then that is enough to locate laser spots. This situation may be hard to guarantee in practice, however. Using two or more cameras eliminates issues that arise when spots fall on surfaces at unknown locations. As described below, one known surface may be used during system calibration.
If the laser beams used in an indoor navigation system are near infrared, then corresponding filters may be used with the cameras to remove background room light. Similarly, if the laser beams are modulated or encoded, then the cameras may be equipped with corresponding demodulators or decoders. Finally, as used here, a “camera” includes processors or other components to demodulate or decode laser spots and report their two-dimensional position in an image. Cameras may thus report a set of time-stamped two-dimensional spot coordinates to a central computer (e.g. 140 in
Calibration is done to estimate the pose of each camera in room coordinates before navigation commences.
The first steps 505 and 510 in the calibration procedure of
An example of indoor navigation using multi-beam laser projection is now presented using
After some introductory comments on notation, the example proceeds as follows. A set of reference unit vectors corresponding to the directions of laser beams projected from a laser projector are defined. Next, distances are defined from the projector to laser spots that appear on walls, ceilings or other surfaces. These distances are scalar numbers that multiply the reference unit vectors. The unit vectors and the distance scalars therefore define the position of observed laser spots in the reference (i.e. laser projector) coordinate system.
The next step in the example is to assume a transformation matrix that defines the relationship between reference and world coordinate systems. This matrix is used to find the position of observed laser spots in the world (i.e. room) coordinate system. The task of the navigation system is to find the transformation matrix given the reference unit vectors (from the design of the laser projector) and the laser spot locations in world coordinates (as observed by a set of calibrated cameras).
The mathematics of space resection has been worked out several times by various researchers independently over the last 170 years. Here we follow Haralick et al., “Review and Analysis of Solutions of the Three Point Perspective Pose Estimation Problem,” International Journal of Computer Vision, 13, 3, 331-356 (1994); see, especially, pp. 332-334. Other valid solutions to the space resection problem work just as well. It turns out that space resection based on three observed points often leads to more than one solution. Two solutions are common, but as many as four are possible. Thus, the next part of the example shows a way to determine which solution is correct. Finally, as an optional step, the four by four transformation matrix between reference and world coordinate systems expressed in homogeneous coordinates is decomposed into Euler angles and a translation vector.
Two functions are used to transform homogeneous coordinates to nonhomogeneous coordinates and vice versa. H(⋅) transforms nonhomogeneous coordinates to homogeneous coordinates while H^{−1}(⋅) transforms homogeneous coordinates to nonhomogeneous coordinates. Both functions operate on column vectors such that if v=[v_{1} v_{2} . . . v_{n}]^{T} then:
H(v)=[v_{1} v_{2} . . . v_{n} 1]^{T}
H^{−1}(v)=[v_{1}/v_{n} v_{2}/v_{n} . . . v_{n−1}/v_{n}]^{T} (1)
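These two maps are simple to implement; a minimal sketch, written here as `H` and `Hinv`:

```python
import numpy as np

def H(v):
    """Nonhomogeneous -> homogeneous: append a 1 (equation (1))."""
    return np.append(v, 1.0)

def Hinv(v):
    """Homogeneous -> nonhomogeneous: divide by the last element, drop it."""
    return v[:-1] / v[-1]

v = np.array([3.0, 4.0, 5.0])
print(H(v))                              # [3. 4. 5. 1.]
print(Hinv(H(v)))                        # [3. 4. 5.]
print(Hinv(np.array([2.0, 4.0, 2.0])))   # [1. 2.]
```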
The pose in world coordinates of the object to be tracked can be defined as the coordinate transform between reference and world coordinate systems. The transform can be carried out by left-multiplying a 4 by 1 vector, describing a homogeneous three-dimensional coordinate, by a 4 by 4 matrix X_{R→W} to give another homogeneous three-dimensional coordinate.
Let p_{1}, p_{2}, p_{3}, p_{4 }denote nonhomogeneous coordinates on a unit sphere for reference rays in the reference coordinate system. (See, e.g. p_{1}, p_{2 }and p_{3 }in
P_{1}^{R}=H(m_{1}p_{1})
P_{2}^{R}=H(m_{2}p_{2})
P_{3}^{R}=H(m_{3}p_{3})
P_{4}^{R}=H(m_{4}p_{4}) (2)
where m_{1}, m_{2}, m_{3}, m_{4 }are positive scalars that describe how far along each ray light is intercepted by a surface to create a detected spot. The homogeneous coordinates of the 3D detected spots in the world coordinate system are denoted by P_{1}^{W}, P_{2}^{W}, P_{3}^{W}, P_{4}^{W }where:
P_{1}^{W}=X_{R→W}P_{1}^{R }
P_{2}^{W}=X_{R→W}P_{2}^{R }
P_{3}^{W}=X_{R→W}P_{3}^{R }
P_{4}^{W}=X_{R→W}P_{4}^{R} (3)
The following reference unit vectors are defined for purposes of example:
p_{1}=[−0.71037 −0.2867 0.64279]^{T }
p_{2}=[0.71037 0.2867 0.64279]^{T }
p_{3}=[−0.88881 0.45828 0]^{T }
p_{4}=[0.56901 −0.37675 0.73095]^{T} (4)
The angle θ_{ij }between p_{i }and p_{j }is given by θ_{ij}=cos^{−1}(p_{i}^{T}p_{j}); therefore,
θ_{12}=100°, θ_{13}=60°, θ_{14}=80°
θ_{23}=120°, θ_{24}=40°, θ_{34}=132.7° (5)
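The angles of equation (5) follow directly from dot products of the unit vectors in equation (4); a quick numerical check:

```python
import numpy as np

# Reference unit vectors p1..p4 from equation (4), one per row.
p = np.array([
    [-0.71037, -0.2867,   0.64279],
    [ 0.71037,  0.2867,   0.64279],
    [-0.88881,  0.45828,  0.0],
    [ 0.56901, -0.37675,  0.73095],
])

# All four directions are unit vectors.
print(np.round(np.linalg.norm(p, axis=1), 4))  # [1. 1. 1. 1.]

# Every pairwise angle is distinct, as equation (5) shows.
angles = {(i + 1, j + 1): np.degrees(np.arccos(p[i] @ p[j]))
          for i in range(4) for j in range(i + 1, 4)}
for pair, theta in angles.items():
    print(pair, round(theta, 1))
```

The six printed angles reproduce equation (5): 100, 60, 80, 120, 40 and 132.7 degrees.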
The set of reference vectors p_{i }has been chosen in this example such that the angle between each pair of vectors is different. This property helps avoid ambiguities in pose estimation but is not required. For purposes of illustration, m_{i }are chosen as follows: m_{1}=1, m_{2}=4, m_{3}=7 and m_{4}=10. Then, using equation (2), we have:
P_{1}^{R}=[−0.71037 −0.2867 0.64279 1]^{T }
P_{2}^{R}=[2.8415 1.1468 2.5712 1]^{T }
P_{3}^{R}=[−6.2217 3.2079 0 1]^{T }
P_{4}^{R}=[5.6901 −3.7675 7.3095 1]^{T} (6)
Let us assume the following transformation matrix (reconstructed here from the values in equations (6) and (8)):

X_{R→W}=
[0.91700 −0.38921 0.08716 7.0000]
[0.39456 0.91766 −0.05214 11.000]
[−0.05967 0.08219 0.99484 0.10000]
[0 0 0 1] (7)
Then, using equation (3),
P_{1}^{W}=[6.5162 10.423 0.75829 1]^{T }
P_{2}^{W}=[9.3834 13.039 2.5826 1]^{T }
P_{3}^{W}=[0.046048 11.489 0.73487 1]^{T }
P_{4}^{W}=[14.321 9.4065 6.7226 1]^{T} (8)
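The world coordinates above can be verified by applying the assumed transform to the homogeneous reference points of equation (6). The 4 by 4 matrix used below is reconstructed from the worked numbers (the printed matrix of equation (7) is an assumption of this sketch):

```python
import numpy as np

# Transformation matrix X_{R->W}, reconstructed so that it maps the
# reference points of equation (6) onto the world points of equation (8).
X_RW = np.array([
    [ 0.91700, -0.38921,  0.08716,  7.0],
    [ 0.39456,  0.91766, -0.05214, 11.0],
    [-0.05967,  0.08219,  0.99484,  0.1],
    [ 0.0,      0.0,      0.0,      1.0],
])

# Equation (6): one homogeneous reference point per column.
P_R = np.array([
    [-0.71037, 2.8415, -6.2217,  5.6901],
    [-0.2867,  1.1468,  3.2079, -3.7675],
    [ 0.64279, 2.5712,  0.0,     7.3095],
    [ 1.0,     1.0,     1.0,     1.0],
])

P_W = X_RW @ P_R           # equation (3), all four points at once
print(np.round(P_W.T, 3))  # rows match equation (8)
```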
We now have the required inputs, p_{1}, p_{2}, p_{3}, P_{1}^{W}, P_{2}^{W}, P_{3}^{W}, for a space resection algorithm such as the one described in Haralick. The algorithm determines X_{R→W} up to a possible fourfold ambiguity. To resolve the ambiguity each real solution may be checked to see whether or not it projects P_{4}^{W} to p_{4}. The space resection method detailed in Haralick first determines distances from the origin to each of the reference points. These distances are called s_{1}, s_{2}, s_{3} by Haralick and if correctly calculated should be equal to m_{1}, m_{2}, m_{3} respectively. Given these distances we can then calculate P_{1}^{R}, P_{2}^{R}, P_{3}^{R}.
Given the coordinates of three 3D points expressed in both the reference coordinate system and the world coordinate system one can find the transformation matrix between the two coordinate systems. This may be done by Procrustes analysis; see, for example, Peter H. Schoenemann, “A Generalized Solution of the Orthogonal Procrustes Problem,” Psychometrika, 31, 1, 1-10 (1966). A simpler method is presented below, however.
If a, b, c, α, β, γ take the meanings described in Haralick then they can be calculated as:
a=∥P_{2}^{W}−P_{3}^{W}∥=9.6436
b=∥P_{1}^{W}−P_{3}^{W}∥=6.5574
c=∥P_{1}^{W}−P_{2}^{W}∥=4.2882 (9)
cos α=p_{2}^{T}·p_{3}=−0.5000
cos β=p_{1}^{T}·p_{3}=0.5000
cos γ=p_{1}^{T}·p_{2}=−0.1736 (10)
Inserting these values into Haralick's equation (9) gives:
A_{4}=0.1128, A_{3}=−1.5711, A_{2}=6.5645, A_{1}=−8.6784, A_{0}=7.2201 (11)
The quartic function in v with these coefficients has the following roots:
v=7.0000 or v=5.4660 or v=0.7331−1.066i or v=0.7331+1.066i (12)
The complex roots may be ignored and the real roots substituted into Haralick's equation (8) to give corresponding values for u:
u=4.0000, v=7.0000 or u=2.9724, v=5.4660 (13)
Substituting u and v into Haralick's equations (4) and (5) leads to:
s_{1}=1.0000, s_{2}=4.0000, s_{3}=7.0000
or
s_{1}=1.3008, s_{2}=3.8666, s_{3}=7.1104 (14)
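The root-finding step is easy to reproduce. Note that because the printed coefficients in equation (11) are rounded to five digits, the computed real roots land near, rather than exactly at, v=7.0000 and v=5.4660:

```python
import numpy as np

# Coefficients A4..A0 of the quartic in v, from equation (11).
coeffs = [0.1128, -1.5711, 6.5645, -8.6784, 7.2201]
roots = np.roots(coeffs)
print(roots)

# Two roots are real and two form a complex-conjugate pair.  The rounding
# of the printed coefficients shifts the real roots slightly away from the
# exact values 7.0000 and 5.4660 quoted in equation (12).
real = sorted(r.real for r in roots if abs(r.imag) < 1e-8)
print(real)
```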
One can see that the first solution corresponds to the values picked for m_{1}, m_{2}, m_{3 }above. Of course, at this point we know this only because we know how the problem was constructed. We will now recover the transformation, X_{R→W}, for each solution and then determine which solution is correct.
It is noted in Haralick and elsewhere that the transformation has 12 parameters but the point correspondences only give 9 equations. The conventional solution to this problem is to enforce orthogonality constraints in the rotation part of the transform to reduce the number of parameters. However, there is an easier and somewhat surprising method: we can manufacture a virtual fourth point whose location is linearly independent from those of the first three points. This virtual point is consistent with a rigid transformation, so its coordinates and those of the three actual points, as expressed in the reference and world coordinate systems, give the transformation directly.
The fourth point is found by considering the vectors from one actual point to each of the other two. If we take a point that is separated from the first point by the cross product of these two vectors then we can be sure that it is not coplanar with the three actual points and is therefore linearly independent. Since in a Euclidean transform the vectors are simply rotated, their cross product is rotated in the same way. Hence we have a fourth point correspondence which is linearly independent but enforces the orthogonality constraint.
We call this point P_{5}^{R }in the reference coordinate system and P_{5}^{W }in the world coordinate system. Formally it may be defined as:
P_{5}^{R}=H(H^{−1}(P_{1}^{R})+(H^{−1}(P_{2}^{R})−H^{−1}(P_{1}^{R}))×(H^{−1}(P_{3}^{R})−H^{−1}(P_{1}^{R})))
P_{5}^{W}=H(H^{−1}(P_{1}^{W})+(H^{−1}(P_{2}^{W})−H^{−1}(P_{1}^{W}))×(H^{−1}(P_{3}^{W})−H^{−1}(P_{1}^{W}))) (15)
We first consider the solution where s_{1}=1.0000, s_{2}=4.0000, s_{3}=7.0000. Calculated values are indicated using a ‘hat’. For example:
P̂_{1}^{R}=H(s_{1}p_{1})=[−0.7104 −0.2867 0.6428 1]^{T}
P̂_{2}^{R}=H(s_{2}p_{2})=[2.8415 1.1468 2.5712 1]^{T}
P̂_{3}^{R}=H(s_{3}p_{3})=[−6.2217 3.2079 0 1]^{T} (16)
Using equation (15) we find:
P̂_{5}^{R}=[−8.3707 −8.6314 20.9556 1]^{T}
P̂_{5}^{W}=[4.5101 −1.3129 20.7373 1]^{T} (17)
Stacking the 3D point correspondences as columns of four by four matrices gives the transform directly:

X̂_{R→W}=[P̂_{1}^{W} P̂_{2}^{W} P̂_{3}^{W} P̂_{5}^{W}][P̂_{1}^{R} P̂_{2}^{R} P̂_{3}^{R} P̂_{5}^{R}]^{−1} (18)
Comparison with equation (7) shows that this is the correct solution. This may be verified independently by transforming the fourth world point into reference coordinates, projecting it onto the unit sphere, and comparing to the corresponding reference unit vector:
P̂_{4}^{R}=X̂_{R→W}^{−1}P_{4}^{W}=[5.6901 −3.7675 7.3095 1]^{T}
p̂_{4}=H^{−1}(P̂_{4}^{R})/∥H^{−1}(P̂_{4}^{R})∥=[0.5690 −0.3767 0.7310]^{T} (19)
Comparing this to equation (4) shows that the fourth world point does indeed agree with the fourth reference coordinate and we can therefore conclude that the calculated transform is correct.
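The whole first-solution check (virtual fourth point, stacked correspondences, verification against the fourth spot) can be sketched as follows, using the numbers from equations (8) and (16):

```python
import numpy as np

def H(v):     # nonhomogeneous -> homogeneous
    return np.append(v, 1.0)

def Hinv(v):  # homogeneous -> nonhomogeneous
    return v[:-1] / v[-1]

def virtual_point(P1, P2, P3):
    """Equation (15): a fourth point, linearly independent of P1..P3,
    built from the cross product of two edge vectors."""
    a, b, c = Hinv(P1), Hinv(P2), Hinv(P3)
    return H(a + np.cross(b - a, c - a))

# First-solution reference points (equation (16)) and world points (equation (8)).
P_R = [np.array([-0.7104, -0.2867, 0.6428, 1.0]),
       np.array([ 2.8415,  1.1468, 2.5712, 1.0]),
       np.array([-6.2217,  3.2079, 0.0,    1.0])]
P_W = [np.array([ 6.5162, 10.423,  0.75829, 1.0]),
       np.array([ 9.3834, 13.039,  2.5826,  1.0]),
       np.array([ 0.046048, 11.489, 0.73487, 1.0])]

P_R.append(virtual_point(*P_R))   # matches equation (17)
P_W.append(virtual_point(*P_W))

# Stack the four correspondences as columns and solve for the transform
# as in equation (18).
X_hat = np.column_stack(P_W) @ np.linalg.inv(np.column_stack(P_R))

# Check the solution against the fourth (real) point, as in equation (19).
P4_W = np.array([14.321, 9.4065, 6.7226, 1.0])
p4_hat = Hinv(np.linalg.inv(X_hat) @ P4_W)
p4_hat /= np.linalg.norm(p4_hat)
print(np.round(p4_hat, 4))  # close to p4 = [0.5690, -0.3767, 0.7310]
```

Running the same computation with the second-solution points of equation (20) yields a p̂_{4} that does not match p_{4}, which is how the ambiguity is resolved.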
Now consider the second solution where s_{1}=1.3008, s_{2}=3.8666, s_{3}=7.1104. Plugging these values into equation (2) gives:
P̂_{1}^{R}=[−0.9241 −0.3729 0.8362 1]^{T}
P̂_{2}^{R}=[2.7467 1.1085 2.4854 1]^{T}
P̂_{3}^{R}=[−6.3198 3.2585 0.0000 1]^{T}
P̂_{5}^{R}=[−8.1519 −6.2022 22.1599 1]^{T} (20)
Stacking these points as we did in equation (18) leads to the transform matrix:
Testing this with the fourth world point leads to:
P̂_{4}^{R}=X̂_{R→W}^{−1}P_{4}^{W}=[5.5783 −3.3910 7.6286 1]^{T}
p̂_{4}=H^{−1}(P̂_{4}^{R})/∥H^{−1}(P̂_{4}^{R})∥=[0.5556 −0.3377 0.7598]^{T} (22)
Here the elements of p̂_{4} differ from those of p_{4} (see equation (4)) indicating that this is not a correct solution.
For many purposes it is unnecessary to decompose the transformation matrix, X̂_{R→W}; however we present the decomposition here for completeness. The transform describes directly how the basis vectors of one coordinate system relate to the basis vectors of another. The coordinate system is defined by the point at infinity on the x-axis, the point at infinity on the y-axis, the point at infinity on the z-axis, and the origin. We denote the basis vectors of the reference coordinate system in world coordinates as B_{R}^{W}, and the basis vectors of the reference coordinate system in reference coordinates as B_{R}^{R}. If we stack the basis vectors we get the four by four identity matrix:

B_{R}^{R}=I_{4×4} (23)
Since,
B_{R}^{W}=X̂_{R→W}B_{R}^{R}=X̂_{R→W} (24)
the transformation can be read as the basis vectors of the reference coordinate system in the world coordinate system. Thus the question “What is the position of the reference system (i.e. the laser projector)?” is equivalent to asking “Where is the origin of the reference coordinate frame in the world coordinate system?” This is given by the fourth column of X̂_{R→W}; the column that corresponds to [0 0 0 1]^{T} in B_{R}^{R}. Likewise the other columns tell us how the reference frame has rotated (i.e. the orientation of the laser projector). However, those unfamiliar with projective geometry often prefer to consider the rotation in terms of Euler angles. For a zyx Euler sequence we can consider the transformation to be composed as:

X_{R→W}=T(t)R_{z}(θ_{z})R_{y}(θ_{y})R_{x}(θ_{x}) (25)

where T(t) is a pure translation by the vector t and R_{z}, R_{y}, R_{x} are homogeneous rotations about the indicated axes.
In this convention θ_{z} (yaw) is a counterclockwise rotation about the z-axis, θ_{y} (pitch) is a counterclockwise rotation about the new y-axis, θ_{x} (roll) is a counterclockwise rotation about the new x-axis. To avoid singularities in the inversion of the transform θ_{y} is restricted to the open interval −90°<θ_{y}<90°. When θ_{y}=±90° gimbal lock occurs and Euler angles are inadequate for describing the rotation. With this caveat the transform can be decomposed as:

θ_{z}=tan^{−1}(X_{21}/X_{11})
θ_{y}=−sin^{−1}(X_{31})
θ_{x}=tan^{−1}(X_{32}/X_{33}) (26)

where X_{ij} is the element in row i and column j of X̂_{R→W} and the translation vector is its fourth column.
Applying this to the transformation of equation (18) we get:
Thus the position of the origin of the reference coordinate system (i.e. the position of the laser projector) expressed in the world coordinate system is (7, 11, 0.1) and the orientation of the laser projector in the world coordinate system is described by Euler angles 3°, 5° and 23°.
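A sketch of this decomposition, using a transformation matrix reconstructed from the worked numbers (the printed matrices are an assumption of this sketch); the recovered angles round to the 3°, 5° and 23° quoted above:

```python
import numpy as np

# Transformation matrix reconstructed from the worked example.
X = np.array([
    [ 0.91700, -0.38921,  0.08716,  7.0],
    [ 0.39456,  0.91766, -0.05214, 11.0],
    [-0.05967,  0.08219,  0.99484,  0.1],
    [ 0.0,      0.0,      0.0,      1.0],
])

# Translation: the fourth column, i.e. the position of the laser projector.
t = X[:3, 3]
print(t)  # [ 7.  11.   0.1]

# z-y-x Euler angles, valid away from theta_y = +/-90 degrees.
theta_y = -np.arcsin(X[2, 0])
theta_z = np.arctan2(X[1, 0], X[0, 0])
theta_x = np.arctan2(X[2, 1], X[2, 2])
print(np.degrees([theta_x, theta_y, theta_z]))  # roughly 5, 3 and 23 degrees
```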
To recap: Knowledge of the location of laser spots on the walls of a room, combined with knowledge of the relative directions of laser beams emitted by a laser projector, leads to the location and orientation of the laser projector expressed in room coordinates. The location of the spots is determined with a calibrated set of cameras and the relative directions of the projected laser beams are set during manufacture and/or setup of the laser projector.
A few subtleties of the system and methods described above are worth mentioning or revisiting at this point. For example, in an embodiment the laser beams are arranged so that their directions coincide at a single point, P. If this is not the case the mathematics of the space resection problem becomes more complicated.
Correspondences between laser beams and their spots may be accomplished by trial and error until a solution to the space resection problem is found. This process is made more robust when the angles between pairs of laser beams are different.
Alternatively, each laser beam may be modulated or encoded to facilitate identification. Each beam may be modulated with its own frequency sine wave or its own pseudo-random code, as examples. Demodulating cameras may be used to identify beams or demodulation may be done later using a separate microprocessor. Unique beam identification becomes even more helpful when multiple laser projectors (e.g. on multiple robots or other objects) are tracked at once.
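As an illustration of frequency-based beam identification, the sketch below correlates a spot's brightness over time against a bank of reference tones. The sample rate, tone frequencies, amplitudes and noise level are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                                # detector sample rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
beam_freqs = [70.0, 110.0, 170.0, 230.0]   # one modulation tone per beam (assumed)

# Simulated brightness of one observed spot: beam 2's tone plus noise.
signal = (0.8 * np.sin(2 * np.pi * beam_freqs[2] * t)
          + 0.3 * rng.standard_normal(t.size))

# Correlate against quadrature reference tones and pick the strongest match.
power = [np.hypot(signal @ np.sin(2 * np.pi * f * t),
                  signal @ np.cos(2 * np.pi * f * t)) for f in beam_freqs]
print(int(np.argmax(power)))  # 2: the spot belongs to beam 2
```

Using quadrature (sine and cosine) references makes the detection insensitive to the unknown phase of the modulation at the camera.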
The use of four, five or even more beams per laser projector helps make the system more robust in the face of potential geometric ambiguities. Furthermore, once an ambiguity has been resolved, such as finding that the first rather than the second solution is correct in the example above, it will tend to stay resolved in the same way as a tracked object makes incremental movements from one location and pose to the next.
In light of the detailed example given above and the subtleties just mentioned,
The next step 715 is to identify the observed points based on unique modulation signals applied to each laser beam. This step is not required if no laser modulation is used. Given the observed location of laser spots as determined from data supplied by two or more cameras and knowledge of the geometry of the laser projector, the space resection problem is now solved in step 720. The solution may proceed in analogy to the example provided above or it may use another method. The solution may include resolving geometric ambiguities which may arise.
The solution includes comparing the coordinates of known points (e.g. laser spots) as expressed in reference and world coordinates to find a matrix describing a coordinate transform between the two coordinate systems. This may be done through Procrustes analysis or by using the method of manufacturing a virtual, linearly independent point as described above.
A system including a multi-beam laser projector attached to an object to be tracked, a set of calibrated cameras that observe laser spots on the walls of a room, and a processor that solves the space resection problem is thus able to provide an indoor navigation solution. The system avoids many difficulties associated with traditional camera-based navigation including issues such as occlusion and geometric insensitivity while requiring neither extraordinary processing power nor high-bandwidth data transfer.
The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.