Method and apparatus for combining data to construct a floor plan
Abstract
Provided is a method including capturing a plurality of images by at least one sensor of a robot moving within an environment; aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images; and determining, with the processor of the robot, based on alignment of the data, a spatial model of the environment.
40 Citations
34 Claims
1. A method of perceiving a spatial model of an environment, the method comprising:
capturing a plurality of images by at least one sensor of a robot moving within the environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a feature in the first image;
detecting the feature in the second image;
determining a first value indicative of a difference in position of the feature in the first and second images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between when data from which the first image is obtained and when data from which the second image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment.
View Dependent Claims (2-28)
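For illustration only, the overlap determination recited in claim 1 — fusing a feature's displacement between images (the first value) with a sensor pose change between captures (the second value) — might be sketched as below. The function name, the linear angle-to-column mapping, and the simple averaging of the two shift estimates are all assumptions for the sketch, not the patented method:

```python
def overlap_columns(feat_col_1, feat_col_2, yaw_delta_rad, hfov_rad, image_width):
    """Estimate the horizontally overlapping column ranges of two images.

    feat_col_1 / feat_col_2: pixel column of the same detected feature in
    each image; their difference is the 'first value' in the sensor frame.
    yaw_delta_rad: change in sensor heading between the two captures (the
    'second value', e.g. from odometry or an IMU).
    """
    # First value: the feature's displacement in pixels.
    pixel_shift = feat_col_1 - feat_col_2
    # Second value: the pose change converted to an expected pixel shift,
    # assuming a roughly linear angle-to-column mapping across the FOV.
    pose_shift = yaw_delta_rad / hfov_rad * image_width
    # Fuse the two estimates (plain average here; a real system might
    # weight each estimate by its confidence).
    shift = int(round(0.5 * (pixel_shift + pose_shift)))
    if shift >= 0:
        # Image 2 content is shifted left relative to image 1 by `shift`.
        return (shift, image_width), (0, image_width - shift)
    return (0, image_width + shift), (-shift, image_width)
```

With a 640-pixel-wide image, a feature seen at column 300 then at column 200, and a heading change consistent with that 100-pixel shift, the overlap is columns 100-640 of the first image against columns 0-540 of the second.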
29. A robot for perceiving a spatial model of an environment, comprising:
an actuator configured to move the robot through the environment;
at least one sensor mechanically coupled to the robot;
a processor configured to receive sensed data from the at least one sensor and control the actuator; and
memory storing instructions that, when executed by the processor, effectuate operations comprising:
capturing a plurality of images by at least one sensor of a robot moving within an environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a feature in the first image;
detecting the feature in the second image;
determining a first value indicative of a difference in position of the feature in the first and second images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between when data from which the first image is obtained and when data from which the second image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment.
30. A method of perceiving a spatial model of an environment, the method comprising:
capturing a plurality of images by at least one sensor of a robot moving within the environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a first edge at a first position in the first image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a second edge at a second position in the first image based on the derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a third edge at a third position in the second image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the second image;
determining that the third edge is not the same edge as the second edge based on shapes of the third edge and the second edge not matching;
determining that the third edge is the same edge as the first edge based on shapes of the first edge and the third edge at least partially matching; and
determining the first area of overlap based on a difference between the first position and the third position; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment.
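A minimal sketch of the edge-based matching recited in claims 30, 33, and 34: edges are marked where the spatial derivative of depth jumps, and two edges are treated as the same edge when the shapes of their local depth profiles match. The threshold values, the finite-difference derivative, and the profile representation are illustrative assumptions, not the patent's implementation:

```python
def detect_depth_edges(depth_row, threshold=0.3):
    """Return indices where the derivative of depth along a row jumps,
    i.e. candidate edge positions (a discrete stand-in for the claimed
    derivative of depth with respect to a spatial coordinate)."""
    return [i for i in range(1, len(depth_row))
            if abs(depth_row[i] - depth_row[i - 1]) > threshold]

def same_edge(profile_a, profile_b, tol=0.2):
    """Decide whether two edges match by comparing the shapes of their
    local depth profiles. Differencing each profile removes the constant
    depth offset caused by the changed viewpoint, so only shape counts."""
    da = [profile_a[i + 1] - profile_a[i] for i in range(len(profile_a) - 1)]
    db = [profile_b[i + 1] - profile_b[i] for i in range(len(profile_b) - 1)]
    return len(da) == len(db) and all(abs(x - y) <= tol for x, y in zip(da, db))
```

Once an edge in the second image is matched to an edge in the first, the difference between their positions gives the claimed overlap estimate.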
31. A method of perceiving a spatial model of an environment, the method comprising:
capturing a plurality of images by at least one sensor of a robot moving within the environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a characteristic in the first image;
detecting the same characteristic in the second image;
determining the first area of overlap based on at least a position of the characteristic in the first and second images;
determining an approximate alignment between a reduced resolution version of the first image and a reduced resolution version of the second image; and
refining the approximate alignment by:
determining aggregate amounts of difference between overlapping portions of the first image and the second image at candidate alignments displaced from the approximate alignment; and
selecting a candidate alignment that produces a lowest aggregate amount of difference among the candidate alignments or selecting a candidate alignment that produces an aggregate amount of difference less than a threshold; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment.
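Claim 31's coarse-to-fine refinement can be illustrated on one-dimensional signals (a simplification; the claim recites images): an approximate alignment is found on reduced-resolution versions, then candidate alignments displaced from it are scored by an aggregate amount of difference over their overlap, and the lowest-scoring candidate is selected. The block-averaging downsampling, the mean-absolute-difference metric, and all names are assumptions for the sketch:

```python
def aggregate_difference(a, b, offset):
    """Mean absolute difference over the overlap of signals a and b when
    b is shifted right by `offset` relative to a."""
    lo, hi = max(0, offset), min(len(a), len(b) + offset)
    if hi <= lo:
        return float('inf')  # no overlap at this offset
    return sum(abs(a[i] - b[i - offset]) for i in range(lo, hi)) / (hi - lo)

def coarse_to_fine_offset(a, b, k=4, radius=3):
    # Approximate alignment on reduced-resolution versions (block means).
    ca = [sum(a[i:i + k]) / k for i in range(0, len(a) - k + 1, k)]
    cb = [sum(b[i:i + k]) / k for i in range(0, len(b) - k + 1, k)]
    coarse = min(range(-len(cb) + 1, len(ca)),
                 key=lambda o: aggregate_difference(ca, cb, o))
    # Refine: score candidate alignments displaced from the approximate
    # one and keep the candidate with the lowest aggregate difference.
    candidates = range(coarse * k - radius, coarse * k + radius + 1)
    return min(candidates, key=lambda o: aggregate_difference(a, b, o))
```

For a signal and a copy of it shifted by two samples, the coarse pass lands near the true offset and the refinement recovers it exactly.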
32. A method of perceiving a spatial model of an environment, the method comprising:
capturing a plurality of images by at least one sensor of a robot moving within the environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a feature in the first image;
detecting the feature in the second image;
determining a first value indicative of a difference in position of the feature in the first and second images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between when data from which the first image is obtained and when data from which the second image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment, wherein at least some data processing of the spatial model is offloaded from the robot to the cloud, wherein the spatial model is further processed to identify rooms in a floor plan, and wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the environment.
33. A method of perceiving a spatial model of an environment, the method comprising:
capturing a plurality of images by at least one sensor of a robot moving within the environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a first edge at a first position in the first image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a second edge at a second position in the first image based on the derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a third edge at a third position in the second image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the second image;
determining that the third edge is not the same edge as the second edge based on shapes of the third edge and the second edge not matching;
determining that the third edge is the same edge as the first edge based on shapes of the first edge and the third edge at least partially matching; and
determining the first area of overlap based on a difference between the first position and the third position; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment, wherein at least some data processing of the spatial model is offloaded from the robot to the cloud, wherein the spatial model is further processed to identify rooms in a floor plan, and wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the environment.
34. A robot for perceiving a spatial model of an environment, comprising:
an actuator configured to move the robot through the environment;
at least one sensor mechanically coupled to the robot;
a processor configured to receive sensed data from the at least one sensor and control the actuator; and
memory storing instructions that, when executed by the processor, effectuate operations comprising:
capturing a plurality of images by at least one sensor of a robot moving within an environment, wherein:
respective images comprise data comprising at least one of:
pixel data indicative of features of the environment captured in the respective images and depth data indicative of depth from respective sensors of the robot to objects in the environment captured in the respective images;
respective images are captured from different positions within the environment through which the robot moves; and
respective images correspond to respective fields of view;
aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images, wherein aligning comprises:
determining a first area of overlap between a first image and a second image among the plurality of images by at least:
detecting a first edge at a first position in the first image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a second edge at a second position in the first image based on the derivative of depth with respect to one or more spatial coordinates of depth data in the first image;
detecting a third edge at a third position in the second image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the second image;
determining that the third edge is not the same edge as the second edge based on shapes of the third edge and the second edge not matching;
determining that the third edge is the same edge as the first edge based on shapes of the first edge and the third edge at least partially matching; and
determining the first area of overlap based on a difference between the first position and the third position; and
determining, with the processor of the robot, based on alignment of the data, the spatial model of the environment.
Specification