Method and apparatus for combining data to construct a floor plan
First Claim
1. A method of perceiving a spatial model of a working environment, the method comprising:
capturing data by one or more sensors of a robot moving within a working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining, with one or more processors of the robot, a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning, with one or more processors of the robot, depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
detecting a feature in the first depth image;
detecting the feature in the second depth image;
determining a first value indicative of a difference in position of the feature in the first and second depth images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between capture of the depth data from which the first depth image is obtained and capture of the depth data from which the second depth image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, with one or more processors of the robot, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment.
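The feature-based overlap determination recited above can be illustrated with a minimal sketch for a single row of depth pixels. It assumes the same feature has already been detected in both images (column indices given as inputs) and that the pose change between captures has already been converted to an expected pixel shift; the function name and the simple averaging of the two values are illustrative assumptions, not the claimed implementation:

```python
def estimate_overlap(feature_px_a, feature_px_b, pose_delta_px, width):
    """Estimate the overlapping column ranges of two depth images.

    feature_px_a / feature_px_b: column index of the same detected feature
    in the first and second depth image (hypothetical inputs).
    pose_delta_px: sensor pose change between captures, expressed as an
    expected pixel shift (hypothetical odometry-derived value).
    width: image width in pixels.
    """
    # First value: observed displacement of the feature between images.
    observed_shift = feature_px_a - feature_px_b
    # Combine the observed shift with the pose-derived shift (a simple
    # average here; the claim only requires both values to inform the
    # overlap estimate, not any particular fusion rule).
    shift = int(round(0.5 * (observed_shift + pose_delta_px)))
    shift = max(0, min(shift, width))
    # Columns [shift, width) of image A overlap columns [0, width-shift) of B.
    return (shift, width), (0, width - shift)
```

A feature seen at column 300 in the first image and column 100 in the second, with odometry predicting a 200-pixel shift, yields an overlap of the last 440 columns of the first image with the first 440 of the second.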
Abstract
Provided is a method and apparatus for combining perceived depths to construct a floor plan using cameras, such as depth cameras. The camera(s) perceive depths from the camera(s) to objects within a first field of view. The camera(s) are then rotated to observe a second field of view partly overlapping the first field of view, and perceive depths from the camera(s) to objects within the second field of view. The depths from the first and second fields of view are compared to find the area of overlap between the two fields of view, and the depths from the two fields of view are merged at the area of overlap to create a segment of a floor plan. The method is repeated, with depths perceived within consecutively overlapping fields of view combined to construct a floor plan of the environment as the camera is rotated.
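The merging step described in the abstract, combining depths from two fields of view at their area of overlap, might be sketched for a single row of depth readings as follows. The overlap size is assumed to be already known, and `merge_depth_rows` with its averaging rule is a hypothetical illustration rather than the disclosed method:

```python
import numpy as np

def merge_depth_rows(row_a, row_b, overlap):
    """Merge two 1-D depth readings that overlap by `overlap` samples.

    row_a, row_b: depth values from two consecutive fields of view
    (hypothetical single-row depth images). The last `overlap` samples
    of row_a view the same scene as the first `overlap` samples of row_b.
    """
    a = np.asarray(row_a, dtype=float)
    b = np.asarray(row_b, dtype=float)
    # Average the overlapping readings to damp sensor noise, then append
    # the non-overlapping remainder of row_b to extend the segment.
    fused = 0.5 * (a[-overlap:] + b[:overlap])
    return np.concatenate([a[:-overlap], fused, b[overlap:]])
```

Repeating this over consecutively overlapping fields of view grows one merged segment into a full floor-plan boundary.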
57 Claims
1. A method of perceiving a spatial model of a working environment, the method comprising:
capturing data by one or more sensors of a robot moving within a working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining, with one or more processors of the robot, a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning, with one or more processors of the robot, depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
detecting a feature in the first depth image;
detecting the feature in the second depth image;
determining a first value indicative of a difference in position of the feature in the first and second depth images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between capture of the depth data from which the first depth image is obtained and capture of the depth data from which the second depth image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, with one or more processors of the robot, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
18. A robot, comprising:
an actuator configured to move a robot through a working environment;
one or more sensors mechanically coupled to the robot;
one or more processors configured to receive sensed data from the one or more sensors and control the actuator; and
memory storing instructions that when executed by at least some of the processors effectuate operations comprising:
capturing data by one or more sensors of a robot moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
detecting a feature in the first depth image;
detecting the feature in the second depth image;
determining a first value indicative of a difference in position of the feature in the first and second depth images in a first frame of reference of the one or more sensors;
obtaining a second value indicative of a difference in pose of the one or more sensors between capture of the depth data from which the first depth image is obtained and capture of the depth data from which the second depth image is obtained; and
determining the first area of overlap based on the first value and the second value; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33)
34. A method of perceiving a spatial model of a working environment, the method comprising:
capturing data by one or more sensors of a robot moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein the aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
detecting a first edge at a first position in the first image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the first depth image;
detecting a second edge at a second position in the first image based on the derivative of depth with respect to one or more spatial coordinates of depth data in the first depth image;
detecting a third edge at a third position in the second image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the second depth image;
determining that the third edge is not the same edge as the second edge based on shapes of the third edge and the second edge not matching;
determining that the third edge is the same edge as the first edge based on shapes of the first edge and the third edge at least partially matching; and
determining the first area of overlap based on a difference between the first position and the third position; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (35, 36, 37, 38, 39)
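The edge-based matching recited in claim 34, detecting edges as depth discontinuities and comparing their local shapes, could look roughly like the following for 1-D depth data. The derivative threshold, the shape tolerance, and both function names are assumptions made for illustration:

```python
import numpy as np

def detect_edges(depth_row, threshold=0.5):
    """Return indices where the spatial derivative of depth exceeds a
    threshold, i.e. depth discontinuities read as object edges."""
    d = np.abs(np.diff(np.asarray(depth_row, dtype=float)))
    return np.flatnonzero(d > threshold)

def same_edge(profile_a, profile_b, tol=0.2):
    """Compare local depth profiles around two detected edges.

    Profiles whose mean-removed shapes agree within `tol` are treated as
    the same physical edge (a hypothetical matching rule; the claim only
    requires that edge shapes be compared for a match).
    """
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return bool(np.max(np.abs(a - b)) < tol)
```

Once an edge in the first image is matched to an edge in the second, the difference between their positions gives the shift from which the area of overlap follows.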
40. A robot, comprising:
an actuator configured to move a robot through a working environment;
one or more sensors mechanically coupled to the robot;
one or more processors configured to receive sensed data from the one or more sensors and control the actuator; and
memory storing instructions that when executed by at least some of the processors effectuate operations comprising:
capturing data by one or more sensors of a robot moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein the aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
detecting a first edge at a first position in the first image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the first depth image;
detecting a second edge at a second position in the first image based on the derivative of depth with respect to one or more spatial coordinates of depth data in the first depth image;
detecting a third edge at a third position in the second image based on a derivative of depth with respect to one or more spatial coordinates of depth data in the second depth image;
determining that the third edge is not the same edge as the second edge based on shapes of the third edge and the second edge not matching;
determining that the third edge is the same edge as the first edge based on shapes of the first edge and the third edge at least partially matching; and
determining the first area of overlap based on a difference between the first position and the third position; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (41, 42, 43, 44, 45)
46. A method of perceiving a spatial model of a working environment, the method comprising:
capturing data by one or more sensors of a robot moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein the aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
determining an approximate alignment between a reduced resolution version of the first depth image and a reduced resolution version of the second depth image; and
refining the approximate alignment by:
determining aggregate amounts of difference between overlapping portions of the first depth image and the second depth image at candidate alignments displaced from the approximate alignment; and
selecting a candidate alignment that produces a lowest aggregate amount of difference among the candidate alignments or selecting a candidate alignment that produces an aggregate amount of difference less than a threshold; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (47, 48, 49, 50, 51)
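The coarse-to-fine refinement recited in claim 46 can be sketched in one dimension: find an approximate shift on downsampled data, then evaluate aggregate differences at candidate shifts displaced from it and keep the lowest-cost candidate. The downsample factor, search window, and mean-absolute-difference cost are illustrative choices, not the claimed parameters:

```python
import numpy as np

def align(img_a, img_b, factor=4, window=2):
    """Coarse-to-fine column alignment of two depth rows (1-D sketch)."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)

    def best_shift(x, y, candidates):
        costs = []
        for s in candidates:
            n = min(len(x) - s, len(y))
            if n <= 0:
                costs.append(np.inf)
                continue
            # Aggregate amount of difference over the overlapping portion.
            costs.append(np.mean(np.abs(x[s:s + n] - y[:n])))
        return candidates[int(np.argmin(costs))]

    # Approximate alignment on reduced-resolution versions of both rows.
    coarse = best_shift(a[::factor], b[::factor], list(range(len(a) // factor)))
    approx = coarse * factor
    # Refine at full resolution over candidates near the approximate shift.
    cands = [s for s in range(approx - window, approx + window + 1)
             if 0 <= s < len(a)]
    return best_shift(a, b, cands)
```

Searching the full-resolution images only within a small window around the coarse estimate keeps the refinement cheap while recovering the exact shift.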
52. A robot, comprising:
an actuator configured to move a robot through a working environment;
one or more sensors mechanically coupled to the robot;
one or more processors configured to receive sensed data from the one or more sensors and control the actuator; and
memory storing instructions that when executed by at least some of the processors effectuate operations comprising:
capturing data by one or more sensors of a robot moving within the working environment, the data being indicative of depth within the working environment from respective sensors of the robot to objects in the working environment at a plurality of different sensor poses;
obtaining a plurality of depth images based on the captured data, wherein:
respective depth images are based on data captured from different positions within the working environment through which the robot moves,
respective depth images comprise a plurality of depth data, the depth data indicating distance from respective sensors to objects within the working environment at respective sensor poses, and
depth data of respective depth images correspond to respective fields of view;
aligning depth data of respective depth images based on an area of overlap between the fields of view of the plurality of depth images, wherein the aligning comprises:
determining a first area of overlap between a first depth image and a second depth image among the plurality of depth images by:
determining an approximate alignment between a reduced resolution version of the first depth image and a reduced resolution version of the second depth image; and
refining the approximate alignment by:
determining aggregate amounts of difference between overlapping portions of the first depth image and the second depth image at candidate alignments displaced from the approximate alignment; and
selecting a candidate alignment that produces a lowest aggregate amount of difference among the candidate alignments or selecting a candidate alignment that produces an aggregate amount of difference less than a threshold; and
determining a second area of overlap between the second depth image and a third depth image among the plurality of depth images, the first area of overlap being at least partially different from the second area of overlap; and
determining, based on alignment of the depth data, a spatial model of the working environment,
wherein at least some data processing of the spatial model is offloaded from the robot to the cloud,
wherein the spatial model is further processed to identify rooms in a floor plan, and
wherein the spatial model is stored in memory accessible to the robot during a subsequent operational session for use in autonomously navigating the working environment. - View Dependent Claims (53, 54, 55, 56, 57)
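Each independent claim closes by further processing the spatial model to identify rooms in a floor plan, without elaborating how. One common approach is to label connected free-space regions of an occupancy grid, sketched here with a flood fill; the grid encoding and the `label_rooms` name are hypothetical, not taken from the disclosure:

```python
def label_rooms(grid):
    """Label connected free-space regions of an occupancy grid as rooms.

    grid: 2-D list, 0 = free floor, 1 = wall/obstacle.
    Returns a same-shaped grid where each free cell holds a room id >= 2.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [row[:] for row in grid]
    next_id = 2
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] == 0:
                # Flood-fill this connected free region with a new room id.
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and labels[y][x] == 0:
                        labels[y][x] = next_id
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
                next_id += 1
    return labels
```

Two free regions separated by a wall column receive distinct room ids, which is the granularity the floor-plan post-processing needs.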
Specification