Obstacle recognition method for autonomous robots
Abstract
Provided is a method including capturing, by an image sensor disposed on a robot, images of a workspace; obtaining, by a processor of the robot or via the cloud, the captured images; comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary; identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit; and instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified.
30 Claims
1. A method for operating a robot, comprising:
capturing, by an image sensor disposed on a robot, images of a workspace;
obtaining, by a processor of the robot or via the cloud, the captured images;
comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary;
identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit;
instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified;
capturing, by at least one sensor of the robot, movement data of the robot; and
generating, by the processor of the robot or via the cloud, a spatial representation of the workspace based on the captured images and the movement data, wherein the captured images are indicative of the position of the robot relative to objects within the workspace and the movement data is indicative of movement of the robot.
Dependent claims: 2-27.
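The comparing and identifying steps of claim 1 can be sketched as a nearest-neighbor lookup against an object dictionary, with the identified class mapped to an action. This is only an illustrative sketch: the dictionary entries, feature vectors, and action names below are hypothetical, not taken from the patent.

```python
import numpy as np

# Hypothetical object dictionary: class label -> reference feature vector.
OBJECT_DICTIONARY = {
    "cable": np.array([0.9, 0.1, 0.2]),
    "sock": np.array([0.2, 0.8, 0.3]),
    "chair_leg": np.array([0.1, 0.3, 0.9]),
}

# Hypothetical mapping from identified object class to a robot action.
ACTIONS = {"cable": "avoid", "sock": "avoid", "chair_leg": "navigate_around"}

def classify(feature_vector):
    """Return the dictionary class whose reference features are nearest."""
    distances = {label: np.linalg.norm(feature_vector - ref)
                 for label, ref in OBJECT_DICTIONARY.items()}
    return min(distances, key=distances.get)

def act_on(feature_vector):
    """Pick an action based on the identified object class."""
    return ACTIONS[classify(feature_vector)]

print(act_on(np.array([0.85, 0.15, 0.25])))  # nearest to "cable" -> avoid
```

A production classifier would more likely use a trained model (the claim's "object classification unit"), but the dictionary-comparison structure is the same: extract features, compare to known objects, act on the matched class.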
28. An apparatus, comprising:
a tangible, non-transitory, machine-readable medium storing instructions that when executed by a processor effectuate operations comprising:
capturing, by an image sensor disposed on a robot, images of a workspace;
obtaining, by a processor of the robot or via the cloud, the captured images;
comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary;
identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit;
instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified;
determining, by the processor of the robot or via the cloud, a navigation path of the robot based on a spatial representation of the workspace, wherein the navigation path is based on a set of the most desired trajectories to navigate the robot from a first location to a second location; and
controlling, by the processor of the robot, an actuator of the robot to cause the robot to move along the determined navigation path.
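Claim 28's path-determination step, selecting from "a set of the most desired trajectories" between two locations, might be sketched as below. The claim does not fix a desirability criterion, so this sketch assumes the simplest one: shortest total length among candidate waypoint sequences.

```python
import math

def path_length(trajectory):
    """Total Euclidean length of a sequence of (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))

def select_path(candidates):
    """Choose the most desired trajectory; here, the shortest one."""
    return min(candidates, key=path_length)

# Three candidate trajectories from (0, 0) to (3, 2).
candidates = [
    [(0, 0), (0, 2), (3, 2)],          # length 5
    [(0, 0), (3, 2)],                  # straight line, length ~3.61
    [(0, 0), (1, 0), (1, 2), (3, 2)],  # length 5
]
print(select_path(candidates))  # -> [(0, 0), (3, 2)]
```

In practice the candidate set would be generated from the spatial representation (e.g., excluding trajectories that cross obstacles), and the cost could weigh clearance or turning as well as length; the selection step itself is unchanged.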
29. A method for operating a robot, comprising:
capturing, by a camera disposed on a robot, images of a workspace of the robot, wherein images are captured from different locations as the robot moves within the workspace;
capturing, by at least one sensor, movement data indicative of movement of the robot;
generating, by a processor of the robot or via the cloud, a first iteration of a spatial representation of the workspace, comprising:
spatially aligning, by the processor of the robot or via the cloud, a first image captured at a first location of the robot with a second image captured at a second location of the robot, comprising:
detecting, by the processor of the robot or via the cloud, a first feature at a first position in the first image based on a derivative of pixel values in the first image;
detecting, by the processor of the robot or via the cloud, a second feature at a second position in the first image based on the derivative of pixel values in the first image;
detecting, by the processor of the robot or via the cloud, a third feature at a third position in the second image based on a derivative of pixel values in the second image;
determining, by the processor of the robot or via the cloud, that the third feature of the second image is not the same feature as the second feature of the first image based on the characteristics of the third feature and the second feature not matching;
determining, by the processor of the robot or via the cloud, that the third feature of the second image is the same feature as the first feature of the first image based on characteristics of the first feature and the third feature at least partially matching; and
determining, by the processor of the robot or via the cloud, a first translation vector that associates the first image with the second image, the first translation vector corresponding with the displacement of the robot from the first location to the second location; and
combining, by the processor of the robot or via the cloud, the first image and the second image based on the alignment of the second image with the first image;
correcting, by the processor of the robot or via the cloud, the movement data of the robot corresponding to the robot moving from the first location to the second location based on the first translation vector;
comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary;
identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit; and
instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified.
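Claim 29's alignment steps, detecting features from a derivative of pixel values and deriving a translation vector between two images, can be sketched as follows. This is a minimal illustration on synthetic data: it thresholds the gradient magnitude to find features and, assuming the two feature sets are already matched in order, takes their mean offset as the translation vector. A real system would match features by comparing their characteristics (descriptors), as the claim recites.

```python
import numpy as np

def detect_features(image, threshold=1.0):
    """Return (x, y) points where the derivative (gradient magnitude)
    of pixel values exceeds a threshold."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    ys, xs = np.where(magnitude > threshold)
    return list(zip(xs.tolist(), ys.tolist()))

def translation_vector(features_a, features_b):
    """Mean (dx, dy) offset between feature sets assumed matched in order."""
    a = np.array(features_a, dtype=float)
    b = np.array(features_b, dtype=float)
    return (b - a).mean(axis=0)

# Synthetic pair: the same bright spot seen after the view displaces
# by (2, 1) pixels between the first and second capture locations.
first_image = np.zeros((8, 8)); first_image[3, 3] = 10.0
second_image = np.zeros((8, 8)); second_image[4, 5] = 10.0
shift = translation_vector(detect_features(first_image),
                           detect_features(second_image))
print(shift)  # -> [2. 1.]
```

The recovered translation vector is exactly the displacement between capture locations, which is why the claim can use it both to align the images and to correct the robot's movement data.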
30. A method for operating a robot, comprising:
capturing, by an image sensor disposed on a robot, images of a workspace;
obtaining, by a processor of the robot or via the cloud, the captured images;
comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary;
identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit;
instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified;
receiving, by an application of a communication device paired with the robot, at least one input designating at least one of:
an operation of the robot;
a movement of the robot;
a deletion, addition, or modification of a schedule of the robot;
a deletion, addition, or modification to a map of the workspace;
a deletion, addition, or modification of a subarea;
a deletion, addition, or modification of a keep-out zone;
a deletion, addition, or modification of a navigation path of the robot;
information or instruction required in pairing the robot with a Wi-Fi router; and
information for programming the robot; and
displaying, by the application of the communication device paired with the robot, at least one of:
the map of the workspace;
the navigation path of the robot; and
a camera view of the robot.
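The inputs recited in claim 30 are deletions, additions, or modifications of named robot state (schedule, map, keep-out zones, navigation paths). One way such inputs might be routed to robot state is sketched below; the message format and field names are hypothetical, not from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical message format for inputs sent by the paired application.
@dataclass
class AppInput:
    kind: str      # e.g. "schedule", "keep_out_zones", "navigation_paths"
    action: str    # "add", "modify", or "delete"
    payload: dict  # keyed by an "id" field

@dataclass
class RobotState:
    schedule: dict = field(default_factory=dict)
    keep_out_zones: dict = field(default_factory=dict)
    navigation_paths: dict = field(default_factory=dict)

    def apply(self, msg: AppInput):
        """Route an add/delete/modify input to the matching state table."""
        table = getattr(self, msg.kind)
        if msg.action == "delete":
            table.pop(msg.payload["id"], None)
        else:  # "add" and "modify" both upsert by id
            table[msg.payload["id"]] = msg.payload

robot = RobotState()
robot.apply(AppInput("keep_out_zones", "add", {"id": "z1", "rect": (0, 0, 2, 2)}))
robot.apply(AppInput("keep_out_zones", "delete", {"id": "z1"}))
print(robot.keep_out_zones)  # -> {}
```

The display side of the claim (map, navigation path, camera view) would read from the same state tables, which is why a single keyed store per input kind is a natural fit here.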
Specification