Methods and apparatus for autonomous robotic control
First Claim
1. A method comprising:
- acquiring a plurality of images of at least a portion of an environment with an image sensor;
- constructing a spatial shroud fitting a form of an object in a first image of the plurality of images from the first image;
- identifying the object in the first image based at least in part on the spatial shroud;
- translating and/or transforming the spatial shroud based at least in part on a change in a position and/or an orientation of the image sensor;
- determining if the spatial shroud fits the form of the object in a second image in the plurality of images; and
- in response to determining that the spatial shroud fits the form of the object, identifying a position of the object in the environment based at least in part on the spatial shroud.
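The claimed steps can be sketched as a minimal pipeline. This is a toy illustration under stated assumptions, not the patented implementation: the "spatial shroud" is stood in for by a binary threshold mask, the sensor's change in position by a pure pixel translation, and all function names (`construct_shroud`, `translate_shroud`, `shroud_fits`, `object_position`) are hypothetical.

```python
import numpy as np

def construct_shroud(image, threshold=0.5):
    """Construct a binary 'shroud' mask fitting the form of the bright
    object in the first image (thresholding as a stand-in for the
    patent's form-fitting surface)."""
    return image > threshold

def translate_shroud(shroud, dx, dy):
    """Translate the shroud to compensate for a known shift in the image
    sensor's position (np.roll as a toy camera-motion model)."""
    return np.roll(np.roll(shroud, dy, axis=0), dx, axis=1)

def shroud_fits(shroud, image, threshold=0.5, min_overlap=0.8):
    """Determine whether the translated shroud still fits the object's
    form in a later image, via an intersection-over-union overlap test."""
    obj = image > threshold
    inter = np.logical_and(shroud, obj).sum()
    union = np.logical_or(shroud, obj).sum()
    return union > 0 and inter / union >= min_overlap

def object_position(shroud):
    """Identify the object's position as the centroid of the shroud."""
    ys, xs = np.nonzero(shroud)
    return float(ys.mean()), float(xs.mean())

# First image: a 3x3 object; second image: the same object after the
# sensor (here, the scene) shifts by one pixel in each axis.
img1 = np.zeros((10, 10)); img1[2:5, 2:5] = 1.0
img2 = np.zeros((10, 10)); img2[3:6, 3:6] = 1.0

shroud = construct_shroud(img1)                # steps 1-3
shroud = translate_shroud(shroud, dx=1, dy=1)  # step 4
if shroud_fits(shroud, img2):                  # step 5
    pos = object_position(shroud)              # step 6
    print(pos)                                 # → (4.0, 4.0)
```

A real system would replace the threshold mask with a learned segmentation and the fixed pixel shift with a transform derived from odometry or pose estimates; the control flow, however, mirrors the six claimed steps.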
Abstract
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
23 Claims
1. A method comprising:
- acquiring a plurality of images of at least a portion of an environment with an image sensor;
- constructing a spatial shroud fitting a form of an object in a first image of the plurality of images from the first image;
- identifying the object in the first image based at least in part on the spatial shroud;
- translating and/or transforming the spatial shroud based at least in part on a change in a position and/or an orientation of the image sensor;
- determining if the spatial shroud fits the form of the object in a second image in the plurality of images; and
- in response to determining that the spatial shroud fits the form of the object, identifying a position of the object in the environment based at least in part on the spatial shroud.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 15, 16, 19, 20, 21, 22, 23
8. A system comprising:
- an image sensor to acquire a plurality of images of at least a portion of an environment;
- a processor communicably coupled to the image sensor, the processor to:
  - construct a spatial shroud fitting a form of an object in a first image of the plurality of images from the first image;
  - identify the object in the first image based at least in part on the spatial shroud;
  - translate and/or transform the spatial shroud based at least in part on a change in a position and/or an orientation of the image sensor;
  - determine if the spatial shroud fits the form of the object in a second image in the plurality of images; and
  - in response to determining that the spatial shroud fits the form of the object in the second image, identify a position of the object in the environment based at least in part on the spatial shroud.

View Dependent Claims: 9, 10, 11, 12, 13, 14, 17, 18
Specification