Methods and Apparatus for Autonomous Robotic Control
First Claim
1. A system for automatically locating and identifying an object in an environment, the system comprising:
at least one sensor to acquire sensor data representative of at least a portion of the object;
a spatial attention module, operably coupled to the at least one sensor, to produce a foveated representation of the object based at least in part on the sensor data, to track a position of the object within the environment based at least in part on the foveated representation, and to select another portion of the environment to be sensed by the at least one sensor based at least in part on the foveated representation of the object; and
a semantics module, operably coupled to the spatial attention module, to determine an identity of the object based at least in part on the foveated representation of the object.
Abstract
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on “stovepiped,” or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
96 Citations
23 Claims
1. A system for automatically locating and identifying an object in an environment, the system comprising:

at least one sensor to acquire sensor data representative of at least a portion of the object;

a spatial attention module, operably coupled to the at least one sensor, to produce a foveated representation of the object based at least in part on the sensor data, to track a position of the object within the environment based at least in part on the foveated representation, and to select another portion of the environment to be sensed by the at least one sensor based at least in part on the foveated representation of the object; and

a semantics module, operably coupled to the spatial attention module, to determine an identity of the object based at least in part on the foveated representation of the object.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
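The single-sensor pipeline of claim 1 (sensor → spatial attention → semantics, with the attention module feeding back the next region to sense) can be sketched as follows. This is an illustrative approximation, not the patented implementation: the class names are hypothetical, and the brightest-pixel saliency and mean-intensity classifier are toy stand-ins for the claimed foveation and identification steps.

```python
import numpy as np

def foveate(frame, center, size=32):
    """Crop a high-resolution patch (the 'fovea') around `center`."""
    r, c = center
    half = size // 2
    return frame[max(0, r - half):r + half, max(0, c - half):c + half]

class SpatialAttentionModule:
    """Produces a foveated representation, tracks the object's position,
    and selects the next portion of the environment to sense."""
    def __init__(self):
        self.position = None

    def attend(self, frame):
        # Toy saliency: attend to the brightest pixel in the frame.
        self.position = np.unravel_index(np.argmax(frame), frame.shape)
        fovea = foveate(frame, self.position)
        return fovea, self.position

    def next_region(self):
        # Feedback to the sensor: re-sense around the tracked position.
        return self.position

class SemanticsModule:
    """Determines an identity from the foveated representation."""
    def identify(self, fovea):
        # Toy classifier: label the object by its mean intensity.
        return "bright" if fovea.mean() > 0.5 else "dark"

# One perception cycle over a synthetic 64x64 frame.
frame = np.random.rand(64, 64)
attention = SpatialAttentionModule()
semantics = SemanticsModule()
fovea, position = attention.attend(frame)
label = semantics.identify(fovea)
region = attention.next_region()
```

The key structural point is the feedback path: the semantics module consumes only the foveated patch, while the attention module's output also steers the sensor's next acquisition.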
13. A method of automatically locating and identifying an object in an environment, the method comprising:
(A) estimating a position and/or an orientation of at least one sensor with respect to the environment;

(B) acquiring, with the at least one sensor, sensor data representing at least a portion of the object;

(C) producing a foveated representation of the object based at least in part on the sensor data acquired in (B);

(D) determining an identity of the object based at least in part on the foveated representation of the object produced in (C);

(E) selecting another portion of the environment to be sensed by the at least one sensor based at least in part on the foveated representation of the object produced in (C) and the position and/or the orientation estimated in (A); and

(F) acquiring additional sensor data, with the at least one sensor, in response to selection of the other portion of the environment in (E).

View Dependent Claims (14, 15, 16, 17, 18, 19, 20)
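Steps (A)–(F) of claim 13 form a closed active-sensing loop: the region selected in (E) drives the next acquisition in (F). A minimal loop skeleton under toy assumptions follows; the sensor model (a 1-D "scene" of intensities), the peak-based foveation, and the threshold classifier are all hypothetical stand-ins for the claimed steps.

```python
class ToySensor:
    """Hypothetical sensor: a 1-D 'scene' of intensities at a fixed pose."""
    def __init__(self, scene):
        self.scene = scene

    def estimate_pose(self):                    # (A) position and/or orientation
        return (0.0, 0.0)

    def acquire(self, region=None):             # (B)/(F) sense all or part of the scene
        lo, hi = region if region else (0, len(self.scene))
        return {i: self.scene[i] for i in range(lo, min(hi, len(self.scene)))}

def foveate(data):                              # (C) window around the peak sample
    peak = max(data, key=data.get)              # absolute index of strongest return
    return {i: v for i, v in data.items() if abs(i - peak) <= 1}, peak

def identify(fovea):                            # (D) toy identity from intensity
    return "target" if max(fovea.values()) > 0.5 else "background"

def select_region(peak, pose):                  # (E) next region: around the peak
    return (max(0, peak - 1), peak + 2)

def sense_loop(sensor, steps=3):
    region = label = None
    for _ in range(steps):
        pose = sensor.estimate_pose()           # (A)
        data = sensor.acquire(region)           # (B) first pass, (F) thereafter
        fovea, peak = foveate(data)             # (C)
        label = identify(fovea)                 # (D)
        region = select_region(peak, pose)      # (E) feeds back into (F)
    return label, region

label, region = sense_loop(ToySensor([0.1, 0.2, 0.9, 0.3]))
# label == "target"; region == (1, 4)
```

After the first full sweep, each subsequent acquisition is confined to the attended region, which is the efficiency argument the abstract makes for foveated sensing.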
21. A system for automatically locating and identifying an object in an environment, the system comprising:
a plurality of sensors to acquire multi-sensory data representative of the environment;

a plurality of first spatial attention modules, each first spatial attention module in the plurality of first spatial attention modules operably coupled to a respective sensor in the plurality of sensors and configured to produce a respective representation of the object based on a respective portion of the multi-sensory data;

a plurality of first semantics modules, each first semantics module in the plurality of first semantics modules operably coupled to a respective first spatial attention module in the plurality of first spatial attention modules and configured to produce respective identifications of the object based on an output from the respective first spatial attention module;

a second spatial attention module, operably coupled to the plurality of first spatial attention modules, to estimate a location of the object based at least in part on the multi-sensory data representative of the environment; and

a second semantics module, operably coupled to the plurality of first semantics modules and the second spatial attention module, to identify the at least one object based at least in part on the respective identifications produced by the plurality of first semantics modules and the location of the object estimated by the second spatial attention module.

View Dependent Claims (22, 23)
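Claim 21's two-level arrangement (a per-sensor attention/semantics pair for each modality, feeding a second-level pair that fuses across modalities) might be sketched like so. The sensor names, the location-averaging rule, and the majority-vote rule are illustrative assumptions, not the claimed fusion method.

```python
from collections import Counter

def per_sensor_stage(readings):
    """First level: each sensor has its own attention + semantics module,
    yielding a per-sensor location estimate and identification."""
    locations, labels = [], []
    for sensor_id, (location, label) in readings.items():
        locations.append(location)   # first spatial attention module output
        labels.append(label)         # first semantics module output
    return locations, labels

def fuse(locations, labels):
    """Second level: fuse per-sensor estimates into one location and identity."""
    # Second spatial attention module: average the per-sensor locations.
    n = len(locations)
    fused_location = tuple(sum(c) / n for c in zip(*locations))
    # Second semantics module: majority vote over per-sensor identifications.
    fused_label = Counter(labels).most_common(1)[0][0]
    return fused_location, fused_label

# Hypothetical multi-sensory readings: (estimated location, identification).
readings = {
    "camera": ((10.0, 4.0), "pedestrian"),
    "lidar":  ((11.0, 5.0), "pedestrian"),
    "radar":  ((9.0, 3.0), "vehicle"),
}
locations, labels = per_sensor_stage(readings)
location, label = fuse(locations, labels)
# location == (10.0, 4.0); label == "pedestrian"
```

The split mirrors the claim's coupling: the second spatial attention module sees only the first attention modules' outputs, while the second semantics module sees both the per-sensor identifications and the fused location.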
Specification