METHODS AND APPARATUS FOR AUTONOMOUS ROBOTIC CONTROL
First Claim
1. A system comprising:
an image sensor to acquire a plurality of images of at least a portion of an environment surrounding a robot; and
a processor, operably coupled to the image sensor, to:
translate each image in the plurality of images from a frame of reference of the image sensor to an allocentric frame of reference;
identify a position, in the allocentric frame of reference, of an object appearing in at least one image in the plurality of images; and
determine if the object appears in at least one other image in the plurality of images based on the position, in the allocentric frame of reference, of the object.
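The translation recited in the claim, from the image sensor's egocentric frame to a fixed allocentric (world) frame, can be illustrated with a minimal 2-D sketch. This assumes the robot's pose in the world frame is known; the function name, the `(x, y, theta)` pose representation, and the 2-D simplification are illustrative assumptions, not details from the patent.

```python
import numpy as np

def sensor_to_allocentric(p_sensor, robot_pose):
    """Map a point from the sensor's frame into a fixed world frame.

    p_sensor:   (x, y) of a detected object in the sensor frame.
    robot_pose: (x, y, theta) of the sensor in the world frame
                (theta in radians). Both are illustrative choices.
    """
    x, y, theta = robot_pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])                 # rotation: sensor -> world
    return R @ np.asarray(p_sensor) + np.array([x, y])

# A robot at the origin facing +y sees an object 1 m ahead along its
# sensor's x-axis; in the world frame that object sits at (0, 1).
print(sensor_to_allocentric((1.0, 0.0), (0.0, 0.0, np.pi / 2)))
```

Because the resulting coordinates are independent of where the robot was when each image was taken, positions computed from different images can be compared directly, which is what makes the claim's final determining step possible.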
Abstract
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on “stovepiped,” or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
16 Claims
1. A system comprising:
an image sensor to acquire a plurality of images of at least a portion of an environment surrounding a robot; and
a processor, operably coupled to the image sensor, to:
translate each image in the plurality of images from a frame of reference of the image sensor to an allocentric frame of reference;
identify a position, in the allocentric frame of reference, of an object appearing in at least one image in the plurality of images; and
determine if the object appears in at least one other image in the plurality of images based on the position, in the allocentric frame of reference, of the object.
(Dependent claims: 2, 3, 4, 5, 6)
7. A method of locating an object with respect to a robot, the method comprising:
(A) acquiring, with an image sensor coupled to the robot, a plurality of images of at least a portion of an environment surrounding the robot;
(B) automatically translating each image in the plurality of images from a frame of reference of the image sensor to an allocentric frame of reference;
(C) identifying a position, in the allocentric frame of reference, of an object appearing in at least one image in the plurality of images; and
(D) determining if the object appears in at least one other image in the plurality of images based on the position, in the allocentric frame of reference, of the object.
(Dependent claims: 8, 9, 10, 11, 12, 13, 14, 15, 16)
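Step (D), determining whether the object appears in another image, reduces to comparing allocentric positions once step (B) has been applied. The sketch below assumes a simple nearest-position test with a distance threshold; the function names, the 2-D positions, and the 0.5 m tolerance are illustrative assumptions, not anything specified by the claims.

```python
import numpy as np

def same_object(pos_a, pos_b, tol=0.5):
    """Treat two allocentric positions as the same object if they lie
    within tol of each other (an assumed, illustrative criterion)."""
    return np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b)) <= tol

def appears_elsewhere(positions, idx, tol=0.5):
    """Step (D): does the object detected at positions[idx] (one per
    image, already in the allocentric frame) appear in another image?"""
    target = positions[idx]
    return any(same_object(target, p, tol)
               for i, p in enumerate(positions) if i != idx)

# One detection per image, all expressed in the allocentric frame:
detections = [(2.0, 1.0), (2.1, 0.9), (5.0, 5.0)]
print(appears_elsewhere(detections, 0))  # nearby match in image 1
print(appears_elsewhere(detections, 2))  # no match elsewhere
```

The point of working in the allocentric frame is that this comparison needs no knowledge of how the robot moved between images.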
Specification