Systems and Methods for Performing Occlusion Detection
First Claim
1. A mobile robot configured to navigate an operating environment, comprising:
a body containing:
a drive configured to translate the robot in a direction of motion;
a machine vision system comprising a camera that captures images of the operating environment of the mobile robot;
a processor;
memory containing a simultaneous localization and mapping (SLAM) application and a behavioral control application;
wherein the behavioral control application directs the processor to:
capture images using the machine vision system;
detect the presence of an occlusion obstructing a portion of the field of view of the camera based on the captured images; and
generate a notification when an occlusion obstructing the portion of the field of view of the camera is detected; and
wherein the behavioral control application further directs the processor to maintain occlusion detection data describing occluded and unobstructed portions of images being used by the SLAM application.
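The occlusion-detection behavior recited in claim 1 can be sketched as follows. This is a minimal hypothetical illustration, not the patent's disclosed implementation: it assumes the SLAM front end reports the pixel locations of the features it used, partitions the image into a coarse grid, and marks cells that stay featureless for many consecutive frames as occluded. All class names, grid sizes, and thresholds here are invented for illustration.

```python
# Hypothetical sketch of the claimed occlusion detection: track, per grid
# cell of the camera image, how many frames have passed since the SLAM
# application last found a usable feature there. Persistently featureless
# cells are treated as occluded portions of the field of view, and a
# notification is raised once enough of the image is blocked.

GRID_COLS, GRID_ROWS = 4, 3      # coarse partition of the camera image (assumed)
OCCLUDED_AFTER = 20              # frames without features before a cell counts as occluded
NOTIFY_FRACTION = 0.25           # notify when >= 25% of cells are occluded

class OcclusionDetector:
    def __init__(self):
        # frames since each cell last produced a usable SLAM feature
        self.frames_without_features = [[0] * GRID_COLS for _ in range(GRID_ROWS)]

    def update(self, feature_points, img_w, img_h):
        """feature_points: (x, y) pixel locations of features used by SLAM."""
        seen = [[False] * GRID_COLS for _ in range(GRID_ROWS)]
        for x, y in feature_points:
            col = min(int(x * GRID_COLS / img_w), GRID_COLS - 1)
            row = min(int(y * GRID_ROWS / img_h), GRID_ROWS - 1)
            seen[row][col] = True
        for r in range(GRID_ROWS):
            for c in range(GRID_COLS):
                if seen[r][c]:
                    self.frames_without_features[r][c] = 0
                else:
                    self.frames_without_features[r][c] += 1

    def occluded_cells(self):
        """Occlusion detection data: which portions of the image are obstructed."""
        return [(r, c)
                for r in range(GRID_ROWS)
                for c in range(GRID_COLS)
                if self.frames_without_features[r][c] >= OCCLUDED_AFTER]

    def should_notify(self):
        return len(self.occluded_cells()) >= NOTIFY_FRACTION * GRID_COLS * GRID_ROWS
```

Under this sketch, the per-cell counters play the role of the claimed "occlusion detection data describing occluded and unobstructed portions of images," and `should_notify` gates the claimed notification.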
Abstract
The present invention provides a mobile robot configured to navigate an operating environment. The robot captures images of the operating environment using a machine vision system comprising a camera; detects the presence of an occlusion obstructing a portion of the camera's field of view based on the captured images; generates a notification when such an occlusion is detected; and maintains occlusion detection data describing occluded and unobstructed portions of the images being used by a simultaneous localization and mapping (SLAM) application.
20 Claims
1. A mobile robot configured to navigate an operating environment, comprising:
a body containing:
a drive configured to translate the robot in a direction of motion;
a machine vision system comprising a camera that captures images of the operating environment of the mobile robot;
a processor;
memory containing a simultaneous localization and mapping (SLAM) application and a behavioral control application;
wherein the behavioral control application directs the processor to:
capture images using the machine vision system;
detect the presence of an occlusion obstructing a portion of the field of view of the camera based on the captured images; and
generate a notification when an occlusion obstructing the portion of the field of view of the camera is detected; and
wherein the behavioral control application further directs the processor to maintain occlusion detection data describing occluded and unobstructed portions of images being used by the SLAM application.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18)
19. A mobile robot configured to navigate an operating environment, comprising:
a body containing:
a drive mechanism configured to translate the robot in a direction of motion;
an odometry sensor system;
a machine vision system comprising a camera that captures images of the operating environment of the mobile robot; and
at least one processor;
memory containing a behavioral control application and a visual simultaneous localization and mapping (VSLAM) application, a pre-loaded map of the operating environment, and an estimated robot pose;
wherein the behavioral control application directs a processor to:
actuate the drive and capture:
odometry data using the odometry sensor system; and
images using the machine vision system;
determine updated robot poses and update the map of the operating environment by providing at least the captured images and the odometry data as inputs to the VSLAM application;
update occlusion detection data describing portions of images being used by the VSLAM application to determine updated robot poses and update the map of the operating environment;
detect the presence of an occlusion obstructing the camera based on the occlusion detection data; and
provide a notification when the occlusion detection data indicates the presence of the occlusion obstructing the camera;
wherein the VSLAM application directs a processor to:
estimate the location of the mobile robot within the map of the environment based upon images captured by the machine vision sensor system and odometry data captured by the odometry sensor system; and
update the map based upon the images captured by the machine vision sensor system.
- View Dependent Claims (20)
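Claim 19's behavioral loop feeds odometry data to the VSLAM application alongside the captured images. As a hedged illustration of only the odometry half of that loop (the image-based correction and map update are omitted), a VSLAM front end commonly dead-reckons the previous estimated robot pose forward before correcting it against the captured image. The function below is an invented sketch of that propagation step, not the patent's disclosed method.

```python
import math

# Hypothetical dead-reckoning step: advance a 2-D robot pose (x, y, heading)
# by one odometry reading, expressed as forward travel and heading change.
def propagate_pose(x, y, theta, d_forward, d_theta):
    theta = theta + d_theta
    return (x + d_forward * math.cos(theta),
            y + d_forward * math.sin(theta),
            theta)

# Example: drive 1 m forward, turn left 90 degrees, drive 2 m forward.
pose = (0.0, 0.0, 0.0)
for d_forward, d_theta in [(1.0, 0.0), (1.0, math.pi / 2), (1.0, 0.0)]:
    pose = propagate_pose(*pose, d_forward, d_theta)
# pose is now approximately (1.0, 2.0, pi/2)
```

In a full loop matching the claim, this predicted pose would be handed to the VSLAM application together with the captured image, and the resulting feature usage would refresh the occlusion detection data before the notification check.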
Specification