
Visual-inertial positional awareness for autonomous and non-autonomous mapping

  • US 10,402,663 B1
  • Filed: 08/29/2016
  • Issued: 09/03/2019
  • Est. Priority Date: 08/29/2016
  • Status: Active Grant
First Claim

1. A system including:

  • a mobile platform having disposed thereon:

    at least one camera;

    a multi-axis inertial measuring unit (IMU); and

    an interface to a host including one or more processors coupled to memory, the memory loaded with computer instructions to guide the mobile platform that includes the camera with distance calculation and the multi-axis inertial measuring unit (IMU), the instructions, when executed on the processors, implement actions comprising:

    receiving sets of image data including feature points and pose information, the pose information including a location of the mobile device and view of the camera that captured the image data, the sets referred to collectively as keyrigs;

    reviewing the keyrigs to select keyrig content to include in a point cloud of features, based upon comparisons of keyrig content with content of other selected keyrigs subject to one or more intelligent thresholds;

    for selected keyrigs, (a) triangulating new feature points in the keyrig using feature points of keyrigs previously added to the point cloud of features to obtain feature points in a coordinate system of the device, and (b) aligning coordinates of the feature points in the point cloud of features to a coordinate system having a z-axis aligned with gravity;

    creating a multilayered hybrid point grid from the feature points selected for the point cloud of features, using at least one layer of a multilayered 2D occupancy grid, including:

    initializing a 2D occupancy grid corresponding to one selected from a plurality of x-y layers covering the feature points in the point cloud of features;

    populating at least one layer of the occupancy grid with points from the point cloud of features within a height range using ray tracing from an observed location of a point in the keyrig aligned to a corresponding point in the occupancy grid and a location of a corresponding point reprojected on the layer of the occupancy grid; and

    finding cells along a ray between the aligned observed point and the corresponding point reprojected on the layer and marking the found cells as empty; and

    responsive to receiving a command to travel to a location, using the occupancy grid to plan a path of travel to a location commanded and contemporaneously using the descriptive point cloud while traveling the planned path to avoid colliding with obstructions.
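
The gravity-alignment step recited above (aligning feature-point coordinates to a coordinate system whose z-axis is aligned with gravity) is commonly realized as a single rotation estimated from the IMU accelerometer. The following is a minimal sketch of that idea, not the patented implementation; the function names and the use of a time-averaged accelerometer reading taken while the platform is near-static are assumptions.

    import numpy as np

    def gravity_alignment_rotation(accel_mean):
        # accel_mean: assumed 3-vector, the IMU accelerometer reading averaged
        # while the platform is roughly static, in the device coordinate system.
        g = accel_mean / np.linalg.norm(accel_mean)   # unit gravity direction
        z = np.array([0.0, 0.0, 1.0])                 # target z-axis
        c = float(np.dot(g, z))                       # cosine of the rotation angle
        if np.isclose(c, -1.0):                       # g opposite to z: 180-degree flip
            return np.diag([1.0, -1.0, -1.0])
        v = np.cross(g, z)                            # rotation axis (unnormalized)
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        # Rodrigues-style closed form for the rotation taking g onto z
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    def align_points_to_gravity(points, accel_mean):
        # points: (N, 3) array of feature points in device coordinates.
        R = gravity_alignment_rotation(accel_mean)
        return points @ R.T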
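
The occupancy-grid population and free-space marking steps (tracing a ray from the observed camera location to each reprojected feature point and marking traversed cells empty) can be sketched as follows. The grid layout, the -1/0/1 cell encoding, and the use of Bresenham line traversal are assumptions made for illustration; the claim itself does not specify them, and the sketch assumes the grid covers all projected points.

    import numpy as np

    def bresenham(a, b):
        # Integer cells along the segment from cell a to cell b, inclusive.
        (x0, y0), (x1, y1) = a, b
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
        err, cells = dx + dy, []
        while True:
            cells.append((x0, y0))
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return cells

    def populate_layer(grid, origin_xy, resolution, camera_xy, points_xyz, z_min, z_max):
        # grid: 2D numpy int array for one x-y layer; -1 unknown, 0 empty, 1 occupied.
        # camera_xy: observed (x, y) location of the keyrig camera, gravity-aligned.
        # points_xyz: (N, 3) feature points from the point cloud, gravity-aligned.
        # z_min, z_max: height range selecting the points that belong to this layer.
        def to_cell(x, y):
            return (int((x - origin_xy[0]) / resolution),
                    int((y - origin_xy[1]) / resolution))

        cam_cell = to_cell(*camera_xy)
        for p in points_xyz:
            if not (z_min <= p[2] < z_max):
                continue                          # point lies outside this layer
            pt_cell = to_cell(p[0], p[1])
            # Cells crossed between the observed camera cell and the reprojected
            # point cell were seen through, so mark them as empty ...
            for cell in bresenham(cam_cell, pt_cell)[:-1]:
                grid[cell] = 0
            # ... and the endpoint cell holds the reprojected feature point.
            grid[pt_cell] = 1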
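
Finally, the planning step (using the occupancy grid to plan a path to a commanded location) can be illustrated with any grid search; a breadth-first search over empty cells is the simplest. This sketch is illustrative only, and the claim's contemporaneous obstacle avoidance against the descriptive point cloud is reduced here to a hypothetical is_blocked callback and replanning, which is an assumption.

    from collections import deque

    def plan_path(grid, start, goal):
        # Breadth-first search over a 2D occupancy layer (0 = empty cells only).
        # Returns a list of (row, col) cells from start to goal, or None.
        rows, cols = len(grid), len(grid[0])
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                    came_from[nxt] = cell
                    frontier.append(nxt)
        return None

    def travel(grid, start, goal, is_blocked):
        # is_blocked(cell): hypothetical callback that checks the current point
        # cloud for an obstruction at a cell while traveling; replan if it fires.
        path = plan_path(grid, start, goal)
        while path:
            for i, cell in enumerate(path):
                if is_blocked(cell):
                    grid[cell[0]][cell[1]] = 1    # record the obstruction
                    path = plan_path(grid, path[max(i - 1, 0)], goal)
                    break
            else:
                return True                       # reached the goal
        return False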
