Deep machine learning methods and apparatus for robotic grasping

  • US 9,914,213 B2
  • Filed: 03/02/2017
  • Issued: 03/13/2018
  • Est. Priority Date: 03/03/2016
  • Status: Active Grant
First Claim
1. A method implemented by one or more processors, comprising:

  • generating a candidate end effector motion vector defining motion to move a grasping end effector of a robot from a current pose to an additional pose;

  • identifying a current image captured by a vision sensor associated with the robot, the current image capturing the grasping end effector and at least one object in an environment of the robot;

  • applying the current image and the candidate end effector motion vector as input to a trained grasp convolutional neural network;

  • generating, over the trained grasp convolutional neural network, a measure of successful grasp of the object with application of the motion, the measure being generated based on the application of the image and the end effector motion vector to the trained grasp convolutional neural network;

  • identifying a desired object semantic feature;

  • applying, as input to a semantic convolutional neural network, a spatial transformation of the current image or of an additional image captured by the vision sensor;

  • generating, over the semantic convolutional neural network based on the spatial transformation, an additional measure that indicates whether the desired object semantic feature is present in the spatial transformation;

  • generating an end effector command based on the measure of successful grasp and the additional measure that indicates whether the desired object semantic feature is present; and

  • providing the end effector command to one or more actuators of the robot.
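The claimed method can be read as a scoring loop over candidate motions: each candidate is scored by a grasp network, a spatially transformed patch is scored by a semantic network, and the two measures are combined to choose a command. The sketch below illustrates that control flow only; the network functions, the spatial transform, the multiplicative combination, and all names are hypothetical stand-ins, not the patent's actual models or code.

```python
import numpy as np

def grasp_cnn(image, motion_vector):
    # Hypothetical stand-in for the trained grasp CNN: maps the current
    # image plus a candidate motion vector to a grasp-success measure
    # in (0, 1). A real system would use a learned network.
    return 1.0 / (1.0 + np.exp(-(image.mean() + motion_vector.sum())))

def semantic_cnn(patch):
    # Hypothetical stand-in for the semantic CNN: a measure in (0, 1)
    # that the desired object semantic feature is present in the patch.
    return 1.0 / (1.0 + np.exp(-patch.mean()))

def spatial_transform(image, motion_vector, size=8):
    # Hypothetical stand-in for the spatial transformation: here just a
    # fixed crop. A real spatial transformer would use the motion vector
    # to select the image region the candidate grasp targets.
    return image[:size, :size]

def select_end_effector_command(image, candidate_motions):
    """Score candidate motions per the claim's steps and pick a command."""
    best = None
    for motion in candidate_motions:
        grasp_measure = grasp_cnn(image, motion)       # grasp-success measure
        patch = spatial_transform(image, motion)       # spatial transformation
        semantic_measure = semantic_cnn(patch)         # semantic-feature measure
        combined = grasp_measure * semantic_measure    # combine both measures
        if best is None or combined > best[0]:
            best = (combined, motion)
    combined, motion = best
    # The returned motion stands in for the end effector command that
    # would be provided to the robot's actuators.
    return motion, combined
```

In a real system the combined score would typically be compared against a threshold to decide between executing the grasp and sampling new candidates; that decision step is omitted here for brevity.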

  • 2 Assignments