
Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects

  • US 9,298,266 B2
  • Filed: 08/12/2013
  • Issued: 03/29/2016
  • Est. Priority Date: 04/02/2013
  • Status: Active Grant
First Claim

1. A method of rendering a user interface on a computing device, comprising:

  • rendering an initial user interface comprising a set of interface objects using a computing device, where each interface object in the set of interface objects includes a graphical element that is rendered when the interface object is rendered for display and a target zone within the user interface;

  • detecting a targeting 3D gesture in captured image data that identifies a targeted interface object within the user interface using the computing device by:

      identifying a 3D interaction zone within the captured image data that maps to the user interface;

      determining the location of at least a portion of a human hand within the 3D interaction zone;

      identifying a first pose of the at least a portion of a human hand corresponding to a targeting 3D gesture;

      mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;

      determining that the mapped location within the user interface falls within the target zone of a specific interface object within the user interface; and

      identifying the specific interface object as the targeted interface object in response to an identification of the first pose as a targeting gesture and a determination that the mapped location of the at least a portion of the human hand falls within the target zone of the specific interface object;

  • enabling a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture using the computing device, wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;

  • changing the rendering of at least the targeted interface object within the user interface in response to the targeting 3D gesture that targets the interface object using the computing device;

  • detecting an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object in additional captured image data that identifies a specific interaction from the set of permitted interactions with the targeted interface object using the computing device, where the detection of the interaction 3D gesture comprises:

      tracking the motion of at least a portion of a human hand within the 3D interaction zone in the additional captured image data;

      identifying a change in pose of at least a portion of a human hand within the 3D interaction zone from the first pose to a second pose during the motion of the at least a portion of the human hand, irrespective of the location of the at least a portion of a human hand within the 3D interaction zone during the motion; and

      identifying the second pose of the at least a portion of the human hand as corresponding to one of the interaction 3D gestures from the set of one or more interaction gestures to control the targeted interface object, the identifying of the second pose being independent of a mapping between the user interface and a location of the human hand in the 3D interaction zone;

  • modifying the user interface in response to the specific interaction with the targeted interface object identified by the detected interaction 3D gesture using the computing device; and

  • rendering the modified user interface using the computing device.
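
The claim's targeting step amounts to a coordinate-mapping and hit-testing pipeline: a hand position inside a bounded 3D interaction zone is normalized and projected onto the 2D user interface, then tested against each object's target zone. A minimal sketch of that step follows, assuming a hand tracker that already reports a pose label and a position; every name here (InteractionZone, find_targeted_object, and so on) is a hypothetical illustration, not the patent's implementation.

```python
# Sketch of the targeting step: map a hand location in a 3D interaction
# zone onto the UI and hit-test each interface object's target zone.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class InteractionZone:
    # Axis-aligned bounds of the tracked volume (hypothetical units).
    x_min: float
    x_max: float
    y_min: float
    y_max: float

@dataclass
class InterfaceObject:
    name: str
    # Target zone in UI pixels: (left, top, right, bottom). Per the claim,
    # this zone is distinct from the rendered graphical element, so it may
    # be larger than the element to make targeting easier.
    target_zone: Tuple[float, float, float, float]

def map_to_ui(hand_xy, zone: InteractionZone, ui_size) -> Tuple[float, float]:
    """Normalize a hand position within the zone, then scale to UI pixels."""
    u = (hand_xy[0] - zone.x_min) / (zone.x_max - zone.x_min)
    v = (hand_xy[1] - zone.y_min) / (zone.y_max - zone.y_min)
    return u * ui_size[0], v * ui_size[1]

def find_targeted_object(hand_xy, pose, zone, objects, ui_size) -> Optional[InterfaceObject]:
    # The claim requires both conditions: a targeting pose AND a mapped
    # location that falls within some object's target zone.
    if pose != "targeting":
        return None
    x, y = map_to_ui(hand_xy, zone, ui_size)
    for obj in objects:
        left, top, right, bottom = obj.target_zone
        if left <= x <= right and top <= y <= bottom:
            return obj
    return None

zone = InteractionZone(-0.3, 0.3, -0.2, 0.2)
objects = [InterfaceObject("play_button", (100, 100, 260, 180))]
hit = find_targeted_object((-0.1, -0.07), "targeting", zone, objects, (640, 480))
print(hit.name if hit else "no target")  # play_button
```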
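The enabling and re-rendering steps can be read as a per-object lookup of permitted interactions plus a visual state change on the targeted object. A sketch under the same assumptions, with hypothetical gesture names and object kinds:

```python
# Sketch of the "enable and re-render" steps: once an object is targeted,
# only the gestures tied to its permitted interactions become active, and
# its graphical element changes to signal the targeted state.
PERMITTED_INTERACTIONS = {
    # object kind -> {permitted interaction: interaction 3D gesture (pose)}
    "slider":      {"drag": "pinch", "reset": "open_palm"},
    "play_button": {"activate": "pinch"},
}

def enable_interaction_gestures(obj_kind: str) -> dict:
    """Return the gesture set enabled for the targeted object; gestures
    for interactions the object does not permit stay disabled."""
    return PERMITTED_INTERACTIONS.get(obj_kind, {})

def render(obj_kind: str, targeted: bool) -> str:
    # Stand-in for changing the targeted object's rendering,
    # e.g. enlarging or highlighting its graphical element.
    return f"<{obj_kind} highlighted>" if targeted else f"<{obj_kind}>"

enabled = enable_interaction_gestures("play_button")
print(enabled)                       # {'activate': 'pinch'}
print(render("play_button", True))   # <play_button highlighted>
```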
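The interaction step differs from targeting in that it keys off a change of hand pose, not a mapped location: the claim states the second pose is matched to an enabled gesture irrespective of where the hand sits in the interaction zone. A sketch of that distinction, with hypothetical frame records standing in for the additional captured image data:

```python
# Sketch of the interaction step: track the hand across frames and watch
# for a change from the first (targeting) pose to a second pose that
# matches an enabled gesture, ignoring the hand's location entirely.
from typing import Iterable, Optional

def detect_interaction(frames: Iterable[dict],
                       first_pose: str,
                       enabled: dict) -> Optional[str]:
    """Return the permitted interaction whose gesture the hand changes
    into, independent of any hand-to-UI location mapping."""
    gesture_to_interaction = {g: i for i, g in enabled.items()}
    for frame in frames:          # tracking the hand through each frame
        pose = frame["pose"]      # e.g. output of a hand-pose classifier
        if pose != first_pose and pose in gesture_to_interaction:
            return gesture_to_interaction[pose]
    return None

frames = [{"pose": "targeting", "xy": (0.1, 0.0)},
          {"pose": "targeting", "xy": (0.2, 0.1)},
          {"pose": "pinch",     "xy": (0.2, 0.1)}]  # location is ignored
interaction = detect_interaction(frames, "targeting", {"activate": "pinch"})
print(interaction)  # activate -> the UI is then modified and re-rendered
```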
