Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
First Claim
1. A method of rendering a user interface on a computing device, comprising:
rendering an initial user interface comprising a set of interface objects using a computing device, where each interface object in the set of interface objects includes a graphical element that is rendered when the interface object is rendered for display and a target zone within the user interface;
detecting a targeting 3D gesture in captured image data that identifies a targeted interface object within the user interface using the computing device by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone;
identifying a first pose of the at least a portion of a human hand corresponding to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of a specific interface object within the user interface; and
identifying the specific interface object as the targeted interface object in response to an identification of the first pose as a targeting gesture and a determination that the mapped location of at least a portion of the human hand falls within the target zone of the specific interface object;
enabling a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture using the computing device wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
changing the rendering of at least the targeted interface object within the user interface in response to the targeting 3D gesture that targets the interface object using the computing device;
detecting an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object in additional captured image data that identifies a specific interaction from the set of permitted interactions with the targeted interface object using the computing device, where the detection of the interaction 3D gesture comprises:
tracking the motion of at least a portion of a human hand within the 3D interaction zone in the additional captured image data;
identifying a change in pose of at least a portion of a human hand within the 3D interaction zone from the first pose to a second pose during the motion of the at least a portion of the human hand irrespective of the location of the at least a portion of a human hand within the 3D interaction zone during the motion; and
identifying the second pose of the at least a portion of the human hand as corresponding to one of the 3D interaction gestures from the set of one or more interaction gestures to control the targeted interface object, the identifying the second pose being independent of a mapping between the user interface and a location of the human hand in the 3D interaction zone;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the detected interaction 3D gesture using the computing device; and
rendering the modified user interface using the computing device.
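The targeting steps recited in claim 1, mapping a hand location inside the 3D interaction zone to a point in the user interface and hit-testing that point against each object's target zone, can be sketched as follows. This is an illustrative Python sketch only, not the patented implementation; the `InterfaceObject` and `InteractionZone` types, the `"point"` targeting pose, and all parameter names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class InterfaceObject:
    name: str
    target_zone: tuple  # (x_min, y_min, x_max, y_max) in UI pixel coordinates

@dataclass
class InteractionZone:
    # Extent of the 3D interaction zone in camera/world coordinates (meters).
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def map_to_ui(hand_xyz, zone, ui_size):
    """Map a 3D hand location inside the interaction zone to a 2D UI point."""
    x, y, _z = hand_xyz
    u = (x - zone.x_min) / (zone.x_max - zone.x_min) * ui_size[0]
    v = (y - zone.y_min) / (zone.y_max - zone.y_min) * ui_size[1]
    return (u, v)

def find_targeted_object(hand_xyz, pose, zone, ui_size, objects):
    """Return the interface object whose target zone contains the mapped
    hand location, but only when the detected pose is the targeting pose."""
    if pose != "point":  # targeting pose assumed to be a pointing hand
        return None
    u, v = map_to_ui(hand_xyz, zone, ui_size)
    for obj in objects:
        x0, y0, x1, y1 = obj.target_zone
        if x0 <= u <= x1 and y0 <= v <= y1:
            return obj
    return None
```

Note that targeting requires both conditions at once, matching the claim: the first pose must be identified as the targeting gesture, and the mapped location must fall within a target zone.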
Abstract
Systems and methods in accordance with embodiments of the invention implement three-dimensional (3D) gesture based graphical user interfaces (GUI) using gesture reactive interface objects. One embodiment includes using a computing device to render an initial user interface comprising a set of interface objects, detect a targeting 3D gesture in captured image data that identifies a targeted interface object within the user interface, change the rendering of at least the targeted interface object within the user interface in response to the targeting 3D gesture that targets the interface object, detect an interaction 3D gesture in additional captured image data that identifies a specific interaction with a targeted interface object, modify the user interface in response to the interaction with the targeted interface object identified by the interaction 3D gesture, and render the modified user interface.
21 Claims
1. A method of rendering a user interface on a computing device (claim 1, set forth in full under First Claim above).
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
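The location-independent interaction step of claim 1, in which a change from the first pose to a second pose is recognized no matter where the hand moves inside the interaction zone, can be sketched as follows. This is an illustrative Python sketch; the pose names and the gesture table are assumptions, not taken from the patent.

```python
# Interactions enabled for the targeted object: second pose -> action name.
# A real system would build this table from the object's permitted interactions.
ENABLED_GESTURES = {"fist": "select", "pinch": "grab"}

def detect_interaction(frames, first_pose="point"):
    """Scan per-frame (pose, location) observations and return the action for
    the first pose change away from the targeting pose that matches an enabled
    interaction gesture. Location is deliberately ignored, matching the claim's
    'irrespective of the location' limitation."""
    seen_first = False
    for pose, _location in frames:  # _location unused: pose-only detection
        if pose == first_pose:
            seen_first = True
        elif seen_first and pose in ENABLED_GESTURES:
            return ENABLED_GESTURES[pose]
    return None
```

For example, a hand that points, drifts across the zone, and then closes into a fist yields `"select"` regardless of where the fist forms.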
20. A method of rendering a user interface on a real-time gesture based interactive system comprising an image capture system including at least two cameras, an image processing system and a display device, the method comprising:
rendering an initial user interface comprising a set of interface objects using the image processing system, where each interface object comprises:
a graphical element that is rendered when the interface object is rendered for display;
a target zone that defines at least one region in the user interface in which a targeting three-dimensional (3D) gesture targets the interface object; and
a description of a set of permitted interactions;
displaying the rendered user interface using the display device;
capturing image data using the image capture system;
detecting an input via a 3D gesture input modality from the captured image data using the image processing system;
changing the manner in which the initial user interface is rendered in response to detection of an input via a 3D gesture input modality using the image processing system;
displaying the rendered user interface using the display device;
detecting a targeting 3D gesture that targets a targeted interface object within the user interface using the image processing system by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone from the captured image data;
identifying a first pose of the at least a portion of a human hand within the target zone that corresponds to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of a specific interface object in the user interface; and
identifying the specific interface object as the targeted interface object in response to an identification of the first pose as the targeting gesture and a determination that the mapped location of the at least a portion of the human hand falls within the target zone of the specific interface object in the user interface;
changing the rendering of at least the targeted interface object within the user interface in response to the 3D gesture targeting the interface object using the image processing system;
displaying the user interface via the display device;
capturing additional image data using the image capture system;
determining that the targeting 3D gesture targets the targeted interface object for a predetermined period of time, where the determination considers the targeting 3D gesture to be targeting the targeted interface object during any period of time in which the targeting 3D gesture does not target the targeted interface object that is less than a hysteresis threshold;
enabling a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture using the image processing system, wherein each of the one or more interaction gestures is associated with a permitted interaction in the set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
displaying an interaction element indicating the time remaining to interact with the targeted interface object in response to a determination that the targeting 3D gesture has targeted the interface object for a predetermined period of time using the image processing system;
detecting an interaction 3D gesture from the set of one or more interaction gestures in additional captured image data within a predetermined time period from the detection of the targeting 3D gesture input, where the interaction 3D gesture identifies a specific interaction with the targeted interface object using the image processing system and is detected by:
tracking the motion of at least a portion of a human hand within the 3D interaction zone;
identifying a change in pose of at least a portion of a human hand within the 3D interaction zone from the first pose to a second pose during the motion of the at least a portion of the human hand, where the second pose corresponds to an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object irrespective of the location of the at least a portion of a human hand within the 3D interaction zone during the motion; and
identifying the second pose of the at least a portion of the human hand as corresponding to one of the 3D interaction gestures from the set of one or more interaction gestures to control the targeted interface object, the identifying the second pose being independent of a mapping between the user interface and a location of the human hand in the 3D interaction zone;
verifying that the interaction gesture is associated with a specific interaction within the set of permitted interactions for the interface object using the image processing system;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the interaction 3D gesture using the image processing system;
rendering the modified user interface using the image processing system; and
displaying the rendered user interface using the display device.
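Claim 20's dwell determination with a hysteresis threshold, under which brief lapses in targeting shorter than the threshold do not break the dwell period, can be sketched as follows. This is an illustrative Python sketch; the sampling scheme and parameter names are assumptions.

```python
def dwell_confirmed(samples, dwell_time, hysteresis):
    """Given (timestamp, on_target) samples in time order, decide whether the
    object has been continuously targeted for `dwell_time` seconds, treating
    any off-target gap shorter than `hysteresis` seconds as still targeting
    (the claim's hysteresis threshold)."""
    start = None      # when the current run of targeting began
    gap_start = None  # when the current off-target gap began
    for t, on_target in samples:
        if on_target:
            if start is None:
                start = t
            gap_start = None          # gap ended before the threshold: ignore it
            if t - start >= dwell_time:
                return True           # dwell period satisfied
        else:
            if gap_start is None:
                gap_start = t
            if start is not None and t - gap_start >= hysteresis:
                start = None          # gap too long: targeting is broken
                gap_start = None
    return False
```

Once this returns true, the claim recites displaying an interaction element that indicates the time remaining to perform an interaction gesture; that countdown could reuse the same timestamps.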
21. A real-time gesture based interactive system configured to display a user interface and receive three-dimensional (3D) gesture based input, comprising:
a processor;
an image capture system configured to capture image data and provide the captured image data to the processor;
memory containing:
an operating system;
an interactive application; and
a 3D gesture tracking application;
wherein the interactive application and the operating system configure the processor to:
generate and render an initial user interface comprising a set of interface objects, where each interface object includes a graphical element that is rendered when the interface object is rendered for display and a target zone that defines at least one region in the user interface in which the interface object is to be targeted; and
modify an initial user interface in response to a detected interaction with a targeted interface object and render an updated user interface; and
wherein the 3D gesture tracking application and the operating system configure the processor to:
capture image data using the image capture system;
detect a targeting 3D gesture in captured image data that identifies a targeted interface object within a user interface by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone;
identifying a first pose of the at least a portion of a human hand that corresponds to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within a target zone of a specific interface object; and
identifying the specific interface object as the targeted interface object in response to an identification of the first pose as a targeting gesture and a determination that the mapped location of at least a portion of the human hand falls within the target zone of the specific interface object;
enable a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture, wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
change the rendering of at least the targeted interface object within a user interface in response to detection of a targeting 3D gesture that targets the interface object;
detect an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object in additional captured image data that identifies a specific interaction with a targeted interface object, where the detection of the interaction 3D gesture comprises:
tracking the motion of at least a portion of a human hand within the 3D interaction zone in the additional captured image data;
identifying a change in pose of at least a portion of a human hand within the 3D interaction zone from the first pose to a second pose during the motion of the at least a portion of the human hand irrespective of the location of the at least a portion of a human hand within the 3D interaction zone during the motion; and
identifying the second pose of the at least a portion of the human hand as corresponding to one of the 3D interaction gestures from the set of one or more interaction gestures to control the targeted interface object, the identifying the second pose being independent of a mapping between the user interface and a location of the human hand in the 3D interaction zone; and
provide events corresponding to specific interactions with targeted interface objects to the interactive application.
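Claim 21's division of labor, in which a 3D gesture tracking application turns recognized interactions into events and provides them to a separate interactive application that owns the interface objects, can be sketched as follows. This is an illustrative Python sketch using a simple event queue; the class names, event fields, and method names are assumptions, not from the patent.

```python
import queue

class GestureTracker:
    """Stands in for the 3D gesture tracking application: it does not modify
    the UI itself, it only emits interaction events."""
    def __init__(self, event_queue):
        self.events = event_queue

    def on_interaction(self, object_name, action):
        # Called by the recognition pipeline when an interaction 3D gesture is
        # matched against the targeted object's permitted interactions.
        self.events.put({"object": object_name, "action": action})

class InteractiveApplication:
    """Stands in for the interactive application: it consumes events and
    updates its UI model, then would re-render."""
    def __init__(self, event_queue):
        self.events = event_queue
        self.log = []  # stand-in for UI state changes

    def pump(self):
        # Drain pending interaction events and apply them to the UI model.
        while True:
            try:
                ev = self.events.get_nowait()
            except queue.Empty:
                return
            self.log.append(f"{ev['action']}:{ev['object']}")
```

Decoupling the tracker from the application through events mirrors the claim's structure: the tracking application detects gestures, while only the interactive application modifies and re-renders the user interface.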
Specification