Systems and Methods for Implementing Three-Dimensional (3D) Gesture Based Graphical User Interfaces (GUI) that Incorporate Gesture Reactive Interface Objects
First Claim
1. A method of rendering a user interface on a computing device, comprising:
rendering an initial user interface comprising a set of interface objects using a computing device, where each interface object in the set of interface objects includes a graphical element that is rendered when the interface object is rendered for display and a target zone;
detecting a targeting 3D gesture in captured image data that identifies a targeted interface object within the user interface using the computing device by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone;
identifying a pose of the at least a portion of a human hand corresponding to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of a specific interface object; and
detecting the occurrence of a targeting 3D gesture targeting the specific interface object;
enabling a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture using the computing device, wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
changing the rendering of at least the targeted interface object within the user interface in response to the targeting 3D gesture that targets the interface object using the computing device;
detecting an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object in additional captured image data that identifies a specific interaction from the set of permitted interactions with the targeted interface object using the computing device, where the detection of the interaction 3D gesture comprises:
tracking the motion of at least a portion of a human hand within the 3D interaction zone; and
determining that the pose of at least a portion of a human hand within the 3D interaction zone has changed and corresponds to an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object irrespective of the location of the at least a portion of a human hand within the 3D interaction zone;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the detected interaction 3D gesture using the computing device; and
rendering the modified user interface using the computing device.
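The targeting steps recited above (identify a 3D interaction zone, locate the hand, map that location to user-interface coordinates, and test whether the mapped point falls in an interface object's target zone) can be sketched in code. The following Python is one illustrative reading of the claim, not the patent's implementation; every name and data structure here is invented for the example, and the claim does not prescribe a linear mapping.

```python
# Hypothetical sketch of the claim's targeting step: map a tracked hand
# position inside a 3D interaction zone onto 2D user-interface coordinates,
# then hit-test that point against each interface object's target zone.

from dataclasses import dataclass

@dataclass
class InterfaceObject:
    name: str
    target_zone: tuple  # rectangle (x, y, width, height) in UI pixels

def map_to_ui(hand_xyz, zone_min, zone_max, ui_size):
    """Linearly map the hand's (x, y) within the 3D interaction zone to a
    pixel location in a ui_size = (width, height) interface."""
    x = (hand_xyz[0] - zone_min[0]) / (zone_max[0] - zone_min[0]) * ui_size[0]
    y = (hand_xyz[1] - zone_min[1]) / (zone_max[1] - zone_min[1]) * ui_size[1]
    return (x, y)

def find_targeted(objects, point):
    """Return the first interface object whose target zone contains point,
    or None when the mapped location targets nothing."""
    px, py = point
    for obj in objects:
        x, y, w, h = obj.target_zone
        if x <= px <= x + w and y <= py <= y + h:
            return obj
    return None

objects = [InterfaceObject("icon", (100, 100, 50, 50))]
point = map_to_ui((0.5, 0.25, 0.3), zone_min=(0, 0, 0), zone_max=(1, 1, 1),
                  ui_size=(240, 480))   # -> (120.0, 120.0)
targeted = find_targeted(objects, point)  # inside the icon's target zone
```

In this reading, detecting the targeting 3D gesture is the conjunction of a pose check (not modeled here) and a non-`None` result from the hit test.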
5 Assignments
0 Petitions
Abstract
Systems and methods in accordance with embodiments of the invention implement three-dimensional (3D) gesture based graphical user interfaces (GUI) using gesture reactive interface objects. One embodiment includes using a computing device to render an initial user interface comprising a set of interface objects, detect a targeting 3D gesture in captured image data that identifies a targeted interface object within the user interface, change the rendering of at least the targeted interface object within the user interface in response to the targeting 3D gesture that targets the interface object, detect an interaction 3D gesture in additional captured image data that identifies a specific interaction with a targeted interface object, modify the user interface in response to the interaction with the targeted interface object identified by the interaction 3D gesture, and render the modified user interface.
32 Citations
30 Claims
1. A method of rendering a user interface on a computing device (set forth in full above as the First Claim).
- View Dependent Claims (2, 3, 6, 7, 8, 9, 11, 12, 15, 21, 22, 23, 24, 25, 26, 27, 28)
4-5. (canceled)

10. (canceled)

13-14. (canceled)

16-17. (canceled)

19-20. (canceled)
29. A method of rendering a user interface on a real-time gesture based interactive system comprising an image capture system including at least two cameras, an image processing system and a display device, the method comprising:
rendering an initial user interface comprising a set of interface objects using the image processing system, where each interface object comprises:
a graphical element that is rendered when the interface object is rendered for display;
a target zone that defines at least one region in the user interface in which a targeting three-dimensional (3D) gesture targets the interface object; and
a description of a set of permitted interactions;
displaying the rendered user interface using the display;
capturing image data using the image capture system;
detecting an input via a 3D gesture input modality from the captured image data using the image processing system;
changing the manner in which the initial user interface is rendered in response to detection of an input via a 3D gesture input modality using the image processing system;
displaying the rendered user interface using the display;
detecting a targeting 3D gesture that targets the target zone of one of the interface objects within the user interface using the image processing system by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone;
identifying a pose of the at least a portion of a human hand corresponding to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of an interface object; and
detecting the occurrence of a targeting 3D gesture targeting the specific interface object;
changing the rendering of at least the targeted interface object within the user interface in response to the 3D gesture targeting the interface object using the image processing system;
displaying the user interface via the display;
capturing additional image data using the image capture system;
determining that the targeting 3D gesture targets the interface object for a predetermined period of time, where the determination considers the targeting 3D gesture to be targeting the interface object during any period of time in which the targeting 3D gesture does not target the interface object that is less than a hysteresis threshold;
enabling a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture using the image processing system, wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
displaying an interaction element indicating the time remaining to interact with the targeted interface object in response to a determination that the targeting 3D gesture has targeted the interface object for a predetermined period of time using the image processing system;
detecting an interaction 3D gesture from the set of one or more interaction gestures in additional captured image data within a predetermined time period from the detection of the targeting 3D gesture input, where the interaction 3D gesture identifies a specific interaction with the targeted interface object using the image processing system and is detected by:
tracking the motion of at least a portion of a human hand within the 3D interaction zone; and
determining that the pose of at least a portion of a human hand within the 3D interaction zone has changed and corresponds to an interaction 3D gesture irrespective of the location of the at least a portion of a human hand within the 3D interaction zone;
verifying that the interaction gesture is associated with a specific interaction within the set of permitted interactions for the interface object using the image processing system;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the interaction 3D gesture using the image processing system;
rendering the modified user interface using the image processing system; and
displaying the rendered user interface using the display.
30. A real-time gesture based interactive system configured to display a user interface and receive three-dimensional (3D) gesture based input, comprising:
a processor;
an image capture system configured to capture image data and provide the captured image data to the processor;
memory containing:
an operating system;
an interactive application; and
a 3D gesture tracking application;
wherein the interactive application and the operating system configure the processor to:
generate and render an initial user interface comprising a set of interface objects, where each interface object includes a graphical element that is rendered when the interface object is rendered for display and a target zone that defines at least one region in the user interface in which the interface object is to be targeted; and
modify an initial user interface in response to a detected interaction with a targeted interface object and render an updated user interface; and
wherein the 3D gesture tracking application and the operating system configure the processor to:
capture image data using the image capture system;
detect a targeting 3D gesture in captured image data that identifies a targeted interface object within a user interface by:
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human hand within the 3D interaction zone;
identifying a pose of the at least a portion of a human hand corresponding to a targeting 3D gesture;
mapping the location of the at least a portion of a human hand within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of a specific interface object; and
detecting the occurrence of a targeting 3D gesture targeting the specific interface object;
enable a set of one or more interaction gestures for the targeted interface object in response to the detection of the targeting 3D gesture, wherein each of the one or more interaction gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
change the rendering of at least the targeted interface object within a user interface in response to detection of a targeting 3D gesture that targets the interface object;
detect an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object in captured image data that identifies a specific interaction with a targeted interface object, where the detection of the interaction 3D gesture comprises:
tracking the motion of at least a portion of a human hand within the 3D interaction zone; and
determining that the pose of at least a portion of a human hand within the 3D interaction zone has changed and corresponds to an interaction 3D gesture from the set of one or more interaction gestures for the targeted interface object irrespective of the location of the at least a portion of a human hand within the 3D interaction zone; and
provide events corresponding to specific interactions with targeted interface objects to the interactive application.
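The last element of claim 30, a gesture tracking application that delivers interaction events to a separate interactive application, is an ordinary publish/subscribe split. The sketch below is only an illustration of that division of labor; the class and event names are invented, and a real system would drive `emit` from pose and location detection over captured image data rather than from direct calls.

```python
# Hypothetical sketch of claim 30's architecture: the tracker detects
# interactions and publishes events; the interactive application owns the
# interface objects, consumes events, and re-renders.

class GestureTracker:
    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        """Register a callback to receive interaction events."""
        self._listeners.append(callback)

    def emit(self, event):
        # In a real system this fires when an interaction 3D gesture is
        # detected in captured image data; here events are injected directly.
        for cb in self._listeners:
            cb(event)

class InteractiveApp:
    def __init__(self):
        self.log = []

    def on_gesture_event(self, event):
        # Modify the targeted interface object, then re-render; the log
        # stands in for the user-interface update.
        self.log.append(f"{event['interaction']} -> {event['target']}")

tracker = GestureTracker()
app = InteractiveApp()
tracker.subscribe(app.on_gesture_event)
tracker.emit({"target": "icon", "interaction": "select"})
```

Keeping the tracker ignorant of interface objects mirrors the claim's structure, in which only the interactive application modifies and re-renders the user interface.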
Specification