Systems and Methods for Implementing Head Tracking Based Graphical User Interfaces (GUI) that Incorporate Gesture Reactive Interface Objects
First Claim
1. A method of rendering a user interface on a computing device, comprising:
rendering an initial user interface comprising a set of interface objects using a computing device, where each interface object in the set of interface objects includes a graphical element that is rendered when the interface object is rendered for display and a target zone within the user interface;
receiving captured image data;
detecting a targeting gesture in captured image data that identifies a targeted interface object within the user interface using the computing device;
enabling a set of one or more interaction 3D head gestures for the targeted interface object in response to the detection of the targeting gesture using the computing device, wherein each of the one or more interaction 3D head gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
receiving additional captured image data;
determining that the motion of at least a portion of a human head in the additional captured image data corresponds to an interaction 3D head gesture from the set of one or more interaction 3D head gestures enabled for the targeted interface object that identifies a specific interaction from the set of permitted interactions with the targeted interface object using the computing device;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the detected interaction 3D head gesture using the computing device; and
rendering the modified user interface using the computing device.
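The claimed method amounts to a small state machine: render the interface, detect a targeting gesture that picks out an object, enable only the interaction gestures that object permits, then apply the matched interaction and re-render. A minimal sketch of that flow follows; all object names, gesture labels, and the dict-based stand-ins for captured image data are hypothetical, not taken from the claim:

```python
# Sketch of the claim-1 flow. "Image data" is reduced to dicts carrying a
# detected head position or gesture label; a real system would derive these
# from camera frames.

class InterfaceObject:
    def __init__(self, name, target_zone, permitted):
        self.name = name
        self.target_zone = target_zone  # (x0, y0, x1, y1) in UI coordinates
        self.permitted = permitted      # gesture label -> permitted interaction

class UserInterface:
    def __init__(self, objects):
        self.objects = objects
        self.targeted = None            # currently targeted interface object
        self.enabled_gestures = {}      # gestures enabled for that object
        self.log = []                   # record of applied interactions

    def detect_targeting(self, frame):
        """Map a detected head position to an interface object's target zone."""
        x, y = frame["head_xy"]
        for obj in self.objects:
            x0, y0, x1, y1 = obj.target_zone
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.targeted = obj
                # Enable only the interaction gestures this object permits.
                self.enabled_gestures = dict(obj.permitted)
                return obj
        return None

    def apply_interaction(self, frame):
        """Match head motion in additional image data to an enabled gesture."""
        gesture = frame.get("head_gesture")
        if self.targeted and gesture in self.enabled_gestures:
            action = self.enabled_gestures[gesture]
            self.log.append((self.targeted.name, action))  # "modify the UI"
            return action
        return None

ui = UserInterface([InterfaceObject("button", (0, 0, 10, 10), {"nod": "select"})])
ui.detect_targeting({"head_xy": (5, 5)})       # targeting gesture
ui.apply_interaction({"head_gesture": "nod"})  # enabled interaction gesture
```

Note that gestures outside the targeted object's permitted set are simply ignored, which is the practical effect of the claim's "enabling" step.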
Abstract
Embodiments in accordance with this invention disclose systems and methods for implementing head tracking based graphical user interfaces that incorporate gesture reactive interface objects. The disclosed embodiments perform a method in which a GUI that includes interface objects is rendered and displayed. Image data of an interaction zone is captured. A targeting gesture targeting a targeted interface object is detected in the captured image data and a set of 3D head interaction gestures is enabled. Additional image data is captured. Motion of at least a portion of a human head is detected and one of the 3D head interaction gestures is identified. The rendering of the interface is modified in response to the detection of one of the 3D head interaction gestures and the modified interface is displayed.
28 Citations
26 Claims
1. A method of rendering a user interface on a computing device, comprising:
rendering an initial user interface comprising a set of interface objects using a computing device, where each interface object in the set of interface objects includes a graphical element that is rendered when the interface object is rendered for display and a target zone within the user interface;
receiving captured image data;
detecting a targeting gesture in captured image data that identifies a targeted interface object within the user interface using the computing device;
enabling a set of one or more interaction 3D head gestures for the targeted interface object in response to the detection of the targeting gesture using the computing device, wherein each of the one or more interaction 3D head gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
receiving additional captured image data;
determining that the motion of at least a portion of a human head in the additional captured image data corresponds to an interaction 3D head gesture from the set of one or more interaction 3D head gestures enabled for the targeted interface object that identifies a specific interaction from the set of permitted interactions with the targeted interface object using the computing device;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the detected interaction 3D head gesture using the computing device; and
rendering the modified user interface using the computing device.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24)
25. A method of rendering a user interface on a real-time gesture based interactive system comprising an image capture system including at least two cameras, an image processing system and a display device, the method comprising:
rendering an initial user interface comprising a set of interface objects using the image processing system, where each interface object comprises:
a graphical element that is rendered when the interface object is rendered for display;
a target zone that defines at least one region in the user interface in which a targeting three-dimensional (3D) gesture targets the interface object; and
a description of a set of permitted interactions;
displaying the rendered user interface using the display device;
capturing image data using the image capture system;
detecting an input via a 3D head gesture input modality from the captured image data using the image processing system;
changing the manner in which the initial user interface is rendered in response to detection of an input via a 3D head gesture input modality using the image processing system;
displaying the rendered user interface using the display device;
identifying a 3D interaction zone within the captured image data that maps to the user interface;
determining the location of at least a portion of a human head within the 3D interaction zone from the captured image data;
identifying a first pose of the at least a portion of a human head within the target zone that corresponds to a targeting 3D head gesture;
mapping the location of the at least a portion of a human head within the 3D interaction zone to a location within the user interface;
determining that the mapped location within the user interface falls within the target zone of a specific interface object in the user interface;
identifying the specific interface object as a targeted interface object in response to an identification of the first pose as a targeting head gesture and a determination that the mapped location of the at least a portion of the human head falls within the target zone of the specific interface object in the user interface;
changing the rendering of at least the targeted interface object within the user interface in response to the targeting 3D head gesture using the image processing system;
displaying the user interface via the display device;
capturing additional image data using the image capture system;
determining that the targeting 3D head gesture targets the targeted interface object for a predetermined period of time, where the determination considers the targeting 3D head gesture to be targeting the targeted interface object during any period of time in which the targeting 3D head gesture does not target the targeted interface object that is less than a hysteresis threshold;
enabling a set of one or more interaction 3D head gestures for the targeted interface object in response to the detection of the targeting 3D head gesture using the image processing system, wherein each of the one or more interaction 3D head gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
displaying an interaction element indicating the time remaining to interact with the targeted interface object in response to a determination that the targeting 3D head gesture has targeted the interface object for a predetermined period of time using the image processing system;
tracking the motion of at least a portion of a human head within the 3D interaction zone in additional captured image data captured within a predetermined time period from the detection of the targeting 3D head gesture input using the image processing system;
identifying a change in pose for the at least a portion of a human head within the 3D interaction zone from the first pose to a second pose during the motion of the at least a portion of the human head within the 3D interaction zone using the image processing system;
determining that the motion of the at least a portion of the human head corresponds to a specific interaction 3D head gesture from the set of one or more interaction 3D head gestures enabled for the targeted interface object that identifies a specific interaction with the targeted interface object using the image processing system;
verifying that the specific interaction 3D head gesture is associated with a specific interaction within the set of permitted interactions for the interface object using the image processing system;
modifying the user interface in response to the specific interaction with the targeted interface object identified by the specific interaction 3D head gesture using the image processing system;
rendering the modified user interface using the image processing system; and
displaying the rendered user interface using the display device.
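The most distinctive algorithmic element of claim 25 is its dwell logic: the targeting gesture must hold on the object for a predetermined period, but off-target intervals shorter than a hysteresis threshold do not break the dwell, and a "time remaining" interaction element is shown while the dwell accumulates. A minimal sketch of that timer follows; the class name, method names, and the use of seconds as the time unit are assumptions for illustration:

```python
class DwellTracker:
    """Tracks how long a targeting gesture has held on an interface object.

    Off-target gaps shorter than `hysteresis` do not reset the dwell,
    mirroring the claim's hysteresis-threshold language.
    """

    def __init__(self, dwell_required, hysteresis):
        self.dwell_required = dwell_required  # seconds on target needed
        self.hysteresis = hysteresis          # tolerated off-target gap (s)
        self.dwell_start = None               # when the current dwell began
        self.off_since = None                 # when the target was last lost

    def update(self, on_target, now):
        """Feed one observation; returns True once the dwell completes."""
        if on_target:
            self.off_since = None             # gap closed within tolerance
            if self.dwell_start is None:
                self.dwell_start = now
        elif self.dwell_start is not None:
            if self.off_since is None:
                self.off_since = now
            elif now - self.off_since >= self.hysteresis:
                self.dwell_start = None       # gap too long: reset the dwell
                self.off_since = None
        return (self.dwell_start is not None
                and now - self.dwell_start >= self.dwell_required)

    def time_remaining(self, now):
        """Value for the claimed on-screen 'time remaining' element."""
        if self.dwell_start is None:
            return self.dwell_required
        return max(0.0, self.dwell_required - (now - self.dwell_start))
```

With a one-second dwell and a 0.3-second hysteresis, a 0.1-second flicker off target is ignored, while a 0.4-second loss resets the timer, which is exactly the tolerance behavior the claim describes.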
26. A real-time gesture based interactive system configured to display a user interface and receive three-dimensional (3D) gesture based input, comprising:
a processor;
an image capture system configured to capture image data and provide the captured image data to the processor;
memory containing:
an operating system;
an interactive application; and
a 3D gesture tracking application;
wherein the interactive application and the operating system configure the processor to:
generate and render an initial user interface comprising a set of interface objects, where each interface object includes a graphical element that is rendered when the interface object is rendered for display and a target zone that defines at least one region in the user interface in which the interface object is to be targeted; and
modify the initial user interface in response to a detected interaction with a targeted interface object and render an updated user interface; and
wherein the 3D gesture tracking application and the operating system configure the processor to:
capture image data using the image capture system;
detect a targeting gesture from the captured image data;
identify a specific interface object as the targeted interface object in response to a detection of the targeting gesture;
enable a set of one or more interaction 3D head gestures for the targeted interface object in response to the identification of the targeting gesture, wherein each of the one or more interaction 3D head gestures is associated with a permitted interaction in a set of permitted interactions allowed for the targeted interface object and each permitted interaction is an action performed via the user interface to manipulate the targeted interface object;
change the rendering of at least the targeted interface object within the user interface in response to detection of a targeting 3D gesture that targets the interface object;
capture additional image data;
track the motion of at least a portion of a human head in additional captured image data captured by the image capture system within a predetermined time period from the detection of the targeting gesture input;
determine that the motion of the at least a portion of the human head corresponds to a specific interaction 3D head gesture from the set of one or more interaction 3D head gestures enabled for the targeted interface object that identifies a specific interaction with the targeted interface object; and
provide events corresponding to the specific interaction with the targeted interface object to the interactive application.
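Claim 26 splits the work between two software components: the 3D gesture tracking application turns captured image data into interaction events, and the interactive application consumes those events to modify and re-render the interface. A minimal sketch of that event hand-off follows, using a queue as the delivery mechanism; the class names and event shape are invented for illustration and not specified by the claim:

```python
from queue import Queue

class GestureTrackingApp:
    """Turns detected interaction gestures into events for the UI layer."""

    def __init__(self, event_queue):
        self.events = event_queue

    def on_gesture(self, target, interaction):
        # "provide events corresponding to the specific interaction with
        # the targeted interface object to the interactive application"
        self.events.put({"target": target, "interaction": interaction})

class InteractiveApp:
    """Consumes gesture events and updates the affected interface objects."""

    def __init__(self, event_queue):
        self.events = event_queue
        self.state = {}  # interface object name -> last applied interaction

    def pump(self):
        # Drain pending events; each one "modifies the initial user
        # interface in response to a detected interaction".
        while not self.events.empty():
            ev = self.events.get()
            self.state[ev["target"]] = ev["interaction"]

q = Queue()
tracker = GestureTrackingApp(q)
app = InteractiveApp(q)
tracker.on_gesture("slider", "scroll")  # gesture tracker emits an event
app.pump()                              # interactive app applies it
```

Decoupling the two applications through an event queue means the gesture tracker never touches UI state directly, which matches the claim's division of responsibilities between the two programs in memory.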
Specification