Method for selecting interactivity mode
First Claim
1. A method comprising:
- selecting a mode of interaction between a user and a digital rendering device from among at least two available modes, wherein selecting comprises the following acts performed by the digital rendering device:
obtaining, by a gesture capturing interface of said digital rendering device, at least one piece of information representing a gesture performed by the user, said gesture being performed by said user by using at least one limb or at least one part of a limb;
counting from the obtained information a number of limbs or parts of limbs used to carry out said gesture;
selecting a mode of interaction of the digital rendering device from among said at least two available modes as a function of the number, wherein selecting comprises:
when said number exceeds a first predetermined value, selecting a first mode of interaction from among said at least two available modes, called an indirect handling mode in which the digital rendering device interprets the gesture performed by the user as a command;
when said number is smaller than or equal to a second predetermined value, selecting a second mode of interaction from among said at least two available modes, called a direct handling mode in which the digital rendering device interprets the gesture performed by the user as an interaction with elements displayed by the digital rendering device; and
operating the digital rendering device in the selected first or second mode in response to the gesture, said operating comprising, in said indirect handling mode, the following acts:
recognizing a shape drawn by said limbs or parts of limbs used to carry out said gesture;
displaying a piece of data representing the recognized shape in an area at an end of the shape drawn by the user, called a piece of visual feedback data, said visual feedback data comprising at least a text caption;
initializing a validation time counter for a predetermined period of time;
validating the recognized shape when no action takes place during the period of time defined by said counter, said validating comprising performing a command associated with said recognized shape;
when an action takes place during the period of time defined by said counter, initiating a corrective mechanism comprising:
initializing the corrective mechanism;
displaying a list of possible gestures;
selecting, by the user, a gesture in the list of gestures;
displaying actions to be performed by the recognition engine;
selecting, by the user, said action to perform among said displayed actions to be performed by the recognition engine;
performing said selected action to perform by the recognition engine.
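Stated algorithmically, the mode-selection step of the claim reduces to a threshold test on the number of limbs or parts of limbs counted from the captured gesture. The following Python sketch is illustrative only: the function name `select_mode`, the representation of the captured information as touch-point tuples, and the threshold value of 1 (one finger for direct handling, two or more treated as a command gesture) are assumptions, since the claim requires only that two predetermined values exist.

```python
from enum import Enum, auto

class InteractionMode(Enum):
    DIRECT_HANDLING = auto()    # gesture interacts with displayed elements
    INDIRECT_HANDLING = auto()  # gesture is interpreted as a command

# Hypothetical thresholds; the claim only states that two
# predetermined values exist, not what they are.
FIRST_PREDETERMINED_VALUE = 1
SECOND_PREDETERMINED_VALUE = 1

def select_mode(contact_points):
    """Count the limbs/parts of limbs reported by the gesture
    capturing interface and select a mode as a function of that
    number, per the two-threshold test in the claim."""
    count = len(contact_points)
    if count > FIRST_PREDETERMINED_VALUE:
        return InteractionMode.INDIRECT_HANDLING
    # count <= SECOND_PREDETERMINED_VALUE
    return InteractionMode.DIRECT_HANDLING
```

With both thresholds set to 1, a single-contact gesture selects direct handling while a two-contact gesture selects the indirect (command) mode.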
1 Assignment
0 Petitions
Abstract
A method is provided for selecting, from at least two available modes, a mode of interaction between a user and a digital playback device. The digital playback device is capable of obtaining at least one item of information representative of a gesture performed by a user, the gesture being performed with the aid of at least one limb or of at least one part of a limb by the user. The method includes: obtaining a number of limbs or of parts of limbs used to perform the gesture; and when the number exceeds a first predetermined value, selecting a first mode of interaction from among the at least two available modes; and when the number is less than or equal to a second predetermined value, selecting a second mode of interaction from among the at least two available modes.
20 Citations
7 Claims
1. A method comprising:
selecting a mode of interaction between a user and a digital rendering device from among at least two available modes, wherein selecting comprises the following acts performed by the digital rendering device:
obtaining, by a gesture capturing interface of said digital rendering device, at least one piece of information representing a gesture performed by the user, said gesture being performed by said user by using at least one limb or at least one part of a limb;
counting from the obtained information a number of limbs or parts of limbs used to carry out said gesture;
selecting a mode of interaction of the digital rendering device from among said at least two available modes as a function of the number, wherein selecting comprises:
when said number exceeds a first predetermined value, selecting a first mode of interaction from among said at least two available modes, called an indirect handling mode in which the digital rendering device interprets the gesture performed by the user as a command;
when said number is smaller than or equal to a second predetermined value, selecting a second mode of interaction from among said at least two available modes, called a direct handling mode in which the digital rendering device interprets the gesture performed by the user as an interaction with elements displayed by the digital rendering device; and
operating the digital rendering device in the selected first or second mode in response to the gesture, said operating comprising, in said indirect handling mode, the following acts:
recognizing a shape drawn by said limbs or parts of limbs used to carry out said gesture;
displaying a piece of data representing the recognized shape in an area at an end of the shape drawn by the user, called a piece of visual feedback data, said visual feedback data comprising at least a text caption;
initializing a validation time counter for a predetermined period of time;
validating the recognized shape when no action takes place during the period of time defined by said counter, said validating comprising performing a command associated with said recognized shape;
when an action takes place during the period of time defined by said counter, initiating a corrective mechanism comprising:
initializing the corrective mechanism;
displaying a list of possible gestures;
selecting, by the user, a gesture in the list of gestures;
displaying actions to be performed by the recognition engine;
selecting, by the user, said action to perform among said displayed actions to be performed by the recognition engine;
performing said selected action to perform by the recognition engine.
- View Dependent Claims (2, 3, 4, 7)
5. A non-transitory computer-readable medium comprising a computer program product stored thereon, which comprises program code instructions for executing a method of selecting a mode of interaction between a user and a digital rendering device from among at least two available modes, when the instructions are executed by a processor of the digital rendering device, said method comprising:
selecting the mode of interaction between the user and the digital rendering device from among at least two available modes, wherein selecting comprises the following acts performed by the digital rendering device:
obtaining, by a gesture capturing interface of said digital rendering device, at least one piece of information representing a gesture performed by the user, said gesture being performed by said user by using at least one limb or at least one part of a limb;
counting from the obtained information a number of limbs or parts of limbs used to carry out said gesture;
selecting a mode of interaction of the digital rendering device from among said at least two available modes as a function of the number, wherein selecting comprises:
when said number exceeds a first predetermined value, selecting a first mode of interaction from among said at least two available modes, called an indirect handling mode in which the digital rendering device interprets the gesture performed by the user as a command;
when said number is smaller than or equal to a second predetermined value, selecting a second mode of interaction from among said at least two available modes, called a direct handling mode in which the digital rendering device interprets the gesture performed by the user as an interaction with elements displayed by the digital rendering device; and
operating the digital rendering device in the selected mode in response to the gesture, said operating comprising, in said indirect handling mode, the following acts:
recognizing a shape drawn by said limbs or parts of limbs used to carry out said gesture;
displaying a piece of data representing the recognized shape in an area at an end of the shape drawn by the user, called a piece of visual feedback data, said visual feedback data comprising at least a text caption;
initializing a validation time counter for a predetermined period of time;
validating the recognized shape when no action takes place during the period of time defined by said counter, said validating comprising performing a command associated with said recognized shape;
when an action takes place during the period of time defined by said counter, initiating a corrective mechanism comprising:
initializing the corrective mechanism;
displaying a list of possible gestures;
selecting, by the user, a gesture in the list of gestures;
displaying actions to be performed by the recognition engine;
selecting, by the user, said action to perform among said displayed actions to be performed by the recognition engine;
performing said selected action to perform by the recognition engine.
- View Dependent Claims (6)
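The indirect-handling acts recited in the claims (display visual feedback for a recognized shape, start a validation time counter, execute the associated command if the period elapses undisturbed, otherwise enter the corrective mechanism) can be modeled as a small state machine. The sketch below is a hypothetical illustration, not the patent's implementation: the class name, the explicit `now` timestamps (injected in place of a real clock so the timing logic is deterministic), the 2-second default period, and the hard-coded list of possible gestures are all assumptions.

```python
class IndirectModeHandler:
    """Sketch of the indirect-handling acts: show feedback for a
    recognized shape, validate it after a predetermined period of
    inaction, or start the corrective mechanism on user action."""

    # Hypothetical catalogue shown by the corrective mechanism.
    POSSIBLE_GESTURES = ["circle", "check", "cross"]

    def __init__(self, validation_period=2.0):
        self.validation_period = validation_period
        self.pending_shape = None
        self.started_at = None

    def on_shape_recognized(self, shape, now):
        # Display the visual feedback data (here, a text caption)
        # and initialize the validation time counter.
        self.pending_shape = shape
        self.started_at = now
        return f"caption: {shape}"

    def tick(self, now):
        # No action took place: once the period elapses, validate the
        # shape and perform the command associated with it.
        if self.pending_shape and now - self.started_at >= self.validation_period:
            shape, self.pending_shape = self.pending_shape, None
            return ("execute", shape)
        return ("waiting", None)

    def on_user_action(self, now):
        # An action during the period initiates the corrective
        # mechanism: display the list of possible gestures.
        if self.pending_shape and now - self.started_at < self.validation_period:
            self.pending_shape = None
            return ("corrective", list(self.POSSIBLE_GESTURES))
        return ("ignored", None)
```

A driver would call `tick` periodically with the current time; the user's subsequent selections from the gesture list and the displayed recognition-engine actions would then be handled by further states, omitted here for brevity.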
Specification