DEVICE FOR INTERACTING WITH REAL-TIME STREAMS OF CONTENT
Abstract
An end-user system (10) for transforming real-time streams of content into an output presentation includes a user interface (30) that allows a user to interact with the streams. The user interface (30) includes sensors (32a-f) that monitor an interaction area (36) to detect movements and/or sounds made by a user. The sensors (32a-f) are distributed around the interaction area (36) such that the user interface (30) can determine a three-dimensional location within the interaction area (36) where the detected movement or sound occurred. Different streams of content can be activated in a presentation based on the type of movement or sound detected, as well as the determined location. The present invention allows a user to interact with and adapt the output presentation according to his/her own preferences, instead of merely being a spectator.
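The three-dimensional localization described in the abstract can be illustrated with a minimal sketch. The sensor coordinates, the scalar signal strengths, and the weighted-centroid method below are illustrative assumptions for demonstration only; the patent does not disclose this particular algorithm.

```python
from typing import List, Tuple

def estimate_location(
    sensor_positions: List[Tuple[float, float, float]],
    signal_strengths: List[float],
) -> Tuple[float, float, float]:
    """Estimate a 3-D event location within the interaction area as the
    signal-strength-weighted centroid of the sensor positions (a crude
    stand-in for true multi-sensor triangulation)."""
    total = sum(signal_strengths)
    if total == 0:
        raise ValueError("no signal detected by any sensor")
    # Each coordinate is the average of the sensor positions,
    # weighted by how strongly each sensor registered the event.
    x = sum(p[0] * s for p, s in zip(sensor_positions, signal_strengths)) / total
    y = sum(p[1] * s for p, s in zip(sensor_positions, signal_strengths)) / total
    z = sum(p[2] * s for p, s in zip(sensor_positions, signal_strengths)) / total
    return (x, y, z)
```

With equal signal strengths, the estimate reduces to the geometric centroid of the sensor positions; unequal strengths pull the estimate toward the sensors nearest the event.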
Claims (12)
1. A user interface (30) for interacting with a device that receives and transforms streams of content into a presentation to be output, comprising:

at least one sensor (32) for detecting a movement made by a user positioned in an interaction area (36) proximate to a location at which the presentation is output, wherein said sensor (32) is arranged to be aimed towards said interaction area;

wherein a type of movement corresponding to said detected movement is determined by analyzing a detection signal from said at least one sensor (32);

wherein the type of movement is different facial expressions or hand gestures made by the user, a gesture that imitates the use of a device or a tool, or an amount of force or speed with which the user makes a gesture; and

wherein the presentation is controlled by manipulating one or more streams of content based on said determined type of movement, and a received stream of content is activated or deactivated in the presentation based on the determined type of movement. (Dependent claims: 4, 5, 8)
2-3. (canceled)

6-7. (canceled)
9. A process in a system for transforming streams of content into a presentation to be output, comprising:

detecting, by means of at least one sensor (32), a movement made by a user positioned in an interaction area (36) proximate to a location at which the presentation is output, wherein said sensor (32) is arranged to be aimed towards said interaction area (36);

wherein a type of movement corresponding to said detected movement is determined by analyzing a detection signal;

wherein the type of movement is different facial expressions or hand gestures made by the user, a gesture that imitates the use of a device or a tool, or an amount of force or speed with which the user makes the gesture; and

wherein the presentation is controlled by manipulating one or more streams of content based on said determined type of movement, and a received stream of content is activated or deactivated in the presentation based on the determined type of movement.
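The activation/deactivation step recited in the claims can be sketched as follows. The movement-type names, stream names, and toggle-on-repeat behavior are hypothetical assumptions chosen for illustration; the claims do not specify a particular mapping or toggling policy.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

# Hypothetical mapping from a classified movement type to a content
# stream; the claims name these movement categories but not the mapping.
MOVEMENT_STREAM_MAP: Dict[str, str] = {
    "wave_hand": "video_overlay",        # hand gesture
    "smile": "audio_commentary",         # facial expression
    "swing_hammer": "sound_effects",     # gesture imitating a tool
}

@dataclass
class StreamController:
    """Activates or deactivates content streams in the presentation
    based on the movement type determined from the sensor signal."""
    active: Set[str] = field(default_factory=set)

    def handle_movement(self, movement_type: str) -> bool:
        """Toggle the stream mapped to this movement type.

        Returns True if the mapped stream is active after handling,
        False if it was deactivated or the movement is unrecognized."""
        stream: Optional[str] = MOVEMENT_STREAM_MAP.get(movement_type)
        if stream is None:
            return False                  # unknown movement: no change
        if stream in self.active:
            self.active.discard(stream)   # repeat gesture deactivates
            return False
        self.active.add(stream)           # first gesture activates
        return True
```

A fuller implementation would also weight the manipulation by the force or speed of the gesture, which the claims list as a further movement-type dimension.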
10. A system comprising:

an end-user device (10) for receiving and transforming streams of content into a presentation;

an output device (15) for outputting said presentation;

a user interface (30) including at least one sensor (32) for detecting a movement made by a user positioned in an interaction area (36) proximate to the output device (15), wherein said sensor (32) is arranged to be aimed towards said interaction area;

wherein a type of movement corresponding to said detected movement is determined by analyzing a detection signal from said sensor (32);

wherein the type of movement is different facial expressions or hand gestures made by the user, a gesture that imitates the use of a device or a tool, or an amount of force or speed with which the user makes a gesture; and

wherein said end-user device (10) manipulates said transformed streams of content based on said determined type of movement, thereby controlling said presentation, and a received stream of content is activated or deactivated in the presentation based on the determined type of movement. (Dependent claims: 11, 12)
Specification