INTERACTIVE AND SHARED VIEWING EXPERIENCE
Abstract
A shared environment is provided for electronic media devices such as game consoles which have Internet connectivity for receiving streaming video content. The shared environment is a virtual world which includes the video content as well as avatars which represent a group of users who have agreed to watch the content together, and scene elements which provide a realistic and eye-catching appearance. The users can enter commands to cause their avatars to emote, such as by cheering. The users can also explore the shared environment from different camera angles. A voice and/or video chat channel can also be enabled to allow the users to speak to and see one another. The video content is synchronized among the electronic media devices of the users so that the users' interactions are more meaningful as they can react to the same portion of the video content in real time.
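The synchronization described above, in which each device aligns playback so that every user reacts to the same portion of the video content, can be sketched in code. The following is a minimal, hypothetical illustration only; the names `TimingReference`, `expected_position`, and `drift`, and the timestamp scheme itself, are assumptions and are not drawn from the patent.

```python
from dataclasses import dataclass


@dataclass
class TimingReference:
    """Hypothetical timing message: a media position paired with wall-clock time."""
    media_position_s: float  # position within the video content, in seconds
    wall_clock_s: float      # server wall-clock time at which that position played


def expected_position(ref: TimingReference, now_s: float) -> float:
    """Media position a synchronized device should be showing at time now_s."""
    return ref.media_position_s + (now_s - ref.wall_clock_s)


def drift(ref: TimingReference, local_position_s: float, now_s: float) -> float:
    """Positive result means the local player is ahead of the shared timeline."""
    return local_position_s - expected_position(ref, now_s)
```

Under this sketch, a device ahead of the shared timeline could pause briefly and one behind could skip forward, keeping all users on the same portion of the content.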
20 Claims
1. A processor-implemented method for providing an interactive and shared viewing experience, comprising the processor-implemented steps of:

receiving streaming video content at a first electronic media device, associated with a first user, the streaming video content is received via at least one network from at least one server;

displaying the streaming video content at a user interface of the first electronic media device in synchronicity with a display of the streaming video content at a second electronic media device, associated with a second user;

displaying a first avatar, which represents the first user, emoting on the user interface, based on data which describes an appearance of the first avatar, and based on control inputs from the first user;

receiving data from the at least one network which describes an appearance of a second avatar, which represents the second user;

receiving control inputs via the at least one network for causing the second avatar to emote on the user interface, the control inputs for causing the second avatar to emote are provided by the second user via the second electronic media device; and

displaying the second avatar emoting on the user interface based on the received data which describes the appearance of the second avatar, and the control inputs provided by the second user, where the streaming video content, and the first and second avatars are displayed on the user interface in a shared environment, and the first and second users interact, while viewing the streamed video content, by causing the first and second avatars, respectively, to emote.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
12. A processor-implemented method for providing an interactive and shared viewing experience, comprising the processor-implemented steps of:

receiving video content at a first electronic media device, associated with a first user;

displaying the video content at a user interface of the first electronic media device in synchronicity with a display of the video content at a second electronic media device, associated with a second user;

displaying a first avatar, which represents the first user, on the user interface;

displaying at least one other avatar, which represents at least one other user, on the user interface, where the video content, the first avatar and the at least one other avatar are displayed on the user interface in a shared environment;

providing a selectable icon on the user interface which represents a directed emote;

receiving a command from the first user which selects the selectable icon; and

based on the command, causing the first avatar to perform the directed emote, which is directed at the at least one other avatar, the directed emote provides a visual indicia on the user interface indicating that the directed emote is directed from the first avatar to the at least one other avatar.

View Dependent Claims (13, 14, 15)
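Claim 12's "directed emote" — an icon selection that makes one avatar emote at another, with a visual indication linking the two — can be sketched as a small data transformation. All names here (`EmoteIcon`, `DirectedEmote`, `perform_directed_emote`) are illustrative assumptions, not the patent's terms of art.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EmoteIcon:
    """A selectable icon on the user interface representing a directed emote."""
    emote: str  # e.g. "wave" or "high-five"


@dataclass(frozen=True)
class DirectedEmote:
    """A record the UI can render as visual indicia from source avatar to target."""
    source: str
    target: str
    emote: str


def perform_directed_emote(icon: EmoteIcon, source: str, target: str) -> DirectedEmote:
    """Turn the user's icon-selection command into a source-to-target emote."""
    return DirectedEmote(source=source, target=target, emote=icon.emote)
```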
16. Tangible computer readable storage having computer readable software embodied thereon for programming at least one processor to perform a method at a first electronic media device, associated with a first user, the method comprising:

receiving an invitation from the at least one server to join an interactive and shared viewing experience, the invitation is initiated by a second user via a second electronic media device;

providing a response to the at least one server, accepting the invitation;

receiving streaming video content, the streaming video content is received via at least one network from at least one server;

receiving timing information via the at least one network;

displaying the streaming video content at a user interface, synchronized according to the timing information;

displaying a first avatar, which represents the first user, emoting on the user interface, based on data which describes an appearance of the first avatar, and based on control inputs from the first user;

receiving data via the at least one network which describes an appearance of a second avatar, which represents a second user;

receiving control inputs via the at least one network for causing the second avatar to emote on the user interface, the control inputs for causing the second avatar to emote are provided by the second user via a second electronic media device, associated with the second user; and

displaying the second avatar emoting on the user interface based on the received data which describes the appearance of the second avatar, and the control inputs provided by the second user, where the streaming video content, and the first and second avatars are displayed on the user interface in a shared environment, and the first and second users interact by causing the first and second avatars, respectively, to emote while viewing the streamed video content.

View Dependent Claims (17, 18, 19, 20)
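Claim 16's session setup — an invitation initiated by the second user and delivered via the server, an acceptance sent back, then timing information used to synchronize display — suggests a simple client-side state machine. The following sketch is hypothetical; the state names and methods are assumptions made for illustration.

```python
from enum import Enum, auto
from typing import Optional


class SessionState(Enum):
    IDLE = auto()     # no pending invitation
    INVITED = auto()  # invitation received from the server
    JOINED = auto()   # acceptance sent; awaiting or using timing information


class ViewingSession:
    def __init__(self) -> None:
        self.state = SessionState.IDLE
        self.inviter: Optional[str] = None
        self.start_offset_s: Optional[float] = None

    def receive_invitation(self, from_user: str) -> None:
        """Invitation initiated by another user, delivered via the server."""
        self.inviter = from_user
        self.state = SessionState.INVITED

    def accept(self) -> str:
        """Response provided to the server, accepting the invitation."""
        if self.state is not SessionState.INVITED:
            raise RuntimeError("no pending invitation to accept")
        self.state = SessionState.JOINED
        return "accept"

    def receive_timing(self, start_offset_s: float) -> None:
        """Timing information used to synchronize display of the streaming video."""
        self.start_offset_s = start_offset_s
```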
Specification