Sharing television and video programming through social networking
Abstract
In particular embodiments, a social-networking system captures data associated with video content provided to a first user of the social-networking system, identifies the video content using the captured data, and updates a graph of the social-networking system to associate the first user with the identified video content. The graph of the social-networking system has a plurality of nodes and edges connecting the nodes. The nodes of the graph include user nodes that are each associated with a particular user of the social-networking system.
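The abstract's graph update can be sketched minimally. The node labels, the edge representation, and the `associate_viewing` helper below are all invented for illustration and are not taken from the patent:

```python
# Hypothetical sketch of the abstract's graph update: associating a user
# node with a node for identified video content via an edge.

class SocialGraph:
    def __init__(self):
        self.nodes = set()   # node ids, e.g. ("user", 1) or ("video", "show-42")
        self.edges = set()   # undirected edges stored as frozensets of node ids

    def connect(self, a, b):
        self.nodes.update((a, b))
        self.edges.add(frozenset((a, b)))

    def associate_viewing(self, user_id, content_id):
        """Update the graph to associate a user with identified video content."""
        self.connect(("user", user_id), ("video", content_id))

graph = SocialGraph()
graph.associate_viewing(user_id=1, content_id="show-42")
```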
202 Citations
20 Claims
1. A method, comprising:
by one or more computer systems, identifying video content being viewed by a first user;
by the one or more computer systems, determining that a second user is viewing the same identified video content at a same location as the first user by comparing first environmental sounds captured by a first device of the first user with second environmental sounds captured by a second device of the second user;
by the one or more computer systems, generating a story according to the identified video content, the story comprising:
an indication of the identified video content; and
an indication of the second user who is viewing the same identified video content at the same location as the first user; and
by the one or more computer systems, publishing the story to a server.
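Claim 1's four steps can be sketched end to end. Every helper and field name below (`identify_content`, `audio_print`, and so on) is a stand-in invented for this example, and the server is reduced to a list:

```python
# Illustrative sketch of the claimed method; none of these names come
# from the patent, and real content identification and audio matching
# are stubbed out.

def identify_content(device):
    return device["now_playing"]                        # step 1: identify video content

def same_audio_environment(dev_a, dev_b):
    return dev_a["audio_print"] == dev_b["audio_print"] # step 2: compare environmental sounds

def make_story(content_id, second_user):
    return {"content": content_id, "co_viewer": second_user}  # step 3: generate the story

published = []

def publish(story):
    published.append(story)                             # step 4: publish to a server (stubbed)

first = {"user": "alice", "now_playing": "show-42", "audio_print": 0xBEEF}
second = {"user": "bob", "audio_print": 0xBEEF}

content = identify_content(first)
if same_audio_environment(first, second):
    publish(make_story(content, second["user"]))

print(published)  # [{'content': 'show-42', 'co_viewer': 'bob'}]
```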
2. The method of claim 1, wherein the first and second devices comprise mobile devices selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
3. The method of claim 1, wherein determining that the second user is viewing the same identified video content at the same location as the first user further comprises comparing first location data sent by the first device of the first user with second location data sent by the second device of the second user.
4. The method of claim 3, wherein the location data comprises GPS data.
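The location comparison of claims 3 and 4 can be sketched as a great-circle distance test on two GPS fixes. The 50-meter radius is an assumed threshold, not a figure from the patent:

```python
# Hypothetical sketch of claims 3-4: two devices are treated as being at
# the "same location" when their GPS fixes fall within a chosen radius.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def same_location(fix_a, fix_b, radius_m=50):
    return haversine_m(*fix_a, *fix_b) <= radius_m

print(same_location((37.7749, -122.4194), (37.7750, -122.4194)))  # True  (~11 m apart)
print(same_location((37.7749, -122.4194), (37.8044, -122.2712)))  # False (~13 km apart)
```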
5. The method of claim 1, further comprising analyzing privacy preferences of both the first and second user, wherein the story is automatically published to the server subject to the privacy preferences of both the first and second user.
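Claim 5's privacy gate can be sketched as a check over both tagged users; the preference schema (`share_viewing`) is invented for this example:

```python
# Hypothetical sketch of claim 5: the story is auto-published only when
# the privacy preferences of BOTH the first and second user allow it.

def may_publish(story, prefs_by_user):
    users = [story["author"], story["co_viewer"]]
    # Default to False: an absent or unset preference blocks publication.
    return all(prefs_by_user.get(u, {}).get("share_viewing", False) for u in users)

prefs = {
    "alice": {"share_viewing": True},
    "bob": {"share_viewing": False},  # bob opted out, so nothing is published
}
story = {"author": "alice", "co_viewer": "bob", "content": "show-42"}
print(may_publish(story, prefs))  # False
```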
6. The method of claim 1, wherein comparing the first environmental sounds captured by the first device of the first user with the second environmental sounds captured by the second device of the second user comprises utilizing audio fingerprinting.
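Claim 6 names audio fingerprinting. Production systems typically hash spectral peaks; the sketch below substitutes a much cruder frame-energy fingerprint compared by mismatch count, with the frame size and threshold both assumed:

```python
# Hypothetical stand-in for claim 6's audio comparison: each clip is
# reduced to one bit per frame (did the frame's energy rise?), and two
# fingerprints "match" when few bits disagree.

def fingerprint(samples, frame=4):
    energies = [
        sum(s * s for s in samples[i:i + frame])
        for i in range(0, len(samples) - frame + 1, frame)
    ]
    return [int(b > a) for a, b in zip(energies, energies[1:])]

def match(fp_a, fp_b, max_mismatch_ratio=0.25):
    mismatches = sum(a != b for a, b in zip(fp_a, fp_b))
    return mismatches / max(len(fp_a), 1) <= max_mismatch_ratio

tv_audio = [0, 1, 3, 2, 8, 9, 7, 8, 1, 0, 2, 1, 6, 7, 8, 6]
mic_a = [x + 1 for x in tv_audio]   # same sound captured louder
mic_b = list(reversed(tv_audio))    # a different room

print(match(fingerprint(mic_a), fingerprint(tv_audio)))  # True
print(match(fingerprint(mic_b), fingerprint(tv_audio)))  # False
```

The energy-change bits survive the uniform gain applied to `mic_a`, which is why the "same room" capture still matches.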
7. The method of claim 1, wherein at least one of the first and second devices comprises an internet-connected device that is communicatively coupled with or integrated within a television.
8. A system, comprising:
one or more memory devices; and
a processor communicatively coupled to the one or more memory devices, the processor operable to:
identify video content being viewed by a first user;
determine that a second user is viewing the same identified video content at a same location as the first user by comparing first environmental sounds captured by a first device of the first user with second environmental sounds captured by a second device of the second user;
generate a story according to the identified video content, the story comprising:
an indication of the identified video content; and
an indication of the second user who is viewing the same identified video content at the same location as the first user; and
publish the story to a server.
9. The system of claim 8, wherein the first and second devices comprise mobile devices selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
10. The system of claim 8, wherein determining that the second user is viewing the same identified video content at the same location as the first user further comprises comparing first location data sent by the first device of the first user with second location data sent by the second device of the second user.
11. The system of claim 10, wherein the location data comprises GPS data.
12. The system of claim 8, the processor further operable to analyze privacy preferences of both the first and second user, wherein the story is automatically published to the server subject to the privacy preferences of both the first and second user.
13. The system of claim 8, wherein comparing the first environmental sounds captured by the first device of the first user with the second environmental sounds captured by the second device of the second user comprises utilizing audio fingerprinting.
14. The system of claim 8, wherein at least one of the first and second devices comprises an internet-connected device that is communicatively coupled with or integrated within a television.
15. One or more computer-readable non-transitory storage media in one or more computing systems, the media embodying logic that is operable when executed to:
identify video content being viewed by a first user;
determine that a second user is viewing the same identified video content at a same location as the first user by comparing first environmental sounds captured by a first device of the first user with second environmental sounds captured by a second device of the second user;
generate a story according to the identified video content, the story comprising:
an indication of the identified video content; and
an indication of the second user who is viewing the same identified video content at the same location as the first user; and
publish the story to a server.
16. The media of claim 15, wherein the first and second devices comprise mobile devices selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
17. The media of claim 15, wherein determining that the second user is viewing the same identified video content at the same location as the first user further comprises comparing first location data sent by the first device of the first user with second location data sent by the second device of the second user.
18. The media of claim 15, the logic further operable when executed to analyze privacy preferences of both the first and second user, wherein the story is automatically published to the server subject to the privacy preferences of both the first and second user.
19. The media of claim 15, wherein comparing the first environmental sounds captured by the first device of the first user with the second environmental sounds captured by the second device of the second user comprises utilizing audio fingerprinting.
20. The media of claim 15, wherein at least one of the first and second devices comprises an internet-connected device that is communicatively coupled with or integrated within a television.
Specification