Methods and apparatus for filtering and inserting content into a presentation stream using signature data
3 Assignments
0 Petitions
Abstract
Described herein are methods and apparatus for identifying locations in a presentation stream based on metadata associated with the stream. Locations within a presentation stream are identified using signature data associated with the presentation stream. The identified locations may be utilized to identify boundaries of segments within the presentation stream, such as segments of a show and interstitials (e.g., commercials) of the show. The identified portions of the presentation stream may then be utilized to filter segments of content during presentation. Additionally, supplemental content may be identified and inserted into the presentation stream during presentation.
352 Citations
12 Claims
1. A method for processing an audio/video stream, the method comprising:
providing a first audio/video stream including at least one segment of a show, at least one interstitial of the show, and closed captioning data;
receiving location information for the first audio/video stream, the location information including a text string associated with a particular video location within the first audio/video stream, and the location information including search boundary offsets relative to the particular video location;
receiving a signature of a portion of the first audio/video stream, wherein the signature refers to waveform characteristics of the portion of the first audio/video stream;
processing the closed captioning data to locate an instance of the text string in the closed captioning data, and to locate a beginning of the instance of the text string in the closed captioning data;
identifying an intermediate video location in the first audio/video stream, the identified intermediate video location corresponding to the beginning of the text string located in the closed captioning data;
identifying search boundaries within the first audio/video stream by applying the search boundary offsets to the identified intermediate video location;
processing content of the first audio/video stream within the identified search boundaries, wherein the processing searches for the signature to identify a signature-based video location in the first audio/video stream;
locating boundaries of a segment of the show by applying segment boundary offsets to the identified signature-based video location;
identifying supplemental content for presentation in association with the segment of the show; and
outputting a second audio/video stream for presentation by a display device, the second audio/video stream including the segment of the show and the supplemental content, wherein the outputting uses the identified boundaries of the segment of the show.
(Dependent claims: 2, 3, 4)
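The captioning steps of claim 1 can be sketched in code: search the closed captioning data for the text string, treat the start of the matching cue as the intermediate video location, and apply the search boundary offsets around it. This is an illustrative sketch only; `CaptionCue`, `find_search_boundaries`, and the seconds-based timeline are assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the closed-captioning search in claim 1.
# All names and the time representation are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class CaptionCue:
    start: float   # presentation time of the cue, in seconds
    text: str      # decoded closed-captioning text


def find_search_boundaries(
    cues: List[CaptionCue],
    text_string: str,
    pre_offset: float,
    post_offset: float,
) -> Optional[Tuple[float, float]]:
    """Locate the first cue containing text_string, treat its start time as
    the intermediate video location, and apply the search boundary offsets."""
    for cue in cues:
        if text_string in cue.text:
            anchor = cue.start  # beginning of the instance of the text string
            return (max(0.0, anchor - pre_offset), anchor + post_offset)
    return None  # text string not found in the captioning data
```

The returned window is then the only span of the stream that the subsequent signature search needs to examine, which is the point of the offsets: the expensive waveform matching is confined to a few seconds of content rather than the whole recording.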
5. A method for processing a stream of data, the method comprising:
providing a first presentation stream of video data including at least one segment of a show and at least one interstitial of the show;
receiving location information referencing a location within the first presentation stream, the location information including a text string corresponding to closed captioning data for the first presentation stream;
receiving a signature of a portion of the first presentation stream corresponding with the location, the signature identifying a transition in the video data from a first luminance value for a first frame of the video data to a second luminance value for a second frame of the video data;
receiving search boundary offsets specified relative to the location referenced by the received location information;
processing the closed captioning data to locate an instance of the text string in the closed captioning data;
identifying an intermediate video location within the first presentation stream, the identified intermediate video location corresponding to the instance of the text string located in the closed captioning data;
identifying search boundaries within the first presentation stream by applying the search boundary offsets to the identified intermediate video location;
computing average luminance values for a plurality of frames of the video data of the first presentation stream, wherein the plurality of frames are within the search boundaries;
processing the average luminance values to identify the transition from the first luminance value to the second luminance value based on the signature, the transition corresponding with a signature-based video location within the first presentation stream;
processing the first presentation stream to identify boundaries of the segment of the show based on the signature-based video location and the at least one segment boundary offset;
identifying supplemental content to present in association with the segment of the show; and
outputting a second presentation stream for presentation on a presentation device, the second presentation stream including the segment of the show and the supplemental content, wherein the outputting uses the identified boundaries of the segment of the show.
(Dependent claim: 6)
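The luminance-signature matching of claim 5 can be sketched as follows: compute a per-frame average luminance within the search window, then scan for a frame pair whose averages move from the first luminance value to the second. The function names, the pixel representation, and the `tolerance` parameter are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of the average-luminance transition search in claim 5.
# Frames are modeled as flat sequences of luma (Y) samples; the tolerance
# used to match the signature values is an assumption.

from typing import List, Optional, Sequence


def average_luminance(frame: Sequence[int]) -> float:
    """Mean luma of one frame, given its pixels' luma (Y) samples."""
    return sum(frame) / len(frame)


def find_transition(
    frames: List[Sequence[int]],
    first_luma: float,
    second_luma: float,
    tolerance: float = 8.0,
) -> Optional[int]:
    """Return the index of the first frame whose average luminance completes a
    transition from roughly first_luma to roughly second_luma (the
    signature-based video location), or None if the signature is not found."""
    averages = [average_luminance(f) for f in frames]
    for i in range(len(averages) - 1):
        if (abs(averages[i] - first_luma) <= tolerance
                and abs(averages[i + 1] - second_luma) <= tolerance):
            return i + 1
    return None
```

A bright-to-dark transition of this kind typically marks a fade to black at a segment or commercial boundary, which is why a two-value luminance signature can pin down a frame-accurate location inside the coarser caption-derived window.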
7. An apparatus comprising:
a communication interface that receives a first presentation stream of video data including a segment of a show, an interstitial of the show, and that receives location information referencing a location within the first presentation stream, a signature of a portion of the first presentation stream corresponding with the location, and search boundary offsets specified relative to the location referenced by the received location information, the signature identifying a transition in the video data from a first luminance value for a first frame of the video data to a second luminance value for a second frame of the video data, wherein the location information also includes a text string corresponding to closed captioning data for the first presentation stream;
control logic communicatively coupled to the communication interface configured to:
process the first presentation stream to identify search boundaries within the first presentation stream based on the closed captioning data, the location information, and the search boundary offsets;
compute average luminance values for a plurality of frames of the video data of the first presentation stream, wherein the plurality of frames are within the identified search boundaries;
process the average luminance values to identify the transition from the first luminance value to the second luminance value based on the signature, the transition corresponding with a signature-based video location within the first presentation stream;
process the first presentation stream to identify boundaries of the segment of the show based on the signature-based video location and the at least one segment boundary offset;
identify supplemental content to present in association with the segment of the show; and
an audio/video interface that outputs a second presentation stream for presentation by a presentation device, the second presentation stream including the segment of the show and the supplemental content, wherein the audio/video interface uses the identified boundaries of the segment of the show to output the second presentation stream.
(Dependent claim: 8)
9. A digital video recorder comprising:
a communication interface that receives a first audio/video stream including a segment of a show, an interstitial of the show, and closed captioning data, and that receives location information for the first audio/video stream, the location information including a text string associated with a particular video location within the first audio/video stream, and the location information including search boundary offsets relative to the particular video location, and that receives a signature of a portion of the first audio/video stream, wherein the signature refers to waveform characteristics of the portion of the first audio/video stream;
a storage medium;
control logic communicatively coupled to the communication interface and the storage medium that:
processes the closed captioning data to locate an instance of the text string in the closed captioning data, and to locate a beginning of the instance of the text string in the closed captioning data;
identifies an intermediate video location in the first audio/video stream, the identified intermediate video location corresponding to the beginning of the text string located in the closed captioning data;
identifies search boundaries within the first audio/video stream by applying the search boundary offsets to the identified intermediate video location;
processes content of the first audio/video stream within the identified search boundaries, wherein the content of the first audio/video stream is processed to search for the signature to identify a signature-based video location in the first audio/video stream;
locates boundaries of the segment of the show by applying segment boundary offsets to the identified signature-based video location; and
identifies supplemental content for presentation in association with the segment of the show; and
an audio/video interface communicatively coupled to the control logic that outputs a second audio/video stream for presentation by a display device, the second audio/video stream including the segment of the show and the supplemental content, wherein the audio/video interface uses the identified boundaries of the segment of the show to output the second audio/video stream.
(Dependent claims: 10, 11, 12)
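The output step shared by all of the independent claims can be sketched as: apply segment boundary offsets to the signature-based video location, extract the segment of the show, and splice in the supplemental content. Representing the stream as a list of frame labels and appending the supplemental content after the segment are illustrative assumptions; the claims do not fix either choice.

```python
# Minimal sketch of the segment-boundary and output steps common to the
# claims. The list-of-frames stream model and the placement of the
# supplemental content are assumptions for illustration.

from typing import List, Tuple


def segment_bounds(signature_location: int,
                   start_offset: int,
                   end_offset: int) -> Tuple[int, int]:
    """Boundaries of the segment, found by applying the segment boundary
    offsets to the signature-based video location."""
    return (signature_location + start_offset, signature_location + end_offset)


def build_output_stream(first_stream: List[str],
                        bounds: Tuple[int, int],
                        supplemental: List[str]) -> List[str]:
    """Second stream: the segment of the show followed by the supplemental
    content, with content outside the boundaries (the interstitial) filtered
    out."""
    start, end = bounds
    return first_stream[start:end] + supplemental
```

For example, if the signature marks the cut into a commercial break at index 2, offsets of -2 and 0 recover the two show frames before it, and the interstitial frames are dropped in favor of the supplemental content.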
Specification