Geographically independent determination of segment boundaries within a video stream
First Claim
1. A method for creating an announcement stream for a geographic region, comprising:
receiving, at a designated computer system, characterizing metadata for a first audio/video stream;
analyzing a second audio/video stream to obtain characterizing metadata for the second video stream, wherein the characterizing metadata of the first and second audio/video streams comprises average luminance values corresponding to individual video frames in the first and second video streams;
comparing, with the computer system, the characterizing metadata for the first video stream to the characterizing metadata for the second video stream to generate offset data; and
calculating timing information corresponding to segment boundaries for the second video stream using the offset data;
wherein the comparing of the characterizing metadata for the first video stream to the characterizing metadata for the second video stream comprises:
subtracting the average luminance values for each of the individual video frames of the second video stream from the average luminance values for each of the individual video frames of the first video stream to obtain a plurality of results;
summing an absolute value of each of the plurality of results, for each of the individual video frames, to obtain a summed value;
dividing the summed value by a total number of individual video frames for the second video stream to obtain a result; and
when the result is lower than a threshold value, generating the offset data by subtracting timing information for the second video stream from timing information for the first video stream.
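The comparison recited in the claim above can be sketched in code. This is an illustrative reading, not the patented implementation: the function name, the representation of per-frame average luminance as equal-length lists, and timing information as start timestamps in seconds are all assumptions made for the example.

```python
# Illustrative sketch of the claim-1 comparison (hypothetical names and
# data shapes; not the patented implementation).

def compare_luminance(first_lum, second_lum, first_start, second_start, threshold):
    """Return offset data (first_start - second_start) when the mean
    absolute per-frame luminance difference falls below the threshold,
    else None."""
    if len(first_lum) != len(second_lum):
        raise ValueError("streams must cover the same number of frames")
    # Subtract the second stream's value from the first stream's value,
    # frame by frame, to obtain a plurality of results.
    diffs = [a - b for a, b in zip(first_lum, second_lum)]
    # Sum the absolute value of each per-frame result.
    summed = sum(abs(d) for d in diffs)
    # Divide by the total number of frames in the second stream.
    result = summed / len(second_lum)
    # When the result is below the threshold, the streams are deemed to
    # match; generate offset data by subtracting timing information.
    if result < threshold:
        return first_start - second_start
    return None

# Example: near-identical luminance traces, streams starting 2.5 s apart.
offset = compare_luminance([100, 102, 98], [101, 101, 99], 10.0, 7.5, threshold=5.0)
# offset is 2.5
```

Under this reading, the offset data is simply the difference between the two streams' start times, produced only when the streams' luminance traces are judged similar enough.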
Abstract
A method for creating an announcement stream for a geographic region is provided. The method receives, at a designated computer system, characterizing metadata for a first audio/video stream; analyzes a second audio/video stream to obtain characterizing metadata for the second video stream; compares, with the computer system, the characterizing metadata for the first video stream to the characterizing metadata for the second video stream to generate offset data; and calculates timing information corresponding to segment boundaries for the second video stream using the offset data.
19 Claims
1. A method for creating an announcement stream for a geographic region, as set forth in full above. Dependent claims: 2-14.
15. A system for processing video stream data, comprising:
a communication module, configured to receive characterizing metadata for a first video stream; and
a processor architecture coupled to the communication module, wherein the processor architecture comprises:
a video stream analysis module, configured to analyze a second video stream to obtain characterizing metadata for the second video stream, wherein the characterizing metadata of the first and second video streams comprises average luminance values corresponding to video frames in the first and second video streams; and
a comparison module, configured to compare the characterizing metadata for the first video stream to the characterizing metadata for the second video stream to generate offset data;
wherein the video stream analysis module is further configured to calculate timing information corresponding to segment boundaries for the second video stream using the offset data;
wherein the comparing of the characterizing metadata for the first video stream to the characterizing metadata for the second video stream comprises:
calculating difference values between the average luminance values for each of the individual video frames of the second video stream and the average luminance values for each of the individual video frames of the first video stream;
summing an absolute value of each of the difference values, for each of the individual video frames, to obtain a summed value; and
when the summed value is lower than a threshold value, generating the offset data by subtracting timing information for the second video stream from timing information for the first video stream.
Dependent claim: 16.
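Claim 15 recites a variant of the comparison in which the summed absolute difference itself is tested against the threshold, without dividing by the frame count. A minimal sketch, again with hypothetical names and data shapes:

```python
# Sketch of the claim-15 variant (illustration only, not the patented
# system): the summed value is compared directly against the threshold.

def compare_summed(first_lum, second_lum, first_start, second_start, threshold):
    """Return offset data when the summed absolute per-frame luminance
    difference is below the threshold, else None."""
    # Calculate difference values between the streams, frame by frame.
    diffs = [a - b for a, b in zip(first_lum, second_lum)]
    # Sum the absolute value of each difference value.
    summed = sum(abs(d) for d in diffs)
    # No division step here: the summed value is thresholded directly,
    # so the threshold must scale with the number of frames compared.
    if summed < threshold:
        return first_start - second_start
    return None
```

The practical consequence of omitting the division is that the threshold is length-dependent: comparing twice as many frames roughly doubles the summed value for the same per-frame mismatch.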
17. A method for processing video stream data, comprising:
receiving, at a computer system having a processor, a first waveform depicting data comprising average luminance values for each of a plurality of video frames contained within a first video stream;
calculating, at the computer system, average luminance values for each of a plurality of video frames contained within a second video stream;
generating a second waveform, corresponding to the second video stream, using the calculated average luminance values;
obtaining offset data by comparing the first waveform to the second waveform; and
generating timing information regarding the beginning and ending of at least one portion of the video stream using the offset data;
wherein the comparing of the first waveform to the second waveform comprises:
subtracting the average luminance values for each of the video frames of the second video stream from the average luminance values for each of the video frames of the first video stream to obtain a plurality of results;
summing an absolute value of each of the plurality of results, for each of the video frames, to obtain a summed value;
dividing the summed value by a total number of individual video frames for the second video stream to obtain a result; and
when the result is lower than a threshold value, generating the offset data by subtracting timing information for the second video stream from timing information for the first video stream.
Dependent claims: 18 and 19.
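Claim 17 frames the comparison in terms of luminance waveforms. One natural way to use such waveforms is to slide one across the other and take the first alignment whose mean absolute difference falls below the threshold; the sliding search itself is an assumption added for illustration (the claim only recites the comparison), as are the function name and the one-frame-per-step timing model.

```python
# Hedged sketch around claim 17: waveforms are lists of per-frame
# average luminance values; the sliding-offset search is an assumed
# extension for illustration, not language from the claim.

def find_alignment(first_wave, second_wave, threshold):
    """Slide second_wave across first_wave and return the frame offset
    at which the mean absolute difference first drops below threshold,
    or None if no alignment qualifies."""
    n = len(second_wave)
    for start in range(len(first_wave) - n + 1):
        window = first_wave[start:start + n]
        # Subtract per-frame values to obtain a plurality of results.
        diffs = [a - b for a, b in zip(window, second_wave)]
        # Sum absolute values and divide by the second stream's frame count.
        result = sum(abs(d) for d in diffs) / n
        if result < threshold:
            return start  # frame offset between the two streams
    return None

# Example: the second waveform matches the first at frame offset 3.
first = [10, 20, 30, 100, 110, 120, 40]
second = [100, 110, 120]
offset = find_alignment(first, second, threshold=5.0)
# offset is 3
```

Once the matching frame offset is known, subtracting the corresponding timestamps yields the offset data used to place segment boundaries in the second stream.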
Specification