Sound alignment user interface
First Claim
1. A method implemented by one or more computing devices, the method comprising:
outputting a user interface having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal;
identifying, automatically and without user input, a first portion of the first representation that contains one or more features that correspond to one or more features in a second portion of the second representation;
receiving one or more manual inputs via interaction with the user interface, the manual inputs being selection of a first point in time within the first representation and selection of a corresponding second point in time within the second representation; and
generating aligned sound data from the sound data from the first and second sound signals by at least:
aligning the first point in time in the sound data generated from the first sound signal to the second point in time in the sound data generated from the second sound signal, the aligning effective to anchor the first point and the second point to each other in the aligned sound data; and
adjusting, automatically and without user input, the second portion of the second representation containing the one or more features to have a slower speed or a faster speed, the adjustment effective to align in time the one or more features in the second portion with the one or more features in the first portion without changing the alignment of the first point with the second point.
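The anchor-and-stretch operation recited in claim 1 can be sketched in code. The following is a minimal illustration only, not the patented implementation: it uses uniform linear-interpolation resampling as a crude stand-in for a feature-aligning time stretch, and all names (`stretch`, `align_with_anchor`, the sample-index anchors) are hypothetical.

```python
import numpy as np

def stretch(segment, new_len):
    """Uniformly time-stretch a 1-D sample array to new_len samples via
    linear interpolation (a stand-in for a real time-stretching algorithm,
    which would also preserve pitch)."""
    if new_len <= 0:
        return np.zeros(0)
    if len(segment) == 0:
        return np.zeros(new_len)
    old_idx = np.linspace(0, len(segment) - 1, num=new_len)
    return np.interp(old_idx, np.arange(len(segment)), segment)

def align_with_anchor(sig_a, sig_b, anchor_a, anchor_b):
    """Warp sig_b so that its anchor sample lands on sig_a's anchor sample.
    The portions before and after the anchor are stretched independently,
    so the anchor points themselves stay pinned to each other."""
    head = stretch(sig_b[:anchor_b], anchor_a)
    tail = stretch(sig_b[anchor_b:], len(sig_a) - anchor_a)
    return np.concatenate([head, tail])
```

A production implementation would use a pitch-preserving stretch (e.g. a phase vocoder) and derive the amount of stretching from the detected features rather than stretching each side uniformly; the sketch only shows how the manually selected points remain anchored while the surrounding audio speeds up or slows down.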
Abstract
Sound alignment user interface techniques are described. In one or more implementations, a user interface is output having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal. One or more inputs are received, via interaction with the user interface, that indicate that a first point in time in the first representation corresponds to a second point in time in the second representation. Aligned sound data is generated from the sound data from the first and second sound signals based at least in part on correspondence of the first point in time in the sound data generated from the first sound signal to the second point in time in the sound data generated from the second sound signal.
Claims (20)
1. (Recited in full above under "First Claim.") Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A system comprising:
at least one module implemented at least partially in hardware and configured to output a user interface that is usable to manually select points within a plurality of representations of sound data generated from a plurality of sound signals, each selected point in a representation corresponding to a selected point in each other representation and the selected points within each representation defining a plurality of corresponding intervals within the representations of sound data; and
one or more modules implemented at least partially in hardware and configured to generate aligned sound data from the sound data generated from the plurality of sound signals using the defined plurality of corresponding intervals by:
aligning each selected point within a representation to the corresponding selected point within each other representation, the aligning effective to anchor the corresponding points to each other in the aligned sound data;
automatically and without user intervention, dividing an alignment task for aligning the sound data generated from the plurality of sound signals into a plurality of interval alignment tasks, the interval alignment tasks being alignment of each defined interval within each representation with the corresponding defined interval within each other representation to produce aligned intervals without changing the alignment of the corresponding selected points;
combining the aligned intervals from the plurality of representations; and
using the combination of the aligned intervals as the aligned sound data.
Dependent claims: 11, 12, 13, 14, 15, 16.
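The interval-division step of claim 10 can also be sketched. Again this is an illustrative sketch under stated assumptions, not the claimed implementation: the selected points are modeled as sample indices, each interval alignment task is reduced to a uniform linear-interpolation stretch, and all names (`stretch`, `align_by_intervals`) are hypothetical.

```python
import numpy as np

def stretch(segment, new_len):
    """Uniformly time-stretch a 1-D sample array to new_len samples
    via linear interpolation (stand-in for a real time stretch)."""
    if new_len <= 0:
        return np.zeros(0)
    if len(segment) == 0:
        return np.zeros(new_len)
    old_idx = np.linspace(0, len(segment) - 1, num=new_len)
    return np.interp(old_idx, np.arange(len(segment)), segment)

def align_by_intervals(sig_a, sig_b, points_a, points_b):
    """Divide the overall alignment into per-interval tasks: the selected
    points (increasing sample indices) partition both signals into
    corresponding intervals; each interval of sig_b is aligned to the
    length of the matching interval of sig_a, and the aligned intervals
    are concatenated, so every selected point stays anchored to its
    counterpart in the combined result."""
    bounds_a = [0] + list(points_a) + [len(sig_a)]
    bounds_b = [0] + list(points_b) + [len(sig_b)]
    pieces = [stretch(sig_b[b0:b1], a1 - a0)
              for (a0, a1), (b0, b1) in zip(zip(bounds_a, bounds_a[1:]),
                                            zip(bounds_b, bounds_b[1:]))]
    return np.concatenate(pieces)
```

Because each interval is processed independently, the per-interval tasks could be run in parallel and then combined, which is one plausible reading of why the claim divides the alignment task this way.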
17. One or more non-transitory computer-readable storage media having instructions stored thereon that, responsive to execution on a computing device, cause the computing device to perform operations comprising:
outputting a user interface having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal;
identifying, automatically and without user input, a first portion of the first representation that contains one or more features that correspond to one or more features in a second portion of the second representation;
receiving one or more manual inputs via interaction with the user interface, the manual inputs being selection of a first point in time within the first representation and selection of a corresponding second point in time within the second representation; and
generating aligned sound data from the sound data generated from the first and second sound signals at least in part by:
aligning the first manually selected point in time in the sound data generated from the first sound signal to the second manually selected point in time in the sound data generated from the second sound signal, the aligning effective to anchor the first point and the second point to each other in the aligned sound data; and
adjusting, automatically and without user input, the second portion to have a slower speed or a faster speed, the adjustment effective to align in time the one or more features in the second portion with the one or more features in the first portion without changing the alignment of the first and second manually selected points.
Dependent claims: 18, 19, 20.
Specification