Systems and Methods for Portable Audio Synthesis
Abstract
Systems and methods for creating, modifying, interacting with, and playing music are provided, particularly systems and methods employing a top-down process, where the user is provided with a musical composition that may be modified, interacted with, played, and/or stored (for later play). The system preferably is provided in a handheld form factor, and a graphical display is provided to display status information and graphical representations of musical lanes or components, which preferably vary in shape as musical parameters and the like are changed for particular instruments or musical components, such as a microphone input or audio samples. An interactive auto-composition process preferably is utilized that employs musical rules and preferably a pseudo-random number generator, which may also incorporate randomness introduced by the timing of user input or the like. The user may then quickly begin creating desirable music in accordance with one or a variety of musical styles, modifying the auto-composed (or previously created) musical composition, either for a real-time performance and/or for storing and subsequent playback. In addition, an analysis process flow is described for using pre-existing music as input(s) to an algorithm to derive music rules that may be used as part of a music style in a subsequent auto-composition process. In addition, the present invention makes use of node-based music generation as part of a system and method to broadcast and receive music data files, which are then used to generate and play music. By incorporating the music generation process into a node/subscriber unit, the bandwidth-intensive systems of conventional techniques can be avoided. Consequently, the bandwidth can preferably also be used for additional features, such as node-to-node and node-to-base music data transmission.
The present invention is characterized by the broadcast of relatively small data files that contain various parameters sufficient to describe the music to the node/subscriber music generator. In addition, problems associated with audio synthesis in a portable environment are addressed in the present invention by providing systems and methods for performing audio synthesis in a manner that simplifies design requirements and/or minimizes cost, while still providing quality audio synthesis features targeted for a portable system (e.g., portable telephone). In addition, problems associated with the tradeoff between overall sound quality and memory requirements in a MIDI sound bank are addressed in the present invention by providing systems and methods for a reduced memory size footprint MIDI sound bank.
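The abstract describes auto-composition driven by musical rules and a pseudo-random number generator, optionally seeded by the timing of user input. The specification does not provide code; the following is a minimal illustrative sketch of that idea, in which all names, rule values, and the seeding scheme are hypothetical.

```python
import random

# Hypothetical musical "rules" for one style: allowed scale degrees and
# note lengths. These values are illustrative, not from the specification.
STYLE_RULES = {
    "scale": [0, 2, 4, 5, 7, 9, 11],   # major-scale pitch classes
    "durations": [1, 2, 4],            # in sixteenth-note units
}

def autocompose(seed, n_notes=8, rules=STYLE_RULES):
    """Generate (pitch, duration) pairs constrained by the style rules.

    The seed could incorporate randomness from user-input timing, as the
    abstract suggests; here it is simply an integer.
    """
    rng = random.Random(seed)
    notes = []
    for _ in range(n_notes):
        pitch = 60 + rng.choice(rules["scale"])   # middle-C octave
        duration = rng.choice(rules["durations"])
        notes.append((pitch, duration))
    return notes

melody = autocompose(seed=1234)
```

Because the generator is rule-constrained, every generated note is guaranteed to fit the chosen style, while the seed makes the result reproducible for storage and later playback.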
9 Claims
1. A method for generating broadcast music comprising the steps of:

generating a music data file;

broadcasting the music data file from a base station to one or more of a plurality of nodes;

receiving the music data file at one or more of the plurality of nodes;

extracting musical definition data from the music data file, wherein the musical definition data provides information regarding a song data structure and data for musical parameters in accordance with the song data structure;

processing the musical definition data, wherein a song in accordance with the song data structure and the musical parameters is generated by the one or more of the plurality of nodes; and

playing the generated song at the one or more of the plurality of nodes.
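The key point of this claim is that the broadcast file carries only a compact musical description, and each node regenerates the audio locally. A minimal sketch of the extract-and-generate steps follows; the file format and field names are hypothetical, since the claim does not fix a concrete encoding.

```python
import json

# Hypothetical compact music data file: a song structure plus parameters.
# A few hundred bytes of description, rather than a full audio stream.
music_data_file = json.dumps({
    "structure": ["intro", "verse", "chorus"],
    "params": {"tempo_bpm": 120, "key": "A minor", "style": "techno"},
})

def extract(data_file):
    """Node side: recover the musical definition data from the broadcast file."""
    d = json.loads(data_file)
    return d["structure"], d["params"]

def generate_song(structure, params):
    """Stand-in for the node's music generation step: produce one entry
    per section of the song data structure, tagged with its tempo."""
    return [(section, params["tempo_bpm"]) for section in structure]

structure, params = extract(music_data_file)
song = generate_song(structure, params)
```

Because only the definition data is broadcast, bandwidth use is independent of song length or audio quality, which is the motivation the abstract gives for node-based generation.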
2. A system for generating a musical composition based on a received music data file, comprising:

a transmitter/receiver, wherein the transmitter/receiver transmits data to, and receives data from, one or more second systems remote from the system, wherein the data received by the system includes at least a music data file;

a music generation device, wherein the music generation device executes at least a music generation algorithm, wherein musical rules are applied to musical data in accordance with the music generation algorithm to generate music output for one or more musical compositions; and

a memory, wherein at least the received music data file is stored in the memory;

wherein musical data is generated based on data from the received music data file, and wherein the music generation device generates the musical composition based on the received music data file.

Dependent claims: 3, 4, 5, 6.
7. A method of performing audio synthesis in a portable environment, wherein source sample data is processed by a processing unit to generate synthesized audio samples, the method comprising the steps of:

providing an interpolation function, wherein source monaural sample data is accessed and interpolated to generate one or more interpolated monaural samples based on the source monaural sample data;

providing a filter function, wherein at least one of the interpolated monaural samples is filtered to generate a filtered interpolated monaural sample; and

providing a gain function, wherein the filtered interpolated monaural sample is processed to generate at least a left and a right sample;

wherein the left and the right sample together may subsequently be processed to create a stereophonic field.
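The claim names a three-stage mono pipeline (interpolate, filter, apply gain) that only becomes stereo at the final gain stage. A minimal numeric sketch of one sample's path through such a pipeline follows; the linear interpolator, one-pole filter, and constant pan gains are illustrative choices, not mandated by the claim.

```python
def interpolate(source, pos):
    """Linear interpolation between adjacent source samples at fractional pos."""
    i = int(pos)
    frac = pos - i
    return source[i] * (1.0 - frac) + source[i + 1] * frac

def lowpass(sample, prev, alpha=0.5):
    """One-pole low-pass filter; 'prev' is the previous filtered output."""
    return prev + alpha * (sample - prev)

def apply_gain(sample, left_gain, right_gain):
    """Split the filtered mono sample into left/right samples,
    creating a stereophonic field from a single processing chain."""
    return sample * left_gain, sample * right_gain

source = [0.0, 1.0, 0.0, -1.0]            # source monaural sample data
mono = interpolate(source, 0.5)            # -> 0.5
filtered = lowpass(mono, prev=0.0)         # -> 0.25
left, right = apply_gain(filtered, 0.8, 0.2)
```

Keeping the interpolation and filter stages monaural, and duplicating only at the gain stage, roughly halves the per-voice computation compared with processing two channels throughout, which fits the portable, cost-minimizing goal stated in the abstract.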
8. A method of performing MIDI-based synthesis in a portable environment, wherein a MIDI synthesis function is called to process MIDI events by accessing a reduced-footprint soundbank to generate audio output, the method comprising the steps of:

providing a DLS-compatible soundbank comprised of two levels for a first desired sound, wherein a first level is associated with a first sample comprised of the initial sound of impact, and a second level is associated with at least a second sample comprised of a looping period of a stable waveform; and

providing parameter data associated with the DLS-compatible soundbank relating the first sample to the first desired sound and to a plurality of additional sounds;

wherein the DLS-compatible soundbank and associated parameter data occupy a smaller footprint than otherwise would be occupied if the first sample were not related to the plurality of additional sounds.
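The footprint saving in this claim comes from two ideas: each sound stores only a short attack ("impact") sample plus a short loopable sustain sample, and parameter data lets several sounds share the same samples. The data-structure sketch below is illustrative; the sample values, names, and pitch-shift parameter are hypothetical, not taken from the DLS specification or the patent.

```python
# Hypothetical reduced-footprint soundbank: short shared samples instead of
# one full-length recording per sound.
samples = {
    "piano_attack": [0.9, 0.4, 0.1],   # level 1: initial sound of impact
    "piano_loop": [0.05, -0.05],       # level 2: looping stable waveform
}

# Parameter data relating the same samples to several desired sounds
# (here, different pitches of the same instrument).
sound_params = {
    "piano_c4": {"attack": "piano_attack", "loop": "piano_loop", "pitch_shift": 0},
    "piano_d4": {"attack": "piano_attack", "loop": "piano_loop", "pitch_shift": 2},
}

def render(sound, n_loop_repeats):
    """Play the attack once, then repeat the stable loop for the sustain.
    (A real synthesizer would also apply the pitch shift and an envelope.)"""
    p = sound_params[sound]
    return samples[p["attack"]] + samples[p["loop"]] * n_loop_repeats

out = render("piano_d4", n_loop_repeats=2)
```

Note that arbitrarily long sustains cost no extra storage (the loop is repeated at playback time), and adding a new pitch adds only a small parameter record rather than new sample data.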
9. A method for generating music comprising the steps of:

generating a music data file at a first node;

extracting musical definition data from the music data file, wherein the musical definition data provides information regarding a song data structure and data for musical parameters in accordance with the song data structure;

processing the musical definition data, wherein a song in accordance with the song data structure and the musical parameters is generated by the first node;

wherein the first node executes at least a music generation algorithm, wherein musical rules are applied to musical data in accordance with the music generation algorithm to generate music output for a musical composition;

wherein a sequence of MIDI events is provided to a digital signal processing resource, wherein at least one of the MIDI events includes delta time parameter data; further wherein audio stream events are processed, wherein one or more of the audio stream events has associated therewith audio sample data, wherein the audio sample data is provided to the digital signal processing resource, and wherein the audio sample data is not provided from a MIDI sound bank;

further wherein a first MIDI event is provided that is configured to include delta time parameter data associated with the intended playback timing of at least one audio stream event;

further wherein the audio stream event is rhythmically synchronized with the sequence of MIDI events using the first MIDI event;

playing the generated song at the first node comprising a multi-mode music generation device operating in at least a first mode and a second mode, wherein the first mode comprises an autocomposition of music process;

wherein a multi-mode memory resource is provided, wherein the multi-mode memory resource stores first information when the first node operates in the first mode of operation and second information when the first node operates in the second mode of operation;

further wherein the first information is stored in the multi-mode memory resource at a first point in time and the second information is stored in the multi-mode memory resource at a second point in time, wherein the multi-mode memory resource selectively contains the first information or the second information depending upon whether the autocomposition of music process is being performed;

transmitting a modified data file associated with the generated song for reception by one or more remote systems, wherein the one or more remote systems may generate a modified musical composition based on the modified data file;

providing a display device integrated in the first node, with a visual representation for each of a plurality of musical instruments, wherein the visual representation comprises a plurality of icons, wherein an icon is displayed for each of the plurality of musical instruments, wherein the displayed icons provide a first level of visual display;

wherein, in the first level of visual display, if a particular musical instrument is active in the music output, then the icon for the particular musical instrument visually changes on the display device synchronized with the music output;

wherein the first node comprises a PBX accessible by a user via a telephone, and wherein, based on a detected one or more user commands, selectively controlling the music generation algorithm to automatically compose on-hold music that is audibly provided to the user;

providing the music data file in a file format comprised of a plurality of slots, wherein the data length of an individual slot can be enlarged or shrunk without affecting the compatibility of other slots;

wherein the music generation algorithm comprises the steps of:

accessing at least one parameter value representing a range of note pitch values associated with a musical instrument;

executing program instructions to generate a musical note data unit associated with the musical instrument;

comparing the musical note data unit to the parameter value to determine whether the musical note data unit is within the range of note pitch values; and

in the event that the musical note data unit is not within the range of note pitch values, modifying the musical note data unit to be within the range of note pitch values;

providing an audio synthesis function, wherein source sample data is processed by a processing unit to generate synthesized audio samples;

providing an interpolation function, wherein source monaural sample data is accessed and interpolated to generate one or more interpolated monaural samples based on the source monaural sample data;

providing a filter function, wherein at least one of the interpolated monaural samples is filtered to generate a filtered interpolated monaural sample;

providing a gain function, wherein the filtered interpolated monaural sample is processed to generate at least a left and a right sample, wherein the left and the right sample together may subsequently be processed to create a stereophonic field;

providing a reduced-footprint soundbank;

providing a DLS-compatible soundbank comprised of two levels for a first desired sound, wherein a first level is associated with a first sample comprised of the initial sound of impact, and a second level is associated with at least a second sample comprised of a looping period of a stable waveform; and

providing parameter data associated with the DLS-compatible soundbank relating the first sample to the first desired sound and to a plurality of additional sounds, wherein the DLS-compatible soundbank and associated parameter data occupy a smaller footprint than otherwise would be occupied if the first sample were not related to the plurality of additional sounds.
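Claim 9's music generation algorithm includes a concrete four-step check: generate a note, compare it against the instrument's pitch range, and modify it if it falls outside. The claim does not say how the note is modified; the sketch below assumes octave shifting (so the pitch class is preserved), with a final clamp as a fallback for ranges narrower than an octave. The range values are hypothetical.

```python
def clamp_note(note, low, high):
    """Modify a generated MIDI note so it falls within [low, high].

    Assumption (not specified by the claim): shift by whole octaves
    (12 semitones) to keep the pitch class, then clamp as a last resort.
    """
    while note > high:          # comparing the note to the range parameter
        note -= 12
    while note < low:
        note += 12
    return min(max(note, low), high)

# Hypothetical bass range: MIDI notes 28..52.
shifted = clamp_note(60, 28, 52)   # 60 is above the range -> folded down to 48
```

This kind of post-generation constraint lets the pseudo-random generator run freely while the parameter data guarantees every emitted note is playable by the target instrument.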
Specification