Machines, systems, processes for automated music composition and generation employing linguistic and/or graphical icon based musical experience descriptions
First Claim
1. An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine controlled by emotion/style-indexed music-theoretic system operating parameters (SOP) produced for each digital piece of music being composed and generated in response to a set of emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user during an automated music composition and generation process, said automated music composition and generation system comprising:
a system user interface for enabling system users to (i) create a project for a digital piece of music to be composed and generated, and (ii) review and select one or more emotion-type musical experience descriptors, one or more style-type musical experience descriptors, as well as time and/or space parameters; and
an automated music composition and generation engine, operably connected to said system user interface, for receiving said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by the system user at said system user interface;
wherein said automated music composition and generation engine includes a plurality of function-specific subsystems cooperating together to automatically compose and generate one or more digital pieces of music in response to said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by the system user at said system user interface;
wherein each said digital piece of music to be composed and generated has a rhythmic landscape and a pitch landscape and contains a set of musical notes arranged and performed using an orchestration of one or more musical instruments selected for the digital piece of music;
wherein said digital piece of music has musical elements including (i) structure, form and phrase, (ii) tempo, meter and length, and (iii) key and tonality;
wherein said plurality of function-specific subsystems include a rhythmic landscape subsystem, a pitch landscape subsystem, and a controller code creation subsystem;
wherein each said function-specific subsystem supports and employs emotion/style-indexed music-theoretic system operating parameter tables for performing specific music theoretic operations during said automated music composition and generation process;
a parameter transformation subsystem for receiving said emotion-type and style-type musical experience descriptors and time and/or space parameters from said system user interface, and processing and transforming said emotion-type and style-type musical experience descriptors and time and/or space parameters and producing emotion/style-indexed music-theoretic system operating parameters for use by said function-specific subsystems employing emotion/style-indexed music-theoretic system operating parameter tables during said automated music composition and generation process;
a parameter storage subsystem for persistent storage and archiving of system user accounts and music composition projects, and all emotion/style-indexed music-theoretic system operating parameters generated by said parameter transformation subsystem for said music composition projects created by system users;
a parameter handling and processing subsystem operably connected to said parameter storage subsystem and said parameter transformation subsystem for (i) receiving emotion/style-indexed music-theoretic system operating parameters produced by said parameter transformation subsystem, and (ii) loading said emotion/style-indexed music-theoretic system operating parameters within said function-specific subsystems employing said emotion/style-indexed music-theoretic system operating parameter tables;
wherein said rhythmic landscape subsystem is configured to generate and manage the rhythmic landscape of the digital piece of music being composed, including the arrangement in time of all events in the digital piece of music being composed, and organizable at a high level by the tempo, meter, and length of the digital piece of music, at a middle level by the structure, form, and phrase of the digital piece of music, and at a low level by a specific organization of events of each musical instrument and/or other components of the digital piece of music being composed;
wherein said pitch landscape subsystem is configured to generate and manage the pitch landscape of the digital piece of music being composed, including the arrangement in space of all events in the digital piece of music being composed, and organizable at a high level by the key and tonality of the digital piece of music, at a middle level by the structure, form, and phrase of the digital piece of music, and at a low level by a specific organization of events of each musical instrument and/or other components of the digital piece of music being composed;
wherein said controller code creation subsystem is configured to create controller code to control the expression of the musical notes, rhythms, and musical instruments orchestrated in said digital piece of music being composed;
a digital piece creation subsystem for creating the digital piece of music, employing one or more automated virtual-instrument music synthesis techniques;
wherein during said automated music composition and generation process, said function-specific subsystems are controlled by the emotion/style-indexed music-theoretic system operating parameters loaded within said emotion/style-indexed music-theoretic system operating parameters (SOP) tables supported within said function-specific subsystems, while the digital piece of music composed and generated has the emotional and stylistic characteristics expressed throughout the rhythmic and pitch landscapes of the digital piece of music as represented by said set of emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by said system user; and
a music editability subsystem, interfaced with said system user interface, allowing system users to edit and modify generated digital pieces of music by (i) using said system user interface to edit the set of emotion-type and style-type musical experience descriptors and time and/or space parameters stored in said parameter storage subsystem, (ii) using said parameter transformation subsystem to transform said edited set of emotion-type and style-type musical experience descriptors and time and/or space parameters into a new set of emotion/style-indexed music-theoretic system operating parameters (SOP) for storage in said parameter storage subsystem and loading within said function-specific subsystems by said parameter handling and processing subsystem, and (iii) using said automated music composition and generation engine to generate a new digital piece of music using said new set of emotion/style-indexed music-theoretic system operating parameters.
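The claim's parameter transformation subsystem maps user-supplied emotion-type and style-type descriptors, together with time parameters, into music-theoretic system operating parameters (SOP) drawn from emotion/style-indexed tables. A minimal sketch of that lookup-and-derivation step is below; the table contents, field names, and the `MusicalExperience`/`transform_parameters` names are illustrative assumptions, not the patent's actual data model.

```python
# Hypothetical sketch of the parameter transformation subsystem:
# (emotion, style) descriptors index a table of music-theoretic SOP,
# and a time parameter is converted into a piece length in measures.
# All names and table values here are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class MusicalExperience:
    emotion: str          # e.g. "happy" (emotion-type descriptor)
    style: str            # e.g. "pop" (style-type descriptor)
    length_seconds: int   # time parameter supplied by the system user


# Emotion/style-indexed SOP table: (emotion, style) -> music-theoretic parameters.
SOP_TABLE = {
    ("happy", "pop"): {"tempo_bpm": 120, "meter": "4/4", "key": "C", "tonality": "major"},
    ("sad", "ambient"): {"tempo_bpm": 60, "meter": "3/4", "key": "A", "tonality": "minor"},
}


def transform_parameters(desc: MusicalExperience) -> dict:
    """Produce the SOP set for one piece from the user's descriptors."""
    sop = dict(SOP_TABLE[(desc.emotion, desc.style)])
    # Derive the piece length in measures from the time parameter,
    # the table's tempo, and the meter's beats per measure.
    beats_per_measure = int(sop["meter"].split("/")[0])
    beats_total = sop["tempo_bpm"] * desc.length_seconds / 60
    sop["length_measures"] = round(beats_total / beats_per_measure)
    return sop


sop = transform_parameters(MusicalExperience("happy", "pop", 30))
# 120 bpm over 30 s gives 60 beats; at 4 beats per measure, 15 measures.
```

The same transformed SOP dictionary would then be stored (the parameter storage subsystem) and loaded into each function-specific subsystem's tables (the parameter handling and processing subsystem), which is also how the editability loop in the final clause regenerates a piece after descriptors are edited.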
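The rhythmic and pitch landscape clauses both describe a three-level organization: a high level (tempo, meter, and length, or key and tonality), a middle level (structure, form, and phrase), and a low level (per-instrument event organization). One way to picture that hierarchy as a data structure is sketched below; the class names and fields are hypothetical, chosen only to mirror the three levels named in the claim.

```python
# Hypothetical data-structure sketch of the three-level landscape
# organization described in the claim. Names and values are assumptions.

from dataclasses import dataclass, field


@dataclass
class NoteEvent:
    """Low level: one timed event for one musical instrument."""
    instrument: str
    onset_beat: float
    duration_beats: float
    pitch: int  # MIDI note number; the rhythmic landscape concerns the timing fields


@dataclass
class Phrase:
    """Middle level: structure, form, and phrase (e.g. 'A' in an A-B-A form)."""
    label: str
    events: list = field(default_factory=list)


@dataclass
class Landscape:
    """High level: tempo, meter, length / key and tonality for the whole piece."""
    tempo_bpm: int
    meter: str
    key: str
    tonality: str
    phrases: list = field(default_factory=list)


piece = Landscape(120, "4/4", "C", "major", phrases=[
    Phrase("A", [NoteEvent("piano", 0.0, 1.0, 60),
                 NoteEvent("piano", 1.0, 1.0, 64)]),
])
```

Under this reading, the rhythmic landscape subsystem would govern the timing fields at each level and the pitch landscape subsystem the key/tonality and pitch fields, while the controller code creation subsystem would attach expression data (e.g. dynamics or articulation controls) to the low-level events.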
Abstract
Automated music composition and generation machines, systems and methods, and architectures that allow anyone, without possessing any knowledge of music theory or practice, or expertise in music or other creative endeavors, to instantly create unique and professional-quality music, synchronized to any kind of media content, including, but not limited to, video, photography, slideshows, and any pre-existing audio format, as well as any object, entity, and/or event, wherein the system user only requires knowledge of one's own emotions and/or artistic concepts which are to be expressed in a piece of music that will ultimately be composed by the automated composition and generation system of the present invention.
149 Citations
15 Claims
Specification