MACHINES, SYSTEMS, PROCESSES FOR AUTOMATED MUSIC COMPOSITION AND GENERATION EMPLOYING LINGUISTIC AND/OR GRAPHICAL ICON BASED MUSICAL EXPERIENCE DESCRIPTORS
Abstract
Automated music composition and generation machines, systems, methods, and architectures that allow anyone, without possessing any knowledge of music theory or practice, or expertise in music or other creative endeavors, to instantly create unique and professional-quality music, synchronized to any kind of media content, including, but not limited to, video, photography, slideshows, and any pre-existing audio format, as well as any object, entity, and/or event, wherein the system user only requires knowledge of one's own emotions and/or artistic concepts which are to be expressed in a piece of music that will ultimately be composed by the automated composition and generation system of the present invention.
59 Claims
1-39. (canceled)
40. An automated music composition and generation system driven by emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user, comprising:
a system user interface for enabling system users to provide emotion-type and style-type musical experience descriptors and time and/or space parameters to said automated music composition and generation system for processing; and

an automated music composition and generation engine, operably connected to said system user interface, and including a plurality of function-specific subsystems cooperating together to compose and generate one or more digital pieces of music, each of which contains a set of musical notes arranged and performed using an orchestration of one or more musical instruments selected for the digital piece of music;

wherein said automated music composition and generation engine includes an arrangement of function-specific subsystems comprising:

a parameter transformation subsystem for receiving said emotion-type and style-type musical experience descriptors and time and/or space parameters from said system user interface, and for processing and transforming said parameters to produce music-theoretic parameters for use by one or more of said function-specific subsystems during automated music composition and generation;

an orchestration subsystem for automatically orchestrating said piece of music being composed for performance by an ensemble of one or more virtual instruments;

a digital piece creation subsystem for creating a digital version of the orchestrated piece of music, employing one or more automated virtual-instrument music synthesis techniques; and

a feedback and learning subsystem for supporting a feedback and learning cycle within said automated music composition and generation system, wherein said system user provides a rating of a produced piece of orchestrated music and/or music preferences, in response to experiencing a piece of orchestrated music composed by said automated music composition and generation system, and wherein said automated music composition and generation system automatically generates an updated piece of music based on said rating and/or preferences provided by said system user.

View Dependent Claims (41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52)
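To make the claimed subsystem architecture concrete, the following is a minimal, hypothetical sketch of how the engine of claim 40 could be wired together. All class names, method names, and lookup tables below are illustrative assumptions for exposition only; they are not taken from the patent, and the real transformation and synthesis logic is far richer than these toy rules.

```python
from dataclasses import dataclass

# Hypothetical sketch of the engine of claim 40; all names and rules
# here are illustrative assumptions, not the patented implementation.

@dataclass
class MusicTheoreticParams:
    tempo_bpm: int
    key: str
    length_seconds: int

class ParameterTransformationSubsystem:
    """Transforms emotion/style descriptors and time parameters into
    music-theoretic parameters (tempo, key, length)."""
    # Toy lookup tables standing in for the patent's transformation logic.
    EMOTION_TO_TEMPO = {"happy": 120, "sad": 70, "dramatic": 100}
    STYLE_TO_KEY = {"pop": "C major", "film score": "D minor"}

    def transform(self, emotion, style, length_seconds):
        return MusicTheoreticParams(
            tempo_bpm=self.EMOTION_TO_TEMPO.get(emotion, 90),
            key=self.STYLE_TO_KEY.get(style, "C major"),
            length_seconds=length_seconds,
        )

class OrchestrationSubsystem:
    """Selects an ensemble of virtual instruments for the piece."""
    def orchestrate(self, params):
        # Toy rule: minor keys get an added string section.
        return ["piano", "strings"] if "minor" in params.key else ["piano"]

class DigitalPieceCreationSubsystem:
    """Stands in for virtual-instrument music synthesis; returns a
    description of the digital piece rather than audio data."""
    def render(self, params, instruments):
        return {"params": params, "instruments": instruments}

class AutomatedCompositionEngine:
    """Wires the function-specific subsystems together."""
    def __init__(self):
        self.transformer = ParameterTransformationSubsystem()
        self.orchestrator = OrchestrationSubsystem()
        self.creator = DigitalPieceCreationSubsystem()

    def compose(self, emotion, style, length_seconds):
        params = self.transformer.transform(emotion, style, length_seconds)
        instruments = self.orchestrator.orchestrate(params)
        return self.creator.render(params, instruments)
```

For example, `AutomatedCompositionEngine().compose("sad", "film score", 60)` would yield a slow piece in D minor scored for piano and strings under the toy rules above.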
53. Automated music composition and generation process driven by emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user, comprising the steps of:
(a) said system user providing a set of emotion-type and style-type musical experience descriptors and time and/or space parameters to a system user interface operably connected to an automated music composition and generation engine constructed from various function-specific subsystems that are configured for automatically composing and generating a piece of music in response to said set of emotion-type and style-type musical experience descriptors and time and/or space parameters;

(b) transforming said set of emotion-type and style-type musical experience descriptors and time and/or space parameters into a set of music-theoretic parameters;

(c) providing said set of music-theoretic parameters to said function-specific subsystems within said automated music composition and generation engine;

(d) said function-specific subsystems processing said set of music-theoretic parameters and using one or more automated virtual-instrument music synthesis methods to automatically compose and generate a piece of digital music; and

(e) delivering the piece of digital music to said system user for review and evaluation.

View Dependent Claims (54, 55, 56, 57, 58)
59. Automated music composition and generation process driven by emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user, comprising the steps of:
(a) the system user providing a set of emotion-type and style-type musical experience descriptors and time and/or space parameters to a system user interface operably connected to an automated music composition and generation engine constructed from various function-specific subsystems that are configured for automatically composing and generating a piece of music in response to said set of emotion-type and style-type musical experience descriptors and time and/or space parameters;

(b) transforming said set of emotion-type and style-type musical experience descriptors and time and/or space parameters into a set of music-theoretic parameters;

(c) providing said set of music-theoretic parameters to said function-specific subsystems within said automated music composition and generation engine;

(d) said function-specific subsystems processing said set of music-theoretic parameters to automatically compose and generate a piece of digital music;

(e) delivering the piece of digital music to said system user for review and evaluation;

(f) said system user providing feedback to said automated music composition and generation engine relating to the system user's rating of the produced piece of music and/or preferences; and

(g) using said feedback to generate another piece of digital music for review and evaluation by said system user.
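The feedback cycle of steps (e) through (g) can be sketched as a simple compose-rate-regenerate loop. This is a hypothetical illustration only: the stub `compose_piece` function, the 1-to-5 rating scale, the acceptance threshold, and the bounded number of rounds are all assumptions introduced here, not details from the patent, and a real engine would adapt its music-theoretic parameters based on the feedback rather than merely recomposing.

```python
import random

# Hypothetical sketch of claim 59's feedback loop; the stub engine,
# rating scale, and round limit are illustrative assumptions.

def compose_piece(descriptors, seed):
    # Steps (a)-(d): stand-in for transforming the descriptors into
    # music-theoretic parameters and synthesizing a digital piece.
    rng = random.Random(seed)
    return {"descriptors": descriptors, "tempo_bpm": rng.randint(60, 140)}

def compose_with_feedback(descriptors, rate_piece, max_rounds=3, accept_at=4):
    """Step (e): deliver each piece for review; step (f): collect the
    user's rating; step (g): regenerate until the piece is accepted
    or the round limit is reached."""
    piece = None
    for round_no in range(max_rounds):
        piece = compose_piece(descriptors, seed=round_no)
        if rate_piece(piece) >= accept_at:
            break  # user accepted the piece
    return piece

# Example: a user who only accepts slow pieces (tempo below 100 BPM),
# simulated here by a rating function in place of interactive review.
result = compose_with_feedback(
    {"emotion": "sad", "style": "film score", "length_s": 60},
    rate_piece=lambda p: 5 if p["tempo_bpm"] < 100 else 2,
)
```

Passing the rating function as a parameter keeps the loop testable; in the claimed system that role is played by the system user interface and the feedback and learning subsystem.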