Method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine
Abstract
An automated music composition and generation system and process for producing one or more pieces of digital music, by providing a set of musical energy (ME) quality control parameters to an automated music composition and generation engine, applying certain of the selected musical energy quality control parameters as markers to specific spots along the timeline of a selected media object or event marker during a scoring process performed by the system user, and providing the selected set of musical energy quality control parameters to drive the automated music composition and generation engine to automatically compose and generate one or more pieces of digital music, with control over the specified qualities of musical energy embodied in and expressed by each piece of digital music to be composed and generated by the automated music composition and generation engine.
12 Claims
1. An automated music composition and generation system for composing and generating pieces of digital music in response to a system user providing, as input, musical energy (ME) quality control parameters, said automated music composition and generation system comprising:

an automated music composition and generation engine; and

a system user interface subsystem interfaced with said automated music composition and generation engine, supporting spotting of digital media objects and timeline-based event markers, and employing a graphical user interface (GUI) for supporting (i) the selection of musical energy (ME) quality control parameters, including emotion-type musical experience descriptors (MXDs) for each piece of digital music to be automatically composed and generated by said automated music composition and generation engine, style-type musical experience descriptors (MXDs) for each said piece of digital music, and timing parameters indicating the time duration characteristics of each said piece of digital music, and (ii) the selection of one or more musical energy (ME) quality control parameters from the group consisting of instrumentation, melody, dynamics, ensemble performance, orchestration, volume, tempo, rhythm, harmony, start/hit/stop markers indicating the location of start, hit and stop events in each said piece of digital music, and framing control markers indicating the location of the intro, climax, and outro of said piece of digital music;

wherein said musical energy quality control parameters are applied along the timeline of a graphical representation of a selected digital media object or timeline-based event marker, so as to control particular musical energy qualities within the piece of digital music being composed and generated by said automated music composition and generation engine using said musical energy quality control parameters selected by the system user and supplied, as input, to said system user interface subsystem.

(Dependent claims 2, 3, and 4 not shown.)
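The parameter set recited in claim 1 can be pictured as a single data structure that a GUI populates per piece. The sketch below is purely illustrative: the class and field names (`MEControlParameters`, `TimelineMarker`, and so on) are assumptions for exposition, not part of the patent; it simply groups emotion- and style-type descriptors, a timing parameter, optional ME quality controls, and the start/hit/stop and intro/climax/outro timeline markers the claim enumerates.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class MarkerType(Enum):
    # start/hit/stop event markers and intro/climax/outro framing
    # markers, per the groups recited in the claim
    START = "start"
    HIT = "hit"
    STOP = "stop"
    INTRO = "intro"
    CLIMAX = "climax"
    OUTRO = "outro"

@dataclass
class TimelineMarker:
    marker_type: MarkerType
    time_sec: float          # position along the media object's timeline

@dataclass
class MEControlParameters:
    emotion_descriptors: list[str] = field(default_factory=list)  # emotion-type MXDs
    style_descriptors: list[str] = field(default_factory=list)    # style-type MXDs
    duration_sec: float = 30.0                                    # timing parameter
    tempo_bpm: Optional[float] = None                             # optional ME quality controls
    volume: Optional[float] = None
    markers: list[TimelineMarker] = field(default_factory=list)

# A GUI could assemble one of these per piece to be composed:
params = MEControlParameters(
    emotion_descriptors=["uplifting"],
    style_descriptors=["cinematic"],
    duration_sec=30.0,
    markers=[TimelineMarker(MarkerType.START, 0.0),
             TimelineMarker(MarkerType.CLIMAX, 20.0),
             TimelineMarker(MarkerType.STOP, 30.0)],
)
```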
5. An automated music composition and generation system for composing and generating pieces of digital music in response to a system user providing, as input, musical energy quality control parameters, said automated music composition and generation system comprising:

a system user interface subsystem including at least one GUI-based system user interface that supports composition control over musical energy (ME) embodied in pieces of digital music being composed and generated; and

an automated music composition and generation engine in communication with said system user interface subsystem to receive musical energy quality control parameters from the system user, and supporting subsystems employing music-theoretic system operating parameters (SOP) to automatically compose and generate each said piece of digital music in response to said musical energy quality control parameters provided as input;

wherein said system user interface subsystem supports communication of musical energy quality control parameters from the system user to said automated music composition and generation engine, for transformation into said music-theoretic system operating parameters (SOP) used to drive said subsystems supported by said automated music composition and generation engine, and supports dimensions of control over the qualities of musical energy (ME) embodied or expressed in the pieces of digital music being automatically composed and generated by said automated music composition and generation engine; and

wherein the dimensions of control over musical energy (ME) in each said piece of digital music composed and generated by said automated music composition and generation engine are provided by one or more musical energy quality control parameters selected from the group consisting of (i) emotion/mood-type musical experience descriptors expressed in the form of at least one of graphical icons, emojis, images, words and other linguistic expressions, (ii) style/genre-type musical experience descriptors expressed in the form of at least one of graphical icons, emojis, images, words and other linguistic expressions, and (iii) one or more musical energy quality control parameters selected from the group consisting of tempo, dynamics, rhythm, harmony, melody, instrumentation, orchestration, instrument performance, ensemble performance, volume, start/hit/stop event markers for marking the location of start, hit and stop events in a piece of digital music, and framing control markers for marking the location of the intro, climax, and outro of the piece of digital music, thereby allowing the system user to exert a specific amount of control over each piece of digital music being automatically composed and generated by said automated music composition and generation system without the system user requiring any specific knowledge of or experience in music theory or music performance.

(Dependent claims 6, 7, and 8 not shown.)
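Claim 5 recites a transformation of system-user descriptors (MXDs) into music-theoretic system operating parameters (SOP) that drive the engine's subsystems. The patent does not disclose the mapping itself here, so the following is a minimal hypothetical sketch of what such a descriptor-to-SOP lookup could look like; every table entry, key, and value is an invented placeholder, not the patented transformation.

```python
# Hypothetical MXD -> SOP lookup tables; all entries are illustrative only.
EMOTION_TO_SOP = {
    "uplifting": {"mode": "major", "tempo_range": (110, 140)},
    "somber":    {"mode": "minor", "tempo_range": (60, 80)},
}

STYLE_TO_SOP = {
    "cinematic": {"instrumentation": ["strings", "brass", "percussion"]},
    "pop":       {"instrumentation": ["drums", "bass", "synth", "vocals"]},
}

def descriptors_to_sop(emotion: str, style: str) -> dict:
    """Merge emotion- and style-derived parameter tables into one SOP set
    that downstream composition subsystems could consume."""
    sop: dict = {}
    sop.update(EMOTION_TO_SOP.get(emotion, {}))
    sop.update(STYLE_TO_SOP.get(style, {}))
    return sop

sop = descriptors_to_sop("uplifting", "cinematic")
```

In a real system the tables would be replaced by the engine's music-theoretic parameter-capturing subsystems; the point of the sketch is only the shape of the interface: descriptors in, operating parameters out.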
9. A method of composing and generating pieces of digital music in response to a system user providing, as input, musical energy quality control parameters, said method comprising the steps of:

(a) capturing or accessing a digital media object to be uploaded to a studio application and scored with one or more pieces of digital music to be automatically composed and generated by an automated music composition and generation engine interfaced with a graphical user interface (GUI);

(b) enabling an automated music composition studio with said studio application, operably associated with said graphical user interface (GUI);

(c) selecting one or more emotion-type musical experience descriptors (MXDs) from menus supported by the GUI, and loading the selected emotion-type musical experience descriptors into said automated music composition and generation engine;

(d) selecting style-type musical experience descriptors (MXDs) from menus supported by the GUI, and loading the selected style-type musical experience descriptors and default libraries of musical instruments into said automated music composition and generation engine;

(e) selecting musical instruments to be represented in the piece of digital music to be automatically composed and generated by said automated music composition and generation engine;

(f) adjusting spotting markers along the timeline of the digital media object to be scored with one or more said pieces of digital music, so as to control the musical energy quality of said one or more pieces of digital music, wherein said adjusted spotting markers represent musical energy control parameters selected from a group consisting of instrumentation, ensemble performance, dynamics, melody, orchestration, volume, tempo, rhythm, harmony, start/hit/stop markers indicating the location of start, hit and stop events in a piece of digital music, and framing control markers indicating the location of the intro, climax, and outro of the piece of digital music;

(g) rendering the piece of composed digital music using the selected emotion-type musical experience descriptors (MXDs), the selected style-type musical experience descriptors, the selected musical instruments, and the adjusted spotting markers along said timeline, provided to said automated music composition and generation engine;

(h) reviewing the piece of digital music automatically generated by said automated music composition and generation engine;

(i) changing the spotting marker settings and re-rendering the piece of digital music using said automated music composition and generation engine;

(j) reviewing the new composed piece of digital music generated by said automated music composition and generation engine, to determine whether said new composed piece of digital music is acceptable and satisfactory for an intended end-user application;

(k) combining the composed piece of digital music with the selected digital media object uploaded to the studio application; and

(l) sending the musically-scored digital media object to an intended destination.

(Dependent claims 10, 11, and 12 not shown.)
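The claimed method, steps (a) through (l), is essentially an upload, configure, render, review, and re-render loop. The sketch below restates that flow as code under loudly stated assumptions: the `Studio` and `Engine` classes are minimal stand-ins invented for illustration (here the studio accepts the first render so the loop exits), not the patented engine or any real studio API.

```python
# Hypothetical sketch of the claimed scoring workflow, steps (a)-(l).
# Studio and Engine are illustrative stand-ins, not the patented system.

class Studio:
    """Records each workflow step so the flow can be inspected."""
    def __init__(self):
        self.log = []
    def upload(self, path):                  # (a) capture/upload the media object
        self.log.append("upload")
        return {"path": path, "timeline": []}
    def enable_composition(self):            # (b) enable the composition studio
        self.log.append("enable")
    def review(self, piece):                 # (h)/(j) accept on first pass here
        self.log.append("review")
        return True
    def combine(self, piece, media):         # (k) score the media with the music
        self.log.append("combine")
        return {"media": media, "music": piece}
    def send(self, scored, dest):            # (l) deliver to the destination
        self.log.append("send")

class Engine:
    """Collects parameters, then 'renders' them back as the piece."""
    def __init__(self):
        self.params = {}
    def load(self, key, value):              # (c)-(f) parameter loading
        self.params[key] = value
    def render(self):                        # (g) compose and generate
        return {"piece": "digital-music", **self.params}

def score_media_object(engine, studio, path, emotions, styles,
                       instruments, markers, dest):
    media = studio.upload(path)              # (a)
    studio.enable_composition()              # (b)
    engine.load("emotions", emotions)        # (c)
    engine.load("styles", styles)            # (d)
    engine.load("instruments", instruments)  # (e)
    engine.load("markers", markers)          # (f)
    piece = engine.render()                  # (g)
    while not studio.review(piece):          # (h)/(j) review until acceptable
        engine.load("markers", markers)      # (i) re-spot and re-render
        piece = engine.render()
    scored = studio.combine(piece, media)    # (k)
    studio.send(scored, dest)                # (l)
    return scored
```

A run of `score_media_object(Engine(), Studio(), "clip.mp4", ["happy"], ["pop"], ["piano"], [], "dest")` walks the steps in claim order; the loop models steps (h) through (j), which repeat until the piece is acceptable for the intended end-user application.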
Specification