System and process for embedding electronic messages and documents with pieces of digital music automatically composed and generated by an automated music composition and generation engine driven by user-specified emotion-type and style-type musical experience descriptors
First Claim
1. An automated music composition and generation system allowing users to produce and deliver electronic messages embedded with pieces of digital music automatically composed and generated by an automated music composition and generation engine driven by user-selected emotion-type and/or style-type musical experience descriptors, said automated music composition and generation system comprising:
a plurality of client machines deployed on the infrastructure of the Internet, wherein each said client machine supports at least one electronic messaging service providing a graphical user interface (GUI) for producing and delivering electronic messages;
an automated music composition and generation engine operably connected to the infrastructure of the Internet, for automatically composing and generating a piece of digital music in response to emotion-type and/or style-type musical experience descriptors provided to said automated music composition and generation engine; and
a communication server operably connected to the infrastructure of the Internet and said automated music composition and generation engine, for serving pieces of digital music automatically composed and generated by said automated music composition and generation engine;
wherein a system user uses said GUI supported by said client machine to select an electronic message and provide emotion-type and/or style-type musical experience descriptors to said automated music composition and generation engine for use in automatically composing and generating a piece of digital music and embedding the piece of digital music into said electronic message so that when said electronic message is reviewed, said piece of digital music is served from said communication server and experienced with said electronic message; and
wherein said automated music composition and generation engine, once initiated by the system user, automatically transforms said provided emotion-type and style-type musical experience descriptors into a set of music-theoretic system operating parameters, which are used by said automated music composition and generation engine to automatically compose and generate the piece of digital music for embedding into said electronic message.
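The final clause recites transforming emotion-type and style-type descriptors into a set of music-theoretic system operating parameters, but the claim does not disclose the mapping itself. As an illustration only, such a transformation can be sketched as a lookup-and-merge; every descriptor name and parameter value below is a hypothetical stand-in, not the patented parameter tables:

```python
# Hypothetical sketch: mapping user-selected emotion- and style-type
# descriptors to music-theoretic operating parameters. All table entries
# are illustrative assumptions, not values disclosed in the patent.

EMOTION_PARAMS = {
    "happy": {"mode": "major", "tempo_bpm": 120},
    "sad": {"mode": "minor", "tempo_bpm": 70},
}

STYLE_PARAMS = {
    "pop": {"instrumentation": ["piano", "drums", "bass"], "meter": "4/4"},
    "ambient": {"instrumentation": ["pads", "strings"], "meter": "4/4"},
}

def to_operating_parameters(emotion: str, style: str) -> dict:
    """Merge emotion- and style-derived parameters into one parameter set."""
    params = dict(EMOTION_PARAMS[emotion])
    params.update(STYLE_PARAMS[style])
    return params
```

Under this sketch, a "happy"/"pop" selection yields a major-mode, 120 BPM parameter set with pop instrumentation; a real engine would derive far richer parameters (harmony, rhythm, orchestration) from the same kind of input.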
2 Assignments
0 Petitions
Abstract
An automated music composition and generation system and process allowing system users to produce and deliver electronic messages and documents, such as text, SMS and email, augmented with automatically composed music generated using user-selected emotion-type and style-type musical experience descriptors. The automated music composition and generation system includes an automated music composition and generation engine operably connected to a system user interface and to the infrastructure of the Internet. Mobile and desktop client machines provide text, SMS and/or email services supported on the Internet. Each client machine runs a text, SMS and/or email application that system users augment with automatically composed music by means of the automated music composition and generation engine. By selecting and providing musical emotion and style descriptors to the engine, music is automatically composed, generated, and embedded in text, SMS and/or email messages for delivery to other client machines over the infrastructure of the Internet.
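The client/engine/server arrangement described in the abstract can be sketched in a few lines. All class names, method names, and the serving URL below are hypothetical stand-ins for the described components, not the patent's implementation:

```python
# Minimal sketch of the described architecture: a client submits a message
# plus musical experience descriptors; the engine composes a piece; the
# communication server publishes it; the delivered message carries a link
# to the served piece. Names and URLs are illustrative assumptions.

import uuid

class CompositionEngine:
    """Stand-in for the automated music composition and generation engine."""
    def compose(self, emotion: str, style: str) -> str:
        # Return an opaque identifier for the composed piece.
        return f"piece-{emotion}-{style}-{uuid.uuid4().hex[:8]}"

class CommunicationServer:
    """Stand-in for the server that serves composed pieces on review."""
    def __init__(self):
        self._pieces = {}
    def publish(self, piece_id: str) -> str:
        url = f"https://music.example.com/serve/{piece_id}"
        self._pieces[piece_id] = url
        return url

def embed_music(message_text, emotion, style, engine, server):
    """Compose a piece for the given descriptors and embed its served URL."""
    piece_id = engine.compose(emotion, style)
    url = server.publish(piece_id)
    return {"body": message_text, "music_url": url}
```

When the recipient's client renders such a message, fetching `music_url` plays the piece from the communication server, matching the abstract's delivery model.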
258 Citations
20 Claims
1. (Set forth in full above as the First Claim.) Dependent claims: 2, 3, 4, 5.
6. An automated music composition and generation system allowing users to produce and deliver electronic documents embedded with pieces of digital music automatically composed and generated using user-selected emotion-type and/or style-type musical experience descriptors, said automated music composition and generation system comprising:

a plurality of client machines deployed on the infrastructure of the Internet, wherein each said client machine supports at least one electronic document service providing a graphical user interface (GUI) for producing and delivering electronic documents;

an automated music composition and generation engine operably connected to the infrastructure of the Internet, for automatically composing and generating a piece of digital music in response to emotion-type and/or style-type musical experience descriptors provided to said automated music composition and generation engine; and

a communication server operably connected to the infrastructure of the Internet and said automated music composition and generation engine, for serving pieces of digital music automatically composed and generated by said automated music composition and generation engine;

wherein a system user uses said GUI supported by said client machine to select an electronic document and provide emotion-type and/or style-type musical experience descriptors to said automated music composition and generation engine for use in automatically composing and generating a piece of digital music and embedding the piece of digital music into said electronic document so that when said electronic document is reviewed, said piece of digital music is served from said communication server and experienced with said electronic document;

wherein said automated music composition and generation engine, once initiated by the system user, automatically transforms said provided emotion-type and style-type musical experience descriptors into a set of music-theoretic system operating parameters, which are used by said automated music composition and generation engine to automatically compose and generate the piece of digital music for embedding into said electronic document.

Dependent claims: 7, 8, 9, 10.
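Claim 6 recites embedding the piece into an electronic document so that the music is served from the communication server when the document is reviewed. One concrete way to realize this for an HTML document is to embed a reference to the served piece; the URL and tag strategy below are assumptions for illustration, not the patent's disclosed mechanism:

```python
# Illustrative sketch only: embed a reference to a served piece in an
# electronic (HTML) document so it is fetched from the communication
# server when the document is reviewed.

def embed_in_document(html_body: str, music_url: str) -> str:
    """Insert an <audio> element referencing the served piece before </body>."""
    audio_tag = f'<audio controls src="{music_url}"></audio>'
    if "</body>" in html_body:
        return html_body.replace("</body>", audio_tag + "</body>")
    return html_body + audio_tag  # fallback: append at the end
```

The document itself stays small: only the reference travels with it, and the audio is streamed from the communication server at review time, as the claim describes.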
11. An automated music composition and generation process supporting the use of emotion-type and/or style-type musical experience descriptors to produce electronic messages embedded with pieces of digital music automatically composed and generated using said emotion-type and style-type musical experience descriptors, said automated music composition and generation process comprising the steps of:

(a) a system user accessing an automated music composition and generation system deployed on the infrastructure of the Internet and having a system interface for receiving emotion-type and/or style-type musical experience descriptors;

(b) said system user selecting an electronic message to be embedded with a piece of digital music automatically composed and generated by said automated music composition and generation system;

(c) said system user providing emotion-type and/or style-type musical experience descriptors to the system interface of said automated music composition and generation system;

(d) said system user initiating said automated music composition and generation system to automatically compose and generate a piece of digital music based on said emotion-type and/or style-type musical experience descriptors provided to said system interface, wherein said automated music composition and generation engine, once initiated by the system user, automatically transforms said provided emotion-type and/or style-type musical experience descriptors into a set of music-theoretic system operating parameters, which are used by said automated music composition and generation engine to automatically compose and generate the piece of digital music for embedding into said electronic message;

(e) said system user either (i) accepting the piece of digital music composed and generated for said electronic message, or (ii) rejecting the piece of digital music and providing feedback to said system interface, including providing updated musical experience descriptors and requesting said automated music composition and generation system to re-compose the piece of digital music based on the updated musical experience descriptors, so as to provide a final piece of digital music for embedding into said electronic message;

(f) embedding the final piece of digital music into said electronic message so that the final piece of digital music is served from a communication server; and

(g) delivering said electronic message to a client system operably connected to the infrastructure of the Internet, for review of said electronic message while said final piece of digital music is being served from said communication server and experienced with said electronic message.

Dependent claims: 12, 13, 14, 15.
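Steps (d)-(e) of the process describe an accept/reject cycle: compose a piece, let the user review it, and re-compose with updated descriptors until one is accepted. A minimal sketch of that control flow, with the callback shapes below assumed purely for illustration:

```python
# Sketch of the accept/reject cycle in steps (d)-(e). compose(descriptors)
# returns a piece; review(piece) returns either ("accept", None) or
# ("reject", updated_descriptors). Both callbacks are assumed interfaces.

def compose_with_feedback(compose, review, descriptors, max_rounds=5):
    """Iterate compose/review until the user accepts a piece.

    Returns the last composed piece; max_rounds bounds the loop so a
    perpetually dissatisfied reviewer cannot iterate forever.
    """
    piece = compose(descriptors)
    for _ in range(max_rounds):
        verdict, updated = review(piece)
        if verdict == "accept":
            return piece
        descriptors = updated          # step (e)(ii): updated descriptors
        piece = compose(descriptors)   # re-compose per the user's feedback
    return piece
```

The accepted (or final) piece is what step (f) then embeds into the electronic message for serving from the communication server.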
16. An automated music composition and generation process supporting the use of emotion-type and/or style-type musical experience descriptors to produce electronic documents embedded with pieces of digital music automatically composed and generated using emotion-type and style-type musical experience descriptors, said automated music composition and generation process comprising the steps of:

(a) a system user accessing an automated music composition and generation system deployed on the infrastructure of the Internet and having a system interface;

(b) said system user selecting an electronic document to be embedded with a piece of digital music automatically composed and generated by said automated music composition and generation system;

(c) said system user providing emotion-type and/or style-type musical experience descriptors to the system interface of said automated music composition and generation system;

(d) said system user initiating said automated music composition and generation system to automatically compose and generate a piece of digital music based on said emotion-type and/or style-type musical experience descriptors provided to said system interface, wherein said automated music composition and generation engine, once initiated by the system user, automatically transforms said provided emotion-type and/or style-type musical experience descriptors into a set of music-theoretic system operating parameters, which are used by said automated music composition and generation engine to automatically compose and generate the piece of digital music for embedding into said electronic document;

(e) said system user accepting the piece of digital music composed and generated for said electronic document, or rejecting the piece of digital music and providing feedback to said system interface, including providing updated musical experience descriptors and requesting said automated music composition and generation system to re-compose the piece of digital music based on the updated musical experience descriptors, so as to provide a final piece of digital music for linking to said electronic document;

(f) embedding the final piece of digital music into said electronic document so that the final piece of digital music is served from a communication server; and

(g) delivering said electronic document to a client system operably connected to the infrastructure of the Internet, for review of said electronic document while said final piece of digital music is being served from said communication server and experienced with said electronic document.

Dependent claims: 17, 18, 19, 20.