Automated music composition and generation system driven by emotion-type and style-type musical experience descriptors

  • US 10,163,429 B2
  • Filed: 04/17/2017
  • Issued: 12/25/2018
  • Est. Priority Date: 09/29/2015
  • Status: Active Grant
First Claim

1. An automated music composition and generation system comprising:

  • a system user interface for enabling system users to create a project for a digital piece of music to be composed and generated, and review and select one or more emotion-type musical experience descriptors, one or more style-type musical experience descriptors, as well as time and/or space parameters; and

  • an automated music composition and generation engine, operably connected to said system user interface, for receiving, storing and processing said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by the system user;

  • wherein said automated music composition and generation engine includes a plurality of function-specific subsystems cooperating together to automatically compose and generate one or more digital pieces of music in response to said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by said system user;

  • wherein said digital piece of music composed and generated has a rhythmic landscape and a pitch landscape and contains a set of musical notes arranged and performed using an orchestration of one or more musical instruments selected for said digital piece of music being composed;

  • wherein said plurality of function-specific subsystems include a rhythmic landscape subsystem, a pitch landscape subsystem, and a digital piece creation subsystem;

  • wherein said rhythmic landscape subsystem is configured to generate and manage the rhythmic landscape of said digital piece of music being composed;

  • wherein said pitch landscape subsystem is configured to generate and manage the pitch landscape of said digital piece of music being composed;

  • wherein said digital piece creation subsystem is configured for creating said digital piece of music, employing one or more automated music synthesis techniques, and delivering said digital piece of music to said system user interface; and

  • wherein said digital piece of music composed and generated has the emotional and stylistic characteristics expressed throughout the rhythmic and pitch landscapes of said digital piece of music as represented by said set of emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by said system user to said automated music composition and generation engine.
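
The claim recites a pipeline architecture: a system user interface collects emotion-type and style-type musical experience descriptors together with time and/or space parameters, and an engine of function-specific subsystems (rhythmic landscape, pitch landscape, digital piece creation) turns that input into a finished digital piece delivered back through the interface. The Python sketch below is a minimal, hypothetical illustration of that data flow; the class names, fields, and stubbed descriptor-to-music mappings are assumptions made for illustration only, not the implementation disclosed in the patent.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical container for the user-selected musical experience
    # descriptors and time/space parameters named in the claim.
    @dataclass
    class MusicalExperienceSpec:
        emotion_descriptors: List[str]            # e.g. "happy", "dramatic"
        style_descriptors: List[str]              # e.g. "pop", "cinematic"
        duration_seconds: Optional[float] = None  # time parameter
        spotting_markers: List[float] = field(default_factory=list)  # space/timing parameters

    @dataclass
    class DigitalPiece:
        notes: List[dict]        # combined rhythm/pitch events
        instruments: List[str]   # orchestration selected for the piece

    class RhythmicLandscapeSubsystem:
        """Generates and manages the rhythmic landscape (stubbed mapping)."""
        def generate(self, spec: MusicalExperienceSpec) -> List[dict]:
            tempo = 128 if "happy" in spec.emotion_descriptors else 80
            return [{"beat": i, "tempo": tempo} for i in range(16)]

    class PitchLandscapeSubsystem:
        """Generates and manages the pitch landscape (stubbed mapping)."""
        def generate(self, spec: MusicalExperienceSpec, rhythm: List[dict]) -> List[dict]:
            scale = [60, 62, 64, 65, 67, 69, 71]  # placeholder C-major pitch set
            return [{"beat": r["beat"], "pitch": scale[i % len(scale)]}
                    for i, r in enumerate(rhythm)]

    class DigitalPieceCreationSubsystem:
        """Orchestrates and assembles the composed notes into a deliverable piece."""
        def create(self, spec: MusicalExperienceSpec,
                   rhythm: List[dict], pitches: List[dict]) -> DigitalPiece:
            instruments = ["piano"] if "minimal" in spec.style_descriptors else ["piano", "strings"]
            notes = [{**r, **p} for r, p in zip(rhythm, pitches)]
            return DigitalPiece(notes=notes, instruments=instruments)

    class AutomatedCompositionEngine:
        """Coordinates the function-specific subsystems recited in the claim."""
        def __init__(self) -> None:
            self.rhythm = RhythmicLandscapeSubsystem()
            self.pitch = PitchLandscapeSubsystem()
            self.creation = DigitalPieceCreationSubsystem()

        def compose(self, spec: MusicalExperienceSpec) -> DigitalPiece:
            rhythm = self.rhythm.generate(spec)
            pitches = self.pitch.generate(spec, rhythm)
            return self.creation.create(spec, rhythm, pitches)

    if __name__ == "__main__":
        spec = MusicalExperienceSpec(emotion_descriptors=["happy"],
                                     style_descriptors=["pop"],
                                     duration_seconds=30.0)
        piece = AutomatedCompositionEngine().compose(spec)
        print(len(piece.notes), piece.instruments)

The separation into rhythmic, pitch, and piece-creation stages mirrors the claim's division of labor: the rhythmic and pitch landscapes are generated and managed by their own subsystems, and only the creation subsystem handles orchestration, synthesis, and delivery back to the user interface.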
