
Electronic musical instrument, electronic musical instrument control method, and storage medium

  • US 10,629,179 B2
  • Filed: 06/20/2019
  • Issued: 04/21/2020
  • Est. Priority Date: 06/21/2018
  • Status: Active Grant
First Claim

1. An electronic musical instrument comprising:

  • a plurality of operation elements respectively corresponding to mutually different pitch data;

    a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data including training lyric data and training pitch data, and on training singing voice data of a singer corresponding to the training musical score data, the trained acoustic model being configured to receive lyric data and prescribed pitch data and output acoustic feature data of a singing voice of the singer in response to the received lyric data and pitch data; and

    at least one processor in which a first mode and a second mode are interchangeably selectable, wherein in the first mode, the at least one processor:

    in accordance with a user operation on an operation element in the plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of the acoustic feature data output by the trained acoustic model in response to the inputted prescribed lyric data and the inputted pitch data, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element, and

    wherein in the second mode, the at least one processor:

    in accordance with a user operation on an operation element in the plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model in response to the inputted prescribed lyric data and the inputted pitch data, without using instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.
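In plainer terms, the claim describes a key press driving a trained acoustic model with lyric and pitch data, then either mixing the inferred singing voice with an instrument waveform (first mode) or outputting the voice alone (second mode). The sketch below illustrates that control flow only; every function name and the stand-in numeric logic are hypothetical, not taken from the patent's actual implementation.

```python
# Hypothetical sketch of the claim's two modes. The acoustic model,
# vocoder, and tone generator below are trivial stand-ins.

def acoustic_model(lyric: str, pitch: float) -> list:
    """Stand-in for the trained acoustic model: maps lyric data plus
    pitch data to acoustic feature data (e.g. spectral parameters)."""
    return [pitch * (i + 1) * 0.001 for i in range(4)]

def synthesize_voice(features: list) -> list:
    """Stand-in vocoder: acoustic features -> voice waveform samples."""
    return [f * 2.0 for f in features]

def synthesize_instrument(pitch: float) -> list:
    """Stand-in tone generator: instrument waveform at the key's pitch."""
    return [pitch * 0.01] * 4

def on_key_press(lyric: str, pitch: float, mode: int) -> list:
    """Handle one operation-element (key) press in the selected mode."""
    features = acoustic_model(lyric, pitch)
    voice = synthesize_voice(features)
    if mode == 1:
        # First mode: mix the inferred singing voice with the
        # instrument waveform synthesized at the same pitch.
        instrument = synthesize_instrument(pitch)
        return [v + w for v, w in zip(voice, instrument)]
    # Second mode: output the inferred singing voice alone.
    return voice

mode1_out = on_key_press("la", 440.0, mode=1)
mode2_out = on_key_press("la", 440.0, mode=2)
```

The only difference between the two branches is whether the instrument waveform is added, which mirrors how the claim distinguishes the modes.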
