DISCRETE VARIATIONAL AUTO-ENCODER SYSTEMS AND METHODS FOR MACHINE LEARNING USING ADIABATIC QUANTUM COMPUTERS

  • US 20180247200A1
  • Filed: 08/18/2016
  • Published: 08/30/2018
  • Est. Priority Date: 08/19/2015
  • Status: Active Grant
First Claim

1. A method for unsupervised learning over an input space comprising discrete or continuous variables, and at least a subset of a training dataset of samples of the respective variables, to attempt to identify the value of at least one parameter that increases the log-likelihood of the at least a subset of a training dataset with respect to a model, the model expressible as a function of the at least one parameter, the method executed by circuitry including at least one processor and comprising:

  • forming a first latent space comprising a plurality of random variables, the plurality of random variables comprising one or more discrete random variables;

  • forming a second latent space comprising the first latent space and a set of supplementary continuous random variables;

  • forming a first transforming distribution comprising a conditional distribution over the set of supplementary continuous random variables, conditioned on the one or more discrete random variables of the first latent space;

  • forming an encoding distribution comprising an approximating posterior distribution over the first latent space, conditioned on the input space;

  • forming a prior distribution over the first latent space;

  • forming a decoding distribution comprising a conditional distribution over the input space conditioned on the set of supplementary continuous random variables;

  • determining an ordered set of conditional cumulative distribution functions of the supplementary continuous random variables, each cumulative distribution function comprising functions of a full distribution of at least one of the one or more discrete random variables of the first latent space;

  • determining an inversion of the ordered set of conditional cumulative distribution functions of the supplementary continuous random variables;

  • constructing a first stochastic approximation to a lower bound on the log-likelihood of the at least a subset of a training dataset;

  • constructing a second stochastic approximation to a gradient of the lower bound on the log-likelihood of the at least a subset of a training dataset; and

  • increasing the lower bound on the log-likelihood of the at least a subset of a training dataset based at least in part on the gradient of the lower bound on the log-likelihood of the at least a subset of a training dataset.
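The claimed steps can be illustrated with a minimal toy sketch. Everything below is a hypothetical instantiation, not the patent's implementation: the approximating posterior is factorial Bernoulli, the transforming distribution is a spike-and-exponential smoothing (a point mass at ζ = 0 for z = 0, an exponential slab on (0, 1] for z = 1), the encoder/decoder are single logistic layers, the prior is Bernoulli(0.5), and the gradient of the lower bound is estimated by finite differences purely for clarity. The names `inverse_cdf`, `elbo`, and `grad` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "subset of a training dataset": 32 six-dimensional binary samples
# with biased marginals, and 3 discrete latent variables.
X = (rng.uniform(size=(32, 6)) < 0.8).astype(float)
D, K, BETA = 6, 3, 5.0

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def inverse_cdf(rho, q):
    # Inversion of the conditional CDF of the supplementary continuous
    # variable zeta under the (hypothetical) spike-and-exponential
    # smoothing, marginalised over the Bernoulli(q) discrete variable.
    # zeta is continuous in q, which keeps the bound differentiable.
    c = (rho - (1.0 - q)) / np.maximum(q, 1e-9)
    slab = np.log1p(np.clip(c, 0.0, 1.0) * np.expm1(BETA)) / BETA
    return np.where(rho <= 1.0 - q, 0.0, slab)

def elbo(params, X, rho):
    # First stochastic approximation: single-sample estimate of the
    # lower bound on the log-likelihood of the training subset.
    We, be, Wd, bd = params
    q = np.clip(sigmoid(X @ We + be), 1e-6, 1 - 1e-6)      # encoding dist.
    zeta = inverse_cdf(rho, q)                             # reparameterised
    p = np.clip(sigmoid(zeta @ Wd + bd), 1e-6, 1 - 1e-6)   # decoding dist.
    ll = np.sum(X * np.log(p) + (1 - X) * np.log(1 - p))
    # KL between the approximating posterior and a Bernoulli(0.5) prior.
    kl = np.sum(q * np.log(q / 0.5) + (1 - q) * np.log((1 - q) / 0.5))
    return (ll - kl) / len(X)

def grad(params, X, rho, eps=1e-5):
    # Second stochastic approximation: pathwise gradient of the bound,
    # estimated here by central finite differences for clarity only.
    out = []
    for P in params:
        g = np.zeros_like(P)
        for i in np.ndindex(P.shape):
            old = P[i]
            P[i] = old + eps; hi = elbo(params, X, rho)
            P[i] = old - eps; lo = elbo(params, X, rho)
            P[i] = old
            g[i] = (hi - lo) / (2 * eps)
        out.append(g)
    return out

params = [0.1 * rng.standard_normal((D, K)), np.zeros(K),
          0.1 * rng.standard_normal((K, D)), np.zeros(D)]
eval_rho = rng.uniform(size=(len(X), K))
elbo_before = elbo(params, X, eval_rho)
for _ in range(100):
    rho = rng.uniform(size=(len(X), K))   # fresh uniform noise each step
    for P, gP in zip(params, grad(params, X, rho)):
        P += 0.1 * gP                     # increase the lower bound
elbo_after = elbo(params, X, eval_rho)
```

Because the inverse CDF is continuous in q at the spike boundary (the slab starts at ζ = 0), the pathwise estimator behaves well, and gradient ascent on the single-sample bound increases the ELBO on this toy data.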
