Reinforcement Learning for Online Sampling Trajectory Optimization for Magnetic Resonance Imaging

First Claim
1. A method for performing a magnetic resonance imaging scan, the method comprising:
a) performing an MRI acquisition using an undersampling pattern to produce undersampled k-space data;
b) adding the undersampled k-space data to aggregate undersampled k-space data for the scan;
c) reconstructing an image from the aggregate undersampled k-space data;
d) updating the undersampling pattern from the reconstructed image and aggregate undersampled k-space data using a deep reinforcement learning technique defined by an environment, reward, and agent, where the environment comprises an MRI reconstruction technique, where the reward comprises an image quality metric, and where the agent comprises a deep convolutional neural network and fully connected layers;
e) repeating steps (a), (b), (c), and (d) to produce a final reconstructed MRI image for the scan.
Abstract
A magnetic resonance imaging scan performs an MRI acquisition using an undersampling pattern to produce undersampled k-space data; adds the undersampled k-space data to aggregate undersampled k-space data for the scan; reconstructs an image from the aggregate undersampled k-space data; updates the undersampling pattern from the reconstructed image and aggregate undersampled k-space data using a deep reinforcement learning technique defined by an environment, reward, and agent, where the environment comprises an MRI reconstruction technique, where the reward comprises an image quality metric, and where the agent comprises a deep convolutional neural network and fully connected layers; and repeats these steps to produce a final reconstructed MRI image for the scan.
6 Claims
Specification
This application claims priority from U.S. Provisional Patent Application 62/750,342 filed Oct. 25, 2018, which is incorporated herein by reference.
This invention was made with Government support under contract EB009690 and HL127039 awarded by the National Institutes of Health. The Government has certain rights in the invention.
The present invention relates generally to magnetic resonance imaging (MRI) techniques. More specifically, it relates to methods for MRI using undersampling.
Magnetic resonance imaging (MRI) is an important medical imaging modality, but acquiring complete data for basic image reconstruction techniques can be slow. MRI acquires data in the Fourier domain over several readouts, and a scan typically requires several minutes to acquire enough data to satisfy Shannon-Nyquist sampling rates. For example, in Cartesian sampling, one row of the Cartesian matrix is sampled per readout, and this must be repeated for all rows.
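Row-wise Cartesian undersampling can be sketched in a few lines of NumPy. This is an illustrative toy, not the patent's implementation; the function name and matrix sizes are assumptions.

```python
import numpy as np

def undersample_rows(kspace, rows):
    """Keep only the given phase-encode rows of a 2D Cartesian k-space matrix."""
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[list(rows)] = True
    out = np.zeros_like(kspace)
    out[mask, :] = kspace[mask, :]
    return out, mask

# Fully sampled k-space of a toy 8x8 image.
image = np.random.default_rng(0).standard_normal((8, 8))
kspace = np.fft.fft2(image)

# Sample only 3 of the 8 rows (an acceleration factor of ~2.7).
under, mask = undersample_rows(kspace, rows=[0, 1, 7])
assert mask.sum() == 3
assert np.all(under[2:7, :] == 0)
```

Each readout adds one more `True` entry to the mask, which is the "undersampling pattern" the later steps update.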
To accelerate imaging, less data can be acquired in a process known as undersampling. Using nonlinear reconstruction techniques such as compressed sensing and deep learning, clinically useful images can be recovered from the undersampled data. However, an unsolved problem is how to optimally choose the undersampling pattern, i.e., which data points to acquire when undersampling, as the best pattern can depend on many factors including anatomy, reconstruction technique, and the image quality metric used to define optimality.
Although there have been prior attempts to find an optimal undersampling pattern, they have only used prior data. These existing techniques for undersampling are thus predetermined, not exploiting the data as it is collected. In addition, existing sampling trajectory designs implicitly minimize L2 error, which does not necessarily imply better perceptual image quality.
According to the approach of the present invention, an MRI undersampling trajectory is determined online and updated during the scan using reinforcement learning (RL). The image reconstruction technique is the environment, the reward is based upon an image metric, and the agent infers an updated sampling pattern for the next acquisition. The agent is statistically unbiased so it does not affect the results and insights that can be learned about the reconstruction technique and image metric.
A key feature of this approach is that it exploits real-time information to determine better sampling patterns, and also updates the sampling pattern as the scan progresses. As data is collected, the image can be better understood and the collected data is exploited in real time to guide additional data collection.
The reinforcement learning technique incorporates scan data on a readout-by-readout basis, which makes it suitable for arbitrary MRI sampling trajectories, including non-Cartesian trajectories and 2D, 3D, and higher-dimensional trajectories, including time. Formulating the problem as a reinforcement learning problem makes online sampling trajectory optimization feasible. It also means the problem does not have to be end-to-end differentiable, so components such as the reward and environment may be non-differentiable.
In one aspect, the invention provides a method for performing a magnetic resonance imaging scan, the method comprising: performing an MRI acquisition using an undersampling pattern to produce undersampled k-space data; adding the undersampled k-space data to aggregate undersampled k-space data for the scan; reconstructing an image from the aggregate undersampled k-space data; updating the undersampling pattern from the reconstructed image and aggregate undersampled k-space data using a deep reinforcement learning technique defined by an environment, reward, and agent, where the environment comprises an MRI reconstruction technique, where the reward comprises an image quality metric, and where the agent comprises a deep convolutional neural network and fully connected layers; and repeating the previous steps to produce a final reconstructed MRI image for the scan.
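The acquire-aggregate-reconstruct-update loop described above can be sketched as a minimal NumPy illustration. Here a zero-filled inverse FFT stands in for the reconstruction environment, and `pick_next_row` is a hypothetical stand-in for the RL agent; neither name is from the patent.

```python
import numpy as np

def scan_loop(kspace_full, pick_next_row, n_readouts):
    """Sketch of the claimed loop: acquire, aggregate, reconstruct, update pattern."""
    n_rows = kspace_full.shape[0]
    mask = np.zeros(n_rows, dtype=bool)
    aggregate = np.zeros_like(kspace_full)
    recon = np.zeros(kspace_full.shape)
    for _ in range(n_readouts):
        row = pick_next_row(recon, mask)          # (d) agent proposes the next readout
        aggregate[row, :] = kspace_full[row, :]   # (a)+(b) acquire and aggregate
        mask[row] = True
        recon = np.fft.ifft2(aggregate).real      # (c) reconstruct (zero-filled here)
    return recon, mask

# A trivial stand-in agent: take the lowest-index unsampled row.
greedy = lambda recon, mask: int(np.flatnonzero(~mask)[0])
img = np.random.default_rng(2).standard_normal((8, 8))
recon, mask = scan_loop(np.fft.fft2(img), greedy, n_readouts=8)
assert mask.all() and np.allclose(recon, img, atol=1e-10)
```

With all eight readouts acquired, the zero-filled reconstruction recovers the image exactly; the interesting regime is stopping the loop early with a well-chosen `pick_next_row`.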
Preferably, the MRI reconstruction technique produces a reconstructed image as output from undersampled kspace data as input. Examples include reconstruction techniques based on the Fourier transform, compressed sensing, and deep learning.
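As a minimal example of the first class, a zero-filled inverse-FFT "environment" maps undersampled k-space to an image. The function name and sizes are illustrative assumptions.

```python
import numpy as np

def fourier_reconstruct(aggregate_kspace, mask):
    """Simplest environment: zero-filled inverse FFT of the k-space acquired so far."""
    zero_filled = np.where(mask[:, None], aggregate_kspace, 0.0)
    return np.fft.ifft2(zero_filled)

rng = np.random.default_rng(1)
image = rng.standard_normal((16, 16))
kspace = np.fft.fft2(image)

# With every row sampled, the reconstruction is exact (up to float error).
full_mask = np.ones(16, dtype=bool)
recon = fourier_reconstruct(kspace, full_mask)
assert np.allclose(recon.real, image, atol=1e-10)
```

Compressed sensing and deep learning environments share this same input/output contract, which is what lets the RL formulation swap them in freely.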
The image quality metric of the reward preferably uses an L2 norm, L1 norm, discriminators from trained generative adversarial networks, losses trained with semi-supervised techniques, and/or deep learning measures of image quality. The deep learning measures of image quality preferably are sharpness and/or signal-to-noise ratio.
The agent is preferably configured to have the reconstructed image and the aggregate undersampled k-space data as input and the updated undersampling pattern as output. The agent may be implemented, for example, as a three-layer residual convolutional neural network.
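One detail any such agent must handle is restricting its output to readouts not yet acquired. A hedged sketch of that action-masking step follows; it stands in for the network's output head and is not the patent's implementation.

```python
import numpy as np

def select_action(q_values, sampled_mask):
    """Pick the highest-scoring row among rows not yet sampled (action masking)."""
    q = np.where(sampled_mask, -np.inf, q_values)
    return int(np.argmax(q))

q = np.array([0.2, 0.9, 0.5, 0.7])
already = np.array([False, True, False, False])  # row 1 was sampled earlier
assert select_action(q, already) == 3  # row 1 is excluded, so row 3 wins
```

In a full implementation, `q_values` would come from the convolutional and fully connected layers applied to the reconstructed image and aggregate k-space.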
Embodiments of the present invention provide MRI methods that use online deep reinforcement learning techniques for finding optimal undersampling patterns during a scan. The term “online” here means that the technique can process real-time data in a serial fashion as it becomes available.
Reinforcement learning in the context of this description is defined as a type of machine learning involving a software agent that takes actions in an environment to maximize a reward. In embodiments of this invention, the environment is an MRI reconstruction technique, with undersampled k-space as input and the reconstructed image as output. Reconstruction techniques may include, for example, algorithms based upon the Fourier transform, compressed sensing, and deep learning. The reward in embodiments of this invention is defined by an image quality metric on an MRI image. The tested metrics were based upon the L2 norm, the L1 norm, and metrics based upon deep learning, such as discriminators from trained generative adversarial networks and losses trained with semi-supervised techniques.
The network 122 is preferably a generative adversarial network, where the generator for reconstructing the image is an unrolled optimization network trained with a supervised L1 loss using randomly-weighted sampling patterns with 0% to 100% sampled data, on a different dataset than used for the reinforcement learning.
In other embodiments of the invention, the environment could be implemented using other image reconstruction techniques such as a Fourier transform or compressed sensing. For a compressed sensing reconstruction, L1-ESPIRiT may be used with total variation regularization. For compressed sensing, a typical formulation is

arg min_x ||Ax − y||_2 + λ||Tx||_1

where x is the reconstructed image, y is the collected data, A is some signal model transform (in the simplest case the Fourier transform), and T is some sparsifying transform such as wavelet or total variation. However, this implicitly has biases. A more general compressed sensing formulation is

arg min_x d(Ax, y) + R(x)

where d is some arbitrary distance function between the reconstructed image x and the collected data y, which could be an analytical function or a neural network. R(x) is an arbitrary regularizing term, which could be an analytical function or a neural network.
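The general formulation above can be solved with proximal gradient descent (ISTA) when d is a squared L2 distance and R is an L1 penalty. The sketch below uses a small dense matrix as a stand-in for the MRI signal model A; the problem sizes and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=2000):
    """Proximal gradient descent for arg min_x 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # gradient of the distance term d
        x = soft_threshold(x - step * grad, step * lam)   # prox step for the regularizer R
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))     # stand-in for the signal model transform
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -2.0      # sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
assert abs(x_hat[2] - 1.0) < 0.1 and abs(x_hat[7] + 2.0) < 0.1
```

Replacing the two commented lines with a learned distance or a learned proximal step yields the neural-network variants of d and R mentioned in the text.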
More generally, the input to the reconstruction network could be an image of any kind. The input could also be arbitrary collected data. k-space implies a Fourier relationship between the collected data and the final image, but there are also non-Fourier data collection techniques, and the reconstruction technique could address these as well. Furthermore, the reconstruction technique could accept any combination of k-space, image, and arbitrary collected data. As for output, the reconstruction technique could also output any combination of k-space, image, and arbitrary data. An example of arbitrary data output could be a vector in an embedding space.
The probability output from the trained discriminators serves as an image metric. The reward is defined as the difference in the metric between the current acquisition step and the previous step; in this embodiment, the reward is the negative difference in probability between acquisition steps. To encourage an earlier stopping condition for all metrics, we additionally added a −1% penalty at each sampling step.
Other embodiments of the invention may use different metrics. For example, the reward may be the difference in the L2 or L1 metric between the current step and the previous step. More generally, the image quality metric could be any arbitrary metric. Also, the input to the reward can be image data, k-space data, or a combination of the two. An example of arbitrary data input could be a vector in an embedding space. The metric could also be implemented using any neural network.
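The per-step reward structure described above, a metric improvement minus a constant penalty, can be sketched as follows; the normalized L2 metric and the function names are illustrative assumptions.

```python
import numpy as np

def l2_metric(recon, reference):
    """Normalized L2 error between the current reconstruction and a reference."""
    return np.linalg.norm(recon - reference) / np.linalg.norm(reference)

def step_reward(metric_prev, metric_curr, penalty=0.01):
    """Reward = improvement in the metric this step, minus a -1% per-step penalty."""
    return (metric_prev - metric_curr) - penalty

ref = np.ones(4)
r1 = np.zeros(4)          # poor reconstruction: normalized error 1.0
r2 = 0.5 * np.ones(4)     # better reconstruction: normalized error 0.5
m1, m2 = l2_metric(r1, ref), l2_metric(r2, ref)
assert abs(step_reward(m1, m2) - 0.49) < 1e-12
```

Swapping `l2_metric` for a discriminator probability gives the GAN-based reward variant; the penalty term is what rewards earlier stopping.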
The deep agent has convolutional neural networks and fully connected layers with an image-domain input to decide which readout to acquire next, in real time. In other embodiments, the inputs may be from both the k-space and image domains.
The agent may be trained with various reinforcement learning methods. For example, an agent may be trained with deep Q-learning using Rainbow, which combines double Q-learning, prioritized replay, dueling networks, multi-step learning, distributional reinforcement learning, and noisy networks. Agents may also be trained with policy gradient methods, and may incorporate residual networks as used in deep learning more generally.
The agent can be online with respect to different time scales. The preferred embodiment has the agent learn a policy with respect to each readout. At one extreme, the learned policy could be online with respect to each sample, such that as each sample is collected, the agent decides in real time which sample to collect next. A readout is composed of many samples. The learned policy could also be online with respect to multiple readouts at a time, such that the agent decides which samples to collect in the next batch of readouts.
A deep Rainbow Q-learning agent may be trained to select the rows of the Fourier domain (Cartesian phase encodes) to sample. The network may be trained with the Bellman equation and discount factor γ=0.95. The action state may be a vector of rows already sampled and an image reconstructed with the currently sampled data. An ϵ-greedy approach may be used, selecting a random action with probability exponentially decaying from 0.9 to 0.05 over 1000 episodes. Experience replay may be used to decorrelate the experiences.
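Two ingredients from this paragraph, the one-step Bellman target with γ=0.95 and the ϵ-greedy schedule, can be sketched directly. The exact functional form of the decay is an assumption (exponential from 0.9 toward 0.05); the text states only the endpoints and the 1000-episode scale.

```python
import numpy as np

GAMMA = 0.95  # discount factor stated in the text

def bellman_target(reward, q_next, done):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a'); no bootstrap at the end."""
    return reward if done else reward + GAMMA * float(np.max(q_next))

def epsilon(episode, eps_hi=0.9, eps_lo=0.05, decay=1000.0):
    """Exploration probability decaying exponentially from 0.9 toward 0.05 over ~1000 episodes."""
    return eps_lo + (eps_hi - eps_lo) * np.exp(-episode / decay)

assert abs(bellman_target(0.1, np.array([0.2, 1.0]), done=False) - (0.1 + 0.95)) < 1e-12
assert bellman_target(0.1, np.array([0.2, 1.0]), done=True) == 0.1
assert abs(epsilon(0) - 0.9) < 1e-12 and epsilon(10_000) < 0.051
```

In training, the network's prediction Q(s, a) is regressed toward `bellman_target`, with transitions drawn from an experience replay buffer to decorrelate them.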
The agent may be trained by policy gradient methods to learn optimal policies for each combination of environment and reward. In commercial applications, each type of scan has its own type of reconstruction (environment) and potentially it could have its own image quality metric (reward). Thus, each scan would normally have its own agent. As for agent training method, all agents would normally be trained with the same method.
The techniques of the present invention were experimentally tested using a set of ten fully-sampled 3D knee datasets from mridata.org. Central 256×256 patches were taken from the axial slices, for a total of 3,840 Fourier-domain datasets and corresponding 2D images.
To first verify performance, we constructed a toy dataset with each row in the Fourier domain having a constant, increasing value, such that a successful agent should learn to sample the rows sequentially. For this experiment, we used the inverse Fourier transform for the environment and L2 for the reward. We then trained the agent on real data, with all combinations of environments and rewards. With the L2 reward specifically, Parseval's Theorem allows us to determine the actual optimal order of readouts. To evaluate the policy in general, we calculated the average number of readouts required to achieve less than 0.5% reward over 50 episodes.
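The Parseval argument can be checked numerically: with a Fourier-transform environment and an L2 reward, each row's contribution to image error is exactly its k-space energy, so the greedy optimal policy acquires rows in order of descending energy. The variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.standard_normal((8, 8))
kspace = np.fft.fft2(image)

# Parseval: with NumPy's unnormalized FFT, total k-space energy is N times image energy.
assert np.isclose(np.sum(np.abs(kspace) ** 2), image.size * np.sum(image ** 2))

# Greedy optimal order for an L2 reward: rows sorted by descending energy.
row_energy = np.sum(np.abs(kspace) ** 2, axis=1)
optimal_order = np.argsort(-row_energy)
assert optimal_order[0] == np.argmax(row_energy)
```

For natural images, the highest-energy rows cluster near the center of k-space, which is why the learned policies sampling the center first match this optimal order.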
Nine agents were trained, one for every combination of environment and reward. As a benchmark to evaluate the learned policies, the average number of readouts required to achieve less than 0.5% reward was determined over 100 episodes. A 0.5% reward was chosen as the stopping point, based upon initial results, to achieve an undersampling factor of about two to three.
Both compressed sensing and deep reconstructions acquired reward more quickly, echoing the results in
Similar to the optimal policy, the learned policies of all reconstruction environments sample the center of the Fourier domain first, before sampling higher spectral components. The corresponding images, sampled until 0.5% L2 reward, are shown in
TABLE 1 shows the average number of readouts to achieve less than 0.5% reward as a function of reconstruction. Compressed sensing and the deep reconstruction require fewer readouts than the Fourier transform reconstruction because these techniques infer the image based upon priors.
From the results in TABLE 1, the unrolled network requires significantly fewer readouts than compressed sensing to achieve the same reward, which makes sense because the network has learned the prior distribution. Also interestingly, compressed sensing requires more samples than the Fourier transform reconstruction to achieve a 0.5% reward with the discriminator. This may be because the discriminator is unfamiliar with the image artifacts that compressed sensing produces.
The compressed sensing and deep reconstruction techniques required fewer readouts than the Fourier transform reconstruction for the L2 and L1 rewards. This makes sense because the former two techniques are designed to infer data from undersampled raw data.
The reinforcement learning framework provides nearly optimal results. The results highlight the inability of the L2 reward to capture image quality. This provides motivation for the development of image quality metrics better aligned with diagnostic quality, which could then be addressed by the reinforcement learning framework.
The framework formulation can accommodate non-Cartesian and higher-dimensional trajectories as well as 2D Cartesian trajectories. Adapting this technique to higher dimensions is straightforward to implement with additional computational and storage resources. However, it would be expected to require greater effort to stably train the agent, as the action space grows exponentially in size.
The way the reinforcement learning has been defined makes it compatible with arbitrary MRI reconstruction techniques and image quality metrics, making it valuable for future deep learning reconstruction techniques and deep learning image quality metrics. Additionally, the present technique is general enough to account for other considerations such as dynamic imaging and artifacts from sources such as motion.
Furthermore, the present technique does not introduce bias or require assumptions to learn the policy. Given the environment and reward, the agent learns an optimal policy, guided by the biases and assumptions introduced by the environment and reward. For example, compressed sensing minimizes an L2 data-consistency term and deep learning networks usually minimize a supervised L1 loss. As new techniques emerge, the traditional intuition to sample the center may not be as pronounced. This is especially plausible with the development of semi-supervised and unsupervised techniques for training deep learning reconstruction algorithms and deep-learning-based image quality metrics. In these cases, the results of this reinforcement learning framework may not necessarily follow the conventional intuition, and the resultant sampling patterns may help elucidate the behavior of these networks.