Continuous-Time Deep Glioma Growth Models

@article{Petersen2021ContinuousTimeDG,
  title={Continuous-Time Deep Glioma Growth Models},
  author={Jens Petersen and Fabian Isensee and Gregor Koehler and Paul F. J{\"a}ger and David Zimmerer and Ulf Neuberger and Wolfgang Wick and J{\"u}rgen Debus and Sabine Heiland and Martin Bendszus and Philipp Vollmuth and Klaus H. Maier-Hein},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12917}
}
The ability to estimate how a tumor might evolve in the future could have tremendous clinical benefits, from improved treatment decisions to better dose distribution in radiation therapy. Recent work has approached the glioma growth modeling problem via deep learning and variational inference, thus learning growth dynamics entirely from a real patient data distribution. So far, this approach has been constrained to predefined image acquisition intervals and sequences of fixed length, which limits…
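The model family the abstract points to is the neural process. Below is a minimal sketch of the conditioning scheme it describes: encode (time, scan) context pairs, aggregate them order-invariantly, and query the decoder at arbitrary continuous time points. All module names, sizes, and the `ContinuousTimeGrowthModel` class are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ContinuousTimeGrowthModel(nn.Module):
    def __init__(self, scan_dim=128, hidden=256):
        super().__init__()
        # Encodes one (time, scan-embedding) observation into a representation.
        self.encoder = nn.Sequential(
            nn.Linear(scan_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Decodes the aggregated context plus a query time into a prediction.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, scan_dim))

    def forward(self, ctx_times, ctx_scans, query_times):
        # ctx_times: (B, N, 1), ctx_scans: (B, N, scan_dim), query_times: (B, M, 1)
        r = self.encoder(torch.cat([ctx_times, ctx_scans], dim=-1))
        r = r.mean(dim=1, keepdim=True)              # permutation-invariant aggregation
        r = r.expand(-1, query_times.shape[1], -1)   # one copy per query time
        return self.decoder(torch.cat([r, query_times], dim=-1))

model = ContinuousTimeGrowthModel()
pred = model(torch.rand(2, 3, 1), torch.randn(2, 3, 128), torch.rand(2, 5, 1))
print(pred.shape)  # torch.Size([2, 5, 128])
```

Because the context is a set of (time, observation) pairs rather than a fixed-length sequence, nothing in this formulation ties the model to predefined acquisition intervals.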
Casting the inverse problem as a database query. The case of personalized tumor growth modeling
TLDR
This paper proposes a method that compresses complex traditional strategies for solving an inverse problem into a simple database query task and shows that the query approach can yield accurate and, depending on the chosen optimization, also deterministic results on the order of seconds.
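As a rough illustration of the query idea, assuming a toy stand-in for the forward tumor simulator and hypothetical parameter names:

```python
# Hedged sketch: precompute forward-model outputs for many parameter settings,
# then answer an inverse query by nearest-neighbor lookup in that database.
import numpy as np

rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(10_000, 2))   # e.g. (diffusivity, proliferation)

def forward(p):                                    # toy stand-in for a tumor simulator
    t = np.linspace(0, 1, 32)
    return p[..., :1] * np.exp(p[..., 1:] * t)     # (N, 32) simulated "growth curves"

database = forward(params)                         # offline: simulate once, store

observation = forward(np.array([0.3, 0.7])) + rng.normal(0, 0.01, 32)
idx = np.argmin(np.linalg.norm(database - observation, axis=1))
print("recovered parameters:", params[idx])
```

The expensive part (running the forward model) happens once, offline; each inverse query is then a fast lookup, which is how the approach can be deterministic and answer on the order of seconds.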
Image prediction of disease progression by style-based manifold extrapolation
TLDR
This work combines a regularized generative adversarial network (GAN) with a latent nearest-neighbor algorithm for joint optimization to generate plausible images of future time points, opening up the possibility of using model-based morphology and risk prediction to anticipate future disease occurrence.
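A sketch of what extrapolation in a generator's latent space might look like; the generator `G`, the latents `z_past` (assumed to come from encoding a patient's earlier scans), and the linear trajectory are all illustrative simplifications, not the paper's method:

```python
import numpy as np

def extrapolate_latent(z_past, times_past, t_future):
    """Linear extrapolation in latent space from observed (time, latent) pairs."""
    # Least-squares fit of a straight latent trajectory z(t) = z0 + t * v.
    A = np.stack([np.ones_like(times_past), times_past], axis=1)   # (N, 2)
    coeffs, *_ = np.linalg.lstsq(A, z_past, rcond=None)            # (2, latent_dim)
    z0, v = coeffs
    return z0 + t_future * v

z_past = np.random.randn(4, 512)          # latents of scans at four visits
z_next = extrapolate_latent(z_past, np.array([0., 1., 2., 3.]), t_future=5.0)
# image_next = G(z_next)                  # decode with the (hypothetical) generator
```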
Predicting Osteoarthritis Progression in Radiographs via Unsupervised Representation Learning
TLDR
An unsupervised learning scheme based on generative models is introduced that predicts the future development of OA from knee joint radiographs, using longitudinal data from osteoarthritis studies to predict a patient's future radiographs up to the eight-year follow-up visit.

References

Conditional Neural Processes
TLDR
Conditional Neural Processes draw inspiration from the flexibility of stochastic processes such as GPs, but are structured as neural networks, trained via gradient descent, and scale to complex functions and large datasets.
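A minimal CNP sketch under the usual formulation: encode each (x, y) context pair, average into a single representation, and decode each target input into a predictive Gaussian. Sizes and layer choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden + x_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * y_dim))  # mean and log-variance

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.encoder(torch.cat([x_ctx, y_ctx], -1)).mean(1, keepdim=True)
        h = self.decoder(torch.cat([r.expand(-1, x_tgt.shape[1], -1), x_tgt], -1))
        mean, log_var = h.chunk(2, dim=-1)
        return mean, log_var  # train by maximizing Gaussian log-likelihood of y_tgt

cnp = CNP()
mean, log_var = cnp(torch.rand(8, 10, 1), torch.rand(8, 10, 1), torch.rand(8, 20, 1))
```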
Neural Processes
TLDR
This work introduces a class of neural latent variable models called Neural Processes (NPs), combining the best of both worlds: like GPs, NPs are probabilistic, data-efficient, and flexible, while avoiding the computational cost that limits the applicability of GPs.
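The difference from a CNP is a global latent variable: the aggregated context parameterizes a distribution over z, and each sample of z yields one coherent realization of the function. A hedged sketch of just that step, with assumed names and sizes:

```python
import torch
import torch.nn as nn

hidden, z_dim = 128, 32
to_stats = nn.Linear(hidden, 2 * z_dim)      # context representation -> (mu, log_var)

def sample_latent(r):
    mu, log_var = to_stats(r).chunk(2, -1)
    eps = torch.randn_like(mu)
    return mu + (0.5 * log_var).exp() * eps  # reparameterization trick, as in VAEs

r = torch.randn(8, hidden)                   # aggregated context (e.g. from a CNP encoder)
z = sample_latent(r)                         # different z samples -> different functions
```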
Learning models for visual 3D localization with implicit mapping
TLDR
This work proposes a formulation of visual localization that does not require construction of explicit maps in the form of point clouds or voxels, and shows that models with implicit mapping are able to capture the underlying 3D structure of visually complex scenes.
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
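For reference, the Adam update itself, written out with the paper's standard default hyperparameters (the toy objective below is ours):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2         # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1**t)                 # bias correction; t counts from 1
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):                     # minimize f(theta) = |theta - 1|^2
    grad = 2 * (theta - 1)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                                # close to [1, 1, 1]
```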
Deep Probabilistic Modeling of Glioma Growth
TLDR
Evidence is presented that the proposed alternative approach, based on recent advances in probabilistic segmentation and representation learning, implicitly learns growth dynamics directly from data without an underlying explicit model and is able to learn a distribution of plausible future tumor appearances.
Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology.
TLDR
Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved patient survival prospects.
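For orientation, the biophysical models calibrated in this line of work are commonly of reaction-diffusion (Fisher-Kolmogorov) type, with tumor cell density $c$, diffusion tensor $D$, and proliferation rate $\rho$; the exact model in this reference may differ:

```latex
\frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) + \rho\, c\,(1 - c)
```

Calibration then means estimating $D$ and $\rho$ (and possibly mass-effect parameters) from a patient's mpMRI, which is the inverse problem the TLDR refers to.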
Linformer: Self-Attention with Linear Complexity
TLDR
This paper demonstrates that the self-attention mechanism of the Transformer can be approximated by a low-rank matrix, and proposes a new self-attention mechanism which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space.
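The trick in code form: project the length-n key and value sequences down to k << n before attention, so the score matrix is (n x k) instead of (n x n). Random matrices stand in here for the learned length-wise projections E and F from the paper; the shapes are the point:

```python
import torch
import torch.nn.functional as F

n, d, k = 1024, 64, 128
Q, K, V = (torch.randn(n, d) for _ in range(3))
E = torch.randn(k, n) / n**0.5    # stand-ins for Linformer's learned projections
Fp = torch.randn(k, n) / n**0.5

K_low, V_low = E @ K, Fp @ V                  # (k, d): compressed keys/values
scores = Q @ K_low.T / d**0.5                 # (n, k) instead of (n, n)
out = F.softmax(scores, dim=-1) @ V_low       # (n, d), O(n*k) time and memory
print(out.shape)
```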
Multiatlas Calibration of Biophysical Brain Tumor Growth Models with Mass Effect
TLDR
A novel inversion scheme is introduced that uses multiple brain atlases as proxies for the patient's healthy pre-cancer brain, resulting in robust and reliable parameter estimation, and provides both global and local quantitative measures of tumor infiltration and mass effect.
Reformer: The Efficient Transformer
TLDR
This work replaces dot-product attention with one based on locality-sensitive hashing and uses reversible residual layers instead of standard residuals, which allows activations to be stored only once during training instead of several times, making the model much more memory-efficient and much faster on long sequences.
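A deliberately simplified sketch of the LSH attention idea (not the full Reformer implementation, which adds multi-round hashing, chunking, and causal masking): hash shared query/key vectors with a random rotation so that similar vectors land in the same bucket, then attend only within buckets instead of over all n^2 pairs:

```python
import torch
import torch.nn.functional as F

n, d, n_buckets = 256, 64, 8
qk = torch.randn(n, d)                      # Reformer shares queries and keys
v = torch.randn(n, d)

rot = torch.randn(d, n_buckets // 2)        # random rotation for angular LSH
h = qk @ rot
buckets = torch.cat([h, -h], dim=-1).argmax(dim=-1)   # bucket id per position

out = torch.zeros_like(v)
for b in range(n_buckets):
    idx = (buckets == b).nonzero(as_tuple=True)[0]
    if idx.numel() == 0:
        continue
    q_b, v_b = qk[idx], v[idx]
    attn = F.softmax(q_b @ q_b.T / d**0.5, dim=-1)    # attention within the bucket only
    out[idx] = attn @ v_b
print(out.shape)  # torch.Size([256, 64])
```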
Attentive Neural Processes
TLDR
Attention is incorporated into NPs, allowing each input location to attend to the relevant context points for the prediction, which greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
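The change relative to plain NPs, sketched with PyTorch's built-in multi-head attention (embeddings and sizes are illustrative assumptions): each target input queries the context points directly instead of receiving one mean-pooled summary:

```python
import torch
import torch.nn as nn

hidden = 128
attend = nn.MultiheadAttention(embed_dim=hidden, num_heads=4, batch_first=True)

x_ctx_emb = torch.randn(8, 10, hidden)   # embedded context inputs (keys)
r_ctx     = torch.randn(8, 10, hidden)   # per-context-point representations (values)
x_tgt_emb = torch.randn(8, 20, hidden)   # embedded target inputs (queries)

r_tgt, _ = attend(x_tgt_emb, x_ctx_emb, r_ctx)   # (8, 20, hidden): one r per target
```

This query-specific representation is what lets each prediction focus on the most relevant observations, which is also why it suits irregularly sampled longitudinal imaging.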