SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images

@article{hoffmann_synthmorph,
  title={SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images},
  author={Malte Hoffmann and Benjamin Billot and Douglas N. Greve and Juan Eugenio Iglesias and Bruce R. Fischl and Adrian V. Dalca},
  journal={IEEE Transactions on Medical Imaging},
  pages={543--558}
}
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training.
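The core idea above, training on synthetic images generated from label maps rather than acquired scans, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and parameters (`synth_image`, `blur_sigma`, `noise_std`) are illustrative, showing only the basic recipe of drawing a random intensity per label, blurring, and adding noise to produce an image of arbitrary contrast:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image(label_map, rng=None, blur_sigma=1.0, noise_std=0.05):
    """Turn an integer label map into a grayscale image with random
    per-label intensities, mild blurring, and additive noise."""
    rng = np.random.default_rng(rng)
    image = np.zeros(label_map.shape, dtype=float)
    for lab in np.unique(label_map):
        # Each anatomical label gets an independent random mean intensity,
        # so every draw yields a new, arbitrary "contrast".
        image[label_map == lab] = rng.uniform(0.0, 1.0)
    image = gaussian_filter(image, blur_sigma)         # soften boundaries
    image += rng.normal(0.0, noise_std, image.shape)   # scanner-like noise
    return np.clip(image, 0.0, 1.0)

# Toy 2D label map: background (0) with a square foreground structure (1).
labels = np.zeros((64, 64), dtype=int)
labels[16:48, 16:48] = 1
img = synth_image(labels, rng=0)
```

Because each training sample is synthesized with fresh random intensities, a network trained this way never overfits to any particular MRI contrast.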
HyperMorph: Amortized Hyperparameter Learning for Image Registration
Amortized hyperparameter learning for image registration is introduced, a novel strategy to learn the effects of hyperparameters on deformation fields. The approach is demonstrated to optimize multiple hyperparameter values considerably faster than existing search strategies, reducing computational and human burden while increasing flexibility.
Rapid processing and quantitative evaluation of multicontrast EPImix scans for adaptive multimodal imaging
It is demonstrated that quantitative information can be derived from a neuroimaging scan acquired and processed within minutes, which could further be used to implement adaptive multimodal imaging and tailor neuroimaging examinations to individual patients.
ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration
Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize …
Molecular Computational Anatomy: Unifying the Molecular to Tissue Continuum Via Measure Representations of the Brain
This work has shown the ability to distinguish between the activity of the immune system and the “spatially aggregating” cells, which is a sign of inflammation and indicates the need for further research into these mechanisms.
Generative Aging of Brain Images with Diffeomorphic Registration
The experimental results show that the proposed method can produce anatomically plausible predictions that can be used to enhance longitudinal datasets, in turn enabling data-hungry AI-driven healthcare tools.
SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration
This paper argues that the relative failure of supervised registration approaches can be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation, and introduces a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction.
Mapping the Neural Dynamics of Locomotion across the Drosophila Brain
Volumetric two-photon imaging is used to map neural activity associated with walking across the entire brain of Drosophila to suggest a dynamical systems framework for constructing walking maneuvers reminiscent of models of forelimb reaching in primates and set a foundation for understanding how local circuits interact across large-scale networks.
Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions
An automated learning registration algorithm (AutoReg) is presented that cooperatively optimizes both network architectures and their corresponding training objectives, enabling non-computer experts, e.g., medical/clinical users, to conveniently find off-the-shelf registration algorithms for diverse scenarios.
Region Specific Optimization (RSO)-based Deep Interactive Registration
This work identified why the TTO technique was slow, or even failed, to improve registration results in some regions, and proposed a two-level TTO technique, i.e., image-specific optimization (ISO) and region-specific optimization (RSO), where the region can be interactively indicated by the clinician while reviewing the registration result.
SUD: Supervision by Denoising for Medical Image Segmentation
This work proposes “supervision by denoising” (SUD), a framework that enables us to supervise segmentation models using their denoised output as targets and validates SUD on three tasks, demonstrating a significant improvement in the Dice overlap and the Hausdorff distance of segmentations over supervised-only and temporal ensemble baselines.
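The SUD idea of supervising a model with a denoised version of its own output can be sketched minimally. This is not the paper's implementation: here Gaussian smoothing stands in for the denoiser, and `sud_target` is an illustrative name. The sketch only shows how a soft segmentation map is denoised and renormalized to serve as the next training target:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sud_target(prob_map, sigma=1.0):
    """Denoise a (channels, H, W) soft segmentation map and renormalize
    so channel probabilities sum to one; the result is used as a
    supervision target instead of the raw model output."""
    smoothed = np.stack([gaussian_filter(c, sigma) for c in prob_map])
    smoothed = np.clip(smoothed, 1e-8, None)  # avoid division by zero
    return smoothed / smoothed.sum(axis=0, keepdims=True)

# Toy 3-class prediction: all mass on class 0.
prob = np.zeros((3, 8, 8))
prob[0] = 1.0
target = sud_target(prob)
```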
VoxelMorph: A Learning Framework for Deformable Medical Image Registration
VoxelMorph promises to speed up medical image analysis and processing pipelines while facilitating novel directions in learning-based registration and its applications; experiments demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods while operating orders of magnitude faster.
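The unsupervised objective behind VoxelMorph-style training combines an image-similarity term with a smoothness penalty on the predicted displacement field. A minimal numpy sketch, not the VoxelMorph codebase (function names `mse`, `grad_penalty`, `vxm_loss`, and the weight `lam` are illustrative):

```python
import numpy as np

def mse(fixed, warped):
    # Intensity similarity between the fixed image and the warped moving image.
    return np.mean((fixed - warped) ** 2)

def grad_penalty(disp):
    # disp: (2, H, W) displacement field; penalize its spatial gradients
    # (finite differences) to encourage a smooth deformation.
    dy = np.diff(disp, axis=1)
    dx = np.diff(disp, axis=2)
    return np.mean(dy ** 2) + np.mean(dx ** 2)

def vxm_loss(fixed, warped, disp, lam=0.01):
    # Total unsupervised loss: similarity + lam * regularization.
    return mse(fixed, warped) + lam * grad_penalty(disp)

# Identity case: identical images and a zero displacement field.
fixed = np.zeros((8, 8))
disp = np.zeros((2, 8, 8))
identity_loss = vxm_loss(fixed, fixed, disp)
```

In the actual framework the displacement field comes from a U-Net and the warped image from a differentiable spatial transformer; the loss structure, however, is exactly this similarity-plus-smoothness trade-off.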
Non-rigid image registration using fully convolutional networks with deep self-supervision
A novel non-rigid image registration algorithm built upon fully convolutional networks (FCNs) that optimizes and learns spatial transformations between pairs of images to be registered; it has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.
Robust Non-rigid Registration Through Agent-Based Action Learning
This paper investigates how DL could help organ-specific (ROI-specific) deformable registration, to solve motion compensation or atlas-based segmentation problems, for instance in prostate diagnosis, and presents a training scheme with a large number of synthetically deformed image pairs requiring only a small number of real inter-subject pairs.
Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning
A learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data that scales well to new image modalities or new image applications with little to no human intervention.
Label-driven weakly-supervised learning for multimodal deformable image registration
A weakly-supervised, label-driven formulation for learning 3D voxel correspondence from higher-level label correspondence is proposed, thereby bypassing classical intensity-based image similarity measures.
Pulmonary CT Registration Through Supervised Learning With Convolutional Neural Networks
This approach results in an accurate and very fast deformable registration method, without a requirement for parameterization at test time or manually annotated data for training.
Nonrigid Image Registration Using Multi-scale 3D Convolutional Neural Networks
The proposed RegNet is trained using a large set of artificially generated DVFs, does not explicitly define a dissimilarity metric, and integrates image content at multiple scales to equip the network with contextual information, thereby greatly simplifying the training problem.
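Training on artificially generated DVFs, as RegNet does, requires sampling random but spatially smooth displacement fields. A minimal sketch of one common recipe (smoothed white noise), with an illustrative function name and parameters, not the paper's exact generation scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_dvf(shape, sigma=8.0, magnitude=3.0, rng=None):
    """Sample a random displacement vector field: per-axis white noise,
    smoothed with a Gaussian of width `sigma` and rescaled so the peak
    displacement equals `magnitude` voxels. Returns (ndim, *shape)."""
    rng = np.random.default_rng(rng)
    dvf = np.stack([gaussian_filter(rng.standard_normal(shape), sigma)
                    for _ in range(len(shape))])
    peak = np.abs(dvf).max()
    if peak > 0:
        dvf *= magnitude / peak
    return dvf

dvf = random_dvf((32, 32), rng=0)
```

Applying such fields to real images yields unlimited (image, ground-truth DVF) training pairs, which is what lets a supervised network avoid defining an explicit dissimilarity metric.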
Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
This work learns a model of transformations from the images, and uses the model along with the labeled example to synthesize additional labeled examples, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures.
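Synthesizing additional labeled examples from a learned transformation model reduces, at its core, to warping an image and its label map with the same sampled deformation. A minimal sketch, assuming the deformation is already given as a displacement field (the `warp` helper is illustrative, not the paper's code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, dvf, order=1):
    """Resample `volume` at identity-plus-displacement coordinates.
    Use order=1 (linear) for images, order=0 (nearest) for label maps
    so synthesized labels stay integer-valued."""
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, dvf)]
    return map_coordinates(volume, coords, order=order, mode="nearest")

# Sanity check: a zero displacement field leaves the image unchanged.
image = np.arange(16.0).reshape(4, 4)
identity = warp(image, np.zeros((2, 4, 4)))

# With a sampled deformation, image and labels warp together:
#   image_aug = warp(image, dvf, order=1)
#   label_aug = warp(labels, dvf, order=0)
```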