TransforMesh: A Transformer Network for Longitudinal Modeling of Anatomical Meshes

@article{Sarasua2021TransforMeshAT,
  title={TransforMesh: A Transformer Network for Longitudinal Modeling of Anatomical Meshes},
  author={Ignacio Sarasua and Sebastian P{\"o}lsterl and Christian Wachinger},
  journal={arXiv preprint arXiv:2109.00532},
  year={2021}
}
The longitudinal modeling of neuroanatomical changes related to Alzheimer's disease (AD) is crucial for studying the progression of the disease. To this end, we introduce TransforMesh, a spatiotemporal network based on transformers that models longitudinal shape changes on 3D anatomical meshes. While transformer and mesh networks have recently shown impressive performance in natural language processing and computer vision, their application to medical image analysis has been very limited. To…
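The pipeline the abstract outlines, encoding each visit's mesh into a feature vector and then mixing information across visits with attention, can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the encoder, weight shapes, and single-head attention are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_mesh(vertices, W):
    # Toy mesh encoder: project each vertex with shared weights W,
    # then mean-pool over vertices to get one embedding per mesh.
    return (vertices @ W).mean(axis=0)

def self_attention(X):
    # Single-head scaled dot-product self-attention over the time axis.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

# A longitudinal sequence: 4 visits, each a mesh with 100 vertices in 3D.
sequence = [rng.normal(size=(100, 3)) for _ in range(4)]
W = rng.normal(size=(3, 16))  # hypothetical shared encoder weights

tokens = np.stack([encode_mesh(v, W) for v in sequence])  # shape (4, 16)
out = self_attention(tokens)  # each visit now attends to all other visits
print(out.shape)  # (4, 16)
```

Each row of `out` is a visit embedding updated with information from the whole trajectory; a real model would add positional (visit-time) encodings, learned query/key/value projections, and multiple layers.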
1 Citation


OViTAD: Optimized Vision Transformer to Predict Various Stages of Alzheimer’s Disease Using Resting-State fMRI and Structural MRI Data
Advances in applied machine learning techniques in neuroimaging have encouraged scientists to implement models for the early diagnosis of brain disorders such as Alzheimer's Disease. Predicting various stages…
