Corpus ID: 248119164

Modeling Mask Uncertainty in Hyperspectral Image Reconstruction

@inproceedings{Wang2021ModelingMU,
  title={Modeling Mask Uncertainty in Hyperspectral Image Reconstruction},
  author={Jiamian Wang and Yulun Zhang and Xin Yuan and Ziyi Meng and Zhiqiang Tao},
  year={2021}
}
Recently, hyperspectral imaging (HSI) has attracted increasing research attention, especially methods based on the coded aperture snapshot spectral imaging (CASSI) system. Existing deep HSI reconstruction models are generally trained on paired data to retrieve the original signal from 2D compressed measurements produced by a particular optical hardware mask in CASSI, in which the mask largely impacts reconstruction performance and can act as a "model hyperparameter" governing the data… 
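The abstract's description of CASSI can be made concrete with a minimal sketch of the forward model (an illustrative simplification, not the authors' implementation): a binary coded-aperture mask modulates every spectral band of the scene, a disperser shears the bands along one spatial axis, and the detector sums them into a single 2D compressed measurement. All shapes and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 8, 8, 4                    # height, width, number of spectral bands
cube = rng.random((H, W, B))         # hyperspectral scene (hypothetical data)
mask = rng.integers(0, 2, (H, W))    # binary coded-aperture mask

# The same 2D mask modulates every spectral band.
modulated = cube * mask[:, :, None]

# The disperser shifts band b by b pixels along the width axis; the
# detector integrates all shifted bands into one 2D snapshot.
measurement = np.zeros((H, W + B - 1))
for b in range(B):
    measurement[:, b:b + W] += modulated[:, :, b]

print(measurement.shape)  # (8, 11): one 2D compressed measurement
```

Reconstruction then amounts to inverting this mask-dependent mapping, which is why the mask behaves like a model hyperparameter for learned reconstructors.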

References

Showing 1-10 of 64 references
Self-supervised Neural Networks for Spectral Snapshot Compressive Imaging
TLDR
This paper develops a framework by integrating DIP into the plug-and-play regime, leading to a self-supervised network for spectral SCI reconstruction, and shows that the proposed algorithm, without any training, achieves results competitive with training-based networks.
End-to-End Low Cost Compressive Spectral Imaging with Spatial-Spectral Self-Attention
TLDR
This work reproduces a stable single-disperser CASSI system and proposes a novel deep convolutional network for real-time reconstruction, employing Spatial-Spectral Self-Attention (TSA) to process each dimension sequentially, yet in an order-independent manner.
Snapshot multispectral endomicroscopy.
TLDR
A snapshot multispectral endomicroscope that employs a fiber bundle to deliver an in-body tissue spatial-spectral datastream to an external compressive spectral imager, equipped with an end-to-end deep-learning-based reconstruction algorithm.
Weight Uncertainty in Neural Network
TLDR
This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
The Graph Neural Network Model
TLDR
A new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods to data represented in graph domains, and implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
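The deep-ensembles idea summarized above can be sketched in a few lines (an illustrative toy, using tiny least-squares "models" on hypothetical data rather than deep networks): train M members independently on resampled data, then read the ensemble mean as the prediction and the spread across members as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

M = 5                                   # number of ensemble members
x_test = np.array([0.3, -0.1, 0.8])
preds = []
for _ in range(M):
    # Bootstrap resampling stands in for the random initialization /
    # data shuffling that diversifies real deep-ensemble members.
    idx = rng.integers(0, len(X), len(X))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(x_test @ w)

preds = np.array(preds)
print(preds.mean(), preds.std())        # prediction and its uncertainty
```

The appeal noted in the TLDR is visible even here: each member trains independently, so the loop parallelizes trivially and needs no extra tuning.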
Semi-Supervised Classification with Graph Convolutional Networks
TLDR
A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
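The key device that makes this stochastic variational inference differentiable is the reparameterization trick, which a minimal numpy sketch can illustrate (illustrative values, not the paper's code): sampling z ~ N(mu, sigma²) is rewritten as z = mu + sigma · eps with eps ~ N(0, 1), so gradients can flow through mu and sigma.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, log_var = 0.5, -1.0               # encoder outputs (hypothetical values)
sigma = np.exp(0.5 * log_var)         # standard deviation from log-variance

# Randomness is isolated in eps; z is a deterministic, differentiable
# function of mu and sigma.
eps = rng.standard_normal(10_000)
z = mu + sigma * eps

print(z.mean(), z.std())              # empirically close to mu and sigma
```

In a real VAE, mu and log_var come from the encoder network and z feeds the decoder; the trick lets the whole pipeline train by backpropagation.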
SwinIR: Image Restoration Using Swin Transformer
TLDR
A strong baseline model, SwinIR, is proposed for image restoration based on the Swin Transformer; it outperforms state-of-the-art methods on different tasks by up to 0.14∼0.45 dB, while the total number of parameters can be reduced by up to 67%.
Do Vision Transformers See Like Convolutional Neural Networks?
TLDR
Analyzing the internal representation structure of ViTs and CNNs on image classification benchmarks reveals striking differences between the two architectures, such as ViTs having more uniform representations across all layers, driven in part by ViT residual connections that strongly propagate features from lower to higher layers.