On Orthogonal Projections for Dimension Reduction and Applications in Augmented Target Loss Functions for Learning Problems

@article{Breger2019OnOP,
  title={On Orthogonal Projections for Dimension Reduction and Applications in Augmented Target Loss Functions for Learning Problems},
  author={Anna Breger and Jos{\'e} Ignacio Orlando and Pavol Har{\'a}r and Monika D{\"o}rfler and Sophie Klimscha and Christoph Grechenig and Bianca S. Gerendas and Ursula Schmidt-Erfurth and Martin Ehler},
  journal={Journal of Mathematical Imaging and Vision},
  year={2019},
  volume={62},
  pages={376-394}
}
The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relation between two standard objectives in dimension reduction: preservation of variance and preservation of pairwise relative distances. Investigations of their asymptotic correlation, together with numerical experiments, show that a projection usually does not satisfy both objectives at once. In a standard classification problem, we determine projections on the input data…
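As a rough illustration of these two objectives, the following sketch (NumPy; not the paper's code) compares a PCA projection, which maximizes retained variance, with a random orthogonal projection, which tends to preserve pairwise relative distances; the data generation and target dimension are arbitrary assumptions.

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100)) * rng.uniform(0.1, 3.0, size=100)  # anisotropic cloud
X -= X.mean(axis=0)
d, k = X.shape[1], 10  # ambient and target dimensions

# PCA projection: orthonormal rows spanning the top-k principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k]

# Random orthogonal projection: orthonormal basis of a random subspace.
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
P_rand = Q.T

for name, P in (("PCA", P_pca), ("random", P_rand)):
    Y = X @ P.T
    retained = Y.var(axis=0).sum() / X.var(axis=0).sum()
    # relative distance distortion after the usual sqrt(d/k) rescaling
    distortion = np.abs(pdist(Y) * np.sqrt(d / k) / pdist(X) - 1).max()
    print(f"{name}: retained variance {retained:.2f}, "
          f"max relative distance distortion {distortion:.2f}")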
Improving Machine Hearing on Limited Data Sets
TLDR
This contribution investigates how input and target representations interplay with the amount of available training data in a music information retrieval setting and compares the standard mel-spectrogram inputs with a newly proposed representation, called Mel scattering.
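For context, a minimal sketch of the standard mel-spectrogram input mentioned in this entry, assuming librosa is available (Mel scattering itself is the paper's proposal and is not reproduced here):

import numpy as np
import librosa

# Load a bundled demo clip; any mono waveform works here.
y, sr = librosa.load(librosa.example("trumpet"))
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)  # log-compressed, as typically fed to a CNN
print(log_S.shape)                          # (n_mels, n_frames)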
Machines listening to music: the role of signal representations in learning from music
TLDR
It is shown how applying various different signal analysis methods can lead to useful invariances and improve the overall performance in MIR problems by reducing the amount of necessary training data or the necessity of augmentation.
An amplified-target loss approach for photoreceptor layer segmentation in pathological OCT scans
TLDR
This paper introduces a novel amplified-target loss that explicitly penalizes errors within the central area of the input images, based on the observation that most of the challenging disease appearance is usually located in this area.
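The general idea, penalizing errors more heavily in the central image area, can be sketched as a spatially weighted loss; this PyTorch snippet illustrates the principle with an arbitrary weight and band width, and is not the paper's exact amplified-target formulation.

import torch
import torch.nn.functional as F

def center_weighted_bce(logits, targets, amplification=5.0, central_fraction=0.5):
    """Binary cross-entropy with extra weight on a central band of columns."""
    _, _, h, w = logits.shape
    weights = torch.ones(1, 1, h, w, device=logits.device)
    lo = int(w * (1 - central_fraction) / 2)
    hi = int(w * (1 + central_fraction) / 2)
    weights[..., lo:hi] = amplification  # amplify errors in the central columns
    return F.binary_cross_entropy_with_logits(logits, targets, weight=weights)

# Usage on dummy data:
logits = torch.randn(2, 1, 64, 64)
targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(center_weighted_bce(logits, targets))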
Segmentation of multicorrelated images with copula models and conditionally random fields
TLDR
A method is presented for segmenting statistically correlated multisource images and is compared with different state-of-the-art methods, including supervised (convolutional neural networks) and unsupervised (hierarchical MRF) approaches.
On the reconstruction accuracy of multi-coil MRI with orthogonal projections
TLDR
This work analyzes the final coil combination step in the framework of linear compression, including principal component analysis (PCA), and finds that compression with PCA outperforms all other methods, including the root sum of squares (rSOS) on the uncompressed image-space data.
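On synthetic data, the two combination strategies named in this entry look roughly as follows (NumPy sketch, not the paper's pipeline):

import numpy as np

rng = np.random.default_rng(0)
n_coils, h, w = 8, 64, 64
coils = rng.normal(size=(n_coils, h, w)) + 1j * rng.normal(size=(n_coils, h, w))

# rSOS: magnitude combination, no coil model needed.
rsos = np.sqrt((np.abs(coils) ** 2).sum(axis=0))

# PCA-style linear compression: project the coil dimension onto its
# dominant principal component, yielding one "virtual coil".
X = coils.reshape(n_coils, -1)                         # coils x pixels
U, s, Vt = np.linalg.svd(X, full_matrices=False)
compressed = (U[:, :1].conj().T @ X).reshape(1, h, w)
print(rsos.shape, compressed.shape)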
An Introduction to Johnson-Lindenstrauss Transforms
Johnson–Lindenstrauss transforms are powerful tools for reducing the dimensionality of data while preserving key characteristics of that data, and they have found use in many fields, from machine learning…
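A minimal sketch of the core Johnson–Lindenstrauss idea: a scaled Gaussian random matrix approximately preserves all pairwise distances, with the dimensions below chosen arbitrarily.

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 128
X = rng.normal(size=(n, d))

A = rng.normal(size=(k, d)) / np.sqrt(k)  # JL map: E[|Ax|^2] = |x|^2
Y = X @ A.T

ratios = pdist(Y) / pdist(X)
print(f"pairwise distance ratios in [{ratios.min():.2f}, {ratios.max():.2f}]")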
Brno University of Technology, Faculty of Electrical Engineering and Communication: Doctoral Thesis (Shortened Version)
TLDR
This work is the first to experiment with deep learning in this field and on the largest combined database of dysphonic voices to date, which was created as part of this work; it also provides a thorough examination of publicly available data sources and identifies their limitations.
Deep Learning Based Segmentation of Brain Tissue from Diffusion MRI
TLDR
A convolutional neural network is trained to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy near tissue boundaries; diffusion kurtosis imaging (DKI) parameters, which characterize non-Gaussian water molecule diffusion, are added to the conventional diffusion tensor imaging parameters.
Improved Principal Component Analysis and Linear Discriminant Analysis for the Determination of Origin of Coffee Beans using
In this work an improved Principal Component Analysis (PCA) method is used for better determination of the geographical origins of Ethiopian green coffee beans. In the commercially available and widely…
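The standard PCA-then-LDA pipeline this entry builds on can be sketched with scikit-learn on synthetic data (the paper's improvements to PCA are not reproduced here):

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a spectral dataset with four origin classes.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce dimension with PCA, then classify with LDA.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")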
References

Showing 1–10 of 74 references
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
TLDR
A theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE is presented; these losses can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy-label scenarios.
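The loss family in question interpolates between CCE and MAE via L_q(p, y) = (1 - p_y^q)/q for q in (0, 1], recovering CCE as q -> 0 and MAE at q = 1; a minimal PyTorch sketch, with q = 0.7 as an arbitrary choice:

import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized cross entropy: (1 - p_y^q) / q, averaged over the batch."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob of the true class
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()

# Usage on dummy data:
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(gce_loss(logits, targets))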
Random Projections of Smooth Manifolds
We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about…
Label-Free Supervision of Neural Networks with Physics and Domain Knowledge
TLDR
This work introduces a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs, derived from prior domain knowledge.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
TLDR
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
Generating Images with Perceptual Similarity Metrics based on Deep Networks
TLDR
A class of loss functions called deep perceptual similarity metrics (DeePSiM) is proposed; these compute distances between image features extracted by deep neural networks, better reflect the perceptual similarity of images, and thus lead to better results.
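In the spirit of the two entries above, a perceptual-style loss measures distances between deep features rather than raw pixels. A minimal PyTorch/torchvision sketch follows, where the VGG-16 layer cutoff is an illustrative assumption and ImageNet input normalization is omitted for brevity.

import torch
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor up to relu3_3 (an arbitrary but common choice).
features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(x, y):
    """Mean squared distance between VGG feature maps of two image batches."""
    return torch.mean((features(x) - features(y)) ** 2)

# Usage on dummy image batches:
x = torch.rand(1, 3, 224, 224)
y = torch.rand(1, 3, 224, 224)
print(perceptual_loss(x, y))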
Random projection in dimensionality reduction: applications to image and text data
TLDR
It is shown that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is preserved well under random projection.
NuMax: A Convex Approach for Learning Near-Isometric Linear Embeddings
TLDR
A novel framework is presented for the deterministic construction of linear, near-isometric embeddings of a finite set of data points, and a greedy, approximate version of NuMax is developed based on the column generation method commonly used to solve large-scale linear programs.
Generalized Low Rank Models
TLDR
This work extends the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types, and proposes several parallel algorithms for fitting generalized low rank models.
Random Projections for Large-Scale Regression
TLDR
It can be shown that the combination of random projections with least squares regression leads to similar recovery as ridge regression and principal component regression.
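A minimal NumPy sketch of compressed least squares, regressing on a random projection of the features next to ridge regression on the full design, with synthetic data standing in for the paper's experiments:

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 500, 50
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + 0.1 * rng.normal(size=n)

# Compressed least squares: project the features, then ordinary least squares.
A = rng.normal(size=(d, k)) / np.sqrt(k)
Z = X @ A
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_cls = Z @ w

# Ridge regression on the full design, for comparison.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
y_ridge = X @ w_ridge

for name, pred in (("compressed LS", y_cls), ("ridge", y_ridge)):
    print(f"{name}: in-sample RMSE {np.sqrt(np.mean((pred - y) ** 2)):.3f}")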
Random Projections of Signal Manifolds
  • M. Wakin, R. Baraniuk
  • Computer Science
    2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings
  • 2006
TLDR
Preliminary theoretical and experimental evidence is provided that manifold-based signal structure can be preserved using small numbers of random projections; Whitney's embedding theorem, which states that a K-dimensional manifold can be embedded in R^(2K+1), is examined.