When can unlabeled data improve the learning rate?
- Christina Göpfert, Shai Ben-David, O. Bousquet, S. Gelly, I. Tolstikhin, Ruth Urner
- Computer Science · Annual Conference Computational Learning Theory
- 28 May 2019
This work argues that, for improvements in the minimax learning rate to be realistic and indisputable, certain specific conditions must be satisfied, and it shows that previous analyses have failed to meet those conditions.
Differential privacy for learning vector quantization
Statistical Mechanics of On-Line Learning Under Concept Drift
- M. Straat, F. Abadi, Christina Göpfert, B. Hammer, Michael Biehl
- Computer Science · Entropy
- 5 September 2018
A modeling framework for investigating on-line machine learning in non-stationary environments is introduced. It is shown that LVQ can track a classification scheme under drift to a non-trivial extent, and that concept drift can cause sub-optimal plateau states to persist in gradient-based training of layered neural networks for regression.
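As a rough illustration of the tracking behaviour described above (not the paper's statistical-mechanics analysis), the following sketch runs online LVQ1 on a two-class Gaussian mixture whose cluster centers slowly rotate. All names, learning rates, and drift parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(angle):
    """Draw one (x, y) from two Gaussian clusters whose centers rotate by `angle`."""
    y = int(rng.integers(0, 2))                       # class label 0 or 1
    center = np.array([np.cos(angle + np.pi * y), np.sin(angle + np.pi * y)])
    x = center + 0.3 * rng.standard_normal(2)
    return x, y

protos = np.array([[1.0, 0.0], [-1.0, 0.0]])          # one prototype per class
eta = 0.1                                             # learning rate
correct, T = 0, 2000
for t in range(T):
    x, y = sample(0.002 * t)                          # slow concept drift
    winner = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    correct += (winner == y)
    # LVQ1 update: attract the winning prototype if correct, repel it otherwise
    sign = 1.0 if winner == y else -1.0
    protos[winner] += sign * eta * (x - protos[winner])

print(f"online accuracy under drift: {correct / T:.2f}")
```

Because the prototypes are nudged toward every correctly classified sample, they follow the rotating class centers, which is the "tracking to a non-trivial extent" effect in miniature.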
Time Series Prediction for Graphs in Kernel and Dissimilarity Spaces
- Benjamin Paassen, Christina Göpfert, B. Hammer
- Computer Science · Neural Processing Letters
- 21 April 2017
It is found that simple regression methods, such as kernel regression, are sufficient to capture the dynamics in the theoretical models, but that Gaussian process regression significantly reduces the prediction error on real-world data.
Interpretation of linear classifiers by means of feature relevance bounds
Prototype-based classifiers in the presence of concept drift: A modelling framework
- Michael Biehl, F. Abadi, Christina Göpfert, B. Hammer
- Computer Science · Workshop on Self-Organizing Maps
- 18 March 2019
First results demonstrate that, while basic LVQ algorithms are suitable for training in non-stationary environments, weight decay as an explicit mechanism of forgetting does not improve performance under the considered drift processes.
Feature Relevance Bounds for Linear Classification
- Christina Göpfert, Lukas Pfannschmidt, B. Hammer
- Computer Science · The European Symposium on Artificial Neural…
This work addresses the important case of linear classifiers, recasts the problem of inferring feature relevance bounds as a convex optimization problem, and demonstrates the superiority of the resulting technique over popular feature-relevance determination methods in several benchmarks.
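A hedged sketch of the general idea (not the authors' exact formulation): fit an L1-regularized soft-margin linear classifier as a linear program, then, for each feature j, minimize and maximize |w_j| over all weight vectors whose objective stays within a (1 + delta) factor of the optimum. The resulting interval bounds that feature's relevance. All constants and the data-generating setup are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, d, C, delta = 60, 3, 1.0, 0.1
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + X[:, 1])          # feature 2 is pure noise

# Variable layout: z = [w (d) | a (d) | b | xi (n)], with a_j >= |w_j|.
n_var = 2 * d + 1 + n

def base_constraints():
    A, ub = [], []
    for i in range(n):                  # margin: y_i (w.x_i + b) >= 1 - xi_i
        row = np.zeros(n_var)
        row[:d] = -y[i] * X[i]
        row[2 * d] = -y[i]
        row[2 * d + 1 + i] = -1.0
        A.append(row); ub.append(-1.0)
    for j in range(d):                  # a_j >= w_j and a_j >= -w_j
        for s in (1.0, -1.0):
            row = np.zeros(n_var)
            row[j], row[d + j] = s, -1.0
            A.append(row); ub.append(0.0)
    return A, ub

bounds = [(None, None)] * d + [(0, None)] * d + [(None, None)] + [(0, None)] * n
obj = np.zeros(n_var); obj[d:2 * d] = 1.0; obj[2 * d + 1:] = C   # sum a + C sum xi

A, ub = base_constraints()
opt = linprog(obj, A_ub=A, b_ub=ub, bounds=bounds)               # stage 1: optimum

A2, ub2 = base_constraints()
A2.append(obj); ub2.append((1 + delta) * opt.fun)                # near-optimality budget

intervals = []
for j in range(d):
    c_min = np.zeros(n_var); c_min[d + j] = 1.0                  # min a_j (= min |w_j|)
    lo = linprog(c_min, A_ub=A2, b_ub=ub2, bounds=bounds).fun
    hi = 0.0
    for s in (1.0, -1.0):               # max |w_j| via max w_j and max(-w_j)
        c_max = np.zeros(n_var); c_max[j] = -s
        hi = max(hi, -linprog(c_max, A_ub=A2, b_ub=ub2, bounds=bounds).fun)
    intervals.append((max(lo, 0.0), hi))

for j, (lo, hi) in enumerate(intervals):
    print(f"feature {j}: relevance in [{lo:.3f}, {hi:.3f}]")
```

A feature whose lower bound is zero (like the noise feature here) is at most weakly relevant: some near-optimal classifier ignores it entirely. A strictly positive lower bound means every near-optimal classifier must use the feature.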
FRI-Feature Relevance Intervals for Interpretable and Interactive Data Exploration
- Lukas Pfannschmidt, Christina Göpfert, Ursula Neumann, D. Heider, B. Hammer
- Computer Science · IEEE Symposium on Computational Intelligence in…
- 2 March 2019
FRI is an open-source Python library for identifying all-relevant variables in linear classification and (ordinal) regression problems; it provides a basis for general experimentation and, in particular, can facilitate the search for alternative biomarkers.
Convergence of Multi-pass Large Margin Nearest Neighbor Metric Learning
- Christina Göpfert, Benjamin Paassen, B. Hammer
- Computer Science · International Conference on Artificial Neural…
- 6 September 2016
It is shown that an iterated LMNN scheme (multi-pass LMNN) is a valid optimization technique for the original LMNN cost function without the assumption that the nearest neighbors within classes remain constant.
Supervised learning in the presence of concept drift: a modelling framework
- M. Straat, F. Abadi, Z. Kan, Christina Göpfert, B. Hammer, Michael Biehl
- Computer Science · Neural Computing & Applications (Print)
- 21 May 2020
Two example types of learning systems are modelled: prototype-based learning vector quantization (LVQ) for classification, and shallow, layered neural networks for regression tasks, in so-called student–teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data.