# Elementos da teoria de aprendizagem de máquina supervisionada

    @article{Pestov2019ElementosDT,
      title   = {Elementos da teoria de aprendizagem de m{\'a}quina supervisionada},
      author  = {Vladimir G. Pestov},
      journal = {ArXiv},
      year    = {2019},
      volume  = {abs/1910.06820}
    }

This is a set of lecture notes for an introductory course (for advanced undergraduates, or a first graduate course) on the foundations of supervised machine learning (in Portuguese). The topics include: the geometry of the Hamming cube, concentration of measure, shattering and VC dimension, Glivenko-Cantelli classes, PAC learnability, universal consistency and the k-NN classifier in metric spaces, dimensionality reduction, universal approximation, and sample compression. There are appendices on metric and…
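Among the listed topics, the k-NN classifier in a metric space lends itself to a short illustration. A minimal sketch (not taken from the notes themselves), assuming a plain majority vote over the k nearest points under an arbitrary metric `dist`:

```python
from collections import Counter

def knn_classify(query, points, labels, k, dist):
    """Label `query` by majority vote among its k nearest points
    under the metric `dist` (any function satisfying the metric axioms)."""
    nearest = sorted(range(len(points)), key=lambda i: dist(query, points[i]))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Example with the Euclidean metric on R^2
euclid = lambda x, y: ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) ** 0.5
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
lbl = ["a", "a", "a", "b", "b", "b"]
print(knn_classify((0.5, 0.5), pts, lbl, 3, euclid))  # prints "a"
```

Because only the metric is used, the same routine works verbatim in any metric space; this is the setting in which the notes discuss universal consistency.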

## One Citation

### On the Vapnik-Chervonenkis dimension of products of intervals in $\mathbb{R}^d$

- Mathematics
- 2021

It is concluded that the Vapnik-Chervonenkis dimension of the set of balls in $\ell_\infty^d$ – that is, $\mathbb{R}^d$ equipped with the sup norm – equals $\lfloor(3d+1)/2\rfloor$.
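For concreteness, the cited formula can be evaluated for small dimensions (the helper name below is ours, not from the paper):

```python
def vc_dim_linf_balls(d):
    """VC dimension of the class of balls in l^d_infty (axis-aligned cubes),
    per the cited result: floor((3d + 1) / 2)."""
    return (3 * d + 1) // 2

for d in range(1, 5):
    print(d, vc_dim_linf_balls(d))  # 1 -> 2, 2 -> 3, 3 -> 5, 4 -> 6
```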

## References

Showing 1–10 of 127 references.

### Scale-sensitive dimensions, uniform convergence, and learnability

- Mathematics
- Proceedings of 1993 IEEE 34th Annual Foundations of Computer Science
- 1993

This paper gives a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire, and shows that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.

### Distance Metric Learning for Large Margin Nearest Neighbor Classification

- Computer Science
- NIPS
- 2005

This paper shows how to learn a Mahalanobis distance metric for kNN classification from labeled examples in a globally integrated manner, and finds that metrics trained in this way lead to significant improvements in kNN classification accuracy.
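A Mahalanobis metric of the kind learned here has the form $d_M(x, y) = \sqrt{(x-y)^\top M (x-y)}$ for a positive semidefinite matrix $M$. A minimal sketch of evaluating such a metric (the matrix below is a hypothetical stand-in for a learned one; this is not the LMNN training procedure itself):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance d_M(x, y) = sqrt((x - y)^T M (x - y)) for a positive
    semidefinite matrix M; M = I recovers the Euclidean distance."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(diff @ M @ diff))

M = np.array([[2.0, 0.0], [0.0, 0.5]])  # hypothetical learned metric
print(mahalanobis([0, 0], [1, 1], M))   # sqrt(2 + 0.5) = sqrt(2.5)
```

Once $M$ is learned, it can be plugged directly into any distance-based classifier such as kNN.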

### Borel Isomorphic Dimensionality Reduction of Data and Supervised Learning

- Computer Science
- 2013

In this project we further investigate the idea of reducing the dimensionality of datasets using a Borel isomorphism with the purpose of subsequently applying supervised learning algorithms, as…

### Intrinsic dimension of a dataset: what properties does one expect?

- Computer Science
- 2007 International Joint Conference on Neural Networks
- 2007

This paper proposes an axiomatic approach to the concept of the intrinsic dimension of a dataset, based on the geometry of high-dimensional structures, postulating that high values of the dimension should be indicative of the presence of the curse of dimensionality.

### Sample Compression, Learnability, and the Vapnik-Chervonenkis Dimension

- Computer Science
- Machine Learning
- 2004

It is demonstrated that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that C is PAC-learnable, and the relationship between sample compression schemes and the VC dimension is explored.
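A classic example of a fixed-size scheme is the class of closed intervals on the line, which admits a compression scheme of size 2: keep only the leftmost and rightmost positively labeled points. A hedged sketch (function names are ours):

```python
def compress(sample):
    """Size-2 sample compression scheme for closed intervals on the line:
    keep only the leftmost and rightmost positively labeled points."""
    pos = [x for x, y in sample if y == 1]
    if not pos:
        return []                 # no positives: compress to nothing
    return [min(pos), max(pos)]

def reconstruct(kept):
    """Decompress the kept points back into a hypothesis that is
    consistent with the full original sample."""
    if not kept:
        return lambda x: 0        # empty interval
    lo, hi = kept[0], kept[-1]
    return lambda x: 1 if lo <= x <= hi else 0

sample = [(0.1, 0), (0.4, 1), (0.6, 1), (0.9, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)  # consistent on every sample point
```

The point of the cited result is that the existence of any such fixed-size compress/reconstruct pair for a class already implies its PAC-learnability.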

### A Compression Approach to Support Vector Model Selection

- Computer Science
- J. Mach. Learn. Res.
- 2004

Inspired by several generalization bounds, this paper constructs "compression coefficients" for SVMs, which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane and which fairly accurately predict the parameters for which the test error is minimized.

### Combinatorial Variability of Vapnik-chervonenkis Classes with Applications to Sample Compression Schemes

- Mathematics, Computer Science
- Discret. Appl. Math.
- 1998

### PAC learnability under non-atomic measures: A problem by Vidyasagar

- Mathematics
- Theor. Comput. Sci.
- 2013

### Sample compression schemes for VC classes

- Computer Science
- 2016 Information Theory and Applications Workshop (ITA)
- 2016

It is shown that every concept class C of VC dimension d has a sample compression scheme of size exponential in d; the proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.