An ensemble Multi-Agent System for non-linear classification

@article{Fourez2022AnEM,
  title={An ensemble Multi-Agent System for non-linear classification},
  author={Thibault Fourez and Nicolas Verstaevel and Fr{\'e}d{\'e}ric Migeon and Fr{\'e}d{\'e}ric Schettini and Fr{\'e}d{\'e}ric Amblard},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.06824}
}
Self-Adaptive Multi-Agent Systems (AMAS) transform machine learning problems into problems of local cooperation between agents. We present smapy, an ensemble-based AMAS implementation for mobility prediction, whose agents are provided with machine learning models in addition to their cooperation rules. With a detailed methodology, we show that it is possible to use linear models for non-linear classification on a benchmark transport mode detection dataset, if they are integrated in a…
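The abstract is truncated above, so the exact smapy architecture is not shown here. As a rough, hedged illustration of the idea it states (linear models can perform non-linear classification when each is responsible for a local region of the input space), the sketch below uses scikit-learn; the k-means partitioning and per-region logistic regressions are assumptions for illustration, not the authors' cooperation mechanism:

```python
# Minimal sketch (NOT the authors' smapy code): a non-linear classifier
# built from purely linear models, each responsible for a local region.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Partition the input space into local "contexts" (here: k-means cells;
# smapy's agents and cooperation rules are more elaborate than this).
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

# One linear model per region.
models = {}
for c in range(8):
    mask = km.labels_ == c
    if np.unique(y[mask]).size < 2:  # degenerate cell: store the majority class
        models[c] = int(np.bincount(y[mask]).argmax())
    else:
        models[c] = LogisticRegression().fit(X[mask], y[mask])

def predict(X_new):
    cells = km.predict(X_new)
    out = np.empty(len(X_new), dtype=int)
    for i, (x, c) in enumerate(zip(X_new, cells)):
        m = models[c]
        out[i] = m if isinstance(m, int) else m.predict(x.reshape(1, -1))[0]
    return out

print("train accuracy:", (predict(X) == y).mean())
```

On the two-moons data, no single linear model separates the classes, but the region-wise linear models jointly trace a non-linear boundary.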

References

The Self-Adaptive Context Learning Pattern: Overview and Proposal

The pattern that enables self-adaptive multi-agent systems to dynamically and interactively learn the mapping between context and actions is presented.

A decision-theoretic generalization of on-line learning and an application to boosting

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
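As a hedged illustration of the boosting application this paper introduced (AdaBoost), the short example below runs it through scikit-learn's AdaBoostClassifier; the dataset and hyperparameters are placeholder assumptions, not taken from the paper:

```python
# Illustrative use of AdaBoost via scikit-learn; parameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each round reweights examples multiplicatively, so later weak learners
# concentrate on the mistakes of earlier ones.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```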

Ultraconservative Online Algorithms for Multiclass Problems

This paper studies online classification algorithms for multiclass problems in the mistake bound model, introduces the notion of ultraconservativeness, and presents a family of additive ultraconservative algorithms in which each algorithm updates its prototypes by finding a feasible solution to a set of linear constraints that depend on the instantaneous similarity scores.
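A minimal sketch of the simplest member of this additive ultraconservative family, the multiclass perceptron, which on a mistake updates only the prototypes of the correct and the predicted class; the synthetic data and NumPy phrasing are assumptions for illustration:

```python
# Multiclass perceptron: an additive, ultraconservative online update.
import numpy as np
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
W = np.zeros((3, X.shape[1]))       # one prototype (weight vector) per class

for x, label in zip(X, y):
    pred = int((W @ x).argmax())    # similarity scores -> predicted class
    if pred != label:               # mistake-driven, additive update
        W[label] += x               # pull the correct prototype toward x
        W[pred] -= x                # push the offending prototype away

print("training accuracy:", np.mean((X @ W.T).argmax(axis=1) == y))
```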

Greedy function approximation: A gradient boosting machine.

A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, as well as for the multiclass logistic likelihood for classification.
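As a hedged illustration of this gradient boosting paradigm, the sketch below uses scikit-learn's GradientBoostingRegressor with the Huber loss, one of the regression criteria the paper treats; the data and hyperparameters are arbitrary assumptions:

```python
# Friedman-style gradient boosting via scikit-learn, with Huber loss.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Each stage fits a small tree to the negative gradient of the loss
# (the pseudo-residuals) and adds it to the running additive expansion.
gbm = GradientBoostingRegressor(loss="huber", n_estimators=200,
                                learning_rate=0.05, max_depth=3,
                                random_state=0).fit(X, y)
print("R^2 on training data:", gbm.score(X, y))
```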

Random Forests

Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
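For concreteness, a minimal sketch with scikit-learn's RandomForestClassifier follows; the out-of-bag score is one realization of the internal error estimates the summary mentions, and the dataset and parameters are assumptions:

```python
# Random forest via scikit-learn; oob_score exposes an internal
# (out-of-bag) error estimate of the kind the paper describes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            oob_score=True, random_state=0).fit(X, y)
print("out-of-bag accuracy:", rf.oob_score_)
```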

Regression Shrinkage and Selection via the Lasso

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
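A brief hedged illustration of the lasso using scikit-learn; alpha (the penalty strength, playing the role of the bound on the coefficients' absolute sum) and the synthetic data are arbitrary assumptions:

```python
# Lasso via scikit-learn: the L1 penalty drives many coefficients
# exactly to zero, performing variable selection.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)
lasso = Lasso(alpha=1.0).fit(X, y)
print("non-zero coefficients:", np.sum(lasso.coef_ != 0), "of", X.shape[1])
```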

Regularization and variable selection via the elastic net

It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much as algorithm LARS does for the lasso.
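A hedged companion sketch to the lasso example above, using scikit-learn's ElasticNet; l1_ratio blends the lasso (L1) and ridge (L2) penalties, and the values here are arbitrary assumptions:

```python
# Elastic net via scikit-learn: a convex combination of L1 and L2 penalties.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)
enet = ElasticNet(alpha=1.0, l1_ratio=0.5).fit(X, y)
print("R^2 on training data:", enet.score(X, y))
```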

An Introduction to Support Vector Machines and Other Kernel-based Learning Methods

This is the first comprehensive introduction to Support Vector Machines (SVMs), a new-generation learning system based on recent advances in statistical learning theory, and it will guide practitioners to updated literature, new applications, and on-line software.
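A short hedged sketch of a kernel SVM with scikit-learn's SVC: the RBF kernel lets the linear maximum-margin machinery learn a non-linear boundary; the data and parameters are illustrative assumptions:

```python
# Kernel SVM via scikit-learn: the RBF kernel implicitly maps inputs to a
# feature space where a linear maximum-margin separator is non-linear
# in the original coordinates.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", svm.score(X, y))
```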

XGBoost: A Scalable Tree Boosting System

This paper proposes a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning and provides insights on cache access patterns, data compression and sharding to build a scalable tree boosting system called XGBoost.
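A minimal hedged usage sketch with the xgboost library's scikit-learn interface (assuming xgboost is installed); the hyperparameters are placeholders, not settings from the paper:

```python
# XGBoost via its scikit-learn-compatible API.
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```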