The WEKA data mining software: an update
- M. Hall, Eibe Frank, G. Holmes, B. Pfahringer, P. Reutemann, I. Witten
- Computer Science · SIGKDD Explorations
- 16 November 2009
This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.
Classifier chains for multi-label classification
This paper presents a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity, and illustrates the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
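The chaining idea can be sketched in a few lines: train one binary classifier per label, giving each classifier the original features plus the preceding true labels, then feed earlier predictions forward at test time. This is a minimal pure-Python sketch, not the authors' implementation; `train_chain`, `predict_chain`, and the toy majority-class base learner are hypothetical names for illustration.

```python
def train_chain(X, Y, train_base):
    """Train one classifier per label; classifier j sees the original
    features plus the true values of labels 0..j-1 (a classifier chain)."""
    n_labels = len(Y[0])
    chain = []
    for j in range(n_labels):
        # augment each instance with the true values of the preceding labels
        Xj = [x + [yy[k] for k in range(j)] for x, yy in zip(X, Y)]
        yj = [yy[j] for yy in Y]
        chain.append(train_base(Xj, yj))
    return chain

def predict_chain(chain, x):
    """Predict labels one by one, feeding earlier predictions forward."""
    preds = []
    for clf in chain:
        preds.append(clf(x + preds))
    return preds

def train_majority(X, y):
    """Toy majority-class base learner, only to exercise the chain."""
    majority = max(sorted(set(y)), key=y.count)
    return lambda x: majority
```

Any base learner with the same train/predict shape can be slotted in; the chain's label order is a free parameter (the ensemble variant in the paper averages over random orders).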
MOA: Massive Online Analysis
MOA includes a collection of offline and online methods, as well as tools for evaluation, implementing boosting, bagging, and Hoeffding Trees, all with and without Naive Bayes classifiers at the leaves.
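Hoeffding Trees decide when a leaf has seen enough examples to split using the Hoeffding bound. As a hedged illustration (not MOA's API; `hoeffding_bound` and `should_split` are illustrative names), the bound and the resulting split test look roughly like this:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the true mean of a
    random variable with the given range lies within epsilon of the
    sample mean after n observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, value_range, delta, n):
    """Split when the observed advantage of the best attribute over the
    runner-up exceeds the bound, so the choice holds with high probability."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)
```

The bound shrinks as O(1/sqrt(n)), so a leaf that streams in more examples eventually either splits or shows the top two attributes are effectively tied.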
New ensemble methods for evolving data streams
This paper proposes a new experimental data stream framework for studying concept drift, and two new variants of bagging: ADWIN Bagging and Adaptive-Size Hoeffding Tree (ASHT) Bagging.
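ADWIN maintains a variable-length window of recent values and drops the older portion when two sub-windows have significantly different means. The full algorithm checks every split point efficiently; the sketch below, with hypothetical names, only compares the two halves using a Hoeffding-style cut value:

```python
import math

def drift_detected(window, delta=0.002):
    """Simplified ADWIN-style check: split the window into two halves and
    flag drift when their means differ by more than a Hoeffding-style cut.
    (The real ADWIN examines all split points, not just the midpoint.)"""
    n = len(window)
    if n < 4:
        return False
    half = n // 2
    w0, w1 = window[:half], window[half:]
    m0 = sum(w0) / len(w0)
    m1 = sum(w1) / len(w1)
    # harmonic mean of the two sub-window sizes
    m = 1.0 / (1.0 / len(w0) + 1.0 / len(w1))
    eps = math.sqrt((1.0 / (2.0 * m)) * math.log(4.0 / delta))
    return abs(m0 - m1) > eps
```

In ADWIN Bagging, a detector like this is attached to each ensemble member; when drift is flagged, the worst-performing member is reset.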
Active Learning With Drifting Streaming Data
- I. Žliobaitė, A. Bifet, B. Pfahringer, G. Holmes
- Computer Science · IEEE Transactions on Neural Networks and Learning…
This paper presents a theoretically supported framework for active learning from drifting data streams and develops three active learning strategies for streaming data that explicitly handle concept drift, based on uncertainty, dynamic allocation of labeling efforts over time, and randomization of the search space.
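The uncertainty-based strategy with a dynamically adapted threshold can be sketched as follows. This is a simplified illustration under assumed update rules (query when the classifier's confidence falls below an adaptive threshold, then tighten or relax the threshold to spread the labeling budget over time), not the paper's exact algorithm; the names are hypothetical:

```python
def variable_uncertainty(posteriors, theta=1.0, s=0.01):
    """For each incoming instance's posterior distribution, decide whether
    to request its label. Querying tightens the threshold; skipping relaxes
    it, so labeling effort keeps being spent as the stream evolves."""
    queries = []
    for p in posteriors:
        if max(p) < theta:
            queries.append(True)
            theta *= (1 - s)   # got a label: become more demanding
        else:
            queries.append(False)
            theta *= (1 + s)   # confident region: relax to keep using budget
    return queries
```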
Adaptive random forests for evolving data stream classification
This work presents the adaptive random forest (ARF) algorithm, which includes an effective resampling method and adaptive operators that can cope with different types of concept drifts without complex optimizations for different data sets.
Leveraging Bagging for Evolving Data Streams
A new variant of bagging is proposed, called leveraging bagging, which combines the simplicity of bagging with adding more randomization to the input and output of the classifiers.
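In online bagging, each ensemble member weights an incoming instance by a Poisson(1) draw; leveraging bagging increases the input randomization by drawing from Poisson(λ) with a larger λ (the paper uses λ = 6). A minimal sketch with illustrative function names:

```python
import math
import random

def poisson(lam, rng):
    """Sample from Poisson(lam) using Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def leveraging_bagging_weights(n_models, lam=6.0, rng=random):
    """One weight per ensemble member for the current instance:
    lam = 1 recovers online bagging, lam = 6 gives leveraging bagging's
    heavier resampling."""
    return [poisson(lam, rng) for _ in range(n_models)]
```

Each member then trains on the instance with its drawn weight (0 means skip), so members see increasingly diverse effective samples as λ grows.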
Multinomial Naive Bayes for Text Categorization Revisited
- A. M. Kibriya, Eibe Frank, B. Pfahringer, G. Holmes
- Computer Science · Australian Conference on Artificial Intelligence
- 4 December 2004
It is shown how the performance of multinomial naive Bayes can be improved using locally weighted learning, and that support vector machines are still the method of choice if the aim is to maximize accuracy.
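Multinomial naive Bayes itself is compact: a class's score is its log prior plus Laplace-smoothed log likelihoods of the document's word counts. A self-contained sketch with illustrative names (not the paper's code):

```python
import math
from collections import Counter, defaultdict

def train_mnb(docs, labels):
    """Multinomial naive Bayes with Laplace (add-one) smoothing.
    docs is a list of token lists; labels the matching class names."""
    vocab = set(w for d in docs for w in d)
    class_docs = defaultdict(list)
    for d, c in zip(docs, labels):
        class_docs[c].append(d)
    model = {}
    n = len(docs)
    for c, ds in class_docs.items():
        counts = Counter(w for d in ds for w in d)
        total = sum(counts.values())
        model[c] = (
            math.log(len(ds) / n),  # log prior
            {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab},
            math.log(1 / (total + len(vocab))),  # smoothed unseen-word score
        )
    return model

def classify_mnb(model, doc):
    """Return the class maximizing log prior + sum of log word likelihoods."""
    def score(c):
        prior, logp, unseen = model[c]
        return prior + sum(logp.get(w, unseen) for w in doc)
    return max(model, key=score)
```

The locally weighted variant studied in the paper would additionally weight training documents by their similarity to the test document before estimating the counts.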
Multi-label Classification Using Ensembles of Pruned Sets
- J. Read, B. Pfahringer, G. Holmes
- Computer Science · Eighth IEEE International Conference on Data…
- 15 December 2008
The results from experimental evaluation on a variety of multi-label datasets show that [E]PS can achieve better performance and train much faster than other multi-label methods.
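The core Pruned Sets transformation treats each distinct label set as one atomic class and prunes sets occurring fewer than p times; the full method then reintroduces pruned examples via their frequent label subsets, which this sketch omits. A minimal sketch with a hypothetical function name:

```python
from collections import Counter

def pruned_sets_transform(Y, p=2):
    """Map each instance's label set to an atomic class, keeping only sets
    that occur at least p times; pruned instances are marked None here
    (the full method re-adds them under frequent subsets of their labels)."""
    counts = Counter(frozenset(y) for y in Y)
    kept = {s for s, c in counts.items() if c >= p}
    return [frozenset(y) if frozenset(y) in kept else None for y in Y]
```

After the transform, any single-label classifier can be trained on the atomic classes, which is what makes the method fast relative to per-label or pairwise schemes.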
Locally Weighted Naive Bayes
A locally weighted version of naive Bayes that relaxes the independence assumption by learning local models at prediction time is presented, and experimental results show that locally weighted naive Bayes rarely degrades accuracy compared to standard naive Bayes and, in many cases, improves accuracy dramatically.
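Locally weighted naive Bayes fits the model at prediction time, weighting each training instance by a kernel of its distance to the query. The sketch below uses a linear kernel with bandwidth set by the k-th nearest neighbour, binary features, and hypothetical names; it is an illustration of the idea, not the paper's implementation:

```python
import math

def lwnb_classify(X, y, query, k=3):
    """Locally weighted naive Bayes sketch: weight training instances by a
    linear kernel of Euclidean distance to the query, then score classes
    with a weighted, smoothed naive Bayes over binary features."""
    dists = [math.dist(x, query) for x in X]
    h = sorted(dists)[min(k, len(X) - 1)] or 1.0  # bandwidth: k-th neighbour
    weights = [max(0.0, 1.0 - d / h) for d in dists]
    classes = sorted(set(y))

    def score(c):
        wc_total = sum(w for w, yi in zip(weights, y) if yi == c) + 1e-9
        s = math.log((wc_total + 1) / (sum(weights) + len(classes)))  # prior
        for j, q in enumerate(query):
            match = sum(w for w, x, yi in zip(weights, X, y)
                        if yi == c and x[j] == q)
            s += math.log((match + 1) / (wc_total + 2))  # smoothed likelihood
        return s

    return max(classes, key=score)
```

Because the weighting localizes the model, feature dependencies only need to hold approximately within the query's neighbourhood, which is how the method relaxes the global independence assumption.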