Not So Naive Bayes: Aggregating One-Dependence Estimators
TLDR
A new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers is presented, which delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter.
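The averaging scheme the summary describes (AODE) can be sketched as follows. This is a minimal illustrative sketch under assumed conventions: the training-data layout, the Laplace smoothing, and the function name `aode_scores` are my own choices, not the paper's implementation, and the paper's minimum-frequency threshold for parent attributes is omitted.

```python
def aode_scores(train, x):
    """Score each class for test instance x by averaging one-dependence
    estimators: each attribute i in turn serves as the sole parent of
    all the others (hypothetical minimal sketch, no frequency threshold)."""
    classes = sorted({y for _, y in train})
    n = len(train)
    d = len(x)
    scores = {}
    for y in classes:
        total = 0.0
        for i in range(d):  # attribute i acts as the "super-parent"
            # joint estimate P(y, x_i)
            denom = sum(1 for xs, yy in train if yy == y and xs[i] == x[i])
            prob = denom / n
            for j in range(d):
                if j == i:
                    continue
                # conditional P(x_j | y, x_i) with Laplace smoothing
                num = sum(1 for xs, yy in train
                          if yy == y and xs[i] == x[i] and xs[j] == x[j])
                prob *= (num + 1) / (denom + 2)
            total += prob
        scores[y] = total / d  # average over the d one-dependence models
    return scores

# Toy usage with four two-attribute instances (illustrative data).
train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 1), 'b'), ((1, 0), 'b')]
scores = aode_scores(train, (0, 0))  # class 'a' receives the higher score
```

Because each one-dependence model needs only two- and three-way frequency counts, training is a single pass over the data, which is where the efficiency gain over structure-search methods like Super-Parent TAN comes from.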
MultiBoosting: A Technique for Combining Boosting and Wagging
TLDR
MultiBoosting is an extension to the highly successful AdaBoost technique for forming decision committees that is able to harness both AdaBoost's high bias and variance reduction and wagging's superior variance reduction.
Supervised Descriptive Rule Discovery: A Unifying Survey of Contrast Set, Emerging Pattern and Subgroup Mining
TLDR
It is shown that the various rule learning heuristics used in CSM, EPM and SD algorithms all aim at optimizing a trade-off between rule coverage and precision.
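One heuristic of the kind the survey unifies is weighted relative accuracy, which multiplies a rule's coverage by its precision gain over the class prior. The sketch below is illustrative; the function name and count-based argument layout are my assumptions, not the survey's notation.

```python
def wracc(n_cond, n_class_cond, n_class, n):
    """Weighted relative accuracy of a rule Cond -> Class
    (minimal sketch from frequency counts):
      coverage  = P(Cond)
      precision = P(Class | Cond)
      prior     = P(Class)
    WRAcc = coverage * (precision - prior), so a rule scores well only
    if it covers many examples AND beats the default class rate."""
    coverage = n_cond / n
    precision = n_class_cond / n_cond
    prior = n_class / n
    return coverage * (precision - prior)

# A rule covering 20 of 100 examples, 15 of them positive,
# against a class prior of 40/100:
score = wracc(20, 15, 40, 100)  # 0.2 * (0.75 - 0.4) = 0.07
```

Maximizing coverage alone favors trivial rules and maximizing precision alone favors rules covering a handful of examples; the product form makes the trade-off between the two explicit.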
Lazy Learning of Bayesian Rules
TLDR
This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR, which can be justified by a variant of Bayes' theorem that supports a weaker conditional attribute independence assumption than is required by naive Bayes.
Encyclopedia of Machine Learning
TLDR
The style of the entries in the Encyclopedia of Machine Learning is expository and tutorial, making the book a practical resource for machine learning experts, as well as professionals in other fields who need to access this vital information but may not have the time to work their way through an entire text on their topic of interest.
InceptionTime: Finding AlexNet for Time Series Classification
TLDR
An important step towards finding the AlexNet network for TSC is taken by presenting InceptionTime---an ensemble of deep Convolutional Neural Network models, inspired by the Inception-v4 architecture---which outperforms HIVE-COTE in both accuracy and scalability.
Discovering Significant Patterns
TLDR
This paper proposes techniques to overcome the extreme risk of type-1 error by applying well-established statistical practices, which allow the user to enforce a strict upper limit on the risk of experimentwise error.
Discretization for naive-Bayes learning: managing discretization bias and variance
TLDR
Properly managing discretization bias and variance can effectively reduce naive-Bayes classification error by adjusting the number of intervals and the number of training instances contained in each interval.
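The interval/instance trade-off that summary refers to can be illustrated with equal-frequency discretization: fixing the number of training instances per interval directly controls the bias-variance balance (few wide intervals mean more discretization bias; many sparse intervals mean higher variance in the per-interval probability estimates). The function names and cut-point convention below are illustrative assumptions, not the paper's specific method.

```python
def equal_frequency_cuts(values, instances_per_bin):
    """Illustrative equal-frequency discretization: return cut points so
    that roughly `instances_per_bin` training values fall in each interval.
    Larger instances_per_bin -> fewer, wider intervals (more bias, less
    variance); smaller -> more, sparser intervals (less bias, more variance)."""
    ordered = sorted(values)
    return [ordered[k] for k in range(instances_per_bin, len(ordered),
                                      instances_per_bin)]

def discretize(v, cuts):
    """Map a numeric value to the index of its interval."""
    for i, c in enumerate(cuts):
        if v < c:
            return i
    return len(cuts)

# Ten training values split into intervals of five instances each:
cuts = equal_frequency_cuts(range(1, 11), 5)  # one cut point at 6
```

With the intervals fixed, naive Bayes then estimates P(interval | class) from the per-interval class counts in the usual way.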
Advances in Knowledge Discovery and Data Mining
TLDR
This paper proposes strategies for estimating performance of a classifier using as little labeling resource as possible and shows that these strategies can reduce the variance in estimation of classifier accuracy by a significant amount compared to simple random sampling.
OPUS: An Efficient Admissible Algorithm for Unordered Search
TLDR
The use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated.