Drug Design by Machine Learning: Support Vector Machines for Pharmaceutical Data Analysis
The Generalized FITC Approximation
An efficient generalization of the sparse pseudo-input Gaussian process model developed by Snelson and Ghahramani is presented, applying it to binary classification problems and resulting in a numerically stable algorithm with O(NM²) training complexity.
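The O(NM²) cost quoted above comes from the low-rank kernel algebra common to sparse pseudo-input (FITC-style) GPs. A minimal sketch of that algebra follows; it is not the paper's algorithm, and all names (`rbf`, `X`, `Z`, `M`) are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
N, M = 500, 20                           # N data points, M << N pseudo-inputs
X = rng.normal(size=(N, 1))
Z = X[rng.choice(N, M, replace=False)]   # pseudo-inputs (inducing points)

Kmm = rbf(Z, Z) + 1e-6 * np.eye(M)       # M x M block (jitter for stability)
Knm = rbf(X, Z)                          # N x M cross-covariance
L = np.linalg.cholesky(Kmm)              # O(M^3)
V = np.linalg.solve(L, Knm.T)            # O(NM^2): the dominant training cost
q_diag = (V ** 2).sum(axis=0)            # diag of Knm Kmm^{-1} Kmn (Nystrom)
```

Because every expensive step involves only N-by-M or M-by-M matrices, training scales linearly in N for fixed M, which is the point of the pseudo-input construction.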
Performance Degradation in Boosting
It is concluded that, if the strength of the underlying learner approaches the identified strength levels, it is possible to avoid performance degradation and achieve high productivity in boosting by weakening the learner prior to boosting.
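"Weakening the learner prior to boosting" can be illustrated with a hypothetical sketch (not the paper's procedure): the base learner below is a decision stump whose threshold is restricted to a coarse quantile grid, a crude way of limiting its strength before AdaBoost combines it. All names are assumptions.

```python
import numpy as np

def train_stump(X, y, w, n_thresh=4):
    # Weakened stump: thresholds come only from a coarse quantile grid.
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, n_thresh)):
            for s in (+1.0, -1.0):
                pred = s * np.sign(X[:, j] - t)
                pred[pred == 0] = s
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best[1:]

def adaboost(X, y, rounds=15):
    # Standard AdaBoost reweighting over the weakened stump above.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        j, t, s = train_stump(X, y, w)
        pred = s * np.sign(X[:, j] - t)
        pred[pred == 0] = s
        err = max(np.sum(w * (pred != y)), 1e-12)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    # Weighted vote of all stumps.
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        p = s * np.sign(X[:, j] - t)
        p[p == 0] = s
        score += alpha * p
    return np.sign(score)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)   # label depends only on feature 0
ens = adaboost(X, y)
acc = float(np.mean(predict(ens, X) == y))
```

Coarsening the threshold grid (smaller `n_thresh`) weakens each stump; the abstract's claim is that choosing the right amount of weakening avoids the degradation that overly strong base learners can cause.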
Cross-validation for binary classification by real-valued functions: theoretical analysis
This paper devises new holdout and cross-validation estimators for the case where real-valued functions are used as classifiers, and theoretically analyses their accuracy.
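The setting can be made concrete with an illustrative sketch (these are not the paper's estimators): a real-valued function classifies by the sign of its output, and holdout or k-fold cross-validation estimates its 0/1 error. All names are assumptions.

```python
import numpy as np

def holdout_error(f, X, y):
    # y in {-1, +1}; the real-valued f classifies via its sign.
    return float(np.mean(np.sign(f(X)) != y))

def kfold_error(trainer, X, y, k=5):
    # Average holdout error over k folds; trainer returns a real-valued f.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        f = trainer(X[mask], y[mask])
        errs.append(holdout_error(f, X[fold], y[fold]))
    return float(np.mean(errs))

def lsq_trainer(Xtr, ytr):
    # Least-squares fit of a linear real-valued function (with bias term).
    A = np.hstack([Xtr, np.ones((len(ytr), 1))])
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
err = kfold_error(lsq_trainer, X, y, k=5)
```

The theoretical question the paper addresses is how accurately such estimates track the true error when the classifier is built from a real-valued function rather than a binary one.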
Machine Learning for First-Order Theorem Proving
- James P. Bridge, S. Holden, Lawrence Charles Paulson
- Computer Science, Journal of Automated Reasoning
- 1 August 2014
The aim was to demonstrate that sufficient information is available from simple feature measurements of a conjecture and axioms to determine a good choice of heuristic, and that the choice process can be automatically learned.
On the power of polynomial discriminators and radial basis function networks
This paper examines the representational and expressive power of two types of linearly weighted neural network: the polynomial discriminators (PDFs) and the radial basis function networks (RBFNs), and obtains bounds on the VC dimensions of RBFNs with certain standard basis functions.
Support Vector Machines for ADME Property Classification
The SVM is shown to be competitive with techniques representing a state-of-the-art on three challenging pharmaceutical classification tasks and demonstrates good potential for further use in this area of drug discovery.
Robust Regression with Twinned Gaussian Processes
A Gaussian process (GP) framework for robust inference is presented in which a GP prior on the mixing weights of a two-component noise model augments the standard process over latent function values; it yields more confident predictions on benchmark problems than classical heavy-tailed models and exhibits improved stability for data with clustered corruptions.
PAC-like upper bounds for the sample complexity of leave-one-out cross-validation
- S. Holden
- Computer Science, COLT '96
When designing a pattern classifier it is often the case that we have available a supervised learning technique and a collection of training data, and we would like to gain some idea of what the…
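The estimator those sample-complexity bounds concern can be sketched directly (names are illustrative, not the paper's notation): leave-one-out cross-validation retrains on all but one example, tests on the held-out one, and averages the n resulting 0/1 losses.

```python
import numpy as np

def loo_error(trainer, X, y):
    # Leave-one-out estimate of the 0/1 error; the learner is retrained n times.
    n = len(y)
    mistakes = 0
    for i in range(n):
        mask = np.arange(n) != i
        f = trainer(X[mask], y[mask])
        mistakes += int(np.sign(f(X[i:i + 1]))[0] != y[i])
    return mistakes / n

def nearest_mean_trainer(Xtr, ytr):
    # Real-valued score: distance to the negative mean minus distance to the
    # positive mean, so the sign picks the nearer class mean.
    mu_p = Xtr[ytr > 0].mean(axis=0)
    mu_n = Xtr[ytr < 0].mean(axis=0)
    return lambda Xq: (np.linalg.norm(Xq - mu_n, axis=1)
                       - np.linalg.norm(Xq - mu_p, axis=1))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.concatenate([-np.ones(30), np.ones(30)])
err = loo_error(nearest_mean_trainer, X, y)
```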
Quantifying Generalization in Linearly Weighted Neural Networks
The Vapnik-Chervonenkis dimension of certain types of linearly weighted neural networks is investigated, the "probably approximately correct" learning framework is described, and the importance of the Vapnik-Chervonenkis dimension is illustrated.