Publications
Theory Refinement on Bayesian Networks
TLDR
Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance.
  • Citations: 730 · Influence: 68 · PDF available
Operations for Learning with Graphical Models
TLDR
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective.
  • Citations: 635 · Influence: 34 · PDF available
Machine Invention of First Order Predicates by Inverting Resolution
TLDR
We present a mechanism for automatically inventing and generalising first-order Horn clause predicates using incremental induction to augment incomplete clausal theories.
  • Citations: 590 · Influence: 31 · PDF available
A Guide to the Literature on Learning Probabilistic Networks from Data
TLDR
This literature review discusses different methods under the general rubric of learning Bayesian networks from data.
  • Citations: 539 · Influence: 27 · PDF available
Learning classification trees
TLDR
This paper outlines how a tree learning algorithm can be derived using Bayesian statistics (a toy sketch of Bayesian leaf scoring follows this entry).
  • Citations: 420 · Influence: 27 · PDF available
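A common way to give tree learning a Bayesian derivation is to score the class counts at each leaf by their marginal likelihood under a Dirichlet prior and compare candidate splits by total score. The sketch below illustrates only that generic idea, not the paper's exact algorithm; the symmetric prior parameter alpha and the example counts are assumptions made up for the demo.

    # Illustrative sketch: Bayesian score of class counts at a tree leaf under a
    # symmetric Dirichlet(alpha) prior. Generic idea only, not the paper's method.
    import numpy as np
    from scipy.special import gammaln

    def leaf_log_marginal(counts, alpha=1.0):
        """log marginal likelihood of a leaf's class counts under Dirichlet(alpha)."""
        counts = np.asarray(counts, dtype=float)
        k = counts.size
        return (gammaln(k * alpha) - gammaln(counts.sum() + k * alpha)
                + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

    # Compare "no split" against a candidate binary split: higher total score wins.
    parent = np.array([30, 10])                      # class counts before splitting
    left, right = np.array([28, 2]), np.array([2, 8])
    no_split = leaf_log_marginal(parent)
    split = leaf_log_marginal(left) + leaf_log_marginal(right)
    print(no_split, split)   # the split scores higher for these clearly separable counts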
Variational Extensions to EM and Multinomial PCA
TLDR
This paper reviews the EM algorithm and its variational extensions and applies them to multinomial PCA (a toy EM sketch follows this entry).
  • Citations: 193 · Influence: 24 · PDF available
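As a small illustration of the EM machinery this paper builds on (not its variational extension or the full multinomial PCA model), here is plain-NumPy EM for a toy mixture of multinomials over word-count rows; the component count, smoothing constant, and demo data are assumptions for the example.

    # Toy EM for a mixture of multinomials over word-count rows of X (N x V).
    # Illustrates the E-step/M-step cycle only; not the paper's variational method.
    import numpy as np

    def em_mixture_of_multinomials(X, K=2, iters=50, seed=0, eps=1e-9):
        rng = np.random.default_rng(seed)
        N, V = X.shape
        pi = np.full(K, 1.0 / K)                      # mixing weights
        theta = rng.dirichlet(np.ones(V), size=K)     # per-component word distributions
        for _ in range(iters):
            # E-step: r[n, k] proportional to pi[k] * prod_v theta[k, v]^X[n, v]
            log_r = np.log(pi + eps) + X @ np.log(theta + eps).T
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate mixing weights and word distributions
            pi = r.mean(axis=0)
            theta = r.T @ X + eps
            theta /= theta.sum(axis=1, keepdims=True)
        return pi, theta, r

    # Tiny demo: two obvious "topics" in a 4-word vocabulary.
    X = np.array([[9, 8, 0, 1], [7, 9, 1, 0], [0, 1, 9, 8], [1, 0, 8, 9]], dtype=float)
    pi, theta, r = em_mixture_of_multinomials(X, K=2)
    print(np.round(r, 2))   # documents should separate cleanly into the two components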
Improving LDA topic models for microblogs via tweet pooling and automatic labeling
TLDR
We investigate methods to improve topics learned from Twitter content without modifying the basic machinery of LDA; we achieve this through various pooling schemes that aggregate tweets in a data preprocessing step (a toy pooling sketch follows this entry).
  • Citations: 339 · Influence: 22 · PDF available
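The TLDR describes pooling as a preprocessing step that merges short tweets into longer pseudo-documents before running an unmodified LDA. Below is a minimal sketch of one such scheme, pooling by hashtag; the paper evaluates several schemes, and the function name and fallback bucket here are assumptions for the example. The pooled documents can then be passed to any off-the-shelf LDA implementation.

    # Minimal sketch of hashtag pooling: aggregate tweets that share a hashtag into
    # one pseudo-document before running standard, unmodified LDA on the pooled corpus.
    from collections import defaultdict
    import re

    def pool_by_hashtag(tweets):
        """tweets: list of raw tweet strings -> dict {pool_key: concatenated text}."""
        pools = defaultdict(list)
        for text in tweets:
            tags = re.findall(r"#(\w+)", text.lower())
            if tags:
                for tag in tags:                 # a tweet may join several pools
                    pools[tag].append(text)
            else:
                pools["_untagged"].append(text)  # fallback bucket for untagged tweets
        return {key: " ".join(texts) for key, texts in pools.items()}

    tweets = [
        "New #LDA tutorial is up",
        "Pooling tweets helps #LDA on short text",
        "Coffee first, then code",
    ]
    pooled = pool_by_hashtag(tweets)
    print(list(pooled.keys()))   # ['lda', '_untagged'] -> two pseudo-documents for LDA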
Unsupervised Object Discovery: A Comparison
TLDR
The goal of this paper is to evaluate and compare models and methods for learning to recognize basic entities in images in an unsupervised setting.
  • Citations: 215 · Influence: 19 · PDF available
Bayesian Back-Propagation
TLDR
This paper presents approximate Bayesian methods for the statistical components of back-propagation: choosing a cost function and penalty term (interpreted as a form of prior probability), pruning insignificant weights, estimating the uncertainty of weights, predicting for new patterns ("out-of-sample"), estimating the generalization error, comparing different network structures, and handling missing values in the training patterns. (A toy sketch of the penalty-as-prior idea follows this entry.)
  • Citations: 349 · Influence: 18 · PDF available
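One component the TLDR lists is interpreting the penalty term as a prior probability over weights. The fragment below is a hedged, toy illustration of that single idea: MAP training of a linear model with an L2 penalty, which corresponds to a zero-mean Gaussian prior on the weights. It is not the paper's full Bayesian treatment, and the learning rate and prior precision are arbitrary values chosen for the demo.

    # Toy illustration: an L2 weight penalty is the negative log of a zero-mean
    # Gaussian prior, so penalised gradient descent computes a MAP weight estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)

    w, lam, lr = np.zeros(3), 1.0, 0.005
    for _ in range(500):
        err = X @ w - y
        # gradient of: 0.5*||Xw - y||^2  (negative log-likelihood, unit noise variance)
        #            + 0.5*lam*||w||^2   (negative log of a Gaussian prior on w)
        grad = X.T @ err + lam * w
        w -= lr * grad
    print(np.round(w, 2))   # MAP weights, shrunk slightly toward zero by the prior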
A theory of learning classification rules
TLDR
The main contributions of this thesis are a Bayesian theory of learning classification rules, the unification and comparison of this theory with some previous theories of learning, and two extensive applications of the theory to the problems of learning class probability trees and bounding error when learning logical rules.
  • Citations: 144 · Influence: 16