- Jesse Davis, Mark Goadrich
- ICML
- 2006

Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a…
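
The intuition behind the abstract's claim can be illustrated with a small sketch. This is not from the paper itself; the counts below are hypothetical and chosen only to show why the ROC x-axis (false positive rate) hides poor performance on a skewed dataset while precision exposes it.

```python
# Illustrative sketch (not from the cited paper): precision vs. false
# positive rate on a highly skewed dataset.

def rates(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)   # = true positive rate (ROC y-axis)
    fpr = fp / (fp + tn)      # false positive rate (ROC x-axis)
    return precision, recall, fpr

# Hypothetical skewed data: 100 positives vs. 100,000 negatives.
# A classifier finds 80 positives but also flags 800 negatives.
precision, recall, fpr = rates(tp=80, fp=800, fn=20, tn=99_200)
print(f"recall={recall:.2f}  fpr={fpr:.3f}  precision={precision:.3f}")
# fpr = 0.008 looks excellent in ROC space, yet precision ~ 0.09 shows
# that roughly 9 out of 10 flagged instances are false positives.
```

The negatives dominate the FPR denominator, so ROC space can look nearly perfect while most predicted positives are wrong; PR space makes this visible.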

- Stefan Schoenmackers, Jesse Davis, Oren Etzioni, Daniel S. Weld
- EMNLP
- 2010

Even the entire Web corpus does not explicitly answer all questions, yet inference can uncover many implicit answers. But where do inference rules come from? This paper investigates the problem of learning inference rules from Web text in an unsupervised, domain-independent manner. The SHERLOCK system, described herein, is a first-order learner that…

- Jesse Davis, Pedro M. Domingos
- ICML
- 2009

Standard inductive learning requires that training and test instances come from the same distribution. Transfer learning seeks to remove this restriction. In shallow transfer, test instances are from the same domain, but have a different distribution. In deep transfer, test instances are from a different domain entirely (i.e., described by different…

- Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, Luc De Raedt
- IJCAI
- 2011

Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this…
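
The core idea of lifted inference described above can be sketched with a toy example. This is not the algorithm from the paper: it only shows how grouping exchangeable variables lets inference sum over counts rather than enumerate joint assignments.

```python
# Hypothetical toy example (not the cited paper's algorithm): for n
# exchangeable Boolean variables, each true independently with
# probability p, a lifted computation sums over the n possible counts
# of true variables instead of the 2**n propositional assignments.
from math import comb

def prob_at_least_one(n, p):
    total = 0.0
    for k in range(1, n + 1):  # k = number of variables that are true
        # comb(n, k) assignments share this count; treat them as one group.
        total += comb(n, k) * p**k * (1 - p)**(n - k)
    return total

# Propositional inference over 30 variables would enumerate 2**30
# states; the lifted sum needs only 30 terms.
print(prob_at_least_one(30, 0.1))  # equals 1 - 0.9**30
```

Because the variables are indistinguishable, each group of assignments with the same count contributes identically, which is exactly the repeated computation that lifted algorithms avoid.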

- Jan Van Haaren, Jesse Davis
- AAAI
- 2012

The structure of a Markov network is typically learned in one of two ways. The first approach is to treat this task as a global search problem. However, these algorithms are slow as they require running the expensive operation of weight (i.e., parameter) learning many times. The second approach involves learning a set of local models and then combining them…

One of the most popular techniques for multi-relational data mining is Inductive Logic Programming (ILP). Given a set of positive and negative examples, an ILP system ideally finds a logical description of the underlying data model that discriminates the positive examples from the negative examples. However, in multi-relational data mining, one often has to…
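
The coverage test at the heart of the ILP setting described above can be sketched in a few lines. The facts and the candidate clause below are hypothetical stand-ins, not drawn from any cited system: the point is only that a clause is scored by the positive examples it covers and the negative examples it wrongly covers.

```python
# Minimal hypothetical sketch of an ILP coverage test. Background
# knowledge is a set of ground facts; a candidate clause covers an
# example if its body literals are all satisfied by the facts.

facts = {("parent", "ann", "bob"), ("parent", "bob", "carl"),
         ("male", "bob"), ("male", "carl")}

def covers(example):
    # Candidate clause: father(X, Y) :- parent(X, Y), male(X).
    x, y = example
    return ("parent", x, y) in facts and ("male", x) in facts

positives = [("bob", "carl")]
negatives = [("ann", "bob"), ("carl", "bob")]

covered_pos = sum(covers(e) for e in positives)
covered_neg = sum(covers(e) for e in negatives)
print(covered_pos, covered_neg)  # prints: 1 0
```

An ILP search iterates over candidate clauses like this one, preferring clauses that cover many positives and few negatives.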

- Nima Taghipour, Daan Fierens, Jesse Davis, Hendrik Blockeel
- AISTATS
- 2012

Lifted probabilistic inference algorithms exploit regularities in the structure of graphical models to perform inference more efficiently. More specifically, they identify groups of interchangeable variables and perform inference once for each group, as opposed to once for each variable. The groups are defined by means of constraints, so the flexibility of…

- Jesse Davis, Pedro M. Domingos
- ICML
- 2010

The structure of a Markov network is typically learned using top-down search. At each step, the search specializes a feature by conjoining it to the variable or feature that most improves the score. This is inefficient, testing many feature variations with no support in the data, and highly prone to local optima. We propose bottom-up search as an…

Statistical relational learning (SRL) algorithms learn statistical models from relational data, such as that stored in a relational database. We previously introduced view learning for SRL, in which the view of a relational database can be automatically modified, yielding more accurate statistical models. The present paper presents SAYU-VISTA, an algorithm…