- Jesse Davis, Mark Goadrich
- ICML
- 2006

Receiver Operating Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a…
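The contrast the abstract draws can be made concrete with a minimal sketch (not taken from the paper): for a fixed classifier operating point, the ROC coordinates (FPR, TPR) are unchanged by class skew, while precision degrades as negatives dominate. The counts below are illustrative assumptions, not data from the paper.

```python
# Sketch: same per-class rates (TPR = 0.8, FPR = 0.1) under two skews.
# ROC points coincide; the PR point does not -- the intuition behind
# preferring PR curves on highly skewed data.

def roc_point(tp, fp, fn, tn):
    tpr = tp / (tp + fn)  # true positive rate (= recall)
    fpr = fp / (fp + tn)  # false positive rate
    return fpr, tpr

def pr_point(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return recall, precision

balanced = dict(tp=80, fn=20, fp=10, tn=90)    # 100 pos / 100 neg
skewed   = dict(tp=80, fn=20, fp=100, tn=900)  # 100 pos / 1000 neg

print(roc_point(**balanced), roc_point(**skewed))   # identical ROC points
print(pr_point(80, 10, 20), pr_point(80, 100, 20))  # precision 0.89 vs 0.44
```

Both confusion matrices sit at the same ROC point (0.1, 0.8), yet precision falls from 80/90 to 80/180 once negatives outnumber positives ten to one.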

- Stefan Schoenmackers, Jesse Davis, Oren Etzioni, Daniel S. Weld
- EMNLP
- 2010

Even the entire Web corpus does not explicitly answer all questions, yet inference can uncover many implicit answers. But where do inference rules come from? This paper investigates the problem of learning inference rules from Web text in an unsupervised, domain-independent manner. The SHERLOCK system, described herein, is a first-order learner that…

- Jesse Davis, Pedro M. Domingos
- ICML
- 2009

Standard inductive learning requires that training and test instances come from the same distribution. Transfer learning seeks to remove this restriction. In shallow transfer, test instances are from the same domain, but have a different distribution. In deep transfer, test instances are from a different domain entirely (i.e., described by different…

One of the most popular techniques for multi-relational data mining is Inductive Logic Programming (ILP). Given a set of positive and negative examples, an ILP system ideally finds a logical description of the underlying data model that discriminates the positive examples from the negative examples. However, in multi-relational data mining, one often has to…

- Jesse Davis, Pedro M. Domingos
- ICML
- 2010

The structure of a Markov network is typically learned using top-down search. At each step, the search specializes a feature by conjoining it to the variable or feature that most improves the score. This is inefficient, testing many feature variations with no support in the data, and highly prone to local optima. We propose bottom-up search as an…

- Jan Van Haaren, Jesse Davis
- AAAI
- 2012

The structure of a Markov network is typically learned in one of two ways. The first approach is to treat this task as a global search problem. However, these algorithms are slow as they require running the expensive operation of weight (i.e., parameter) learning many times. The second approach involves learning a set of local models and then combining them…

- Kendrick Boyd, Jesse Davis, David Page, Vítor Santos Costa
- ICML
- 2012

Precision-recall (PR) curves and the areas under them are widely used to summarize machine learning results, especially for data sets exhibiting class skew. They are often used analogously to ROC curves and the area under ROC curves. It is known that PR curves vary as class skew changes. What was not recognized before this paper is that there is a region of…
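The region the abstract alludes to can be sketched from first principles: with P positives and N negatives, achieving recall r requires TP = rP true positives, and since FP can be at most N, precision is bounded below by rP / (rP + N). A minimal sketch of that boundary, written in terms of the skew pi = P / (P + N) (an illustration derived from these counting constraints, not code from the paper):

```python
# Minimum achievable precision at recall r for class skew pi = P/(P+N).
# Follows from TP = r*P and FP <= N, so precision >= r*P / (r*P + N).

def min_precision(recall, pi):
    if recall == 0:
        return 0.0  # no positives retrieved; boundary starts at the origin
    return (pi * recall) / (pi * recall + (1 - pi))

# The unachievable region grows as skew increases (pi shrinks).
for pi in (0.5, 0.1, 0.01):
    boundary = [round(min_precision(r / 10, pi), 3) for r in range(11)]
    print(f"pi={pi}: {boundary}")
```

At recall 1.0 the bound equals pi itself, so for a 1%-positive data set no PR point below precision 0.01 at full recall is even possible; points under this curve cannot be attained by any classifier.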

- Jan Struyf, Jesse Davis, David Page
- ECML
- 2006

Greedy machine learning algorithms suffer from shortsightedness, potentially returning suboptimal models due to limited exploration of the search space. Greedy search misses useful refinements that yield a significant gain only in conjunction with other conditions. Relational learners, such as inductive logic programming algorithms, are especially…

- Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, Luc De Raedt
- IJCAI
- 2011

Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this…