We introduce a new segmentation evaluation measure, WinPR, which resolves some of the limitations of WindowDiff. WinPR distinguishes between false positive and false negative errors; produces more intuitive measures, such as precision, recall, and F-measure; is insensitive to window size, which allows us to customize near-miss sensitivity; and is based on…
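The core idea can be sketched as follows: slide a window over the reference and hypothesized boundary sequences and count window-level true positives, false positives, and false negatives, from which precision, recall, and F-measure follow. This is a minimal illustration of the window-based counting idea, not the paper's exact formulation; the function name `winpr` and the padding scheme are assumptions for this sketch.

```python
def winpr(ref, hyp, k=3):
    """Window-based precision/recall for segmentation (a sketch of the
    WinPR idea). `ref` and `hyp` are equal-length 0/1 boundary arrays;
    `k` is the window size."""
    assert len(ref) == len(hyp)
    # Pad so boundaries near the edges fall inside a full set of windows.
    pad = [0] * (k - 1)
    r = pad + list(ref) + pad
    h = pad + list(hyp) + pad
    tp = fp = fn = 0
    for i in range(len(r) - k + 1):
        rc = sum(r[i:i + k])   # reference boundaries in this window
        hc = sum(h[i:i + k])   # hypothesized boundaries in this window
        tp += min(rc, hc)      # agreement within the window
        fp += max(0, hc - rc)  # extra hypothesized boundaries
        fn += max(0, rc - hc)  # missed reference boundaries
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because counting happens per window, a near miss (a boundary placed one position off) still earns partial credit in the windows where the two segmentations overlap, instead of being penalized as both a full miss and a full false alarm.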
In internet advertising, negative key phrases are used to prevent an advertisement from being displayed to a non-target audience. We describe a method for automatically identifying negative key phrases. We use Wikipedia as our sense inventory and as an annotated corpus from which we create context vectors and determine negative phrases, which correlate…
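The selection step can be illustrated with a toy sketch: build a bag-of-words context vector for each candidate phrase, compare it to the vector of the advertiser's target sense, and flag candidates whose contexts barely overlap. The function names, the Jaccard overlap measure, and the threshold are illustrative assumptions; the paper derives its vectors from Wikipedia and may use a different correlation measure.

```python
from collections import Counter

def context_vector(snippets):
    """Bag-of-words context vector built from text snippets mentioning a
    phrase (in the paper these come from Wikipedia; here they are toy data)."""
    words = Counter()
    for s in snippets:
        words.update(s.lower().split())
    return words

def negative_phrases(target_vec, candidates, threshold=0.2):
    """Flag candidate key phrases whose contexts overlap little with the
    target sense -- a sketch of the selection step using Jaccard overlap."""
    negatives = []
    for phrase, vec in candidates.items():
        shared = set(target_vec) & set(vec)
        union = set(target_vec) | set(vec)
        jaccard = len(shared) / len(union) if union else 0.0
        if jaccard < threshold:
            negatives.append(phrase)
    return negatives
```

For an ad targeting the programming sense of "java", a candidate like "java coffee" shares almost no context words with the target sense and would be returned as a negative key phrase, while "java code" would not.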
We submitted runs from two different systems for the update summary task at TAC 2009. The first system refined its use of Roget's Thesaurus, moving beyond 2008's semantic relatedness to compute an entropy-based uniqueness measure, with improved results in summary construction. The other system, our first use of deeper semantic knowledge, represents…
To improve information retrieval from films, we attempt to segment movies into scenes using their subtitles. Film subtitles differ significantly in nature from other texts; we describe some of the challenges of working with them. We test several modifications to the TextTiling algorithm in order to obtain an effective segmentation.
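TextTiling's core block-comparison step can be sketched briefly: score each gap between sentences by the lexical similarity of the blocks on either side, then place boundaries at deep valleys in the similarity curve. The function names, block size, and fixed depth threshold are simplifications for this sketch; the original algorithm smooths the curve and sets the cutoff from the depth-score distribution.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def texttile_boundaries(sentences, block=2, threshold=0.3):
    """Sketch of TextTiling's block comparison: low lexical cohesion
    between adjacent blocks suggests a topic (scene) boundary."""
    bags = [Counter(s.lower().split()) for s in sentences]
    gaps = []
    for i in range(1, len(bags)):
        left = sum(bags[max(0, i - block):i], Counter())
        right = sum(bags[i:i + block], Counter())
        gaps.append(cosine(left, right))
    boundaries = []
    for i, g in enumerate(gaps):
        # Depth score: how far this valley dips below its flanking peaks.
        lpeak = max(gaps[:i + 1])
        rpeak = max(gaps[i:])
        if (lpeak - g) + (rpeak - g) > threshold:
            boundaries.append(i + 1)  # boundary before sentence i + 1
    return boundaries
```

Subtitles stress this method in ways ordinary prose does not: utterances are short, dialogue repeats few content words across a scene, and timing gaps carry information the raw text lacks, which motivates the modifications explored in the paper.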
We present a method for automatic extraction of frames from a dependency graph. Our method uses machine learning applied to a dependency tree to assign frames and frame elements. The system is evaluated by cross-validation on FrameNet sentences, and also on the test data from the SemEval 2007 task 19. Our system is intended for use in natural…