Rahul Katragadda

Readability of a summary is usually graded manually on five aspects of readability: grammaticality, coherence and structure, focus, referential clarity, and non-redundancy. In the context of automated metrics for the evaluation of summary quality, content evaluations have been presented through the last decade and continue to evolve; however, a careful …
In this paper, we describe a sentence-position-based summarizer built on a sentence position policy created from the evaluation testbed of recent summarization tasks at the Document Understanding Conferences (DUC). We show that the summarizer thus built is able to outperform most systems participating in task-focused summarization evaluations …
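The snippet does not spell out the ranking procedure, so the following is only a minimal sketch of how a purely position-based extractive summarizer might work. The linear position_weight decay and the word budget are illustrative assumptions, not the DUC-derived policy the paper describes.

```python
# Minimal sketch of a sentence-position-based extractive summarizer.
# The position weights below are illustrative placeholders, not the
# actual policy learned from DUC evaluation data.

from typing import List


def position_weight(index: int, total: int) -> float:
    # Assumed policy: earlier sentences get higher weight, reflecting
    # the common observation that news leads carry the most content.
    return 1.0 - index / max(total, 1)


def summarize(sentences: List[str], max_words: int = 100) -> List[str]:
    # Rank sentence indices by position weight alone.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: position_weight(i, len(sentences)),
        reverse=True,
    )
    chosen, words = [], 0
    for i in ranked:
        n = len(sentences[i].split())
        if words + n > max_words:
            break
        chosen.append(i)
        words += n
    # Restore original document order for readability.
    return [sentences[i] for i in sorted(chosen)]


if __name__ == "__main__":
    doc = [
        "The committee approved the budget on Monday.",
        "Debate over the measure lasted several hours.",
        "Opponents argued the plan favors large districts.",
        "A final vote is expected next month.",
    ]
    print(summarize(doc, max_words=20))
```

In a real position policy the weights would be estimated from the evaluation testbed rather than fixed by a linear decay, but the selection loop would look much the same.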
This paper describes our participation at TAC 2008 in all three tracks. For the Summarization Track we introduced two major features: first, a feature based on the information lost if a particular sentence is not picked; second, a language modeling extension that boosts novel terms and penalizes stale terms. During our post-TAC analysis we observed that a …
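The exact formulation of the novelty boost is not given in the snippet. The sketch below shows one plausible reading, where candidate sentences are scored under a smoothed background unigram model with an additive bonus for terms not yet in the summary and a penalty for terms already covered; all constants and helper names are hypothetical.

```python
# Illustrative sketch of boosting novel terms and penalizing stale ones
# when scoring candidates against an evolving summary. The scoring form
# and constants are assumptions, not the paper's actual extension.

import math
from collections import Counter
from typing import Set


def score_sentence(sentence: str,
                   background: Counter,
                   summary_terms: Set[str],
                   novelty_boost: float = 1.0,
                   stale_penalty: float = 1.0) -> float:
    # Average per-term log score under an add-one smoothed unigram
    # model, adjusted in log space for novelty vs. staleness.
    total = sum(background.values())
    vocab = len(background)
    terms = [t.strip(".,!?") for t in sentence.lower().split()]
    score = 0.0
    for term in terms:
        p = (background[term] + 1) / (total + vocab + 1)
        adjust = -stale_penalty if term in summary_terms else novelty_boost
        score += math.log(p) + adjust
    return score / max(len(terms), 1)


if __name__ == "__main__":
    background = Counter("the storm moved north the storm weakened".split())
    summary_so_far = {"storm", "weakened"}
    for c in ["The storm weakened overnight.",
              "Flooding closed several coastal roads."]:
        print(round(score_sentence(c, background, summary_so_far), 3), c)
```

Under this reading, the second candidate scores higher because none of its terms are already in the summary, which is the intended effect of rewarding novel and demoting stale vocabulary.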
Large scientific knowledge bases (KBs) are bound to contain inconsistencies and underspecified knowledge. Inconsistencies are inherent because the approach to modeling certain phenomena evolves over time, and at any given time contradictory approaches to modeling a piece of domain knowledge may simultaneously exist in the KB. Underspecification is …
Automated evaluation is crucial in the context of automated text summaries, as it is for any of the language technologies. While the quality of a summary is determined by both its content and its form, throughout the literature there has been extensive study of the automatic and semi-automatic evaluation of the content of summaries, and …