Rahul Katragadda

Readability of a summary is usually graded manually on five aspects: grammaticality, coherence and structure, focus, referential clarity, and non-redundancy. In the context of automated metrics for evaluating summary quality, content evaluations have been presented over the last decade and continue to evolve; however, a careful …
In this paper, we describe a sentence-position-based summarizer built on a sentence position policy created from the evaluation testbed of recent summarization tasks at the Document Understanding Conferences (DUC). We show that the summarizer thus built is able to outperform most systems participating in task-focused summarization evaluations …
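As a rough illustration of the idea, the sketch below scores sentences purely by their position in the document and extracts the top-scoring ones under a word budget. The linear position weighting, the `summarize` helper, and the word budget are illustrative assumptions; the actual position policy in the paper is derived empirically from DUC evaluation data and is not reproduced here.

```python
# Minimal sketch of a sentence-position-based extractive summarizer.
# The position weights are illustrative placeholders, not the paper's learned policy.

def position_score(index: int, total: int) -> float:
    """Toy position policy: earlier sentences receive higher weight."""
    return 1.0 - (index / max(total, 1))

def summarize(sentences: list[str], word_budget: int = 100) -> list[str]:
    """Pick sentences in order of descending position score until the word budget is met."""
    ranked = sorted(enumerate(sentences),
                    key=lambda pair: position_score(pair[0], len(sentences)),
                    reverse=True)
    chosen, used = [], 0
    for index, sentence in ranked:
        words = len(sentence.split())
        if used + words > word_budget:
            continue
        chosen.append((index, sentence))
        used += words
    # Restore original document order for readability.
    return [sentence for _, sentence in sorted(chosen)]

if __name__ == "__main__":
    doc = ["First sentence of the document.",
           "A supporting detail follows.",
           "A concluding remark appears last."]
    print(summarize(doc, word_budget=10))
```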
Automated evaluation is crucial in the context of automated text summarization, as it is for any language technology. While the quality of a summary is determined by both its content and its form, the literature has extensively studied the automatic and semi-automatic evaluation of summary content, and …
Large scientific knowledge bases (KBs) are bound to contain inconsistencies and underspecified knowledge. Inconsistencies are inherent because the approach to modeling certain phenomena evolves over time, and at any given time, contradictory approaches to modeling a piece of domain knowledge may simultaneously exist in the KB. Underspecification is …
Multicast is a communication model in which a message is sent from a source to an arbitrary number of distinct destinations. Two main parameters used to evaluate multicast routing are the time it takes to deliver the message to all destinations and the traffic, i.e., the total number of links involved in the multicast process. It has been proved …
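To make the two metrics concrete, the sketch below computes them for a multicast tree given as parent pointers rooted at the source: delivery time is taken as the maximum hop count from the source to any destination, and traffic as the number of distinct links in the tree. The tree representation, node names, and the `multicast_metrics` helper are hypothetical illustrations, not the paper's routing algorithm.

```python
# Minimal sketch of the time and traffic metrics on a hypothetical multicast tree.

def multicast_metrics(parent: dict[str, str], source: str, destinations: set[str]):
    """Return (time, traffic): time is the maximum hop count from the source to any
    destination; traffic is the number of distinct links used by the multicast tree."""
    def hops(node: str) -> int:
        count = 0
        while node != source:
            node = parent[node]
            count += 1
        return count

    time = max(hops(d) for d in destinations)
    traffic = len(parent)  # one link per non-source node in the tree
    return time, traffic

if __name__ == "__main__":
    # Hypothetical multicast tree rooted at source "s".
    tree = {"a": "s", "b": "s", "c": "a", "d": "a"}
    print(multicast_metrics(tree, "s", {"b", "c", "d"}))  # -> (2, 4)
```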