Kenneth B. Haase

This paper describes the architecture, implementation and evaluation of NetSerf, a program for finding information archives on the Internet using natural language queries. NetSerf's query processor extracts structured, disambiguated representations from the queries. The query representations are matched to hand-coded representations of the archives …
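The abstract does not specify NetSerf's representation format or matching algorithm, so the following is only an illustrative sketch of the general idea it describes: comparing a structured, disambiguated query representation against hand-coded archive representations and ranking the archives by how well they match. The slot names, the example archives, and the overlap-counting score are assumptions made for the example.

```python
# Illustrative sketch only: slot names, archives, and the scoring scheme
# are assumptions, not NetSerf's actual representations or algorithm.

QUERY = {"topic": "photography", "medium": "image", "action": "find"}

ARCHIVES = [
    {"name": "fine-art image archive", "topic": "photography", "medium": "image"},
    {"name": "usenet FAQ archive", "topic": "usenet", "medium": "text"},
]

def match_score(query, archive):
    """Count how many disambiguated query slots the archive representation satisfies."""
    return sum(1 for slot, value in query.items() if archive.get(slot) == value)

def rank_archives(query, archives):
    """Order archives by how well their hand-coded representations match the query."""
    return sorted(archives, key=lambda a: match_score(query, a), reverse=True)

if __name__ == "__main__":
    for archive in rank_archives(QUERY, ARCHIVES):
        print(match_score(QUERY, archive), archive["name"])
```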
Within the next decade, the majority of data carried over telecommunications links is likely to be visual material. The biggest problem in delivering video and image services is that the technology for organizing, searching, and presenting images is still in its infancy. Consequently, we are developing tools for building and browsing multimedia databases …
This article argues for the growing importance of quality metadata and the equation of that quality with precision and semantic grounding. Such semantic grounding requires metadata that derives from intentional human intervention as well as mechanistic measurement of content media. In both cases, one chief problem in the automatic generation of semantic …
The transition of print media into a digital form allows the tailoring of news for different audiences. This thesis presents a new approach to tailoring called augmenting. Augmenting makes articles more informative and relevant to the reader. The PLUM system augments news on worldwide natural disasters that readers often find remote and irrelevant. Using …
This thesis considers the form and function of the visual communication of historical information in computer-based media. By applying new techniques derived from traditional graphic design and cinema, such as infinite zoom, translucency, and animation, the traditional timeline is transformed into a dynamic, three-dimensional framework for the interactive …
We describe a similarity calculation model called IFSM (Inherited Feature Similarity Measure) between objects (words/concepts) based on their common and distinctive features. We propose an implementation method for obtaining features based on abstracted triples extracted from a large text corpus utilizing taxonomical knowledge. This model represents an …
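The abstract describes similarity computed from common and distinctive features but does not give IFSM's actual formula. The sketch below uses a Tversky-style contrast ratio as a stand-in for that idea; the feature sets and the alpha/beta weights are illustrative assumptions rather than the paper's definition.

```python
# Hedged sketch: a Tversky-style ratio over common vs. distinctive features,
# used here as a stand-in for IFSM. Feature sets and weights are assumptions.

def feature_similarity(features_a, features_b, alpha=0.5, beta=0.5):
    """Similarity from shared vs. distinctive features of two concepts."""
    a, b = set(features_a), set(features_b)
    common = len(a & b)          # features the two concepts share
    only_a = len(a - b)          # features distinctive to the first concept
    only_b = len(b - a)          # features distinctive to the second concept
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0

# Toy feature sets, e.g. as might be derived from (subject, relation, object) triples.
dog = {"animal", "mammal", "barks", "pet"}
wolf = {"animal", "mammal", "howls", "wild"}
print(feature_similarity(dog, wolf))  # 0.5 with the default weights
```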