This paper presents the evaluation of a design and architecture for browsing and searching MPEG-7 images. Our approach is novel in that it exploits concept lattices for the representation and navigation of image content. Several concept lattices provide the foundation for the system (called IMAGE-SLEUTH), each representing a different search context, one for …
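The concept lattices underlying such a system can be illustrated with a minimal sketch. The toy image context below is hypothetical (it is not IMAGE-SLEUTH's data); the enumeration closes every attribute subset, which is exponential in general but adequate for small contexts.

```python
from itertools import combinations

# Hypothetical toy context: each image is described by the
# attributes it carries (not data from the paper).
context = {
    "img1": {"beach", "sunset"},
    "img2": {"beach", "palm"},
    "img3": {"sunset"},
}
ATTRS = set().union(*context.values())

def extent(attrs):
    """All objects possessing every attribute in attrs."""
    return {g for g, ms in context.items() if attrs <= ms}

def intent(objs):
    """All attributes shared by every object in objs."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(ATTRS)

def all_concepts():
    """Naively enumerate every formal concept (extent, intent)
    by closing each attribute subset and deduplicating extents."""
    seen, concepts = set(), []
    for r in range(len(ATTRS) + 1):
        for combo in combinations(sorted(ATTRS), r):
            ext = extent(set(combo))
            if frozenset(ext) not in seen:
                seen.add(frozenset(ext))
                concepts.append((ext, intent(ext)))
    return concepts
```

The lattice order is inclusion of extents; navigating the collection then amounts to moving between neighbouring concepts, with each lattice modelling one search context.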
This paper presents a Java-based hyperbolic-style browser designed to render RDF files as structured ontological maps. The program was motivated by the need to browse the content of a web-accessible ontology server, WEBKB-2. The ontology server contains descriptions of over 74,500 object types derived from the WORDNET 1.7 lexical database and can be …
SearchSleuth is a program developed to experiment with the automated local analysis of Web search using formal concept analysis. SearchSleuth extends a standard search interface to include a conceptual neighborhood centered on a formal concept derived from the initial query. This neighborhood of the concept derived from the search terms is decorated with …
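Deriving a formal concept from query terms uses the two FCA derivation operators: the query's extent (documents matching every term) and that extent's intent (terms shared by all those documents). A minimal sketch, with a hypothetical toy context rather than SearchSleuth's real data:

```python
# Hypothetical toy search context: each document is described
# by the query terms it matches (illustration only).
context = {
    "doc1": {"fca", "lattice", "search"},
    "doc2": {"fca", "search"},
    "doc3": {"lattice", "order"},
}

def extent(terms):
    """Documents matching every term (derivation on term sets)."""
    return {d for d, ts in context.items() if terms <= ts}

def intent(docs):
    """Terms shared by every document (derivation on doc sets)."""
    return set.intersection(*(context[d] for d in docs)) if docs else set()

def query_concept(terms):
    """Formal concept derived from the initial query: (terms', terms'')."""
    ext = extent(set(terms))
    return ext, intent(ext)
```

Note that the closure can enlarge the query: here `query_concept(["fca"])` yields the intent `{"fca", "search"}`, since every document matching "fca" also matches "search". The conceptual neighborhood is then formed by the concepts immediately above and below this one in the lattice.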
Formal Concept Analysis (FCA) has typically been applied in the field of software engineering to support software maintenance and object-oriented class identification tasks. This paper presents a broader overview by describing and classifying academic papers that report the application of FCA to software engineering. The papers are classified using a …
Mail-Sleuth is a personal productivity tool that allows individuals to manage email and visualize its contents using line diagrams. Based on earlier work on the Conceptual Email Manager (Cem), a major hypothesis of Mail-Sleuth is that novices to Formal Concept Analysis can read a lattice diagram. Since there is no empirical evidence for this in the Formal …
Query-directed browsing of unstructured Web-texts using Formal Concept Analysis (FCA) confronts two problems. Firstly, on-line Web-data is sometimes unstructured, and any FCA-system must include additional mechanisms to structure input sources. Secondly, many on-line collections are large and dynamic, so a Web-robot must be used to automatically extract data.