Toward A Qualitative Search Engine

@article{Li1998TowardAQ,
  title={Toward A Qualitative Search Engine},
  author={Yanhong Li},
  journal={IEEE Internet Comput.},
  year={1998},
  volume={2},
  pages={24-29}
}
  • Yanhong Li
  • Published 1 July 1998
  • Computer Science
  • IEEE Internet Comput.
Traditional search engines do not consider document quality when ranking search results. The paper discusses the Hyperlink Vector Voting (HVV) method, which adds a qualitative dimension to rankings by factoring in both the number of hyperlinks pointing to a document and the descriptions attached to those hyperlinks. 
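The abstract's idea — inbound links "vote" for a document with their anchor descriptions, and the link count acts as a quality signal — can be illustrated with a minimal sketch. The paper does not specify this exact scoring; the functions, data, and the multiplicative quality weighting below are hypothetical.

```python
# Hedged sketch of Hyperlink Vector Voting (HVV); all data and
# the scoring formula are illustrative assumptions, not the paper's.
from collections import Counter

def hvv_index(inlink_descriptions):
    """Build a term vector for a document from the descriptions of
    hyperlinks pointing at it; each inbound link 'votes' with its
    terms, and the link count supplies a quality signal."""
    vector = Counter()
    for description in inlink_descriptions:
        for term in description.lower().split():
            vector[term] += 1
    return vector, len(inlink_descriptions)

def hvv_score(query, vector, n_links):
    """Relevance from anchor-term matches, weighted by link count."""
    relevance = sum(vector[t] for t in query.lower().split())
    return relevance * (1 + n_links)

# Hypothetical documents: one well-described by its inbound links, one not.
doc_a = hvv_index(["great search engine review", "search engine quality"])
doc_b = hvv_index(["my homepage"])
print(hvv_score("search engine", *doc_a) > hvv_score("search engine", *doc_b))  # True
```

The key design point the abstract highlights is that ranking draws on how *other* pages describe a document, not only on the document's own text.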

Figures and Tables from this paper

Citations

Beyond relevance ranking: hyperlink vector voting

A new method for hypertext indexing and retrieval called Hyperlink Vector Voting (HVV) is proposed, combining relevance ranking and quality ranking for hypertext retrieval systems.

A Voting Method for XML Retrieval

The retrieval approach proposed by the SIG/EVI group of the IRIT research centre for the INEX 2004 evaluation uses a voting method, coupled with additional processing, to answer content-only and content-and-structure queries.

Webpage Ranking Algorithms Second Exam Report

The survey provides an overview of eight selected search-ranking algorithms that strive to improve the quality of search results along two dimensions: (1) the meaning of the search query, and (2) the relevancy of the result in relation to the user's intention.

IRIT at INEX 2003

This paper describes the retrieval approaches proposed by IRIT in the INEX 2003 evaluation and discusses a second approach based on a voting method previously applied to automatic text categorization.

Web Structure Mining

This chapter covers the basic properties, concepts, and models of the Web graph, as well as the main link-ranking and Web-page clustering algorithms, and also addresses important algorithmic issues.

Intelligent Web Search via Personalizable Meta-search Agents

A methodology and architecture for an agent-based system, WebSifter, that captures the semantics of a user's search intent, transforms the semantic query into target queries for existing search engines, and ranks resulting page hits according to a user-specified, weighted-rating scheme.
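The "user-specified, weighted-rating scheme" mentioned for WebSifter can be sketched as a simple weighted combination of per-criterion scores. The criterion names, weights, and normalization below are hypothetical illustrations, not WebSifter's actual design.

```python
# Hedged sketch of a user-specified weighted-rating scheme for
# ranking meta-search hits; field names and weights are hypothetical.
def weighted_rating(hit, weights):
    """Combine per-criterion scores (each in [0, 1]) into a single
    rating using the user's weights, normalized by their sum."""
    total = sum(weights.values())
    return sum(weights[c] * hit.get(c, 0.0) for c in weights) / total

# A user who values relevance most, then authority, then freshness.
weights = {"relevance": 0.5, "authority": 0.3, "freshness": 0.2}
hit = {"relevance": 0.9, "authority": 0.4, "freshness": 1.0}
print(round(weighted_rating(hit, weights), 2))  # 0.77
```

Because the weights come from the user rather than the engine, two users issuing the same query can receive differently ordered results — the personalization the abstract describes.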

Implementation of two-tier link extractor in optimized search engine filtering system

This research addresses the link extractor's architectural design, which involves a process of elimination using document-comparison methods and a set of filters to reduce users' access time.

Chapter 10 World Wide Web Search Engines

This chapter provides an overview of the existing technologies for Web search engines and classifies them into six categories: i) hyperlink exploration, ii) information retrieval, iii) metasearches, iv) SQL approaches, v) content-based multimedia searches, and vi) others.

UKSearch - Web Search with Knowledge-Rich Indices

This paper addresses the problem of search over a restricted domain by indexing the source data in a more elaborate way than in standard search-engine technology, extracting concepts that are used to create a structure for the documents similar to that found in classified directories.

Automatic Web Page Categorization by Link and Context Analysis

The paper describes the novel technique of categorization by context, which instead extracts useful information for classifying a document from the context where a URL referring to it appears, and presents the results of experimenting with Theseus, a classifier that exploits this technique.
...

References

SHOWING 1-10 OF 15 REFERENCES

Beyond relevance ranking: hyperlink vector voting

A new method for hypertext indexing and retrieval called Hyperlink Vector Voting (HVV) is proposed, combining relevance ranking and quality ranking for hypertext retrieval systems.

A retrieval model incorporating hypertext links

A retrieval model developed for bibliographic information retrieval is described, and it is shown how hypertext links can be incorporated.

Searching for information in a hypertext medical handbook

This approach responds to a query by initially treating each hypertext card as a full-text document, then utilizes information about document structure to propagate weights to neighboring cards, producing a ranked list of potential starting points for graphical browsing.

Searching for information in a hypertext medical handbook

  • M. Frisse
  • Computer Science, Medicine
  • 1988
Implementing a popular medical handbook in hypertext underscores the need to study hypertext in the context of full-text document retrieval, machine learning, and user interface issues.

Multi-Service Search and Comparison Using the MetaCrawler

The MetaCrawler provides a single, central interface for Web document searching that facilitates customization, privacy, sophisticated filtering of references, and more, and serves as a tool for comparison of diverse search services.

Length Normalization in Degraded Text

This study examines the effects of the well-known cosine-normalization method in the presence of OCR errors, and proposes a new, more robust normalization method that is less sensitive to OCR errors and facilitates use of more diverse basic weighting schemes.
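The cosine normalization that this reference takes as its baseline divides each term weight by the document vector's Euclidean length, so long documents are not favored simply for containing more terms. A minimal sketch (the term weights are hypothetical, and this shows only the standard baseline, not the paper's robust alternative):

```python
# Hedged sketch of cosine length normalization of term weights;
# the example weights are hypothetical.
import math

def cosine_normalize(term_weights):
    """Divide each raw term weight by the vector's Euclidean length,
    yielding a unit-length vector; empty vectors pass through."""
    norm = math.sqrt(sum(w * w for w in term_weights.values()))
    if norm == 0.0:
        return dict(term_weights)
    return {t: w / norm for t, w in term_weights.items()}

weights = cosine_normalize({"search": 3.0, "engine": 4.0})
print(weights)  # the normalized vector has unit length
```

OCR noise inflates a document's apparent length with spurious terms, which is why the study finds plain cosine normalization fragile on degraded text.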

Cyberspace 2000: dealing with information overload

Cyberspace seems to most satisfy Bacon’s requirement that a truly differentiating technology have far-reaching consequences for society, and is the only one that will come to be associated with the 21st century.