Controlling the robots of Web search engines

@inproceedings{Talim2001ControllingTR,
  title={Controlling the robots of Web search engines},
  author={Jerome Talim and Zhen Liu and Philippe Nain and Edward G. Coffman},
  booktitle={SIGMETRICS/Performance},
  year={2001}
}
Robots are deployed by a Web search engine to collect information from different Web servers in order to maintain the currency of its database of Web pages. In this paper, we investigate the number of robots to be used by a search engine so as to maximize the currency of the database without putting an unnecessary load on the network. We adopt a finite-buffer queueing model to represent the system. The arrivals to the queueing system are Web pages brought by the robots; service…
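The finite-buffer queueing model described above can be illustrated with a small sketch. The snippet below computes the steady-state loss (blocking) probability of an M/M/c/K queue analytically and sweeps the number of robots; the mapping of each robot to a fixed increment `rate_per_robot` of the page arrival rate, and the parameter values, are illustrative assumptions, not the paper's actual model or optimization.

```python
from math import factorial

def mmck_blocking(lam, mu, c, K):
    """Blocking probability p_K of an M/M/c/K queue.

    lam: arrival rate (pages brought back by the robots, per unit time)
    mu : service rate of the indexing server
    c  : number of parallel servers
    K  : system capacity (pages in service plus in the finite buffer)
    """
    a = lam / mu
    # Unnormalised stationary probabilities pi_n of the birth-death chain
    pi = []
    for n in range(K + 1):
        if n <= c:
            pi.append(a**n / factorial(n))
        else:
            pi.append(a**n / (factorial(c) * c**(n - c)))
    norm = sum(pi)
    # A page arriving when the buffer is full (state K) is lost
    return pi[K] / norm

# Hypothetical sizing sweep: each extra robot adds rate_per_robot to the
# arrival rate; too many robots overflow the finite buffer, so fetched
# pages are discarded (wasted network load, no gain in currency).
rate_per_robot = 0.4
mu, c, K = 1.0, 1, 10
for robots in (1, 2, 3, 4, 5):
    b = mmck_blocking(robots * rate_per_robot, mu, c, K)
    print(f"{robots} robots: loss probability = {b:.3f}")
```

The sweep makes the paper's trade-off concrete: the loss probability grows with the number of robots, so adding robots past the point where the buffer saturates only increases network load without improving database currency.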
This paper has 42 citations.

Citations

Publications citing this paper.
Showing 5 of 25 extracted citations

Design and development of classifier system for Gconference.net

7th International Conference on Networked Computing • 2011

A high-precision forum crawler based on vertical crawling

2009 IEEE International Conference on Network Infrastructure and Digital Content • 2009

Whittle Index Policy for Crawling Ephemeral Content

IEEE Transactions on Control of Network Systems • 2018

Whittle index policy for crawling ephemeral content

2015 54th IEEE Conference on Decision and Control (CDC) • 2015

Design and implementation of competent web crawler and indexer using web services

2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies • 2014
