We investigate the issues involved in developing a scalable World Wide Web (WWW) server on a cluster of workstations and parallel machines, using the Hypertext Transfer Protocol (HTTP). The main objective is to strengthen the processing capabilities of such a server by utilizing the power of multicomputers to match huge demands in simultaneous access …
In this paper, we investigate the issues involved in developing a scalable World Wide Web (WWW) server called SWEB on a cluster of workstations. The objective is to strengthen the processing capabilities of such a server in order to match huge demands in simultaneous access requests from the Internet, especially when these requests involve delivery of large …
Medical devices historically have been monolithic units: developed, validated, and approved by regulatory authorities as stand-alone entities. Modern medical devices increasingly incorporate connectivity mechanisms that offer the potential to stream device data into electronic health records, integrate information from multiple devices into single …
We investigate scalability issues involved in developing high-performance digital library systems. Our observations and solutions are based on our experience with the Alexandria Digital Library (ADL) testbed under development at UCSB. The current ADL system provides on-line browsing and processing of digitized maps and other geo-spatially mapped data via …
In this paper we present a model for dynamically scheduling HTTP requests across clusters of servers, optimizing the use of client resources as well as those of the scattered server nodes. We also present a system, H-SWEB, implementing our techniques and achieving experimental improvements of over 250% through a global approach to …
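The abstract above describes scheduling HTTP requests across a server cluster based on both client-side and server-side costs. A minimal sketch of such a dispatcher follows; the node names, the load and delay figures, and the least-predicted-latency policy are illustrative assumptions, not the actual H-SWEB algorithm.

```python
# Hypothetical least-predicted-latency dispatcher for a server cluster.
# All numbers and the policy itself are assumptions for illustration.

class Node:
    def __init__(self, name, cpu_load, net_delay):
        self.name = name
        self.cpu_load = cpu_load    # queued work already on the node (seconds)
        self.net_delay = net_delay  # estimated network delay to the client (seconds)

    def predicted_latency(self, request_cost):
        # Predicted completion time = queued work + network delay + this request.
        return self.cpu_load + self.net_delay + request_cost

def dispatch(nodes, request_cost):
    # Send the request to the node with the lowest predicted latency,
    # then charge the request's cost to that node's queue.
    best = min(nodes, key=lambda n: n.predicted_latency(request_cost))
    best.cpu_load += request_cost
    return best.name

nodes = [Node("ws1", 0.5, 0.05), Node("ws2", 0.2, 0.10), Node("ws3", 0.8, 0.02)]
print(dispatch(nodes, 0.3))  # ws2: 0.2 + 0.10 + 0.3 = 0.60 is the minimum
```

A global scheme in this spirit accounts for both server load and the network path to the client, rather than balancing on CPU load alone.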
This paper studies runtime partitioning, scheduling, and load balancing techniques for improving the performance of on-line WWW-based information systems such as digital libraries. The main performance bottlenecks of such a system are caused by server computing capability and Internet bandwidth. Our observations and solutions are based on our experience with …
Firewalls, and packet classification in general, are becoming more and more significant as data rates soar and hackers become increasingly sophisticated and forceful. In this paper, we present a new packet-classification approach that uses set theory to classify packets. This approach has significant theoretical advantages over current approaches. We …
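The abstract above is truncated before it describes the set-theoretic scheme in detail, so the following is only a generic sketch of how set intersection can drive packet classification: each header field selects the set of rules that accept its value, and a packet matches the intersection of those sets. The rules, fields, and tie-breaking policy here are assumptions, not the paper's actual algorithm.

```python
# Illustrative set-based packet classification: per field, precompute the
# set of rule IDs accepting each value; classify by intersecting those sets.
# The rule table and fields are hypothetical examples.

rules = {
    1: {"proto": {"tcp"}, "dport": {80, 443}},   # allow web traffic
    2: {"proto": {"udp"}, "dport": {53}},        # allow DNS
    3: {"proto": {"tcp", "udp"}, "dport": {22}}, # allow SSH-port traffic
}

def build_index(rules, field):
    # Map each field value to the set of rule IDs that accept it.
    index = {}
    for rid, rule in rules.items():
        for value in rule[field]:
            index.setdefault(value, set()).add(rid)
    return index

proto_index = build_index(rules, "proto")
dport_index = build_index(rules, "dport")

def classify(proto, dport):
    # Intersect the rule sets selected by each header field;
    # the lowest-numbered matching rule wins (first-match semantics).
    matches = proto_index.get(proto, set()) & dport_index.get(dport, set())
    return min(matches) if matches else None

print(classify("tcp", 80))   # 1
print(classify("udp", 53))   # 2
print(classify("tcp", 53))   # None (no rule matches)
```

The appeal of a set-based formulation is that per-field lookups can be precomputed, so classification reduces to a handful of set intersections regardless of how the rules overlap.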