Dynamic parallel access to replicated content in the Internet

@article{Rodriguez2002DynamicPA,
  title={Dynamic parallel access to replicated content in the {Internet}},
  author={Pablo Rodriguez and Ernst W. Biersack},
  journal={IEEE/ACM Trans. Netw.},
  year={2002},
  volume={10},
  pages={455--465}
}
Popular content is frequently replicated in multiple servers or caches in the Internet to offload origin servers and improve end-user experience. However, choosing the best server is a nontrivial task, and a bad choice may result in poor end-user experience. In contrast to retrieving a file from a single server, we propose a parallel-access scheme where end users access multiple servers at the same time, fetching different portions of that file from different servers and reassembling them locally…
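The dynamic scheme described in the abstract can be sketched as follows. This is a minimal simulation, not the authors' implementation: the "servers" are stand-in dicts rather than real HTTP endpoints, and each connection greedily pulls the next unassigned block, so faster connections naturally deliver more of the file without any a-priori allocation.

```python
import queue
import threading

# Minimal sketch of dynamic parallel access: the file is split into
# blocks, every "server" holds a full replica, and each connection
# requests the next unassigned block as soon as it finishes the
# previous one. Servers here are stand-in dicts, not real HTTP; in
# practice each request would be an HTTP Range request.

def parallel_fetch(servers, num_blocks):
    todo = queue.Queue()          # shared pool of unassigned block indices
    for i in range(num_blocks):
        todo.put(i)
    result = {}
    lock = threading.Lock()

    def worker(replica):
        while True:
            try:
                i = todo.get_nowait()   # dynamically claim the next block
            except queue.Empty:
                return
            block = replica[i]          # stand-in for a Range request
            with lock:
                result[i] = block

    threads = [threading.Thread(target=worker, args=(s,)) for s in servers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(result[i] for i in range(num_blocks))
```

Because blocks are claimed one at a time from a shared pool, a slow server only delays the blocks it has already claimed, which is the load-balancing property the scheme relies on.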
Citations
A paracasting model for concurrent access to replicated content
  • Ka-Cheong Leung, V. Li
  • Computer Science
  • 2003
A framework to study how to effectively download a copy of the same document from a set of replicated servers; shows that the file download time drops when a request is served concurrently by a larger number of homogeneous replicated servers, although the performance improvement quickly saturates as the number of servers used increases.
REDUCING DOWNLOAD TIME THROUGH MIRROR SERVERS
A simulation-based comparative study between traditional FTP and parallel FTP, extended with additional QoS parameters, namely hop count and delay.
Implementation issues of parallel downloading methods for a proxy system
This paper combines parallel downloading with proxy server technology in order to download files quickly while serving the latest versions, and clarifies a tradeoff between the buffering time and the redundant traffic generated by duplicate requests to multiple servers when using substituting download.
Improved Parallel Downloading Algorithm for Large File Distribution
  • Jin Bo
  • Computer Science
  • 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring
  • 2012
A new parallel downloading algorithm that takes bandwidth and availability into consideration when selecting mirror servers; it can significantly improve the efficiency of parallel downloading for large file distribution.
Parallel Downloading Using Variable Length Blocks for Proxy Servers
This paper compares the proposed parallel downloading using variable-length blocks to the existing method using fixed-length blocks via simulation experiments, confirming that the proposal reduces the redundant traffic and the buffer size required for proxy systems without degrading the download time.
On large scale deployment of parallelized file transfer protocol
A simulation-based study investigating the performance of P-FTP when adopted by a large user base finds that, by virtue of its self-tuning capability, P-FTP continues to exhibit improved performance even with many simultaneous clients.
TCP-PARIS: a parallel download protocol for replicas
  • R. Karrer, E. Knightly
  • Computer Science
  • 10th International Workshop on Web Content Caching and Distribution (WCW'05)
  • 2005
Parallel download protocols have the potential to reduce file download time and to achieve server-side load balancing in replica systems, such as peer-to-peer networks and content distribution…
Content Delivery Policies in Replicated Web Services: Client-Side vs. Server-Side
An analysis of server-side and client-side approaches for geographical replication, identifying their pros and cons in order to propose the features of an eventual complete approach.
Client-side content delivery policies in replicated web services: parallel access versus single server approach
This study contrasts, qualitatively and quantitatively (via simulation), the most promising client-side techniques, one parallel strategy and one single-server strategy, with the aim of identifying the best solutions for content-delivery systems.

References

Showing 1-10 of 42 references
Study of Parallel Access Schemes to Speed up the Internet
A new parallel access scheme that automatically schedules the transmission from the different sources to minimize clients' retrieval times; it does not require re-encoding the documents at the sources and uses the existing network protocols.
Locating copies of objects using the Domain Name System
In order to reduce average delay and bandwidth usage in the Web, geographically dispersed servers often store copies of popular objects. For example, with network caching, the origin server stores a…
A novel server selection technique for improving the response time of a replicated service
  • Zongming Fei, S. Bhattacharjee, E. Zegura, M. Ammar
  • Computer Science
  • Proceedings of IEEE INFOCOM '98
  • 1998
This paper targets an environment in which servers are distributed across the Internet and clients identify servers using the authors' application-layer anycasting service, and develops an approach for estimating the performance a client would experience when accessing particular servers.
Performance Analysis of a Dynamic Parallel Downloading Scheme from Mirror Sites Throughout the Internet
There are a number of enhancements that can be made to the paraloader to improve its performance in different network environments; some of these enhancement techniques are outlined.
SPAND: Shared Passive Network Performance Discovery
A system called SPAND (Shared Passive Network Performance Discovery) determines network characteristics by making shared, passive measurements from a collection of hosts; sharing measurements is shown to significantly increase the accuracy and timeliness of predictions.
Selection algorithms for replicated Web servers
Two new algorithms for the selection of replicated servers are designed; the new server selection algorithms improve on the performance of other existing algorithms by 55% on average, and on existing non-replicated Web servers by 69% on average.
Accessing multiple mirror sites in parallel: using Tornado codes to speed up downloads
  • J. Byers, M. Luby, M. Mitzenmacher
  • Computer Science
  • Proceedings of IEEE INFOCOM '99
  • 1999
This work considers enabling a client to access a file from multiple mirror sites in parallel to speed up the download, and develops a feedback-free protocol based on erasure codes that can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network.
Reduce, reuse, recycle: an approach to building large Internet caches
It is shown that these drawbacks are easily overcome for well-configured CRISP caches; early studies of CRISP caches in actual use and under synthetic load are reported.
Server selection using dynamic path characterization in wide-area networks
This work proposes dynamic server selection and shows that it enables application-level congestion avoidance and consistently outperforms static policies, reducing response times by as much as 50%.
Summary cache: a scalable wide-area web cache sharing protocol
This paper demonstrates the benefits of cache sharing, measures the overhead of the existing protocols, and proposes a new protocol called "summary cache", which reduces the number of inter-cache protocol messages, reduces bandwidth consumption, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP.