Yair Toaff

Large backup and restore systems may have a petabyte or more of data in their repository. Such systems are often compressed by means of deduplication techniques, which partition the input text into chunks and store recurring chunks only once. One of the approaches is to use hashing methods to store fingerprints for each data chunk, detecting identical chunks …
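A minimal sketch of the fingerprint idea (the fixed chunk size, the SHA-256 hash, and the in-memory dictionary are illustrative assumptions, not the system described above): each chunk is keyed by its cryptographic digest, so a recurring chunk is stored only once and the original data is recorded as a list of fingerprints.

    import hashlib

    CHUNK_SIZE = 4096                      # illustrative fixed chunk size

    def deduplicate(data, store):
        """Cut data into chunks; store each distinct chunk once, keyed by its fingerprint."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).digest()   # cryptographically strong fingerprint
            store.setdefault(fp, chunk)           # a recurring chunk is stored only once
            recipe.append(fp)
        return recipe

    def restore(recipe, store):
        """Rebuild the original data from its fingerprint list."""
        return b"".join(store[fp] for fp in recipe)

    store = {}
    data = b"backup stream " * 10000
    recipe = deduplicate(data, store)
    assert restore(recipe, store) == data
    print(len(recipe), "chunk references,", len(store), "unique chunks stored")

Running the example shows far fewer unique chunks stored than chunk references, which is the space saving deduplication provides on repetitive backup data.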
A special case of data compression in which repeated chunks of data are stored only once is known as deduplication. The input data is cut into chunks and a cryptographically strong hash value of each (different) chunk is stored. To restrict the influence of small inserts and deletes to local perturbations, the chunk boundaries are usually defined in a data-dependent way …
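A minimal sketch of such data-dependent chunk boundaries, using a simple polynomial rolling hash (the window size, boundary mask, and constants below are illustrative, not the parameters of the work above): a boundary is declared wherever the hash of the last few bytes matches a fixed pattern, so an insert or delete only moves nearby boundaries instead of shifting every later fixed-size chunk.

    WINDOW = 16                   # bytes of context used by the rolling hash
    MASK = (1 << 12) - 1          # boundary test, giving roughly 4 KiB average chunks
    BASE = 257
    MOD = (1 << 31) - 1           # large prime modulus for the rolling hash

    def chunk_boundaries(data):
        """Yield chunk end offsets chosen from the data itself (content-defined chunking)."""
        h = 0
        out_weight = pow(BASE, WINDOW - 1, MOD)   # weight of the byte leaving the window
        start = 0
        for i, byte in enumerate(data):
            if i >= WINDOW:
                h = (h - data[i - WINDOW] * out_weight) % MOD   # drop the oldest byte
            h = (h * BASE + byte) % MOD                          # add the newest byte
            # declare a boundary when the window hash hits the pattern,
            # enforcing a minimum chunk length of WINDOW bytes
            if (h & MASK) == 0 and i + 1 - start >= WINDOW:
                yield i + 1
                start = i + 1
        if start < len(data):
            yield len(data)

    import random
    random.seed(0)
    sample = bytes(random.randrange(256) for _ in range(100000))
    print(list(chunk_boundaries(sample))[:5])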
The time efficiency of many storage systems relies critically on the ability to perform a large number of evaluations of certain hashing functions fast enough. The remainder function B mod P, generally applied with a large prime number P, is often used as a building block of such hashing functions, which leads to the need of accelerating remainder evaluations …
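One well-known way to avoid the hardware division in B mod P, sketched here under the assumption that P is chosen as a Mersenne prime 2^k − 1 (an assumption made for this example only), is to fold the high bits back into the low bits: since 2^k ≡ 1 (mod P), writing B = a·2^k + b gives B ≡ a + b (mod P), and repeating this until the value fits in k bits needs only shifts, masks, and additions.

    def mod_mersenne(b, k):
        """Compute b mod (2**k - 1) by shift-and-add folding, without the % operator."""
        p = (1 << k) - 1
        while b >= (1 << k):
            b = (b >> k) + (b & p)    # fold the high part back in: 2**k ≡ 1 (mod p)
        return 0 if b == p else b     # the fold can leave exactly p, which is ≡ 0

    if __name__ == "__main__":
        K = 31                        # P = 2**31 - 1 is a Mersenne prime
        P = (1 << K) - 1
        for b in (0, P, 123456789012345678901234567890, 2**200 + 17):
            assert mod_mersenne(b, K) == b % P
        print("folded reduction matches the % operator")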
Many Web clients today are connected to the Internet via low-speed links such as cellular connections. To use the cellular connection efficiently for Web access, the connection must be accelerated using a Performance Enhancing Proxy (PEP) as a gateway to the Web. In this paper we investigate the challenges created by the use of a PEP. In …