Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to run a request on multiple servers and wait for the first completion (discarding all remaining copies of the request). However, no exact analysis of systems with redundancy exists. This paper presents the first exact analysis of such systems.
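A minimal sketch of the request-level pattern this abstract describes, assuming a hypothetical query_server coroutine that stands in for a real RPC (server names and latencies are made up): the same request is issued to several servers, the first reply is kept, and the remaining copies are cancelled.

    import asyncio
    import random

    async def query_server(server: str, request: str) -> str:
        # Hypothetical stand-in for a real RPC; per-server latency varies,
        # which is the variability redundancy is meant to mask.
        await asyncio.sleep(random.expovariate(1.0))
        return f"{server} answered {request}"

    async def redundant_request(request: str, servers: list) -> str:
        # Run one copy of the request on every server.
        tasks = [asyncio.create_task(query_server(s, request)) for s in servers]
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()  # discard all remaining copies
        return done.pop().result()

    print(asyncio.run(redundant_request("GET /item/42", ["s1", "s2", "s3"])))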
Redundancy is an important strategy for reducing response time in multi-server queueing systems that has been used in a variety of settings, but only recently has begun to be studied analytically. The idea behind redundancy is that customers can greatly reduce their response time by waiting in multiple queues at the same time, thereby experiencing the …
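An illustrative back-of-the-envelope model, not the exact analysis the abstract refers to: if a customer waited in d independent M/M/1 queues and left as soon as the first copy finished, and we ignore the coupling created by removing the other copies, each copy's sojourn time is Exponential(mu - lambda) and the customer's response time is the minimum of d such draws. The sketch below estimates that mean; all parameter values are made up.

    import random

    def mean_response_time(d, lam=0.7, mu=1.0, n=100_000):
        # Idealized model: each of the d copies independently sees an M/M/1
        # sojourn time ~ Exponential(mu - lam); the customer finishes when the
        # fastest copy does. Correlation and cancellation effects are ignored.
        total = 0.0
        for _ in range(n):
            total += min(random.expovariate(mu - lam) for _ in range(d))
        return total / n

    for d in (1, 2, 3):
        print(d, round(mean_response_time(d), 3))  # roughly 1 / (d * (mu - lam))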
This paper considers the problem of server-side scheduling for jobs composed of multiple pieces with consecutive (progressive) deadlines. One example is server-side scheduling for video service, where clients request flows of content from a server with limited capacity, and any content not delivered by its deadline is lost. We consider the simultaneous …
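A minimal sketch of the flavor of problem described here, under made-up parameters rather than the paper's model: each flow generates one chunk per time slot, chunk deadlines are consecutive, the server can transmit a fixed number of chunks per slot, chunks are served earliest-deadline-first, and anything not delivered by its deadline is counted as lost.

    import heapq

    def edf_schedule(num_flows=5, horizon=20, capacity=3):
        # Chunks arrive one per flow per slot with consecutive (progressive)
        # deadlines; the server serves up to `capacity` chunks per slot,
        # earliest deadline first. All parameters are illustrative.
        pending = []            # min-heap of (deadline, flow, chunk_index)
        delivered = lost = 0
        for t in range(horizon):
            for f in range(num_flows):
                heapq.heappush(pending, (t + 1, f, t))  # due by end of slot t + 1
            for _ in range(capacity):
                if pending:
                    heapq.heappop(pending)
                    delivered += 1
            while pending and pending[0][0] <= t:       # past deadline: content is lost
                heapq.heappop(pending)
                lost += 1
        return delivered, lost

    print(edf_schedule())  # with these numbers the server is overloaded, so some content is lost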
Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to replicate a request so that it joins the queue at multiple servers. The request is considered complete as soon as any one copy of the request completes. Redundancy is beneficial because it allows us to overcome server-side variability: the fact that the …
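A small numerical illustration of the server-side-variability point, with made-up numbers: when a copy's service time is usually fast but occasionally hits a 10x straggler, taking the first of two independent copies to finish sharply reduces the tail of the response time.

    import random

    def service_time():
        # Hypothetical variable server: 95% of requests take 1 unit, 5% take 10.
        return 1.0 if random.random() < 0.95 else 10.0

    def percentile(samples, p):
        samples = sorted(samples)
        return samples[int(p * (len(samples) - 1))]

    n = 100_000
    single = [service_time() for _ in range(n)]
    fastest_of_two = [min(service_time(), service_time()) for _ in range(n)]

    print("p99, single copy:    ", percentile(single, 0.99))          # ~10.0
    print("p99, fastest of two: ", percentile(fastest_of_two, 0.99))  # ~1.0 (both copies slow only 0.25% of the time)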
PURPOSE Research supporting the "early-onset" theory of antipsychotic activity is reviewed, with an emphasis on psychometric assessment of early response to antipsychotic agents as a tool for optimizing schizophrenia treatment outcomes. SUMMARY A growing body of evidence indicates that a poor response to antipsychotic therapy in the first weeks of …
An increasingly prevalent technique for improving response time in queueing systems is the use of redundancy. In a system with redundant requests, each job that arrives to the system is copied and dispatched to multiple servers. As soon as the first copy completes service, the job is considered complete, and all remaining copies are deleted. A great deal of …
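A one-line idealized calculation, offered as an illustrative assumption rather than a claim from the abstract, of why keeping only the first completed copy helps: if all k copies start service immediately and each copy's service time S_i is independent Exponential(mu), then the job's completion time is the minimum of the copies, which is again exponential but with k times the rate.

    P(\min(S_1, \dots, S_k) > t) = \prod_{i=1}^{k} P(S_i > t) = e^{-k \mu t},
    \qquad \text{hence } \mathbb{E}[\min_i S_i] = \frac{1}{k \mu}.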