MOTIVATION Bayesian estimation of phylogeny is based on the posterior probability distribution of trees. Currently, the only numerical method that can effectively approximate posterior probabilities of trees is Markov chain Monte Carlo (MCMC). Standard implementations of MCMC can be prone to entrapment in local optima. Metropolis-coupled MCMC ((MC)³), a …
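The chain-swapping idea behind (MC)³ can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the bimodal one-dimensional target stands in for a posterior over trees, and all names here are hypothetical.

```python
import math
import random

def log_target(x):
    # Bimodal toy distribution (illustrative stand-in for a tree posterior):
    # a mixture of two unit-variance Gaussians centered at -3 and +3.
    return math.log(math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2))

def mc3(n_iters=5000, temps=(1.0, 0.5, 0.25), seed=0):
    """Metropolis-coupled MCMC: a 'cold' chain (inverse temperature 1.0) plus
    heated chains that see a flattened landscape; states of adjacent chains
    are periodically proposed for swapping, letting the cold chain escape
    local optima. Only the cold chain's samples target the posterior."""
    rng = random.Random(seed)
    states = [0.0] * len(temps)
    cold_samples = []
    for _ in range(n_iters):
        # Standard Metropolis update within each chain at its own temperature.
        for i, beta in enumerate(temps):
            proposal = states[i] + rng.gauss(0, 1)
            log_accept = beta * (log_target(proposal) - log_target(states[i]))
            if math.log(rng.random()) < log_accept:
                states[i] = proposal
        # Propose swapping the states of a random adjacent pair of chains.
        i = rng.randrange(len(temps) - 1)
        bi, bj = temps[i], temps[i + 1]
        xi, xj = states[i], states[i + 1]
        log_swap = (bi - bj) * (log_target(xj) - log_target(xi))
        if math.log(rng.random()) < log_swap:
            states[i], states[i + 1] = xj, xi
        cold_samples.append(states[0])
    return cold_samples
```

Because the heated chains cross the low-probability barrier between the two modes easily, the swaps carry the cold chain across as well, so its samples visit both modes rather than staying trapped in one.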
We present OPUS, a tool for dynamic software patching capable of applying fixes to a C program at run-time. OPUS's primary goal is to enable application of security patches to interactive applications that are a frequent target of security exploits. By restricting the type of patches admitted by our system, we are able to significantly reduce any additional …
We have developed a new replay debugging tool, liblog, for distributed C/C++ applications. It logs the execution of deployed application processes and replays them deterministically, faithfully reproducing race conditions and non-deterministic failures, enabling careful offline analysis. To our knowledge, liblog is the first replay tool to address the …
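The record/replay idea underlying such tools can be sketched in a few lines. This is a toy model of the concept only; liblog itself intercepts libc calls in C, which this hypothetical `ReplayLog` class does not attempt to reproduce.

```python
import random
import time

class ReplayLog:
    """Record nondeterministic values during the live run; feed the identical
    values back during replay so re-execution is deterministic."""
    def __init__(self, mode, log=None):
        self.mode = mode                 # "record" or "replay"
        self.log = log if log is not None else []
        self._pos = 0

    def capture(self, produce):
        if self.mode == "record":
            value = produce()            # consult the real nondeterministic source
            self.log.append(value)
            return value
        value = self.log[self._pos]      # replay: read back the logged value
        self._pos += 1
        return value

def run(session):
    # A computation that mixes in nondeterminism: a random draw and a clock read.
    r = session.capture(lambda: random.random())
    t = session.capture(lambda: time.time())
    return (r, t)
```

Recording once and replaying from the same log yields byte-identical results on every replay, which is what makes offline analysis of race conditions feasible.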
Debugging and profiling large-scale distributed applications is a daunting task. We present Friday, a system for debugging distributed applications that combines deterministic replay of components with the power of symbolic, low-level debugging and a simple language for expressing higher-level distributed conditions and actions. Friday allows the …
Deterministic replay tools offer a compelling approach to debugging hard-to-reproduce bugs. Recent work on relaxed-deterministic replay techniques shows that replay debugging with low in-production overhead is possible. However, despite considerable progress, a replay-debugging system that offers not only low in-production runtime overhead but also high …
Cashmere is a software distributed shared memory (S-DSM) system designed for clusters of server-class machines. It is distinguished from most other S-DSM projects by (1) the effective use of fast user-level messaging, as provided by modern system-area networks, and (2) a “two-level” protocol structure that exploits hardware coherence within …
Debugging data-intensive distributed applications running in datacenters is complex and time-consuming because developers do not have practical ways of deterministically replaying failed executions. Building such tools is hard because non-determinism that may be tolerable on a single node is exacerbated in large clusters of interacting …
The sensors include sonar, cameras, a keyboard, dead-reckoning position estimates, and a microphone. The drivers for these sensors, as well as a library for writing behaviors, were provided with the ActivMedia robot. We produced a map of the competition areas in advance using a tape measure. On this map we drew a graph of places that the robot could go from …
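A hand-drawn graph of places like the one described above supports simple route planning. The sketch below is illustrative only: the place names and adjacency are hypothetical, not taken from the competition map.

```python
from collections import deque

# Hypothetical place graph: nodes are named places the robot can occupy,
# edges connect places with a traversable route between them.
PLACES = {
    "start":   ["hallway"],
    "hallway": ["start", "office", "lab"],
    "office":  ["hallway"],
    "lab":     ["hallway", "arena"],
    "arena":   ["lab"],
}

def route(graph, src, dst):
    """Breadth-first search for a shortest hop-count path between places."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route between the two places
```

Breadth-first search suffices here because every edge costs one hop; with measured corridor lengths from the tape-measure map, a weighted search such as Dijkstra's algorithm would be the natural replacement.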