Jose M. Faleiro

Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems …
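To make the idea above concrete, the sketch below shows how multi-versioning lets reads avoid conflicting with concurrent writes: a reader sees the latest version no newer than its snapshot timestamp, so a later writer never invalidates it. This is a minimal illustration only; the names (VersionedRecord, write_at, read_at) are illustrative and not taken from the paper.

```cpp
#include <cstdint>
#include <iostream>
#include <iterator>
#include <map>
#include <mutex>

// A minimal sketch of a multi-versioned record, assuming timestamp-ordered
// versions and a single coarse lock protecting the version chain.
class VersionedRecord {
 public:
  // Install a new version tagged with the writer's commit timestamp.
  void write_at(uint64_t commit_ts, int value) {
    std::lock_guard<std::mutex> g(m_);
    versions_[commit_ts] = value;
  }

  // A reader with snapshot timestamp `ts` sees the latest version whose
  // commit timestamp is <= ts, so it never conflicts with newer writers.
  bool read_at(uint64_t ts, int* out) const {
    std::lock_guard<std::mutex> g(m_);
    auto it = versions_.upper_bound(ts);
    if (it == versions_.begin()) return false;  // no visible version yet
    *out = std::prev(it)->second;
    return true;
  }

 private:
  mutable std::mutex m_;
  std::map<uint64_t, int> versions_;  // commit timestamp -> value
};

int main() {
  VersionedRecord r;
  r.write_at(10, 100);
  r.write_at(20, 200);  // a later write does not disturb older snapshots
  int v;
  if (r.read_at(15, &v)) std::cout << "snapshot@15 sees " << v << "\n";  // 100
  if (r.read_at(25, &v)) std::cout << "snapshot@25 sees " << v << "\n";  // 200
}
```

Running it, the snapshot at timestamp 15 still reads 100 even though a newer version exists, which is exactly the read-write concurrency the abstract refers to.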
Existing database systems employ an eager transaction processing scheme; that is, upon receiving a transaction request, the system executes all the operations entailed in running the transaction (which typically includes reading database records, executing user-specified transaction logic, and logging updates and writes) before reporting to the client …
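A lazy scheme, by contrast, defers that work. The following minimal sketch stores a transaction's logic as a thunk at submission time and runs it only when a later read actually needs the result. It assumes a single-threaded key-value store, and the names (LazyStore, submit, read) are illustrative, not the design from the paper.

```cpp
#include <functional>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>

// A minimal sketch of lazy transaction evaluation: submission records the
// transaction, and execution is deferred until its result is demanded.
class LazyStore {
 public:
  // "Submit" a transaction: just remember its logic against the key it writes.
  void submit(const std::string& key, std::function<int()> txn_logic) {
    pending_[key] = std::move(txn_logic);
  }

  // A read forces any deferred transaction on this key before returning.
  std::optional<int> read(const std::string& key) {
    auto it = pending_.find(key);
    if (it != pending_.end()) {
      data_[key] = it->second();  // materialize the deferred write now
      pending_.erase(it);
    }
    auto d = data_.find(key);
    if (d == data_.end()) return std::nullopt;
    return d->second;
  }

 private:
  std::unordered_map<std::string, std::function<int()>> pending_;
  std::unordered_map<std::string, int> data_;
};

int main() {
  LazyStore store;
  store.submit("balance", [] {
    std::cout << "(deferred transaction logic runs now)\n";
    return 42;
  });
  std::cout << "submitted; nothing executed yet\n";
  if (auto v = store.read("balance")) std::cout << "balance = " << *v << "\n";
}
```

The point of the split is that the expensive work (record reads, user logic, logging of writes) happens off the critical path of acknowledging the transaction.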
Although significant recent progress has been made in improving the multi-core scalability of high throughput transactional database systems, modern systems still fail to achieve scalable throughput for workloads involving frequent access to highly contended data. Most of this inability to achieve high throughput is explained by the fundamental constraints …
Designing distributed database systems is an inherently complex process, involving many choices and tradeoffs that must be considered. It is impossible to build a distributed database that has all the features that users intuitively desire. In this article, we discuss the three-way relationship among three such desirable features: fairness, isolation, and throughput …
In order to guarantee recoverable transaction execution, database systems permit a transaction’s writes to be observable only at the end of its execution. As a consequence, there is generally a delay between the time a transaction performs a write and the time later transactions are permitted to read it. This delayed write visibility can significantly …
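The delay described above is easy to see in a buffered-write design, sketched below under simplifying assumptions (single-threaded, no aborts, illustrative names): a transaction's writes sit in a private buffer, and other transactions can read them only after commit() publishes them. Early write visibility, the subject of this line of work, would instead publish writes as soon as the transaction is guaranteed to commit.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Shared committed state, visible to every transaction.
struct Store {
  std::unordered_map<std::string, int> committed;
};

class Txn {
 public:
  explicit Txn(Store& s) : store_(s) {}

  // Writes stay private to this transaction until commit.
  void write(const std::string& k, int v) { buffer_[k] = v; }

  // Later transactions read only committed state: hence the visibility delay.
  int read(const std::string& k) const { return store_.committed.at(k); }

  void commit() {
    for (auto& [k, v] : buffer_) store_.committed[k] = v;  // writes become visible here
  }

 private:
  Store& store_;
  std::unordered_map<std::string, int> buffer_;
};

int main() {
  Store s;
  s.committed["x"] = 1;
  Txn t1(s), t2(s);
  t1.write("x", 2);
  std::cout << "t2 sees x = " << t2.read("x") << " before t1 commits\n";  // 1
  t1.commit();
  std::cout << "t2 sees x = " << t2.read("x") << " after t1 commits\n";   // 2
}
```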
Recent research on multi-core database architectures has made the argument that, when possible, database systems should abandon the use of latches in favor of latch-free algorithms. Latch-based algorithms are thought to scale poorly due to their use of synchronization based on mutual exclusion. In contrast, latch-free algorithms make strong theoretical guarantees …
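The distinction shows up even in a counter. In the minimal sketch below (illustrative types, not from the paper), the latch-based variant can stall every thread if the lock holder is descheduled, while the latch-free variant retries a compare-and-swap and so guarantees that some thread always makes progress.

```cpp
#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Latch-based: a stalled lock holder blocks all other incrementers.
struct LatchBasedCounter {
  std::mutex m;
  long value = 0;
  void increment() {
    std::lock_guard<std::mutex> g(m);
    ++value;
  }
};

// Latch-free: no thread ever waits on a lock held by a descheduled thread.
struct LatchFreeCounter {
  std::atomic<long> value{0};
  void increment() {
    long cur = value.load();
    // Retry the compare-and-swap until our update wins; `cur` is refreshed
    // with the current value on each failed attempt.
    while (!value.compare_exchange_weak(cur, cur + 1)) {
    }
  }
};

int main() {
  LatchFreeCounter c;
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i)
    workers.emplace_back([&] { for (int j = 0; j < 100000; ++j) c.increment(); });
  for (auto& t : workers) t.join();
  std::cout << "count = " << c.value.load() << "\n";  // 400000
}
```

Note that the retry loop itself burns cycles under heavy contention, so the theoretical progress guarantee does not automatically translate into better throughput in practice.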
Early iterations of datacenter-scale computing were a reaction to the expensive multiprocessors and supercomputers of their day. They were built on clusters of commodity hardware, which at the time were packaged with 2–4 CPUs. However, as datacenter-scale computing has matured, cloud vendors have provided denser, more powerful hardware. Today's cloud …