The Efficient Server Audit Problem, Deduplicated Re-execution, and the Web

@article{Tan2017TheES,
  title={The Efficient Server Audit Problem, Deduplicated Re-execution, and the Web},
  author={Cheng Tan and Lingfan Yu and Joshua B. Leners and Michael Walfish},
  journal={Proceedings of the 26th Symposium on Operating Systems Principles},
  year={2017}
}
You put a program on a concurrent server, but you don't trust the server; later, you get a trace of the actual requests that the server received from its clients and the responses that it delivered. You separately get logs from the server; these are untrusted. How can you use the logs to efficiently verify that the responses were derived from running the program on the requests? This is the Efficient Server Audit Problem, which abstracts real-world scenarios, including running a web application… 
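For concreteness, below is a minimal sketch in Python of the naive audit that this problem statement rules out as too expensive: the verifier re-executes the program on every request in the trusted trace and compares the result against the recorded response. Names such as Trace and audit_by_reexecution are illustrative, not the paper's API, and the sketch assumes a stateless, deterministic handler; the paper instead targets concurrent, stateful servers, where the verifier must additionally use the server's untrusted logs to reconstruct an admissible ordering of requests and to deduplicate re-execution, which is what makes the audit efficient.

```python
# Minimal sketch of the audit setting from the abstract, assuming a stateless,
# deterministic request handler. This is the naive baseline (full re-execution
# of every request), not the paper's efficient, log-assisted audit.
# All names here (Trace, audit_by_reexecution) are illustrative.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Request = str
Response = str

@dataclass
class Trace:
    """Trusted trace: the requests the server received and the responses it delivered."""
    pairs: List[Tuple[Request, Response]]

def audit_by_reexecution(handler: Callable[[Request], Response], trace: Trace) -> bool:
    """Accept iff every recorded response equals what the program produces
    when re-executed on the corresponding request."""
    for request, observed_response in trace.pairs:
        if handler(request) != observed_response:
            return False  # response was not derived from running the program
    return True

if __name__ == "__main__":
    # Toy deterministic "program": echo the request in upper case.
    handler = lambda req: req.upper()
    trace = Trace(pairs=[("hello", "HELLO"), ("audit", "AUDIT")])
    print(audit_by_reexecution(handler, trace))  # True
```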
Proving the correct execution of concurrent services in zero-knowledge
TLDR
Spice is introduced, a system for building verifiable state machines (VSMs) that produces proofs establishing that requests were executed correctly according to a specification; it is the first system that can succinctly prove the correct execution of concurrent services.
Cobra: Making Transactional Key-Value Stores Verifiably Serializable
TLDR
Cobra is the first system that combines (a) black-box checking of (b) serializability while (c) scaling to real-world online transactional processing workloads; it introduces several new techniques, including a new encoding of the validity condition.
Understanding and detecting server-side request races in web applications
TLDR
A dynamic framework, ReqRacer, is developed for detecting and exposing server-side request races in web applications; it proposes novel approaches to modeling the happens-before relationships between HTTP requests that are essential to web applications.
Practical Verification of MapReduce Computation Integrity via Partial Re-execution
TLDR
V-MR (Verifiable MapReduce) is presented, a framework that verifies the integrity of MapReduce computations outsourced to the untrusted cloud via partial re-execution; it can detect violations of MapReduce computation integrity and identify the malicious workers that produced the incorrect computation.
SPEED: Accelerating Enclave Applications Via Secure Deduplication
TLDR
This work proposes SPEED, a secure and generic computation deduplication system in the context of Intel SGX that allows SGX-enabled applications to identify redundant computations and reuse computation results, while protecting the confidentiality and integrity of code, inputs, and results.
Execution integrity without implicit trust of system software
TLDR
This paper describes a TO design that inherently does not require any trust of system call results (and thus of the kernel or hypervisor), and DOG, a prototype TO implementation for Intel SGX that upholds application execution integrity, even for applications that do not fit within today's SGX virtual memory limits, and incurs modest execution overhead.
A Characteristic Study of Deadlocks in Database-Backed Web Applications
TLDR
A characteristic study is conducted with 49 deadlocks collected from real-world web applications developed following different programming paradigms; it provides categorization results based on HTTP request numbers and resource types, with a special focus on categorizing deadlocks on database locks.
QShield: Protecting Outsourced Cloud Data Queries With Multi-User Access Control Based on SGX
TLDR
It is shown that QShield can securely query outsourced data with high efficiency and scalable multi-user support; a trust-proof mechanism is embedded into QShield to guarantee the trustworthiness of TEE function invocation.
Transparency Dictionaries with Succinct Proofs of Correct Operation
This paper introduces Verdict, a transparency dictionary, where an untrusted service maintains a label-value map that clients can query and update (foundational infrastructure for end-to-end encryption and other applications).
Custos: Practical Tamper-Evident Auditing of Operating Systems Using Trusted Execution
TLDR
CUSTOS, a practical framework for detecting tampering in system logs, forces anti-forensic attackers into a “lose-lose” situation: they can either be covert and not tamper with logs, or erase logs but then be detected by CUSTOS.

References

Ripley: automatically securing web 2.0 applications through replicated execution
TLDR
Ripley is a system that uses replicated execution to automatically preserve the integrity of a distributed computation and is built on top of Volta, a distributing compiler that translates .NET applications into JavaScript, effectively providing a measure of security by construction for Volta applications.
Efficient Patch-based Auditing for Web Application Vulnerabilities
TLDR
POIROT's techniques allow it to audit past requests 12-51× faster than the time it took to originally execute the same requests, for patches to code executed by every request, under a realistic MediaWiki workload.
Verifying computations with state
TLDR
Pantry composes proof-based verifiable computation with untrusted storage: the client expresses its computation in terms of digests that attest to state, and verifiably outsources that computation.
vSQL: Verifying Arbitrary SQL Queries over Dynamic Outsourced Databases
Cloud database systems such as Amazon RDS or Google Cloud SQL enable the outsourcing of a large database to a server that then responds to SQL queries. A natural problem here is to efficiently verify…
Pinocchio: Nearly Practical Verifiable Computation
TLDR
This work introduces Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions; it is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps).
VC3: Trustworthy Data Analytics in the Cloud Using SGX
We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results.
Practical byzantine fault tolerance and proactive recovery
TLDR
A new replication algorithm, BFT, is described; it can be used to build highly available systems that tolerate Byzantine faults and is used to implement BFS, the first Byzantine-fault-tolerant NFS file system.
Secure Deduplication of General Computations
TLDR
Evaluation of UNIC on four popular open-source applications shows that UNIC is easy to use, fast, and incurs little storage overhead.
Verena: End-to-End Integrity Protection for Web Applications
TLDR
Verena is presented, a web application platform that provides end-to-end integrity guarantees against attackers that have full access to the web and database servers and can support real applications with modest overhead.
What Consistency Does Your Key-Value Store Actually Provide?
TLDR
By analyzing the trace of interactions between the client machines and a key-value store, the algorithms can report whether the trace is safe, regular, or atomic, and if not, how many violations there are in the trace.