Shruti Tople

Web browsers isolate web origins, but do not provide direct abstractions to isolate sensitive data and control computation over it within the same origin. As a result, guaranteeing security of sensitive web content requires trusting all code in the browser and client-side applications to be vulnerability-free. In this paper, we propose a new abstraction, …
Web servers are vulnerable to a large class of attacks that can allow a network attacker to steal sensitive web content. In this work, we investigate the feasibility of a web server architecture wherein the vulnerable server VM runs on a trusted cloud. All sensitive web content is made available to the vulnerable server VM in encrypted form, thereby …
Secure computation on encrypted data stored on untrusted clouds is an important goal. Existing secure arithmetic computation techniques, such as fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SWH), have prohibitive performance and/or storage costs for the majority of practical applications. In this work, we investigate a new secure …
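The core property this line of work relies on can be illustrated with a toy additively homomorphic scheme (a one-time pad over the integers mod N — an illustration only, not the paper's construction and not secure if keys are reused): the cloud can add ciphertexts it cannot read, and the client decrypts the sum with the sum of its keys.

```python
import secrets

# Toy additively homomorphic encryption over Z_N (illustration only):
#   Enc(m) = (m + k) mod N, with a fresh random key k per value.
# Adding ciphertexts and adding keys yields an encryption of the sum,
# so an untrusted cloud can aggregate values without learning them.

N = 2**64

def encrypt(m):
    k = secrets.randbelow(N)
    return (m + k) % N, k   # ciphertext goes to the cloud; key stays with the client

def decrypt(c, k):
    return (c - k) % N

values = [17, 25, 8]
pairs = [encrypt(v) for v in values]

# The cloud sums ciphertexts only; it never sees plaintexts or keys.
c_sum = sum(c for c, _ in pairs) % N
k_sum = sum(k for _, k in pairs) % N

print(decrypt(c_sum, k_sum))  # 50
```

Schemes like FHE extend this idea to both addition and multiplication, which is precisely where the prohibitive performance and storage costs mentioned above arise.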
Cloud providers are realizing the outsourced database model in the form of database-as-a-service offerings. However, security in terms of data privacy remains an obstacle because data storage and processing are performed on an untrusted cloud. Achieving strong security under additional constraints of functionality and performance is even more challenging, …
Peer-to-peer (P2P) systems are predominantly used to distribute trust, increase availability and improve performance. A number of content-sharing P2P systems, for file-sharing applications (e.g., BitTorrent and Storj) and more recent peer-assisted CDNs (e.g., Akamai NetSession), are finding wide deployment. A major security concern with content-sharing P2P …
Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These …
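The poisoning attack described above can be sketched with a minimal label-flipping example (a hypothetical collaborative setup, not the paper's scheme): honest users pool correctly labeled data, one adversarial user contributes the same kind of data with every label flipped, and the jointly trained model degrades on held-out test data.

```python
import numpy as np

# Minimal label-flipping poisoning sketch on a logistic-regression "model"
# (stand-in for the deep model; setup and sizes are illustrative assumptions).
rng = np.random.default_rng(0)

def make_data(n, flip=False):
    # Two well-separated Gaussian clusters with labels 0/1.
    X = np.vstack([rng.normal(-2.0, 1.0, size=(n // 2, 2)),
                   rng.normal(+2.0, 1.0, size=(n // 2, 2))])
    y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
    if flip:
        y = 1.0 - y  # the adversary mislabels every point in their share
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient step on log loss
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0) == (y == 1))

X_honest, y_honest = make_data(200)   # pooled honest contributions
X_test, y_test = make_data(200)       # held-out evaluation data
X_bad, y_bad = make_data(200, flip=True)  # one user's poisoned share

w_clean = train_logreg(X_honest, y_honest)
w_poisoned = train_logreg(np.vstack([X_honest, X_bad]),
                          np.concatenate([y_honest, y_bad]))

print(accuracy(w_clean, X_test, y_test))     # near 1.0 on this separable data
print(accuracy(w_poisoned, X_test, y_test))  # visibly degraded by the flipped labels
```

Defenses against such attacks typically try to detect or down-weight contributions whose gradients disagree with the majority, which is the direction this line of work explores.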