• Corpus ID: 17720004

Icebergs in the Clouds: The Other Risks of Cloud Computing

  • B. Ford
  • Published 8 March 2012
  • Computer Science
  • ArXiv
Cloud computing is appealing from management and efficiency perspectives, but brings risks both known and unknown. Well-known and hotly-debated information security risks, due to software vulnerabilities, insider attacks, and side-channels for example, may be only the "tip of the iceberg." As diverse, independently developed cloud services share ever more fluidly and aggressively multiplexed hardware resource pools, unpredictable interactions between load-balancing and other reactive mechanisms… 


Secure the Cloud
In response to the revival of virtualization technology by Rosenblum and Garfinkel [2005], NIST defined cloud computing, a new paradigm in service computing infrastructures. In cloud environments, the…
Auditing the Structural Reliability of the Clouds
The cloud Structural Reliability Auditor enables a cloud administrator to evaluate risks within the cloud in advance and improve the reliability of her service deployments before critical failure events occur.
An untold story of redundant clouds: making your service deployment truly reliable
iRec is presented, a cloud independence recommender system that uses a novel protocol to calculate the weighted number of overlapping infrastructure components among different cloud providers, while preserving the secrecy of each cloud provider's proprietary information.
Making Availability as a Service in the Clouds
A new win-win concept for cloud users and providers in terms of 'Availability as a Service' (abbreviated as 'AaaS') is proposed, to provide comprehensive and aim-specific runtime availability analysis services for cloud users by integrating plenty of data-driven and model-driven approaches.
Towards Reliable Application Deployment in the Cloud
The experimental results show that, even in a large cloud environment with more than 27K hosts, ReCloud needs only 30 seconds to find a deployment plan that is one order of magnitude more reliable than the common practice.
SFAPCC: A Secure and Flexible Architecture for Public Cloud Computing
A secure and flexible architecture, called SFAPCC, is proposed to address two challenges for public cloud computing, and it is argued that more comprehensive controls over public cloud services need to be provided for users.
Techniques for Optimizing Cloud Footprint
  • A. Kejariwal
  • Computer Science
    2013 IEEE International Conference on Cloud Engineering (IC2E)
  • 2013
Novel techniques to optimize operational efficiency in the cloud are presented; they resulted in up to a 50% reduction in operational costs for the target Netflix applications.
Secure data service outsourcing with untrusted cloud
This dissertation introduces service-centric solutions to address two types of security threats existing in current cloud environments: semi-honest cloud providers and malicious cloud customers. It also designs and realizes CloudSafe, a framework that supports secure and efficient data processing with minimum key leakage in the vulnerable cloud virtualization environment.
SKYDA applies high-performance messaging and fault tolerance protocols to both the network and the SCADA Master application itself, resulting in a SCADA system that is easier to deploy and offers a lower total cost of ownership, significantly higher availability, better security and better performance than is possible today.
Failure Recovery: When the Cure Is Worse Than the Disease
It is proposed that failure recovery should be engineered foremost according to the maxim of primum non nocere, "first, do no harm," recovering only when observed activity safely allows for it.
CloudVisor: retrofitting protection of virtual machines in multi-tenant cloud with nested virtualization
This paper proposes a transparent, backward-compatible approach that protects the privacy and integrity of customers' virtual machines on commodity virtualized infrastructures, even facing a total compromise of the virtual machine monitor (VMM) and the management VM.
The Xen-Blanket: virtualize once, run everywhere
The Xen-Blanket is introduced, a thin, immediately deployable virtualization layer that can homogenize today's diverse cloud infrastructures and shows that a user-centric approach to homogenizing clouds can achieve similar performance to a paravirtualized environment while enabling previously impossible tasks like cross-provider live migration.
NoHype: virtualized cloud infrastructure without the virtualization
The NoHype architecture, named to indicate the removal of the hypervisor, addresses each of the key roles of the virtualization layer: arbitrating access to CPU, memory, and I/O devices, acting as a network device, and managing the starting and stopping of guest virtual machines.
No "power" struggles: coordinated multi-level power management for the data center
This paper proposes and validate a power management solution that coordinates different individual approaches and performs a detailed quantitative sensitivity analysis to draw conclusions about the impact of different architectures, implementations, workloads, and system design choices.
R3: resilient routing reconfiguration
This paper proposes Resilient Routing Reconfiguration (R3), a novel routing protection scheme that is provably congestion-free under a large number of failure scenarios, efficient by having low router processing overhead and memory requirements, and robust to both topology failures and traffic variations.
The LOCKSS peer-to-peer digital preservation system
The LOCKSS project presents a design for and simulations of a novel protocol for voting in systems of this kind that incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected.
Auditing to Keep Online Storage Services Honest
It is argued that third-party auditing is important in creating an online service-oriented economy, because it allows customers to evaluate risks, and it increases the efficiency of insurance-based risk mitigation.
Venti: A New Approach to Archival Storage
The feasibility of the write-once model for storage is demonstrated using data from over a decade's use of two Plan 9 file systems, resulting in an access time for archival data that is comparable to non-archival data.
Making information flow explicit in HiStar
HiStar is a new operating system designed to minimize the amount of code that must be trusted, which allows users to specify precise data security policies without unduly limiting the structure of applications.
Deciding when to forget in the Elephant file system
This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files and contrasts with checkpointing file systems such as Plan 9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems.