In this work, we are concerned with the cost associated with replicating intermediate data for dataflows in Cloud environments. This cost is attributed to the extra resources required to create and maintain the additional replicas for a given data set. Existing data-analytic platforms such as Hadoop provide fault-tolerance guarantees by relying on (More)
In this work, we optimize the admission policy for application deployment requests submitted to data centers. Data centers typically comprise many physical servers. However, their resources are limited, and demand can occasionally exceed what the system can handle, resulting in lost opportunities. Since different requests typically have (More)
We demonstrate a prototype system called COLD, under development at IBM Research, which provides optimized deployment of workloads in the cloud. A workload refers to an application, consisting of virtual entities (e.g., VMs, volumes), to be deployed on a cloud infrastructure consisting of physical entities (e.g., PMs, storage). The resource requirements of (More)