Daniele Cesini

Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid …
The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. …
Grid infrastructures, such as the one provided by the European Grid Infrastructure, represent suitable solutions for achieving cost-effective computational power. However, these solutions present Grid users with certain challenges, such as creating a customized computing environment and managing it. For this reason a Cloud-like approach is for certain …
Cloud computing opens new perspectives for small-medium biotechnology laboratories that need to perform bioinformatics analysis in a flexible and effective way. This seems particularly true for hybrid clouds that couple the scalability offered by general-purpose public clouds with the greater control and ad hoc customizations supplied by private ones. A …
Distributed Computing Infrastructures have dedicated mechanisms to provide user communities with computational environments. While in the last decade the Grid has proven to be a powerful paradigm for supporting scientific research, the complexity of the user experience still limits its adoption by unskilled user communities. Command line interfaces, …
Scheduling services are core Grid components of paramount importance for supporting the transparent distribution of tasks to remote shared resources in an efficient way. High availability of these core services is thus of great importance. Given the distributed nature of the system, monitoring the task lifecycle and the aggregate workflow patterns generated by …
Even though the Italian Grid Infrastructure (IGI) is a general-purpose distributed platform, in the past it has been used mainly for serial computations. Parallel applications have typically been executed on supercomputer facilities or, in the case of "not high-end" HPC applications, on local commodity parallel clusters. Nowadays, with the availability of …
The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute of Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP …
Low-power Systems-on-Chip (SoCs), originally developed in the context of mobile and embedded technologies, are becoming attractive to the scientific community given their increasing computing performance, coupled with relatively low cost and power demand. In this work, we investigate the potential of SoCs for realistic scientific workloads, in particular …
The embedded and high-performance computing (HPC) sectors, which in the past were completely separate, are now converging under the pressure of two driving forces: the release of lower-power server processors and the increased performance of new low-power Systems-on-Chip (SoCs) developed to meet the requirements of the demanding mobile …