The increasing demand for high-performance computing resources has led to new forms of collaboration among distributed systems, such as grid computing systems. Moreover, the need for interoperability among different grid systems, through the use of common protocols and standards, has grown in recent years. In this paper we describe and evaluate …
This paper presents techniques for efficient thread forking and joining in parallel execution environments, taking into account the physical structure of NUMA machines and the support for multi-level parallelization and processor grouping. Two work generation schemes and one join mechanism are designed, implemented, evaluated, and compared with …
One of the current challenges in Grid computing is the efficient scheduling of HPC applications. In the eNANOS project we propose a 3-layer coordinated scheduling architecture for the execution of HPC applications, from the Grid level down to the processor scheduling level. In this paper we propose an architecture that allows the resource broker to schedule …
Scheduling parallel applications on shared-memory multiprocessors is a difficult task that requires substantial tuning by application programmers, as well as by operating system developers and system managers. In this paper, we present the characteristics of kernel-level scheduling in the NANOS environment and the results we are achieving. The NANOS …
Many commercial job scheduling strategies in multiprocessing systems tend to minimize the waiting times of short jobs. However, long jobs cannot be left aside, as their impact on system performance is also decisive. In this work we propose a job scheduling strategy that maximizes resource utilization and improves overall performance by …
This work focuses on processor allocation in shared-memory multiprocessor systems, where no knowledge of the application is available at submission time. We perform processor allocation taking into account the characteristics of the application measured at run time. We want to demonstrate the importance of an accurate performance …
OpenMP is in the process of adding a tasking model that allows the programmer to specify independent units of work, called tasks, but it does not specify how these tasks should be scheduled (although it imposes some restrictions). We have evaluated different scheduling strategies (schedulers and cutoffs) with several applications and found that …
In task-parallel languages, an important factor in achieving good performance is the use of a cutoff technique to reduce the number of tasks created. Using a cutoff to avoid an excessive number of tasks helps the runtime system reduce the total overhead associated with task creation, particularly when the tasks are fine-grained. Unfortunately, the best …