Francisco Almeida

The MALLBA project tackles the resolution of combinatorial optimization problems using generic algorithmic skeletons implemented in C++. A skeleton in the MALLBA library implements an optimization method in one of the three families of generic optimization techniques offered: exact, heuristic and hybrid. Moreover, for each of those methods, MALLBA provides …
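As a rough illustration of the skeleton idea (the types and names below are invented for this sketch and do not correspond to MALLBA's actual API), the user supplies the problem-specific pieces while the library supplies a generic engine that could equally be instantiated sequentially or in parallel:

```cpp
// Hypothetical sketch only; NOT MALLBA's real classes or interface.
#include <cstddef>
#include <vector>
#include <iostream>

struct KnapsackProblem {                 // problem-specific data
    std::vector<int> weight, value;
    int capacity;
};

struct KnapsackSolution {                // problem-specific solution
    std::vector<bool> chosen;
    int value = 0;
};

template <typename Problem, typename Solution>
struct GreedySkeleton {                  // generic engine ("skeleton")
    Solution run(const Problem& p) const {
        Solution s;
        s.chosen.assign(p.weight.size(), false);
        int remaining = p.capacity;
        for (std::size_t i = 0; i < p.weight.size(); ++i)
            if (p.weight[i] <= remaining) {   // placeholder greedy strategy
                s.chosen[i] = true;
                s.value += p.value[i];
                remaining -= p.weight[i];
            }
        return s;
    }
};

int main() {
    KnapsackProblem p{{2, 3, 4}, {3, 4, 6}, 5};
    GreedySkeleton<KnapsackProblem, KnapsackSolution> engine;
    std::cout << "greedy value: " << engine.run(p).value << '\n';  // prints 7
}
```

In a real skeleton library the engine template would encapsulate an exact, heuristic or hybrid search strategy, so the same user-written problem classes can be reused across sequential and parallel instantiations.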
Following Karp's discrete Dynamic Programming (DP) approach, this work extends the sequential model for monadic DP to the parallel case. We propose general parallel DP algorithms for pipeline and ring networks. The study of the optimality of these algorithms leads us to the introduction of new classes of multistage automata. However, the important class of …
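A minimal sketch of the kind of monadic (multistage) functional equation such algorithms evaluate, with uniform illustrative costs; in a pipeline parallelization each processor would own one stage and stream its state vector to the next, whereas here the stages are simply evaluated in order:

```cpp
// Monadic multistage DP recurrence:  f[k][j] = min_i ( f[k-1][i] + c[k](i,j) ).
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

int main() {
    const int stages = 3, states = 4;
    // c[k][i][j]: transition cost from state i at stage k-1 to state j at
    // stage k (arbitrary illustrative values, all set to 1).
    std::vector<std::vector<std::vector<int>>> c(
        stages, std::vector<std::vector<int>>(states, std::vector<int>(states, 1)));

    std::vector<int> f(states, 0);               // f[0][*] = 0
    for (int k = 0; k < stages; ++k) {           // one pipeline stage per k
        std::vector<int> next(states, std::numeric_limits<int>::max());
        for (int j = 0; j < states; ++j)
            for (int i = 0; i < states; ++i)
                next[j] = std::min(next[j], f[i] + c[k][i][j]);
        f.swap(next);
    }
    std::cout << "optimal cost to each final state:";
    for (int v : f) std::cout << ' ' << v;       // prints 3 3 3 3
    std::cout << '\n';
}
```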
Dynamic programming is an important combinatorial optimization technique that has been widely used in various fields such as control theory, operations research, computational biology and computer science. Many authors have described parallel dynamic programming algorithms for the family of multistage problems. More scarce is the literature for the more …
The parallelization of the dynamic programming algorithm for the integral knapsack problem is approached from several perspectives. Two of them proceed by dividing the set of objects, while a third one proceeds by partitioning the set of capacities. Furthermore, we propose a new sequential algorithm and its parallelization by reducing the integral knapsack …
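To make the underlying recurrence concrete, here is a minimal sequential sketch of the integer knapsack DP on an invented toy instance; the parallel variants mentioned above would split either the object loop or the capacity range across processors, which this sketch does not attempt:

```cpp
// Sequential DP for the (unbounded) integer knapsack.
// f[c] = best profit achievable with total weight <= c.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const int capacity = 10;                        // illustrative instance
    const std::vector<int> weight = {2, 3, 4};
    const std::vector<int> profit = {3, 4, 6};

    std::vector<int> f(capacity + 1, 0);
    for (std::size_t i = 0; i < weight.size(); ++i)   // object loop
        for (int c = weight[i]; c <= capacity; ++c)   // capacity loop
            f[c] = std::max(f[c], f[c - weight[i]] + profit[i]);

    std::cout << "optimal profit: " << f[capacity] << '\n';  // prints 15
}
```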
The mallba project tackles the resolution of combinatorial optimization problems using algorithmic skeletons implemented in C++. mallba offers three families of generic resolution methods: exact, heuristic and hybrid. Moreover, for each resolution method, mallba provides three different implementations: sequential, parallel for local area networks, and …
The evolution of the architecture of massively parallel computers is progressing toward systems with a hierarchical hardware design, where each node is a shared-memory system with several multi-core CPUs. There is consensus in the HPC community about the need to increase the efficiency of the programming paradigms used to exploit massive parallelism, if we …
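One widely used way to match such node-level hierarchy, shown here purely as a generic illustration rather than as the approach of the work above, is hybrid programming: message passing across nodes combined with shared-memory threading within a node.

```cpp
// Generic hybrid MPI + OpenMP sketch (illustrative only).
// Compile with an MPI compiler wrapper and -fopenmp.
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each MPI process maps to a node (or socket); each OpenMP thread to a core.
    #pragma omp parallel
    {
        std::printf("process %d/%d, thread %d/%d\n",
                    rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```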
The advent of multicore systems, together with the potential acceleration offered by graphics processing units, alleviates some well-known architectural problems at the expense of a considerably higher programmability wall. Heterogeneity at both the architectural and the programming level further increases the programming difficulty. …