Leonardo Ramírez-Guzmán

Parallel supercomputing has typically focused on the inner kernel of scientific simulations: the solver. The front and back ends of the simulation pipeline—problem description and interpretation of the output—have taken a back seat to the solver when it comes to attention paid to scalability and performance, and are often relegated to offline, sequential …
We have developed a novel analytic capability for scientists and engineers to obtain insight from ongoing large-scale parallel unstructured mesh simulations running on thousands of processors. The breakthrough is made possible by a new approach that visualizes partial differential equation (PDE) solution data simultaneously while a parallel PDE solver …
State-of-the-art numerical solvers in Earth Sciences produce multi-terabyte datasets per execution. Operating on increasingly large datasets becomes challenging due to insufficient data bandwidth. Queries result in difficult-to-handle I/O access patterns. BEMC is a new mechanism that allows querying and processing wavefields in the compressed …
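
The following is an illustrative sketch only, not the BEMC scheme itself: it shows the general idea such compressed-domain querying builds on, namely storing a wavefield in independently compressed chunks with an implicit index so that a query decompresses only the chunks it touches rather than the whole multi-terabyte dataset. The chunk size, data layout, and function names are invented for this example.

# Minimal sketch of chunk-indexed compression (assumed layout, not BEMC).
import zlib
import numpy as np

CHUNK = 1024  # samples per independently compressed chunk (assumed granularity)

def compress_wavefield(samples: np.ndarray) -> list:
    """Split a 1-D wavefield into fixed-size chunks and compress each one independently."""
    chunks = []
    for start in range(0, len(samples), CHUNK):
        raw = samples[start:start + CHUNK].astype(np.float32).tobytes()
        chunks.append(zlib.compress(raw))
    # The list position is the index: chunk i covers samples [i*CHUNK, (i+1)*CHUNK).
    return chunks

def query(chunks: list, lo: int, hi: int) -> np.ndarray:
    """Return samples in [lo, hi) by decompressing only the chunks that overlap the range."""
    first, last = lo // CHUNK, (hi - 1) // CHUNK
    parts = [np.frombuffer(zlib.decompress(chunks[i]), dtype=np.float32)
             for i in range(first, last + 1)]
    window = np.concatenate(parts)
    return window[lo - first * CHUNK : hi - first * CHUNK]

if __name__ == "__main__":
    field = np.sin(np.linspace(0, 100, 10_000))   # stand-in for a seismic wavefield trace
    stored = compress_wavefield(field)
    assert np.allclose(query(stored, 2_500, 2_600), field[2_500:2_600])

The point of the design is that the I/O access pattern of a query becomes a small, contiguous set of chunk reads instead of a scan over the full dataset.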
We demonstrate a new scalable approach to real-time monitoring, visualization, and steering of massively parallel simulations from a personal computer. The basis is an end-to-end approach to parallel supercomputing in which all components — meshing, partitioning, solver, and visualization — are tightly coupled and execute in parallel on a supercomputer. …
Conventional parallel scientific computing uses files as the interface between simulation components such as meshing, partitioning, solving, and visualizing. This approach results in time-consuming file transfers, disk I/O, and data format conversions that consume large amounts of network, storage, and computing resources while contributing nothing to …
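
As a purely hypothetical sketch (not the authors' code), the contrast argued for above can be pictured as pipeline stages that hand arrays directly to one another in memory instead of exchanging files on disk. The stage names and toy data layouts below are invented for illustration.

# Toy end-to-end pipeline: mesh -> partition -> solve -> visualize, coupled in memory.
import numpy as np

def mesh(n: int) -> np.ndarray:
    """Generate a toy 1-D 'mesh' of node coordinates."""
    return np.linspace(0.0, 1.0, n)

def partition(nodes: np.ndarray, parts: int) -> list:
    """Split the mesh into contiguous pieces, one per (imagined) processor."""
    return np.array_split(nodes, parts)

def solve(piece: np.ndarray) -> np.ndarray:
    """Stand-in 'solver': evaluate a field on the local piece of the mesh."""
    return np.sin(2 * np.pi * piece)

def visualize(fields: list) -> float:
    """Stand-in 'visualization': reduce the solution to a single summary value."""
    return float(np.max(np.concatenate(fields)))

if __name__ == "__main__":
    # Each stage passes its arrays directly to the next: no intermediate files,
    # no format conversions, no disk I/O between components.
    pieces = partition(mesh(1_000), parts=4)
    print("peak value:", visualize([solve(p) for p in pieces]))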