Corpus ID: 235727639

An Efficient Particle Tracking Algorithm for Large-Scale Parallel Pseudo-Spectral Simulations of Turbulence

Cristian Constantin Lalescu, Bérenger Bramas, Markus Rampp, Michael Wilczek
Particle tracking in large-scale numerical simulations of turbulent flows presents one of the major bottlenecks in parallel performance and scaling efficiency. Here, we describe a particle tracking algorithm for large-scale parallel pseudo-spectral simulations of turbulence which scales well up to billions of tracer particles on modern high-performance computing architectures. We summarize the standard parallel methods used to solve the fluid equations in our hybrid MPI/OpenMP implementation… 
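As a rough illustration of the tracking step described above (not the authors' MPI/OpenMP implementation), the following sketch advects tracer particles through a periodic velocity field stored on a regular grid, using trilinear interpolation and an explicit Euler step. The grid resolution, box size, and interpolation order are simplifying assumptions; production pseudo-spectral codes typically use higher-order interpolation and time integration, plus distributed-memory communication.

```python
import numpy as np

def trilinear_interp(field, pos, L):
    """Interpolate a periodic grid field of shape (N, N, N, 3)
    at particle positions pos of shape (P, 3) in a box of size L."""
    N = field.shape[0]
    x = pos / L * N                      # positions in grid units
    i0 = np.floor(x).astype(int) % N     # lower-corner grid indices
    f = x - np.floor(x)                  # fractional offsets in [0, 1)
    out = np.zeros((pos.shape[0], 3))
    # accumulate contributions from the 8 surrounding grid points
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[:, 0]) if dx == 0 else f[:, 0]) \
                  * ((1 - f[:, 1]) if dy == 0 else f[:, 1]) \
                  * ((1 - f[:, 2]) if dz == 0 else f[:, 2])
                idx = ((i0[:, 0] + dx) % N,
                       (i0[:, 1] + dy) % N,
                       (i0[:, 2] + dz) % N)
                out += w[:, None] * field[idx]
    return out

def advect(pos, field, L, dt):
    """One explicit Euler step of dx/dt = u(x), with periodic wrapping."""
    u = trilinear_interp(field, pos, L)
    return (pos + dt * u) % L
```

In a parallel setting, the expensive part is that particles move across the domain decomposition of the grid, so interpolation stencils near subdomain boundaries require ghost layers or communication, which is the scaling bottleneck the paper addresses.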


A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations
A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations…
An algorithm for tracking fluid particles in numerical simulations of homogeneous turbulence
Lagrangian statistical quantities are of fundamental physical importance in our understanding of turbulence, but are very difficult to measure and hence infrequently reported in the…
Optimal interpolation schemes for particle tracking in turbulence.
A practical method is proposed that enables direct estimation of the interpolation and discretization error from the energy spectrum and it is shown that B-spline interpolation has the best accuracy given the computational cost.
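The trade-off this entry describes (interpolation order versus accuracy at fixed grid spacing) can be illustrated in one dimension. In this hedged sketch, a sine wave stands in for a smooth velocity component (an assumption for illustration), and piecewise-linear interpolation is compared against a cubic spline on the same coarse grid.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Coarse sampling grid: 16 cells over one period.
x = np.linspace(0, 2 * np.pi, 17)
y = np.sin(x)
# Fine evaluation points to measure the interpolation error.
xf = np.linspace(0, 2 * np.pi, 1000)

# Maximum pointwise error of each scheme against the exact signal.
err_linear = np.max(np.abs(np.interp(xf, x, y) - np.sin(xf)))
err_cubic = np.max(np.abs(CubicSpline(x, y)(xf) - np.sin(xf)))

print(err_linear, err_cubic)  # cubic error is substantially smaller
```

For a smooth field, linear interpolation error scales as h² while cubic-spline error scales as h⁴, so higher-order schemes buy accuracy per grid point at the cost of wider stencils and more arithmetic, which is exactly the cost-accuracy balance the cited paper quantifies for turbulence spectra.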
A dual communicator and dual grid-resolution algorithm for petascale simulations of turbulent mixing at high Schmidt number
A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection–diffusion equation on the petascale supercomputer Blue Waters.
Acceleration statistics of tracer particles in filtered turbulent fields
We present results from direct numerical simulations of tracer particles advected in filtered velocity fields to quantify the impact of the scales of turbulence on Lagrangian acceleration statistics.
How tracer particles sample the complexity of turbulence
On their roller coaster ride through turbulence, tracer particles sample the fluctuations of the underlying fields in space and time. Quantitatively relating particle and field statistics remains a…
16.4-Tflops Direct Numerical Simulation of Turbulence by a Fourier Spectral Method on the Earth Simulator
High-resolution direct numerical simulations of incompressible turbulence with numbers of grid points up to 4096³ have been executed on the Earth Simulator, based on the Fourier spectral method. The resulting energy spectrum exhibits a wide inertial subrange, in contrast to previous DNSs with lower resolutions, and therefore provides valuable data for the study of the universal features of turbulence at large Reynolds number.
Lagrangian statistics from direct numerical simulations of isotropic turbulence
A comprehensive study is reported of the Lagrangian statistics of velocity, acceleration, dissipation and related quantities, in isotropic turbulence. High-resolution direct numerical simulations are…
Impact of the floating-point precision and interpolation scheme on the results of DNS of turbulence by pseudo-spectral codes
This paper investigates the impact of the floating-point precision and interpolation scheme on the results of direct numerical simulations of turbulence by pseudo-spectral codes and finds that single precision computations allow for increased Reynolds numbers due to the reduced amount of memory needed.
A hybrid MPI-OpenMP scheme for scalable parallel pseudospectral computations for fluid turbulence
It is shown that the hybrid scheme achieves good scalability up to ∼20,000 compute cores with a maximum efficiency of 89%, and a mean of 79%.