Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans …
This paper presents a parallel visualization pipeline implemented at the Pittsburgh Supercomputing Center (PSC) for studying the largest earthquake simulation ever performed. The simulation employs 100 million hexahedral cells to model 3D seismic wave propagation of the 1994 Northridge earthquake. The time-varying dataset produced by the simulation requires …
This paper presents I/O solutions for the visualization of time-varying volume data in a parallel and distributed computing environment. Depending on the number of rendering processors used, our I/O strategies significantly lower interframe delay by employing a set of I/O processors coupled with MPI parallel I/O support. The targeted application is …
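The MPI parallel I/O approach this abstract mentions can be illustrated with a minimal sketch in C. It assumes, hypothetically, that each timestep of the volume is stored as one raw byte-per-voxel file named timestep_0000.raw and that every I/O processor reads one contiguous brick of it with a collective call; the file name, volume dimensions, and data layout are illustrative assumptions, not details taken from the paper.

    /* Sketch: each I/O processor collectively reads its contiguous brick of
     * one timestep, letting the MPI-IO layer merge and schedule requests.
     * Volume size and file name are hypothetical. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const MPI_Offset total = 256LL * 256 * 256;   /* assumed 256^3 voxels */
        MPI_Offset chunk  = total / nprocs;           /* contiguous brick per rank */
        MPI_Offset offset = (MPI_Offset)rank * chunk;
        if (rank == nprocs - 1)
            chunk = total - offset;                   /* remainder to last rank */

        unsigned char *buf = malloc((size_t)chunk);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "timestep_0000.raw",
                      MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        /* Collective read: all I/O ranks fetch their bricks in a single call,
         * so the MPI-IO layer can coalesce the requests into large accesses. */
        MPI_File_read_at_all(fh, offset, buf, (int)chunk,
                             MPI_UNSIGNED_CHAR, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        /* ...hand buf to a rendering processor here... */
        free(buf);
        MPI_Finalize();
        return 0;
    }

In a setup like the one described, dedicated I/O processors would run reads of this kind ahead of the renderers, so the next timestep is already in memory when a frame finishes, which is how interframe delay is reduced.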
Distributed, on-demand, data-intensive, and collaborative simulation analysis tools are being developed by an international team to solve real problems in areas such as bioinformatics. The project consists of three distinct focuses: compute, visualize, and collaborate. Each component utilizes software and hardware that performs across the International …
The visualization of scientific data requires the translation of that data into a format suitable for rendering. As the number of supercomputing simulations and the number of available renderers increases, the magnitude of this translation problem increases as well. A potential solution is the use of a standard format for 3D models, which (ideally) all …