
We present and analyze strategies which can be used for the parallel computation of large numbers of integrals which may be of different levels of difficulty. Parallelization on the integral level, which is generally used for large numbers of integrals, is combined with parallelization on the subregion level, which enables handling local integration…

- Karlis Kaugars, Rodger Zanny, Elise de Doncker
- PDPTA
- 2000
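The integral-level half of the two-level strategy described above can be sketched with a minimal example (not the authors' implementation): a batch of independent integrals is farmed out to a pool of workers, each solving one integral with a composite Simpson rule. The thread-based pool and function names are illustrative; for CPU-bound integrands one would use processes or MPI instead.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# A batch of independent integrals of differing difficulty;
# integral-level parallelism assigns one integral per worker.
tasks = [
    (math.sin, 0.0, math.pi),                    # exact value: 2
    (math.exp, 0.0, 1.0),                        # exact value: e - 1
    (lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0),   # exact value: pi / 4
]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: simpson(*t), tasks))
```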

PARVIS is a visualization system for distributed, adaptive partitioning algorithms. It allows data-driven examination of the behavior of the adaptive algorithm, even for large and complex problems. For algorithm developers it supports the analysis of load balancing techniques, subregion error patterns, and the rate of algorithm convergence for specific…

- Elise de Doncker, Rodger Zanny, Manuel Ciobanu, Yuqiang Guan
- Heterogeneous Computing Workshop
- 2000

We present an asynchronous Quasi-Monte Carlo (qmc) algorithm tailored for heterogeneous environments. qmc techniques are better suited for high dimensions than adaptive methods and generally have better convergence properties than classical Monte Carlo (mc). Our algorithm focuses on the asynchronous computation of randomized lattice (Korobov) rules. Whereas…

- E. de Doncker, M. Ciobanu, Y. Guan, R. Zanny
- 2000
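The randomized lattice (Korobov) rules mentioned above can be illustrated with a short sketch: a rank-1 Korobov lattice is averaged over several random shifts, and the spread across shifts gives a practical error estimate. The generator a = 76, size n = 1021, and test integrand are illustrative choices, not parameters from the paper.

```python
import numpy as np

def korobov_lattice(n, d, a):
    """Rank-1 Korobov lattice: generator z = (1, a, a^2, ...) mod n."""
    z = np.array([pow(a, k, n) for k in range(d)])
    j = np.arange(n).reshape(-1, 1)
    return (j * z / n) % 1.0  # n points in [0, 1)^d

def randomized_korobov_estimate(f, n, d, a, shifts=10, seed=None):
    """Average f over `shifts` randomly shifted copies of the lattice;
    the sample spread across shifts yields an error estimate."""
    rng = np.random.default_rng(seed)
    pts = korobov_lattice(n, d, a)
    means = []
    for _ in range(shifts):
        shifted = (pts + rng.random(d)) % 1.0  # random shift modulo 1
        means.append(np.mean(f(shifted)))
    means = np.array(means)
    return means.mean(), means.std(ddof=1) / np.sqrt(shifts)

# Smooth 5-dimensional test integrand with exact integral 1 over [0,1]^5.
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
est, err = randomized_korobov_estimate(f, n=1021, d=5, a=76, seed=0)
```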

We present an asynchronous algorithm based on Quasi-Monte Carlo methods for the computation of multivariate integrals. Randomized Korobov and Richtmyer sequences are applied for problems of moderately high to high dimensions. We propose the use of Sobol rules in those dimensions where the integrand shows particular singular behavior. Timing results on MPI…

- Elise de Doncker, Ajay K. Gupta, Rodger Zanny, John Maile
- HiPC
- 1998

This paper addresses the design of distributed methods which incorporate numerical extrapolation into adaptive multivariate integration, in order to increase the functionality of the integration algorithms. When attempting to deal with singularities, adaptive integration algorithms need a very fine subdivision in the proximity of these "hot spots". This…

- Elise de Doncker, Rodger Zanny, Karlis Kaugars, Laurentiu Cucos
- International Conference on Computational Science
- 2001

We study the effect of irregular function behavior and dynamic task partitioning on the parallel performance of the adaptive multivariate integration algorithm currently incorporated in ParInt. In view of the implicit hot spots in the computations, load balancing is essential to maintain parallel efficiency. A convergence model is given for a class of…

We investigate the parallel work redundancy of a class of adaptive algorithms, incurred from an increase in the total work required as the number of processes increases. The phenomenon is observed in adaptive integration algorithms, which are prototypical for methods that select and partition tasks from a distributed task pool. We show that, for some…

We examine current paradigms in parallel strategies for multivariate integration algorithms. These include various process structures (centralized vs. global) and work distribution strategies (static or dynamic) in synchronous or asynchronous implementations. The target algorithm classes are Monte Carlo, quasi-Monte Carlo and adaptive. Strengths and…

- Rodger Zanny
- 1999

This is a document created when the latest version of mpich was installed (version 1.1.2) in July of 1999. It attempts to make the process of beginning to use mpich here at WMU a little smoother, and also briefly explains some more advanced topics, like using debuggers and the logger. Note that it does not explain how to program in MPI itself. This document…

The adaptive integration algorithm is effective in numerically solving integration problems. It is able to focus the application of integration rules on the portion of the integration region where the integrand is the least well-behaved. Parallel implementations must use dynamic load balancing or performance suffers. Dynamic local load-balancing techniques…
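The subregion-selection idea behind this can be sketched in one dimension (well short of ParInt's distributed multivariate version): subintervals live in a priority queue keyed by a local error estimate, and the worst interval is bisected until the total error estimate meets a tolerance. All names below are illustrative.

```python
import heapq

def simpson(f, a, b):
    """Simple three-point Simpson rule on [a, b]."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_integrate(f, a, b, tol=1e-8, max_subdiv=10_000):
    """Global adaptive quadrature: always split the subinterval
    with the largest estimated error."""
    def eval_region(lo, hi):
        whole = simpson(f, lo, hi)
        m = 0.5 * (lo + hi)
        halves = simpson(f, lo, m) + simpson(f, m, hi)
        err = abs(halves - whole) / 15.0  # standard Simpson error estimate
        return halves, err

    val, err = eval_region(a, b)
    heap = [(-err, a, b, val)]          # max-heap on error via negation
    total, total_err = val, err
    for _ in range(max_subdiv):
        if total_err <= tol:
            break
        neg_err, lo, hi, old = heapq.heappop(heap)
        total -= old
        total_err += neg_err            # remove the old contribution
        m = 0.5 * (lo + hi)
        for x0, x1 in ((lo, m), (m, hi)):
            v, e = eval_region(x0, x1)
            total += v
            total_err += e
            heapq.heappush(heap, (-e, x0, x1, v))
    return total, total_err

# sqrt has an unbounded derivative at 0, so the subdivision
# automatically clusters near that "hot spot".
val, err = adaptive_integrate(lambda x: x ** 0.5, 0.0, 1.0, tol=1e-10)
```

The queue plays the role of the (here, purely local) task pool: in a parallel setting, workers would draw and split regions from it concurrently, which is exactly where dynamic load balancing becomes necessary.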
