Process Scheduling in DSC and the Large Sparse Linear Systems Challenge

@inproceedings{Daz1993ProcessSI,
  title={Process Scheduling in DSC and the Large Sparse Linear Systems Challenge},
  author={Angel D{\'i}az and Markus A. Hitz and Erich L. Kaltofen and Austin A. Lobo and Thomas Valente},
  booktitle={DISCO},
  year={1993}
}
New features of our DSC system for distributing a symbolic computation task over a network of processors are described. A new scheduler sends parallel subtasks to those compute nodes that are best suited to handle the added CPU and memory load. Furthermore, a subtask can communicate back to the process that spawned it by a co-routine style calling mechanism. Two large experiments are described in this improved setting. We have implemented an algorithm that can prove a number of more…
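As a purely illustrative reading of the two mechanisms in the abstract, the sketch below shows load-aware placement of a subtask and a co-routine style report back to the spawning process. It is a minimal Python sketch, not DSC's actual interface; the node statistics, the scoring rule in pick_node, and the generator-based callback are all assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_load: float    # recent CPU load on the node (hypothetical metric)
    free_mem_mb: int   # free memory reported by the node, in MB

def pick_node(nodes, task_mem_mb, mem_weight=0.001):
    """Pick the compute node best suited for the added CPU and memory load.
    Nodes that cannot hold the subtask's memory estimate are excluded; among
    the rest, the lowest combined score wins. The scoring rule is an
    illustrative assumption, not DSC's scheduler."""
    feasible = [n for n in nodes if n.free_mem_mb >= task_mem_mb]
    if not feasible:
        raise RuntimeError("no compute node can take the subtask")
    return min(feasible,
               key=lambda n: n.cpu_load + mem_weight * (task_mem_mb - n.free_mem_mb))

def subtask(data):
    """Co-routine style subtask: it yields an intermediate result back to the
    process that spawned it, receives a reply, and then continues."""
    partial = sum(data)                  # stand-in for a partial symbolic result
    reply = yield ("partial", partial)
    yield ("final", partial if reply == "stop" else 2 * partial)

if __name__ == "__main__":
    nodes = [Node("orion", 0.3, 64), Node("lyra", 1.7, 256), Node("vega", 0.1, 16)]
    print("dispatch to:", pick_node(nodes, task_mem_mb=32).name)

    task = subtask([1, 2, 3])
    print(next(task))                    # the subtask reports back to its parent
    print(task.send("continue"))         # the parent replies; the subtask resumes

Citations
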
Fifteen years after DSC and WLSS2 what parallel computations I do today: invited lecture at PASCO 2010
TLDR
An important technique in symbolic computation is the evaluation/interpolation paradigm, and multivariate sparse polynomial parallel interpolation constitutes a keystone operation, for which a new algorithm is presented.
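The evaluation/interpolation paradigm mentioned here parallelizes naturally: evaluate the operands at many points, combine the values independently (one cheap work unit per point, which a system like DSC can ship to different nodes), and interpolate the result. The following minimal Python sketch, over the rationals, multiplies two dense univariate polynomials this way; the cited work addresses the much harder sparse multivariate case, so this illustrates only the paradigm, not that algorithm.

from fractions import Fraction

def eval_poly(coeffs, x):
    """Evaluate a polynomial given by its coefficients, lowest degree first."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def poly_mul_linear(p, a):
    """Multiply the polynomial p (lowest degree first) by (x - a)."""
    out = [Fraction(0)] * (len(p) + 1)
    for k, c in enumerate(p):
        out[k] -= a * c
        out[k + 1] += c
    return out

def lagrange_interpolate(points):
    """Coefficients of the unique polynomial through the given (x, y) points."""
    n = len(points)
    result = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj)
                denom *= (xi - xj)
        scale = Fraction(yi) / denom
        for k, c in enumerate(basis):
            result[k] += scale * c
    return result

def multiply_by_evaluation(f, g):
    """Multiply two polynomials by evaluation/interpolation.
    Each pointwise product is an independent work unit."""
    n = len(f) + len(g) - 1                      # number of sample points needed
    xs = [Fraction(k) for k in range(n)]
    values = [(x, eval_poly(f, x) * eval_poly(g, x)) for x in xs]
    return lagrange_interpolate(values)

if __name__ == "__main__":
    f = [1, 2]          # 1 + 2x
    g = [3, 0, 1]       # 3 + x^2
    # prints the coefficients of 3 + 6x + x^2 + 2x^3 (as Fractions)
    print(multiply_by_evaluation(f, g))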
Parallel systems in symbolic and algebraic computation
TLDR
This thesis describes techniques that exploit the distributed memory in massively parallel processors to satisfy the peak memory requirements of some very large computer algebra problems and demonstrates that careful attention to memory management aids the solution of very large problems even without the benefit of advanced algorithms.
Parallel Buchberger Algorithms on Virtual Shared Memory KSR 1
TLDR
Two parallel versions of Buchberger's Gröbner basis algorithm for a virtual shared memory KSR1 computer perform S-polynomial reduction concurrently and respect the same critical-pair selection strategy as the sequential algorithm.
Parallel Computer Algebra 1
TLDR
An introduction to parallel algorithms in computer algebra is presented, covering the path from the design of an efficient algorithm to its effective implementation on a given architecture, as well as the major techniques used to build efficient algorithms on theoretical machine models.
Factoring high-degree polynomials by the black box Berlekamp algorithm
TLDR
It is shown that a sequential version of the black box Berlekamp algorithm is strongly related to their method and allows for the same asymptotic speed-ups, at least within a logarithmic factor.
Analysis of Coppersmith's Block Wiedemann Algorithm for the Parallel Solution of Sparse Linear Systems
TLDR
It is proved that, by use of certain randomizations on the input system, the parallel speedup is roughly the number of vectors in the blocks when that many processors are used.
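A rough back-of-the-envelope reading of this speedup claim (a sketch under simplifying assumptions, not the paper's precise statement): let N be the matrix dimension, b the blocking factor (the number of vectors per block), and C_{Av} the cost of one sparse matrix-vector product, and assume those products dominate the running time. Scalar Wiedemann needs about 2N strictly sequential products, while the block variant shortens the projected sequence to about 2N/b block steps, each applying A to b independent vectors that can be split across b processors:

  T_{\mathrm{scalar}} \approx 2N\,C_{Av}, \qquad
  T_{\mathrm{block}} \approx \frac{2N}{b}\,C_{Av} \quad \text{(wall clock on $b$ processors)}, \qquad
  \frac{T_{\mathrm{scalar}}}{T_{\mathrm{block}}} \approx b,

ignoring the block recurrence-finding stage and the recombination of the solution vectors.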
On computing greatest common divisors with polynomials given by black boxes for their evaluations
TLDR
This work revisits the problem of computing the greatest common divisor (GCD), in black box format, of several multivariate polynomials that are themselves given by black boxes, and presents an improved version of the algorithm sketched by Kaltofen and Trager.
Symbolic computation: A Java based computer algebra system
TLDR
This work presents a vision of symbolic computation that provides a quick, efficient, and user-friendly environment for its users.
FOXBOX: a system for manipulating symbolic objects in black box representation
TLDR
A software package that puts into practice the black box representation of symbolic objects and provides algorithms for performing symbolic calculus with such representations is introduced, and the results of several challenge problems, representing the first symbolic solutions of such problems, are presented.
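The black box representation referred to here, and in the GCD entry above, treats a polynomial purely as an evaluation procedure: an object that, given values for the variables, returns the polynomial's value without exposing its coefficients. The following minimal Python sketch illustrates that idea and how black boxes compose; it is not FOXBOX's actual interface, and the class and example polynomials are invented for illustration.

class BlackBox:
    """A polynomial known only through evaluation at points."""
    def __init__(self, eval_fn, num_vars):
        self._eval = eval_fn
        self.num_vars = num_vars

    def __call__(self, point):
        assert len(point) == self.num_vars
        return self._eval(point)

    # Black boxes compose: the sum and product of two black boxes are again
    # black boxes, evaluated by evaluating the operands and combining.
    def __add__(self, other):
        return BlackBox(lambda p: self(p) + other(p), self.num_vars)

    def __mul__(self, other):
        return BlackBox(lambda p: self(p) * other(p), self.num_vars)

# Example: f(x, y) = x^3*y - 7 and g(x, y) = x*y + 2, given only as programs.
f = BlackBox(lambda p: p[0] ** 3 * p[1] - 7, 2)
g = BlackBox(lambda p: p[0] * p[1] + 2, 2)
h = f * g + g          # still a black box; no coefficients are ever expanded

print(h((2, 5)))       # evaluates the composed object at the point (2, 5)

Algorithms such as black box GCD or sparse interpolation then probe such objects at suitably chosen evaluation points, either to recover an explicit sparse form or to build a new black box for the result, without ever expanding the inputs.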
Implementation of Recursive Structural Parser for Symbolic Computation using Mathematical Pseudo Language and Features of Java
TLDR
A simple yet powerful way to solve mathematical problems by representing expressions using symbols and characters with the concept of a pseudo language is presented.
...
...

References

Showing 1-10 of 36 references
DSC: a system for distributed symbolic computation
TLDR
This work has tested DSC with a primality test for large integers and with a factorization algorithm for polynomials over large finite fields and observed significant speed-ups over executing the best-known methods on a single workstation.
Algebraic Computing on a Local Net
TLDR
An extension of the computer algebra system SAC-2 for the execution of algorithms by the workstations on a local net is described, and the execution times for a distributed version of a modular algorithm are shown.
Progress report on a system for general-purpose parallel symbolic algebraic computation
  B. Char, ISSAC '90, 1990
TLDR
Ongoing work on large-grained parallel symbolic computation using a system based on Maple and Linda is reported; the system achieved parallel speedup on a variety of algebraic problems, although many significant improvements in efficiency remain to be achieved.
A New Library for Parallel Algebraic Computation
TLDR
An overview of PACLIB, a library for parallel algebraic computation on shared memory multiprocessors, is presented, together with its successful application to the parallelization of several algebraic algorithms.
Programming in PACLIB
TLDR
This paper gives a short overview of PACLIB, a new system for parallel algebraic computation on shared memory computers that provides concurrency, shared memory communication, non-determinism, speculative parallelism, streams and pipelining, and a parallelized garbage collection.
Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm
TLDR
A method for solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed; an algorithm due to Wiedemann is modified so that it is competitive with structured Gaussian elimination in terms of time while having much lower space requirements.
Factoring high-degree polynomials by the black box Berlekamp algorithm
TLDR
It is shown that a sequential version of the black box Berlekamp algorithm is strongly related to their method and allows for the same asymptotic speed-ups, at least within a logarithmic factor.
Analysis of Coppersmith's Block Wiedemann Algorithm for the Parallel Solution of Sparse Linear Systems
TLDR
It is proved that, by use of certain randomizations on the input system, the parallel speedup is roughly the number of vectors in the blocks when that many processors are used.
Solving linear systems of determinant frequently zero over finite field GF(2)
Solving sparse linear equations over finite fields
TLDR
A "coordinate recurrence" method for solving sparse systems of linear equations over finite fields is described and a probabilistic algorithm is shown to exist for finding the determinant of a square matrix.
...
...