Very Fast Approximation of the Matrix Chain Product Problem

@article{Czumaj1996VeryFA,
  title={Very Fast Approximation of the Matrix Chain Product Problem},
  author={Artur Czumaj},
  journal={J. Algorithms},
  year={1996},
  volume={21},
  pages={71-79}
}
  • A. Czumaj
  • Published 1 July 1996
  • Computer Science
  • J. Algorithms
This paper considers the matrix chain product problem. This problem can be solved in O(n log n) sequential time, while the best known parallel NC algorithm runs in O(log² n) time using n⁶/log⁶ n processors, and in O(log³ n) time with O(n²) time-processor product. This paper presents a very fast parallel algorithm for approximately solving the matrix chain product problem, and for the problem of finding a near-optimal triangulation of a convex polygon. It runs in O(log n) time on a CREW PRAM and in O(log log n)…
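For context, the sequential baseline these results improve on is the textbook cubic-time dynamic program for the matrix chain product problem (the O(n log n) and parallel algorithms above are far more involved). A minimal sketch of that classic baseline, not of the paper's algorithm:

```python
def matrix_chain_order(dims):
    """Classic O(n^3) dynamic program for the matrix chain product problem.

    A chain of n matrices is described by n+1 dimensions: the i-th matrix
    has shape dims[i-1] x dims[i]. Returns the minimum number of scalar
    multiplications needed to evaluate the whole product.
    """
    n = len(dims) - 1  # number of matrices in the chain
    # cost[i][j] = cheapest cost to multiply matrices i..j (1-indexed)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the subchain
        for i in range(1, n - length + 2):  # subchain start
            j = i + length - 1              # subchain end
            # try every split point k between i and j
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# Three matrices of shapes 10x30, 30x5, 5x60:
# (A(BC)) costs 30*5*60 + 10*30*60 = 27000, ((AB)C) costs 1500 + 3000 = 4500.
print(matrix_chain_order([10, 30, 5, 60]))  # → 4500
```

The O(n log n) algorithm of Hu and Shing (cited in the references below) and the parallel approximation scheme of this paper both beat this cubic baseline, at the cost of considerably more intricate machinery.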

Figures from this paper

Parallelizing Matrix Chain Products

TLDR
A processor scheduling algorithm for MCSP is introduced which attempts to minimize the evaluation time of a chain of matrix products on a parallel computer, even at the expense of a slight increase in the total number of operations.

Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems

TLDR
A new processor scheduling algorithm for MCSP is introduced which reduces the evaluation time of a chain of matrix products on a parallel computer, even at the expense of a slight increase in the total number of operations.

The generalized matrix chain algorithm

TLDR
A generalized version of the matrix chain algorithm is presented to generate efficient code for linear algebra problems, a task for which human experts often invest days or even weeks of work.

Research Statement of Prof

My research is primarily concerned with problems related to the theoretical aspects of the analysis and design of algorithms. Although I have studied many classical problems, including those that can…

Memory Safe Computations with XLA Compiler

TLDR
An XLA compiler extension is developed that adjusts the computational data-flow representation of an algorithm according to a user-specified memory limit, and it is shown that k-nearest neighbour and sparse Gaussian process regression methods can be run at a much larger scale on a single device, where standard implementations would have failed.

A Review of the Smith-Waterman GPU Landscape

TLDR
It is found that some optimization techniques are widespread and clearly beneficial, while others are not yet well explored; the review also exposes gaps in the literature that can be filled through future research.

References

SHOWING 1-10 OF 14 REFERENCES

Parallel Algorithm for the Matrix Chain Product and the Optimal Triangulation Problems (STACS'93 version)

TLDR
This paper gives a new algorithm which uses a different approach, reducing the problem to computing a certain recurrence in a tree; it shows that this recurrence can be solved optimally, which improves the parallel bounds by a few factors.

Parallel Algorithms for Dynamic Programming Recurrences with More than O(1) Dependency

TLDR
This work presents a unifying framework for the parallel computation of dynamic programming recurrences with more than O(1) dependency, and uses two well-known methods, the closure method and the matrix product method, as general paradigms for developing parallel algorithms.

Some theorems about matrix multiplication

  • T. C. Hu, M. Shing
  • Computer Science, Mathematics
    21st Annual Symposium on Foundations of Computer Science (sfcs 1980)
  • 1980
TLDR
Some theorems about an optimum order of computing the matrices are presented, along with an O(n log n) algorithm for finding that optimum order.

An O(n) algorithm for determining a near-optimal computation order of matrix chain products

TLDR
This paper discusses the computation of matrix chain products of the form M₁ × M₂ × ⋯ × Mₙ, where the Mᵢ are matrices, and presents an algorithm to find an order of computation which takes less than 25 percent longer than the optimal time.

Highly parallelizable problems

We establish that several problems are highly parallelizable. For each of these problems, we design an optimal O(log log n) time parallel algorithm on the Common CRCW PRAM model, which is…

Almost Fully-parallel Parentheses Matching

  • Y. Matias, personal communication
  • 1992

Efficient parallel dynamic programming

  • manuscript, July 1992