A rich variety of tools help researchers with high-performance numerical computing, but few tools exist for large-scale combinatorial computing. The authors describe their efforts to build a common infrastructure for numerical and combinatorial computing by using parallel sparse matrices to implement parallel graph algorithms. Modern scientific …
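The core idea behind this line of work is that graph traversals can be expressed as sparse matrix-vector products. As a minimal sketch (not the authors' code, and using scipy rather than their parallel infrastructure), breadth-first search reduces to repeated matvecs over an adjacency matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical illustration: BFS as repeated sparse matrix-vector products.
# Edge i -> j is stored as A[j, i] = 1, so A @ frontier yields the out-
# neighbors of the current frontier.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
rows = [j for i, j in edges]
cols = [i for i, j in edges]
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))

def bfs_levels(A, source):
    """Return the BFS level of each vertex (-1 if unreachable)."""
    n = A.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # One BFS step is one sparse matvec; keep only unvisited vertices.
        reached = A @ frontier.astype(np.float64)
        frontier = (reached > 0) & (levels == -1)
        level += 1
    return levels

print(bfs_levels(A, 0))  # [0 1 1 2 3]
```

On a parallel sparse-matrix backend the same loop parallelizes for free, since all the graph-specific work lives inside the matvec.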
Large-scale computation on graphs and other discrete structures is becoming increasingly important in many applications, including computational biology, web search, and knowledge discovery. High-performance combinatorial computing is an infant field, in sharp contrast with numerical scientific computing. We argue that many of the tools of high-performance …
Interactive environments such as MATLAB and STAR-P have made numerical computing tremendously accessible to engineers and scientists. They allow people who are not well-versed in the art of numerical computing to nonetheless reap its benefits. The same is not true in general for combinatorial computing. Often, many interesting …
Inherited loss of P/Q-type calcium channel function causes human absence epilepsy, episodic dyskinesia, and ataxia, but the molecular "birthdate" of the neurological syndrome and its dependence on prenatal pathophysiology is unknown. Since these channels mediate transmitter release at synapses throughout the brain and are expressed early in embryonic …
Preface: The present notes are derived from a course taught at the University of Southern California. The focus of the course is on the mathematical and algorithmic theory underpinning the connections between networks and information. These connections take two predominant forms:
• Network structure itself encodes a lot of information. For instance, …
Sparse matrices are first-class objects in many VHLLs (very high level languages) used for scientific computing. They are a basic building block for various numerical and combinatorial algorithms. Parallel computing is becoming ubiquitous, particularly with the advent of multi-core architectures. As existing VHLLs are adapted to emerging architectures, …
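The "first-class object" interface these VHLLs expose is typically assembly from triples, as in MATLAB's sparse(i, j, v). A sketch of the same semantics in scipy (names and values here are illustrative, not from the source):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Triples-style construction, mirroring MATLAB's sparse(i, j, v):
# duplicate entries at the same (i, j) position are summed, which is
# also the natural behavior for distributed sparse assembly.
i = np.array([0, 1, 1, 2])
j = np.array([1, 2, 2, 0])
v = np.array([10.0, 20.0, 5.0, 30.0])
A = coo_matrix((v, (i, j)), shape=(3, 3)).tocsr()

print(A[1, 2])  # 25.0 -- the two (1, 2) triples were summed
print(A.nnz)    # 3 distinct stored entries
```

The duplicate-summing rule is what lets the same constructor serve both numerical assembly (e.g. finite-element stiffness matrices) and combinatorial uses (e.g. counting parallel edges in a multigraph).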