The ARB (from Latin arbor, tree) project was initiated almost 10 years ago. The ARB program package comprises a variety of directly interacting software tools for sequence database maintenance and analysis which are controlled by a common graphical user interface. Although it was initially designed for ribosomal RNA data, it can be used for any nucleic and …
RNA interference (RNAi) has emerged as a powerful technique for studying loss-of-function phenotypes by specific down-regulation of gene expression, allowing the investigation of virus-host interactions by large-scale high-throughput RNAi screens. Here we present a robust and sensitive small interfering RNA screening platform consisting of an experimental …
In high-performance computing applications, a high-level I/O call will trigger activities on a multitude of hardware components. These are massively parallel systems supported by huge storage systems and internal software layers. Their complex interplay currently makes it impossible to identify the causes for and the locations of I/O bottlenecks. Existing …
Intelligently switching the energy-saving modes of CPUs, NICs, and disks is essential to reducing energy consumption. The hardware and operating system have only a limited view of future performance demands, so fully automatic control is suboptimal. However, it is tedious for developers to control the hardware themselves. In this paper we propose an extension of an …
Tianhe-2 (MilkyWay-2), installed at the National Super Computer Center in Guangzhou, ranks first in the November 2013 TOP500 list and achieves an impressive peak performance of 33.86 Petaflops on the Linpack benchmark using 3,120,000 cores. However, such an amount of performance comes with a price: the power input of about 17.8 MW would result in an annual …
This paper deals with the parallel implementation of reconstruction algorithms for functional imaging on a network of workstations (NOW). Algorithms which provide the best image quality are not used in clinical routine, because they have a runtime of up to 60 hours with real clinical data sets of several hundred megabytes. After giving an overview of …
The First Workshop on Energy Aware High Performance Computing (EnA-HPC) is the successor of the EnA-HPC conference series, which has brought together researchers, vendors, and HPC center administrators since 2010. Its purpose is to foster discussions regarding the status and future of energy awareness in high performance computing. Fields of interest cover all …
Titan, a Cray XK7 system installed at the Oak Ridge National Laboratory, ranks first in the November 2012 TOP500 list and achieves an impressive 17.59 Petaflops on the Linpack benchmark using 560,640 cores. Such an amount of performance comes with a price: the power input of about 8 MW would result in an annual electricity bill of more than 8 million Euros …
The International Supercomputing Conference, founded in 1986 as the "Supercomputer Seminar", has been held annually for the last 25 years. Originally organized by Professor Hans Meuer, Professor of Computer Science at the University of Mannheim and former director of the computer centre, the Seminar brought together a group of 81 scientists and …
Sequoia, the IBM BlueGene/Q system installed at the Department of Energy's Lawrence Livermore National Laboratory, ranks first in the June 2012 TOP500 list and achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores with a power input of about 8 MW. Its operation would produce an annual electricity bill of more than 8 million …
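Several of the abstracts above translate a system's power input into an annual electricity bill. The arithmetic behind that claim can be checked with a short calculation; the electricity price below is an assumption (roughly 0.12 EUR/kWh), since the abstracts state only that an 8 MW machine would cost more than 8 million Euros per year.

```python
# Back-of-the-envelope estimate of the annual electricity bill for an
# 8 MW system (e.g. Titan or Sequoia, per the abstracts above).
# ASSUMPTION: an industrial electricity price of 0.12 EUR/kWh; the
# abstracts do not state the price used, only the resulting bill.
POWER_MW = 8.0
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
PRICE_EUR_PER_KWH = 0.12           # assumed rate

energy_kwh = POWER_MW * 1_000 * HOURS_PER_YEAR   # MW -> kW, times hours
annual_bill_eur = energy_kwh * PRICE_EUR_PER_KWH

print(f"Energy: {energy_kwh:,.0f} kWh/year")
print(f"Bill:   {annual_bill_eur / 1e6:.1f} million EUR/year")
```

At the assumed rate this yields roughly 8.4 million EUR per year, consistent with the "more than 8 million Euros" figure; Tianhe-2's 17.8 MW scales the bill proportionally.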