A crucial element of large web companies is their ability to collect and analyze massive amounts of data. Tuple store databases are a key enabling technology employed by many of these companies (e.g., Google Big Table and Amazon Dynamo). Tuple stores are highly scalable and run on commodity clusters, but lack interfaces to support efficient development of …
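The data model behind these tuple stores can be shown with a minimal sketch: each cell is a (row, column, value) triple kept in sorted order so that range scans over row keys stay cheap. This is an illustrative in-memory toy under that assumption, not any particular product's API; the class and method names are hypothetical.

```python
import bisect

class TupleStore:
    """Toy sketch of a Bigtable/Dynamo-style tuple store: each cell is a
    (row, column, value) triple kept sorted so row-range scans are cheap."""

    def __init__(self):
        self._cells = []  # sorted list of (row, column, value)

    def put(self, row, col, val):
        # Insert while preserving (row, col) order, overwriting an existing cell.
        i = bisect.bisect_left(self._cells, (row, col, ""))
        if i < len(self._cells) and self._cells[i][:2] == (row, col):
            self._cells[i] = (row, col, val)
        else:
            self._cells.insert(i, (row, col, val))

    def scan(self, row_start, row_stop):
        # Binary-search the first cell at row_start, then walk forward
        # until the row key leaves the half-open range [row_start, row_stop).
        i = bisect.bisect_left(self._cells, (row_start, "", ""))
        out = []
        while i < len(self._cells) and self._cells[i][0] < row_stop:
            out.append(self._cells[i])
            i += 1
        return out

store = TupleStore()
store.put("user_042", "name", "alice")
store.put("user_042", "last_login", "2013-07-01")
store.put("user_117", "name", "bob")
print(store.scan("user_0", "user_1"))  # all cells for user_0xx rows
```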
The Lincoln Laboratory Grid (LLGrid) project was initiated to provide Laboratory staff with an effective way to exploit cluster computing as a solution to the demand for computational power in large-scale algorithm development, data analysis, and simulation tasks. Because sensor capabilities and demands continue to increase, the dataset sizes and …
Non-traditional, relaxed-consistency triple store databases are the backbone of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook Cassandra). The Apache Accumulo database is a high-performance open source relaxed-consistency database that is widely used for government applications. Obtaining the full benefits of Accumulo requires …
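One reason Accumulo benefits from careful schema design can be sketched here: to make lookups by value as fast as lookups by row, D4M-style "exploded" schemas store every fact twice, once keyed by row and once in a transpose table keyed by column and value. The following is a hedged illustration of that idea using plain dictionaries; the table names are hypothetical, and this is not the actual D4M or Accumulo API.

```python
# Illustrative D4M-style exploded schema: every (row, column|value) pair is
# stored twice -- once keyed by row and once in a transpose table keyed by
# "column|value" -- so both "all facts about an entity" and "all entities
# with a given fact" are single lookups/range scans.
table   = {}  # row            -> set of "column|value" qualifiers
table_t = {}  # "column|value" -> set of rows (the transpose)

def insert(row, column, value):
    qual = f"{column}|{value}"
    table.setdefault(row, set()).add(qual)
    table_t.setdefault(qual, set()).add(row)

insert("log_0001", "src_ip", "10.0.0.7")
insert("log_0002", "src_ip", "10.0.0.7")
insert("log_0002", "dst_ip", "10.0.0.9")

print(table["log_0002"])           # all facts recorded for one log entry
print(table_t["src_ip|10.0.0.7"])  # all log entries from one source IP
```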
Big Data (as embodied by Hadoop clusters) and Big Compute (as embodied by MPI clusters) provide unique capabilities for storing and processing large volumes of data. Hadoop clusters make distributed computing readily accessible to the Java community, and MPI clusters provide high parallel efficiency for compute-intensive workloads. Bringing the big data and …
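On the Big Compute side, the tightly coupled communication that MPI clusters make efficient looks roughly like the reduction below, sketched with mpi4py (assumed installed): each rank computes a partial sum and MPI combines the results at rank 0.

```python
# Minimal MPI example (assumes mpi4py is installed).
# Run with, e.g.: mpirun -n 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums its own strided slice of 0..n-1.
n = 1_000_000
local = sum(range(rank, n, size))

# Combine the partial sums at rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks = {total}")
```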
The Apache Accumulo database is an open source relaxed-consistency database that is widely used for government applications. Accumulo is designed to deliver high performance on unstructured data such as graphs of network data. This paper tests the performance of Accumulo using data from the Graph500 benchmark. The Dynamic Distributed Dimensional Data Model …
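For context, Graph500 generates its test graphs with a Kronecker (R-MAT-style) recursive sampler. The sketch below shows that edge-sampling step with the standard Graph500 quadrant probabilities (A, B, C, D) = (0.57, 0.19, 0.19, 0.05); it is a simplified illustration, not the reference generator.

```python
import random

def kronecker_edge(scale, a=0.57, b=0.19, c=0.19):
    """Sample one edge of a 2**scale-vertex Kronecker graph by recursively
    choosing a quadrant of the adjacency matrix (Graph500-style R-MAT)."""
    row = col = 0
    for _ in range(scale):
        r = random.random()
        row <<= 1
        col <<= 1
        if r < a:            # upper-left quadrant
            pass
        elif r < a + b:      # upper-right quadrant
            col |= 1
        elif r < a + b + c:  # lower-left quadrant
            row |= 1
        else:                # lower-right quadrant (prob. d = 0.05)
            row |= 1
            col |= 1
    return row, col

# A scale-10 graph (1024 vertices) with edge factor 16, as in Graph500.
scale, edge_factor = 10, 16
edges = [kronecker_edge(scale) for _ in range(edge_factor * 2**scale)]
print(len(edges), "edges; sample:", edges[:3])
```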
The supercomputing and enterprise computing arenas come from very different lineages. However, the advent of commodity computing servers has brought the two arenas closer than they have ever been. Within enterprise computing, commodity computing servers have resulted in the development of a wide range of new cloud capabilities: elastic computing, …
The Lincoln Multifunction Intelligence, Surveillance and Reconnaissance Testbed (LiMIT) is an airborne research laboratory for development, testing, and evaluation of sensors and processing algorithms. During flight tests, it is desirable to process the sensor data to validate the sensors and to provide targets and images for use in other onboard …
Collecting and analyzing large amounts of data is a growing challenge within the scientific community. The growing gap between data and users calls for innovative tools that address the challenges posed by big data volume, velocity, and variety. Numerous tools exist that allow users to store, query, and index these massive quantities of data. Each …
The PCA community suggested some of the additional kernel benchmarks referred to but not described in this report. Image processing kernel benchmarks were suggested by Mark Richards of the Georgia Tech Research Institute. The exception is the image compression benchmark, which is based on work done by Baxter and Seibert [1]. The incomplete gamma function …
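For reference, the incomplete gamma function named here is, in its regularized lower form, P(a, x) = γ(a, x)/Γ(a) with γ(a, x) = ∫₀ˣ t^(a−1) e^(−t) dt. Below is a small sketch evaluating it by its standard power series, checked against SciPy's gammainc (SciPy assumed available); it illustrates the function, not the benchmark's actual kernel code.

```python
# Regularized lower incomplete gamma function P(a, x):
#   P(a, x) = gamma(a, x) / Gamma(a),
#   gamma(a, x) = integral from 0 to x of t**(a-1) * exp(-t) dt
# Computed via the series gamma(a,x) = x**a e**(-x) * sum_k x**k / (a(a+1)...(a+k)),
# then checked against SciPy (assumed installed).
import math
from scipy.special import gammainc

def p_lower(a, x, terms=200):
    total, term = 0.0, 1.0 / a
    for k in range(terms):
        total += term
        term *= x / (a + k + 1)
    # Multiply by x**a * exp(-x) / Gamma(a), computed in log space for stability.
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

print(p_lower(3.0, 2.5), gammainc(3.0, 2.5))  # the two values should agree closely
```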