Daniel Ohene-Kwofie

Several meetings of the Extremely Large Databases community for large-scale scientific applications advocate the use of multidimensional arrays as the appropriate model for representing scientific databases. Scientific databases gradually grow to massive sizes, of the order of terabytes and petabytes. As such, the storage of such databases requires efficient …
The data model found to be most appropriate for scientific databases is the array-oriented data model. This also forms the basis for storing the database on, and accessing it from, physical storage. Such storage systems are exemplified by the hierarchical data format (HDF/HDF5), the network common data format (NetCDF) and, more recently, SciDB. Given that the …
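The mapping from the array model to physical storage can be illustrated by row-major linearization, the C-order layout that array stores in the HDF5/NetCDF family build on. A minimal sketch (the function name is hypothetical, not from any of these libraries):

```python
def flat_offset(index, shape):
    """Row-major (C-order) offset of a multidimensional index.

    This is the basic linearization that maps a logical array cell
    to a position in one-dimensional physical storage.
    """
    offset = 0
    for i, n in zip(index, shape):
        offset = offset * n + i
    return offset

# For a 3 x 4 x 5 array, element (1, 2, 3) lands at
# offset 1*(4*5) + 2*5 + 3 = 33 in the flat layout.
```

Chunked formats such as HDF5 apply the same idea twice: once to locate the chunk holding a cell, and once to locate the cell within the chunk.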
Modern computer architectures with 64-bit addressing have now become commonplace. The consequence is that sufficiently large databases can be maintained, during a usage session, as main-memory-resident databases. Such in-memory databases still require the use of an index for fast access to the data items. A more common architecture is to maintain an …
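The role such an index plays can be sketched with a simple sorted-array index supporting point and range lookups, a deliberately minimal stand-in for the more sophisticated in-memory index structures the abstract alludes to (class and method names are illustrative only):

```python
import bisect

class SortedIndex:
    """Sorted-array index over an in-memory database: keys are kept
    ordered so point lookups and range scans are O(log n) searches."""

    def __init__(self):
        self.keys = []
        self.values = []

    def insert(self, key, value):
        # Keep keys sorted on insert so lookups can binary-search.
        pos = bisect.bisect_left(self.keys, key)
        self.keys.insert(pos, key)
        self.values.insert(pos, value)

    def lookup(self, key):
        # Point query: binary search, then verify the key matches.
        pos = bisect.bisect_left(self.keys, key)
        if pos < len(self.keys) and self.keys[pos] == key:
            return self.values[pos]
        return None

    def range(self, lo, hi):
        # Range query [lo, hi]: two binary searches bound the slice.
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return self.values[i:j]
```

Real main-memory indexes (T-trees, cache-conscious B+-trees, tries) refine this layout for CPU-cache behavior, but the access pattern they serve is the same.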
Over the past decade, I/O has been a limiting factor for extreme-scale parallel computing, even as the amount of data produced by parallel scientific applications has grown substantially. These datasets usually grow incrementally to massive sizes, of the order of terabytes and petabytes. As such, the storage of such datasets typically …
Parallelism in linear algebra libraries is a common approach to accelerating numerical and scientific applications. Matrix-matrix multiplication is one of the most widely used computations in scientific and numerical algorithms. Although a number of matrix multiplication algorithms exist for distributed-memory environments (e.g., Cannon, Fox, PUMMA, SUMMA), …
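The local computation that all of these distributed schemes (Cannon, Fox, PUMMA, SUMMA) apply to each tile pair is an ordinary blocked matrix multiply. A minimal single-node sketch of that building block, not an implementation of any of the named algorithms:

```python
def matmul_blocked(A, B, bs=2):
    """Blocked (tiled) matrix multiply C = A @ B on nested lists.

    The three outer loops walk bs x bs tiles; distributed algorithms
    such as Cannon's or SUMMA schedule which tiles each process owns
    and communicates, then perform exactly this kernel per tile pair.
    """
    n, k = len(A), len(B)
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, k, bs):
            for jj in range(0, m, bs):
                # Multiply-accumulate one tile pair into the C tile.
                for i in range(ii, min(ii + bs, n)):
                    for p in range(kk, min(kk + bs, k)):
                        a = A[i][p]
                        for j in range(jj, min(jj + bs, m)):
                            C[i][j] += a * B[p][j]
    return C
```

Blocking keeps each working set small enough to stay cache-resident, which is the same locality argument that motivates tiling across distributed memories.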
Modern computer architectures provide high-performance computing capability by having multiple CPU cores. Such systems are also typically associated with very large main-memory capacities, of the order of tens to hundreds of gigabytes, thereby allowing such architectures to be used for fast processing of in-memory database applications. However, most of …