Emerging exascale architectures bring forth new challenges related to power, energy, cost, and resilience in heterogeneous systems. These challenges require a shift from conventional paradigms in understanding how best to exploit and optimize these features within their limitations. Our objective is to identify the few dominant characteristics in a set of applications. Understanding these characteristics will allow the community to build and exploit customized architectures and tools best suited to optimizing each dominant characteristic in the application domain. A typical application is composed of multiple characteristics and will therefore use several of the customized accelerators and tools across its execution phases, with the eventual goal of using the entire system efficiently. In this poster, we describe a hybrid methodology, based on binary instrumentation, for characterizing properties of scientific applications such as instruction mix and memory access patterns. We apply our methodology to proxy applications that are representative of a broad range of DOE scientific applications. With this empirical basis, we develop and validate statistical models that extrapolate application properties as a function of problem size. These models are then used to project the first quantitative characterization of an exascale computing workload, including compute and memory requirements. Finally, we evaluate the potential benefit of processor-under-memory, a radical new exascale architecture customization, and examine how such customizations can impact applications.
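The extrapolation step can be sketched as follows. This is a minimal illustration only: the power-law model form, the measurement values, and the target problem size are hypothetical placeholders, not data or models from the poster.

```python
import numpy as np

# Hypothetical measurements from instrumented proxy-application runs:
# problem size n vs. observed memory footprint (GiB).
sizes = np.array([1e4, 1e5, 1e6, 1e7])
mem_gib = np.array([0.02, 0.21, 2.0, 19.5])

# Fit a power-law model mem = a * n^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(sizes), np.log(mem_gib), 1)
a = np.exp(log_a)

def predict_mem(n):
    """Extrapolate the memory footprint (GiB) to problem size n."""
    return a * n ** b

# Project the fitted model to a much larger (hypothetical) problem size.
print(f"fitted exponent b = {b:.2f}")
print(f"predicted footprint at n=1e9: {predict_mem(1e9):.1f} GiB")
```

In practice such a model would be validated against held-out runs at intermediate problem sizes before being trusted for projection far beyond the measured range.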