• Corpus ID: 17444200

iACT: A Software-Hardware Framework for Understanding the Scope of Approximate Computing

@inproceedings{Mishra2014iACTAS,
  title={iACT: A Software-Hardware Framework for Understanding the Scope of Approximate Computing},
  author={Asit K. Mishra and Rajkishore Barik and S. Paul},
  year={2014}
}
Approximate computing has recently emerged as a paradigm for enabling energy efficient software and hardware implementations by exploiting the inherent resiliency in applications to impreciseness in their underlying computations. Much of the previous work in this area has demonstrated the potential for significant energy and performance improvements, but these works largely consist of ad hoc techniques that are applied to a small number of similar applications. Mainstream adoption of… 
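The resiliency the abstract refers to is commonly exploited through techniques such as loop perforation, which skips a fraction of a computation's iterations. The toy sketch below is my illustration of that general idea, not code from the paper:

```python
def mean_perforated(xs, skip=2):
    """Estimate the mean of xs by examining only every `skip`-th element.

    Loop perforation trades accuracy for work: the estimate drifts from
    the exact mean, but error-resilient consumers (filters, analytics,
    recognition kernels) often tolerate the deviation.
    """
    sample = xs[::skip]
    return sum(sample) / len(sample)

data = list(range(1000))                  # exact mean: 499.5
approx = mean_perforated(data, skip=4)    # examines 25% of the data -> 498.0
```

Here a 4x reduction in work perturbs the result by only 0.3%, the kind of accuracy/efficiency trade-off these frameworks try to expose systematically.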

Citations

Exploiting Errors for Efficiency
TLDR
This work presents a synthesis of research results on computing systems that only make as many errors as their end-to-end applications can tolerate, and introduces a formalization of terminology that allows for a coherent view across the techniques traditionally used by different research communities in their individual layer of focus.
Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms
TLDR
A synthesis of research results on computing systems that only make as many errors as their users can tolerate is presented, for the first time, from across the disciplines of computer aided design of circuits, digital system design, computer architecture, programming languages, operating systems, and information theory.
SEA-AC: Symbolic Execution-based Analysis towards Approximate Computing
TLDR
This paper proposes a new approach to identifying approximable computations by using symbolic execution, eliminating the data dependency and time overhead of existing methods.
Tools for Reduced Precision Computation
TLDR
There is still a gap to close in automation of reduced precision customization, especially for tools based on static analysis rather than profiling, as well as for integration within mainstream, industry-strength compiler frameworks.
HPAC: evaluating approximate computing techniques on HPC OpenMP applications
TLDR
HPAC is developed, a framework with compiler and runtime support for code annotation and transformation, and accuracy vs. performance trade-off analysis of OpenMP HPC applications, which reveals possible performance gains of approximation and its interplay with parallel execution.
Architecture-Aware Approximate Computing
TLDR
A program slicing-based approach that identifies the set of data accesses to drop such that the resulting performance/energy benefits are maximized and the execution remains within the error (inaccuracy) bound specified by the user.
A Taxonomy of Approximate Computing Techniques
TLDR
This work presents a taxonomy that classifies approximate computing techniques according to their most salient features: compute vs. data, deterministic vs. nondeterministic, and coarse- vs. fine-grained.
HiPA: history-based piecewise approximation for functions
TLDR
A function approximation scheme that can efficiently approximate functions in software; evaluated on 90 mathematical and scientific functions from the GNU Scientific Library, it shows that the speed of 90% of these functions can be improved.
Energy-efficient approximate computation in Topaz
TLDR
The Topaz implementation maps approximate tasks onto the approximate machine and integrates the approximate results into the main computation, deploying a novel outlier detection and reliable re-execution mechanism to prevent unacceptably inaccurate results from corrupting the overall computation.
Invited: Cross-layer approximate computing: From logic to architectures
TLDR
This paper provides a systematic understanding of how to generate and explore the design space of approximate components, which enables a wide range of power/energy, performance, area, and output-quality tradeoffs, and a high degree of design flexibility to facilitate their design.
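Several of the citing works above, HiPA in particular, center on approximating functions in software. As a hedged illustration of the general table-based idea (my sketch; HiPA's actual history-based scheme differs), a precomputed piecewise-linear table can stand in for an expensive function:

```python
import math

def make_piecewise(f, lo, hi, segments=64):
    """Build a piecewise-linear approximation of f on [lo, hi].

    Precomputes f at segment endpoints once; each later call is a
    table lookup plus a linear interpolation instead of evaluating f.
    """
    step = (hi - lo) / segments
    table = [f(lo + i * step) for i in range(segments + 1)]

    def approx(x):
        i = min(int((x - lo) / step), segments - 1)  # clamp the last bin
        t = (x - (lo + i * step)) / step             # position within bin
        return table[i] * (1 - t) + table[i + 1] * t

    return approx

approx_sin = make_piecewise(math.sin, 0.0, math.pi, segments=64)
err = abs(approx_sin(1.0) - math.sin(1.0))  # small interpolation error
```

With 64 segments the worst-case interpolation error for sin on [0, pi] is on the order of 1e-4, while each call avoids the transcendental evaluation entirely.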

References

SHOWING 1-10 OF 41 REFERENCES
Analysis and characterization of inherent application resilience for approximate computing
TLDR
This work analyzes and characterizes the inherent application resilience present in a suite of 12 widely used applications from the domains of recognition, data mining, and search, and proposes a systematic framework for Application Resilience Characterization (ARC), which characterizes the resilient parts using approximation models that abstract a wide range of approximate computing techniques.
Architecture support for disciplined approximate programming
TLDR
An ISA extension that provides approximate operations and storage is described that gives the hardware freedom to save energy at the cost of accuracy and Truffle, a microarchitecture design that efficiently supports the ISA extensions is proposed.
EnerJ: approximate data types for safe and general low-power computation
TLDR
EnerJ is developed, an extension to Java that adds approximate data types and a hardware architecture that offers explicit approximate storage and computation and allows a programmer to control explicitly how information flows from approximate data to precise data.
Relax: an architectural framework for software recovery of hardware faults
TLDR
This paper considers whether exposing hardware fault information to software and allowing software to control fault recovery simplifies hardware design and helps technology scaling, and describes Relax, an architectural framework for software recovery of hardware faults.
Exploring the Synergy of Emerging Workloads and Silicon Reliability Trends
TLDR
A key insight is to expose device-level errors up the system stack instead of masking them, and propose light-weight application-agnostic mechanisms in hardware to mitigate the impact of errors.
Exploiting Application-Level Correctness for Low-Cost Fault Tolerance
TLDR
A detailed fault susceptibility study that measures how much more fault resilient programs are when defining correctness at the application level compared to the architecture level, and presents two lightweight fault recovery mechanisms that exploit the relaxed requirements of application-level correctness to reduce checkpoint cost.
Stochastic computing: Embracing errors in architecture and design of processors and applications
  • J. Sartori, Joseph Sloan, Rakesh Kumar
  • 2011 Proceedings of the 14th International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES)
TLDR
This paper presents the vision for design, architecture, compiler, and application-level stochastic computing techniques that embrace errors in order to ensure the continued viability of semiconductor scaling.
Design of voltage-scalable meta-functions for approximate computing
TLDR
This work proposes design techniques which enable the hardware implementations of these meta-functions to scale more gracefully under voltage over-scaling, and demonstrates that the optimized meta-function implementations consume up to 30% less energy at iso-error rates, while achieving up to 27% lower error rates when compared to their baseline counterparts.
Scalable effort hardware design: Exploiting algorithmic resilience for energy efficiency
TLDR
This work proposes scalable effort hardware design as an approach to tap the reservoir of algorithmic resilience and translate it into highly efficient hardware implementations, and implements an energy-efficient SVM classification chip based on the proposed scalable effort design approach.
Exploiting Soft Computing for Increased Fault Tolerance
TLDR
This paper identifies three characteristics of soft computations that make them resilient to error: redundancy, adaptivity, and reduced precision, and presents a method for identifyingsoft computations at the instruction level using dynamic slicing analysis.