Combining learning and optimization for transprecision computing

@inproceedings{Borghesi2020CombiningLA,
  title={Combining learning and optimization for transprecision computing},
  author={Andrea Borghesi and Giuseppe Tagliavini and Michele Lombardi and Luca Benini and Michela Milano},
  booktitle={Proceedings of the 17th ACM International Conference on Computing Frontiers},
  year={2020}
}
The growing demands of the worldwide IT infrastructure stress the need for reduced power consumption, which is addressed in so-called transprecision computing by improving energy efficiency at the expense of precision. For example, reducing the number of bits for some floating-point operations leads to higher efficiency, but also to a non-linear decrease of the computation accuracy. Depending on the application, small errors can be tolerated, thus allowing the precision of the… to be fine-tuned.
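
To make the trade-off concrete, here is a minimal, illustrative sketch in Python (not code from the paper, and independent of the tools it evaluates): it compares a dot product computed in IEEE double precision with the same computation carried out in half precision (float16), exposing the accuracy loss that precision tuning must keep within the application's tolerance.

import numpy as np

# Illustrative sketch, not from the paper: the accuracy cost of reduced precision.
rng = np.random.default_rng(0)
x = rng.random(10_000)
y = rng.random(10_000)

# Reference result in double precision (the conventional "precise" abstraction).
ref = np.dot(x, y)

# Reduced-precision variant: cast the inputs to float16 and accumulate in float16.
x16, y16 = x.astype(np.float16), y.astype(np.float16)
low = np.sum(x16 * y16, dtype=np.float16)

rel_err = abs(float(low) - ref) / abs(ref)
print(f"float64 result: {ref:.4f}")
print(f"float16 result: {float(low):.4f} (relative error {rel_err:.2e})")

Whether an error of this magnitude is tolerable is exactly the application-dependent question that precision-tuning approaches aim to answer automatically.
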
Improving Deep Learning Models via Constraint-Based Domain Knowledge: a Brief Survey
TLDR: Presents a first survey of approaches devised to integrate domain knowledge, expressed in the form of constraints, into deep learning models, in particular deep neural networks, to improve their performance.
Learning Hard Optimization Problems: A Data Generation Perspective
TLDR: Demonstrates this critical challenge, connects the volatility of the training data to the ability of a model to approximate it, and proposes a method for producing (exact or approximate) solutions to optimization problems that are more amenable to supervised learning tasks.
Injective Domain Knowledge in Neural Networks for Transprecision Computing
TLDR: Studies the improvements that can be obtained by integrating prior knowledge into a non-trivial learning task, namely precision tuning of transprecision computing applications.

References

Showing 1-10 of 37 references
The transprecision computing paradigm: Concept, design, and applications
TLDR: Presents the driving motivations, roadmap, and expected impact of the European project OPRECOMP, which aims at demolishing the ultra-conservative “precise” computing abstraction and replacing it with a more flexible and efficient one, namely transprecision computing.
CFPU: Configurable floating point multiplier for energy-efficient computing
TLDR: Proposes a novel approximate floating-point multiplier, called CFPU, which significantly reduces energy and improves the performance of multiplication at the expense of accuracy, and shows that it can outperform a standard FPU when at least 4% of multiplications are performed in approximate mode.
PROMISE: floating-point precision tuning with stochastic arithmetic
TLDR: Proposes an algorithm and a tool based on the delta debugging search algorithm that provide a mixed-precision configuration, with a worst-case complexity quadratic in the number of variables, to reduce the cost of double-precision variables and improve the memory usage of the code.
A transprecision floating-point platform for ultra-low power computing
TLDR: Introduces a software library that enables exploration of FP types by tuning both the precision and the dynamic range of program variables, presents a methodology to integrate the library with an external tool for precision tuning, and reports experimental results that highlight the clear benefits of introducing the new formats.
A lagrangian propagator for artificial neural networks in constraint programming
TLDR: Proposes a new network-level propagator based on a non-linear Lagrangian relaxation, solved with a subgradient algorithm, that is capable of dramatically reducing the search-tree size on a thermal-aware dispatching problem on multicore CPUs.
Optimization and Controlled Systems: A Case Study on Thermal Aware Workload Dispatching
TLDR: Uses an Artificial Neural Network to learn the behavior of a controlled system and plugs it into a CP model by means of Neuron Constraints, obtaining significantly better results compared to an approach with no ANN guidance.
A New Propagator for Two-Layer Neural Networks in Empirical Model Learning
TLDR: Proposes a new network-level propagator based on a Lagrangian relaxation, solved with a subgradient algorithm, which leads to a massive reduction in the size of the search tree, only partially countered by an increased propagation time.
Rigorous floating-point mixed-precision tuning
TLDR: Presents a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions and error analysis based on interval functions, implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression.
A Transprecision Floating-Point Architecture for Energy-Efficient Embedded Computing
TLDR: Presents an FP arithmetic unit capable of performing basic operations on smallFloat formats as well as conversions, enabling hardware-supported power savings for applications making use of transprecision.
Walking through the Energy-Error Pareto Frontier of Approximate Multipliers
TLDR: Shows that design solutions configured through the proposed approach form the Pareto frontier of the energy-error space in direct quantitative comparisons with existing state-of-the-art designs.