Combining learning and optimization for transprecision computing

@article{Borghesi2020CombiningLA,
  title={Combining learning and optimization for transprecision computing},
  author={Andrea Borghesi and Giuseppe Tagliavini and Michele Lombardi and Luca Benini and Michela Milano},
  journal={Proceedings of the 17th ACM International Conference on Computing Frontiers},
  year={2020}
}
The growing demands of the worldwide IT infrastructure stress the need for reduced power consumption, which is addressed in so-called transprecision computing by improving energy efficiency at the expense of precision. For example, reducing the number of bits for some floating-point operations leads to higher efficiency, but also to a non-linear decrease of the computation accuracy. Depending on the application, small errors can be tolerated, thus making it possible to fine-tune the precision of the…
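
As a minimal illustration of this trade-off (a synthetic sketch, not an experiment from the paper), accumulating the same values in IEEE binary16 instead of binary64 shows how fewer mantissa bits degrade the result:

import numpy as np

x = np.linspace(0.0, 1.0, 10_000)

exact = np.sum(x, dtype=np.float64)                       # 64-bit reference
reduced = np.sum(x.astype(np.float16), dtype=np.float16)  # 16-bit accumulation

rel_err = abs(float(reduced) - exact) / exact
print(f"float64 sum: {exact:.6f}")
print(f"float16 sum: {float(reduced):.6f}  (relative error {rel_err:.2%})")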

Citations

Improving Deep Learning Models via Constraint-Based Domain Knowledge: a Brief Survey

A first survey is presented of the approaches devised to integrate domain knowledge, expressed in the form of constraints, into deep learning models to improve their performance, in particular targeting deep neural networks.

Machine Learning for Combinatorial Optimisation of Partially-Specified Problems: Regret Minimisation as a Unifying Lens

It is increasingly common to solve combinatorial optimisation problems that are partially-specified. We survey the case where the objective function or the relations between variables are not known or…

Reduced-Precision Acceleration of Radio-Astronomical Imaging on Reconfigurable Hardware

A reduced-precision implementation of the gridding component of the widely used WSClean imaging application is presented, and the first custom floating-point accelerator on a Xilinx Alveo U50 FPGA, built using High-Level Synthesis, is proposed.

Service Deployment Challenges in Cloud-to-Edge Continuum

D. Petcu, Scalable Comput. Pract. Exp., 2021

It is argued that the adoption of microservices and unikernels at large scale adds new entries to the list of requirements for a deployment mechanism, but also offers an opportunity to decentralize the associated processes and improve the scalability of applications.

Learning Hard Optimization Problems: A Data Generation Perspective

This paper demonstrates a critical challenge in using optimal solutions as training data, connects the volatility of the training data to the ability of a model to approximate it, and proposes a method for producing (exact or approximate) solutions to optimization problems that are more amenable to supervised learning tasks.
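
In generic terms (the instance family, sizes, and encoding below are illustrative assumptions, not the paper's benchmarks), the recipe is to solve many small instances exactly and use the resulting (instance, solution) pairs as supervised training data:

import random

def solve_knapsack(values, weights, cap):
    """Exact 0/1 knapsack via dynamic programming; the optimal item set
    becomes the supervised label for this instance."""
    n = len(values)
    best = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(cap + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                best[i][c] = max(best[i][c], take)
    sol, c = [0] * n, cap                 # backtrack the 0/1 decisions
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            sol[i - 1] = 1
            c -= weights[i - 1]
    return sol

dataset = []
for _ in range(1000):
    v = [random.randint(1, 50) for _ in range(10)]
    w = [random.randint(1, 50) for _ in range(10)]
    dataset.append((v + w, solve_knapsack(v, w, cap=100)))  # (features, label)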

Injective Domain Knowledge in Neural Networks for Transprecision Computing

The improvements that can be obtained by integrating prior knowledge into a non-trivial learning task, namely precision tuning of transprecision computing applications, are studied.
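
A hedged sketch of one such integration pattern (the penalty form and names below are assumptions, not necessarily the paper's exact method): add to the training loss a term that penalizes predictions violating a known domain property, e.g. that the computation error should not decrease when mantissa bits are removed:

import numpy as np

def monotonicity_penalty(pred_error, n_bits):
    """Penalty for violating known monotonicity: fewer mantissa bits should
    not yield a lower predicted error (hypothetical domain constraint)."""
    order = np.argsort(-np.asarray(n_bits))       # sort by decreasing bits
    e = np.asarray(pred_error)[order]
    violation = np.maximum(0.0, e[:-1] - e[1:])   # error dropped as bits dropped
    return float(np.sum(violation ** 2))

def total_loss(mse, pred_error, n_bits, lam=0.1):
    # Standard data-fitting loss plus a weighted domain-knowledge penalty.
    return mse + lam * monotonicity_penalty(pred_error, n_bits)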

References

Showing 1–10 of 26 references

The transprecision computing paradigm: Concept, design, and applications

The driving motivations, roadmap, and expected impact of the European project OPRECOMP are presented; the project aims at demolishing the ultra-conservative “precise” computing abstraction and replacing it with a more flexible and efficient one, namely transprecision computing.

CFPU: Configurable floating point multiplier for energy-efficient computing

This paper proposes a novel approximate floating-point multiplier, called CFPU, which significantly reduces energy and improves the performance of multiplication at the expense of accuracy, and shows that it can outperform a standard FPU when at least 4% of multiplications are performed in approximate mode.
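
The general idea behind such designs can be modelled in software (a simplified sketch of mantissa-skipping multiplication; this illustrates the concept, not the exact CFPU datapath, and it ignores zeros, subnormals, and overflow):

import struct

def approx_mul(a: float, b: float) -> float:
    """Approximate binary32 multiply: add exponents exactly, but reuse one
    operand's mantissa instead of performing the costly mantissa multiply."""
    ia = struct.unpack("<I", struct.pack("<f", a))[0]
    ib = struct.unpack("<I", struct.pack("<f", b))[0]
    sign = (ia ^ ib) & 0x80000000
    exp = ((ia >> 23) & 0xFF) + ((ib >> 23) & 0xFF) - 127  # re-biased sum
    mant = ia & 0x007FFFFF                                 # a's mantissa only
    out = sign | ((exp & 0xFF) << 23) | mant
    return struct.unpack("<f", struct.pack("<I", out))[0]

print(approx_mul(3.5, 2.0), "vs exact", 3.5 * 2.0)  # exact when b is a power of two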

PROMISE: floating-point precision tuning with stochastic arithmetic

An algorithm and a tool based on the delta debugging search algorithm are proposed; with a worst-case complexity quadratic in the number of variables, they provide a mixed-precision configuration that reduces the cost of double-precision variables and improves the memory usage of the code.
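
The search can be pictured as follows (a delta-debugging-style sketch; the function names and the accuracy oracle are assumptions, not PROMISE's actual interface):

def tune_precision(variables, passes_accuracy_test):
    """Find variables that can be demoted from double to single precision:
    try large groups first, split a group in half on failure."""
    demoted, candidates = set(), [list(variables)]
    while candidates:
        group = candidates.pop()
        if passes_accuracy_test(demoted | set(group)):
            demoted |= set(group)             # the whole group demotes safely
        elif len(group) > 1:
            mid = len(group) // 2             # split and retry each half
            candidates.append(group[:mid])
            candidates.append(group[mid:])
    return demoted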

A transprecision floating-point platform for ultra-low power computing

A software library that enables exploration of FP types by tuning both the precision and the dynamic range of program variables is introduced, a methodology to integrate the library with an external tool for precision tuning is presented, and experimental results highlight the clear benefits of introducing the new formats.
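
Such exploration can be emulated in plain software by rounding values to a chosen number of mantissa bits (a minimal sketch of the idea, not the library's API; dynamic-range tuning would additionally clamp the exponent):

import math

def quantize_mantissa(x: float, bits: int) -> float:
    """Round x to `bits` fractional mantissa bits (binary64 keeps 52)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

for b in (23, 10, 7, 4):
    print(b, quantize_mantissa(math.pi, b))   # pi at decreasing precisions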

A Lagrangian propagator for artificial neural networks in constraint programming

A new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm is proposed, capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs.
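
In generic form (the paper's exact non-linear formulation differs in its details), a Lagrangian relaxation moves hard constraints $g(x) \le 0$ into the objective and improves the multipliers by projected subgradient steps, yielding bounds that prune the search tree:

\[
L(\lambda) = \min_{x \in X} \; f(x) + \lambda^\top g(x), \qquad \lambda \ge 0,
\]
\[
\lambda^{(k+1)} = \max\bigl(0,\; \lambda^{(k)} + \alpha_k \, g(x^{(k)})\bigr),
\]

where $x^{(k)}$ minimizes the Lagrangian at $\lambda^{(k)}$ and each $L(\lambda^{(k)})$ is a valid lower bound on the constrained optimum.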

Optimization and Controlled Systems: A Case Study on Thermal Aware Workload Dispatching

This paper uses an Artificial Neural Network to learn the behavior of a controlled system and plug it into a CP model by means of Neuron Constraints, and obtains significantly better results compared to an approach with no ANN guidance.

A New Propagator for Two-Layer Neural Networks in Empirical Model Learning

A new network-level propagator based on a Lagrangian relaxation, solved with a subgradient algorithm, leads to a massive reduction in the size of the search tree, which is only partially countered by an increased propagation time.

Rigorous floating-point mixed-precision tuning

This work presents a rigorous approach to precision allocation based on formal analysis via Symbolic Taylor Expansions and error analysis based on interval functions, implemented in an automated tool called FPTuner that generates and solves a quadratically constrained quadratic program to obtain a precision-annotated version of the given expression.
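
A generic form of such a precision-allocation program (a first-order sketch of the idea; FPTuner's actual QCQP model is richer) assigns each operand $x_i$ a rounding error bound $\varepsilon_i$ drawn from the available formats, minimizing cost under a global error budget:

\[
\min_{\varepsilon} \; \sum_i c_i(\varepsilon_i)
\quad \text{s.t.} \quad
\sum_i \left| \frac{\partial f}{\partial x_i} \right| \varepsilon_i \le E_{\max},
\qquad \varepsilon_i \in \{\varepsilon_{32}, \varepsilon_{64}\},
\]

where the partial derivatives come from the Taylor-style error analysis and $c_i$ encodes the cost of running operand $i$ at a given precision.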

A Transprecision Floating-Point Architecture for Energy-Efficient Embedded Computing

An FP arithmetic unit capable of performing basic operations on smallFloat formats as well as conversions is presented, enabling hardware-supported power savings for applications making use of transprecision.

Walking through the Energy-Error Pareto Frontier of Approximate Multipliers

It is shown that design solutions configured through the proposed approach form the Pareto frontier of the energy-error space in direct quantitative comparisons with existing state-of-the-art designs.
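
Extracting such a frontier from a set of candidate designs is straightforward (a synthetic sketch; the (energy, error) values are made up and both axes are minimized):

def pareto_frontier(points):
    """Return the non-dominated (energy, error) points, both minimized."""
    frontier, best_error = [], float("inf")
    for energy, error in sorted(points):      # sweep by increasing energy
        if error < best_error:                # must strictly improve error
            frontier.append((energy, error))
            best_error = error
    return frontier

designs = [(1.0, 0.10), (1.2, 0.04), (0.8, 0.30), (1.1, 0.08), (1.5, 0.03)]
print(pareto_frontier(designs))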