Corpus ID: 208158276

Energy Usage Reports: Environmental awareness as part of algorithmic accountability

@article{Lottick2019EnergyUR,
  title={Energy Usage Reports: Environmental awareness as part of algorithmic accountability},
  author={Kadan Lottick and Silvia Susai and Sorelle A. Friedler and Jonathan P. Wilson},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.08354}
}
The carbon footprint of algorithms must be measured and transparently reported so computer scientists can take an honest and active role in environmental sustainability. In this paper, we take analyses usually applied at the industrial level and make them accessible for individual computer science researchers with an easy-to-use Python package. Localizing to the energy mixture of the electrical power grid, we make the conversion from energy usage to CO2 emissions, in addition to contextualizing… 
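
At its core, the conversion described above multiplies measured energy usage by the carbon intensity of the local power grid. A minimal sketch of that idea is given below; the region names and intensity values are illustrative placeholders, not the region-specific energy-mix data the package actually uses.

    # Sketch of converting energy usage to CO2 emissions via the carbon
    # intensity of the local power grid. Intensities are illustrative
    # placeholders (kg CO2 per kWh), not the package's regional data.
    ILLUSTRATIVE_KG_CO2_PER_KWH = {
        "coal-heavy grid": 0.95,
        "natural-gas grid": 0.45,
        "hydro-dominated grid": 0.05,
    }

    def kwh_to_co2(energy_kwh, region):
        """Estimate CO2 emissions (kg) for a given energy draw (kWh)."""
        return energy_kwh * ILLUSTRATIVE_KG_CO2_PER_KWH[region]

    if __name__ == "__main__":
        energy = 2.5  # kWh consumed by a hypothetical training run
        for grid in ILLUSTRATIVE_KG_CO2_PER_KWH:
            print(f"{grid}: {kwh_to_co2(energy, grid):.2f} kg CO2")

The share of coal, gas, and renewables in a region's energy mixture determines the intensity factor, which is where the localization mentioned in the abstract enters.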


Sustainable AI: Environmental Implications, Challenges and Opportunities
TLDR
The carbon footprint of AI computing is characterized by examining the model development cycle across industry-scale machine learning use cases and, at the same time, considering the life cycle of system hardware.
Evaluating the carbon footprint of NLP methods: a survey and analysis of existing tools
TLDR
The scope of the measures provided by, and the use of, six tools for measuring the energy use and CO2 emissions of NLP methods are described, and actionable recommendations to accurately measure the environmental impact of NLP experiments are proposed.
Towards Climate Awareness in NLP Research
TLDR
A climate performance model card is proposed, designed to be practically usable with only limited information about the experiments and underlying computer hardware, to increase awareness about the environmental impact of NLP research and thereby pave the way for more thorough discussions.
EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models
TLDR
EnergyVis aims to raise awareness concerning computational sustainability by interactively highlighting excessive energy usage during model training and by providing alternative training options to reduce energy usage.
A Holistic Assessment of the Carbon Footprint of Noor, a Very Large Arabic Language Model
TLDR
A holistic assessment is proposed of the carbon footprint of Noor, an extreme-scale language model project aiming to develop the largest multi-task Arabic language models (with up to 13B parameters) that leverage zero-shot generalisation to enable a wide range of downstream tasks via natural language instructions.
Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning
TLDR
This paper investigates the impact of differential privacy on learning algorithms in terms of their carbon footprint, due to either longer run-times or failed experiments, and provides guidance on choosing noise levels that strike a balance between desired privacy levels and reduced carbon emissions.
Cataloging Algorithmic Decision Making in the U.S. Government
Government use of algorithmic decision-making (ADM) systems is widespread and diverse, and holding these increasingly high-impact, often opaque government algorithms accountable presents a number of challenges. Some European governments have launched registries…
Towards Greener Applications: Enabling Sustainable Cloud Native Applications Design
TLDR
This exploratory work aims to raise awareness of the sustainability of applications by proposing a methodology for its evaluation and by discussing the feasibility of that methodology with reference to existing tools and technologies capable of supporting the proposed design features in a production environment.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
TLDR
Recommendations are provided, including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises that evaluate how the planned approach fits research and development goals and supports stakeholder values.
Towards Green AI with tensor networks - Sustainability and innovation enabled by efficient algorithms
TLDR
This paper presents a promising tool for sustainable and thus Green AI: tensor networks (TNs), an established tool from multilinear algebra with the capability to improve efficiency without compromising accuracy, and argues that better algorithms should be evaluated in terms of both accuracy and efficiency.

References

The carbon footprint of a distributed cloud storage
TLDR
Compared to the centralized cloud, the distributed architecture of Cubbit achieves an 87% reduction of the carbon footprint for data storage and a 50% reduction for data transfers, providing an example of how a radical paradigm shift can benefit both the final consumer and society as a whole.
Greenhouse Gas Emissions from Reservoir Water Surfaces: A New Global Synthesis.
TLDR
Although prior studies have linked reservoir GHG emissions to reservoir age and latitude, it is found that factors related to reservoir productivity are better predictors of emissions.
Energy and Policy Considerations for Deep Learning in NLP
TLDR
This paper quantifies the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and proposes actionable recommendations to reduce costs and improve equity in NLP research and practice.
The Energy Intensity of the Internet: Home and Access Networks
TLDR
This chapter argues against the inclusion of end devices when assessing the energy intensity of the Internet, but in favor of including CPE, access networks, redundancy equipment, cooling and other overhead as well as optical fibers, and develops a formula for the energy intensity of CPE and access networks.
Disparate Impact in Big Data Policing
Police departments large and small have begun to use data mining techniques to predict the where, when, and who of crime before it occurs. But data mining systems can have a disproportionately…
The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards
TLDR
The Dataset Nutrition Label is a diagnostic framework that lowers the barrier to standardized data analysis by providing a distilled yet comprehensive overview of dataset "ingredients" before AI model development.
Green AI
Creating efficiency in AI research will decrease its carbon footprint and increase its inclusivity, as the study of deep learning should not require the deepest pockets.