Unpacking the Expressed Consequences of AI Research in Broader Impact Statements

@inproceedings{Nanayakkara2021UnpackingTE,
  title={Unpacking the Expressed Consequences of AI Research in Broader Impact Statements},
  author={Priyanka Nanayakkara and Jessica R. Hullman and Nicholas A. Diakopoulos},
  booktitle={Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society},
  year={2021}
}
The computer science research community and the broader public have become increasingly aware of negative consequences of algorithmic systems. In response, the top-tier Neural Information Processing Systems (NeurIPS) conference for machine learning and artificial intelligence research required that authors include a statement of broader impact to reflect on potential positive and negative consequences of their work. We present the results of a qualitative thematic analysis of a sample of… 
Examining Responsibility and Deliberation in AI Impact Statements and Ethics Reviews
The artificial intelligence research community is continuing to grapple with the ethics of its work by encouraging researchers to discuss potential positive and negative consequences. Neural…
AI Ethics Statements: Analysis and Lessons Learnt from NeurIPS Broader Impact Statements
A dataset containing the impact statements from all NeurIPS 2020 papers is created, along with additional information such as affiliation type, location and subject area, and a simple visualisation tool for exploration.
Disentangling the Components of Ethical Research in Machine Learning
While practical applications of machine learning have been the target of considerable normative scrutiny over the past decade, there is growing concern with machine learning research as well. Debates…
The Values Encoded in Machine Learning Research
A method and annotation scheme for studying the values encoded in documents such as research papers is introduced and systematic textual evidence that these top values are being defined and applied with assumptions and implications generally supporting the centralization of power is found.
ESR: Ethics and Society Review of Artificial Intelligence Research
Artificial intelligence (AI) research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility…
Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating Societal Impacts of Algorithmic Decision Making
This work employs crowdsourcing as a means of participatory foresight to uncover four different types of impact areas based on a set of governmental algorithmic decision making tools and suggests that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues.
Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to…
REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research
Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible. Despite these…
Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
A detailed quantitative and qualitative analysis of the ACL Anthology is conducted, as well as comparing the trends in the field to those of other related disciplines, such as cognitive science, machine learning, data mining, and systems.
How Different Groups Prioritize Ethical Values for Responsible AI
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their…

References

Showing 1–10 of 131 references
Institutionalising Ethics in AI through Broader Impact Requirements
This Perspective reflects on a governance initiative by one of the world's largest AI conferences and draws insights regarding effective community-based governance and the role and responsibility of the AI research community more broadly.
A Framework for Understanding Unintended Consequences of Machine Learning
This paper provides a framework that partitions sources of downstream harm in machine learning into six distinct categories spanning the data generation and machine learning pipeline, and describes how these issues arise, how they are relevant to particular applications, and how they motivate different solutions.
Overcoming Failures of Imagination in AI Infused System Development and Deployment
It is argued that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense to effectively assist in anticipating harmful uses.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
Recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values are provided.
Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation
The use of the Value Cards toolkit can improve students' understanding of both the technical definitions and trade-offs of performance metrics and apply them in real-world contexts, help them recognize the significance of considering diverse social values in the development and deployment of algorithmic systems, and enable them to communicate, negotiate and synthesize the perspectives of diverse stakeholders.
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
It is found that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability.
Like a Researcher Stating Broader Impact For the Very First Time
This work seeks to understand how individual researchers reacted to the new statement of broader impact requirement, including not just their views but also their experience in drafting statements and their reflections after paper acceptance.
Governing with Algorithmic Impact Assessments: Six Observations
Algorithmic impact assessments (AIA) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and…
Counterfactual Predictions under Runtime Confounding
This work proposes a doubly-robust procedure for learning counterfactual prediction models in the setting where all relevant factors are captured in the historical data, but it is either undesirable or impermissible to use some such factors in the prediction model.
It's Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process
This work hypothesizes that a small change to the peer review process will force computing researchers to more deeply consider the negative impacts of their work, and expects that this change will incentivize research and policy that alleviates computing's negative impacts.