Filling gaps in trustworthy development of AI

@article{Avin2021FillingGI,
  title={Filling gaps in trustworthy development of AI},
  author={Shahar Avin and Haydn Belfield and Miles Brundage and Gretchen Krueger and Jasmine Wang and Adrian Weller and Markus Anderljung and Igor Krawczuk and David M. Krueger and Jonathan Lebensold and Tegan Maharaj and Noa Zilberman},
  journal={Science},
  year={2021},
  volume={374},
  number={6573},
  pages={1327-1329}
}

Distinguishing two features of accountability for AI technologies

Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.

Risk Determination versus Risk Perception: A New Model of Reality for Human–Machine Autonomy

We review the progress in developing a science of interdependence applied to the determinations and perceptions of risk for autonomous human–machine systems, based on a case study of the Department of …

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

It is found that RLHF models become increasingly difficult to red team as they scale, while a flat trend with scale is found for the other model types; the authors hope this transparency accelerates the community's ability to work together to develop shared norms, practices, and technical standards.

Predictability and Surprise in Large Generative Models

This paper highlights a counterintuitive property of large-scale generative models: they combine predictable loss on a broad training distribution with unpredictable specific capabilities, inputs, and outputs. It analyzes how these conflicting properties give model developers various motivations for deploying these models, along with challenges that can hinder deployment.

Toward Carbon-Aware Networking

This paper suggests building upon existing practices, such as network telemetry, programmable network elements, and cost-aware routing, to enable carbon-intelligent networking: a concept that goes beyond network energy efficiency and considers the impact of energy decarbonization on the routing and scheduling of data transmission.

Automating In-Network Machine Learning

The results show that Planter-based in-network machine learning algorithms can run at line rate, have a negligible effect on latency, coexist with standard switching functionality, and have no or minor accuracy trade-offs.

References


Trustworthy artificial intelligence

A data-driven research framework for TAI is developed and its utility is demonstrated by delineating fruitful avenues for future research, particularly with regard to the distributed ledger technology-based realization of TAI.

The global landscape of AI ethics guidelines

A detailed analysis of 84 AI ethics reports around the world finds a convergence around core principles but substantial divergence on practical implementation, highlighting the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.

Calibrating Noise to Sensitivity in Private Data Analysis

The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
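The calibration result described above can be sketched as the Laplace mechanism, a standard instantiation of differential privacy in which noise with scale proportional to the sensitivity of f, divided by the privacy parameter epsilon, is added to the true output. The function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of true_value.

    The noise scale is b = sensitivity / epsilon: adding Laplace(b)
    noise preserves epsilon-differential privacy when the sensitivity
    bounds how much any single record can change the output of f.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or
# removing one person changes the count by at most 1.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but larger noise, since the scale of the added Laplace noise grows as 1/epsilon.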

Governing AI safety through independent audits

This Perspective proposes a pragmatic approach where independent audit of AI systems is central and would embody three AAA governance principles: prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements.

  • Belfield, Yale J. L. & Tech.

Lins, A. Sunyaev

  • Electron. Markets

Report on standardisation prospective for automated vehicles (RoSPAV)

  • ISO/TC 22 Road Vehicles
  • 2021

The First Taxonomy of AI Incidents

  • Nat Mach Intell
  • 2021