Mediating artificial intelligence developments through negative and positive incentives

@article{Anh2021MediatingAI,
  title={Mediating artificial intelligence developments through negative and positive incentives},
  author={The Anh Han and Lu{\'i}s Moniz Pereira and Tom Lenaerts and Francisco C. Santos},
  journal={PLoS ONE},
  year={2021},
  volume={16}
}
The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business, and policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or ignore societal consequences, just to “win”. Starting from a baseline… 
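The incentive mechanism sketched in the abstract can be illustrated with a minimal evolutionary game theory toy model. The payoff values, parameter names, and two-strategy replicator dynamics below are illustrative assumptions, not the paper's actual model; they only show how a negative incentive (a fine on detected unsafe developers) can flip the population from unsafe to safe development:

```python
# Hypothetical payoffs for a two-strategy AI race game (SAFE vs UNSAFE);
# all values are illustrative, not taken from the paper.
b = 4.0       # benefit of winning the development race
c = 1.0       # extra cost of complying with safety precautions
p_risk = 0.3  # probability that unsafe development forfeits the benefit

def payoffs(x, fine):
    """Average payoffs of SAFE and UNSAFE given fraction x playing SAFE."""
    # SAFE players split the benefit with other SAFE players and pay cost c.
    f_safe = b / 2 - c
    # UNSAFE players outpace SAFE opponents, tie with other UNSAFE players,
    # risk a disaster, and pay the institutional fine when sanctioned.
    f_unsafe = (1 - p_risk) * (x * b + (1 - x) * b / 2) - fine
    return f_safe, f_unsafe

def replicator(x0, fine, steps=5000, dt=0.01):
    """Discrete-time replicator dynamics for the SAFE fraction x."""
    x = x0
    for _ in range(steps):
        f_s, f_u = payoffs(x, fine)
        x += dt * x * (1 - x) * (f_s - f_u)
        x = min(max(x, 0.0), 1.0)  # keep x a valid fraction
    return x
```

With these illustrative numbers, running `replicator(0.5, fine=2.0)` drives the population to safe development, while `replicator(0.5, fine=0.0)` lets unsafe development take over.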

AI Development Race Can Be Mediated on Heterogeneous Networks

This work investigates how different interaction structures among race participants can alter collective choices and requirements for regulatory actions. It suggests that technology governance and regulation may profit from the world’s patent heterogeneity and inequality among firms and nations, designing and implementing meticulous interventions on a minority of participants capable of influencing an entire population towards an ethical and sustainable use of AI.

To Regulate or Not: A Social Dynamics Analysis of an Idealised AI Race

It is shown that, next to the risks of setbacks and being reprimanded for unsafe behaviour, the time-scale in which domain supremacy can be achieved plays a crucial role, and that imposing regulations for all risk and timing conditions may not have the anticipated effect.

A Regulation Dilemma in Artificial Intelligence Development

We examine a social dilemma that arises with the advancement of technologies such as AI, where technologists can choose a safe (SAFE) vs risk-taking (UNSAFE) course of development. SAFE is costlier

Emergent behaviours in multi-agent systems with Evolutionary Game Theory

  • T. Han
  • Computer Science
    AI Communications
  • 2022
This brief aims to sensitize the reader to EGT-based issues, results and prospects, which are accruing in importance for the modelling of minds with machines and the engineering of prosocial behaviours in dynamical MAS, with impact on the understanding of the emergence and stability of collective behaviours.

Indirect exclusion can promote cooperation in repeated group interactions

Social exclusion has been regarded as one of the most effective measures to promote the evolution of cooperation. In real society, the way in which social exclusion works can be direct or indirect.

Early exclusion leads to cyclical cooperation in repeated group interactions

Explaining the emergence and maintenance of cooperation among selfish individuals from an evolutionary perspective remains a grand challenge in biology, economics and the social sciences. Social exclusion

Employing AI to Better Understand Our Morals

We present a summary of research that we have conducted employing AI to better understand human morality. This summary adumbrates theoretical fundamentals and considers how to regulate development of

References


Modelling and Influencing the AI Bidding War: A Research Agenda

This paper proposes a research agenda to develop theoretical models that capture key factors of the AI race, revealing which strategic behaviours may emerge and hypothetical scenarios therein. It provides actionable policies, showing how they need to be employed and deployed in order to achieve compliance and thereby avoid disasters, as well as losing confidence and trust in AI in general.

On the promotion of safe and socially beneficial artificial intelligence

Efforts to promote beneficial AI must consider intrinsic factors by studying the social psychology of AI research communities, and intrinsic measures are at least as important as extrinsic measures.

Reward and punishment in climate change dilemmas

The impact of reward and punishment in this type of collective endeavour, coined collective-risk dilemmas, is investigated by means of a dynamic, evolutionary approach, showing that rewards are essential to initiate cooperation and sanctions are instrumental to maintain it.

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.

An AI Race for Strategic Advantage: Rhetoric and Risks

The potential risks of the AI race narrative and of an actual competitive race to develop AI, such as incentivising corner-cutting on safety and governance, or increasing the risk of conflict, are assessed.

First carrot, then stick: how the adaptive hybridization of incentives promotes cooperation

Here, it is demonstrated that an institutional sanctioning policy called ‘first carrot, then stick’ is unexpectedly successful in promoting cooperation, and the adaptive hybridization of incentives offers the ‘best of both worlds’ by combining the effectiveness of rewarding in establishing cooperation with the effectiveness of punishing in recovering it.
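The ‘first carrot, then stick’ idea lends itself to a small simulation. The game, payoff values, and switching threshold below are illustrative assumptions rather than the cited paper's institutional model; they only show why switching from rewards to sanctions can shrink the incentive budget:

```python
def simulate(policy, x0=0.1, amount=1.5, c=1.0, steps=2000, dt=0.01):
    """Replicator dynamics for cooperators (fraction x) in a donation-style
    game where defection dominates unless an institution intervenes.
    Returns the final cooperator fraction and the per-capita incentive cost.
    All parameter values are illustrative assumptions."""
    x, cost = x0, 0.0
    for _ in range(steps):
        mode = policy(x)           # 'reward' or 'punish'
        gradient = -c              # without incentives, defectors earn c more
        if mode == 'reward':       # pay each cooperator `amount`
            gradient += amount
            cost += amount * x * dt
        elif mode == 'punish':     # fine each defector `amount`
            gradient += amount
            cost += amount * (1 - x) * dt
        x += dt * x * (1 - x) * gradient
        x = min(max(x, 0.0), 1.0)  # keep x a valid fraction
    return x, cost

x_r, cost_reward = simulate(lambda x: 'reward')
x_p, cost_punish = simulate(lambda x: 'punish')
x_h, cost_hybrid = simulate(lambda x: 'reward' if x < 0.5 else 'punish')
```

With these numbers all three policies reach full cooperation, but the hybrid policy does so at the lowest cumulative cost: rewarding is cheap while cooperators are rare, and punishing is cheap once defectors are rare.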

Counterfactual thinking in cooperation dynamics

A mathematical model is proposed, grounded on Evolutionary Game Theory, to examine the population dynamics emerging from the interplay between counterfactual thinking and social learning whenever the individuals in the population face a collective dilemma.

Making an Example: Signalling Threat in the Evolution of Cooperation

It is argued that fear acts as an effective stimulus to pro-social behaviour and catalyses cooperation, even when signalling is costly or when punishment would be impractical.

Reward and punishment

The analysis suggests that reputation is essential for fostering social behavior among selfish agents, and that it is considerably more effective with punishment than with reward.
...