• Corpus ID: 240354536

The Golden Rule as a Heuristic to Measure the Fairness of Texts Using Machine Learning

@article{Izzidien2021TheGR,
  title={The Golden Rule as a Heuristic to Measure the Fairness of Texts Using Machine Learning},
  author={Ahmed Izzidien and Peter Romero and S. D. Fitz and David Stillwell},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.00107}
}
To ‘treat others as one would wish to be treated’ is a common formulation of the golden rule (GR). Yet, despite its prevalence as an axiom throughout history, no transfer of this moral philosophy into computational systems exists. In this paper we consider how to algorithmically operationalise this rule so that it may be used to measure sentences such as ‘the boy harmed the girl’ and categorise them as fair or unfair. For the purposes of the paper, we define a fair act as one that one would be… 
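As a purely illustrative sketch (not the method described in the paper), one way to operationalise 'treat others as one would wish to be treated' with off-the-shelf sentence embeddings is to score a sentence by its similarity to anchor phrases describing acts one would, or would not, wish to have done to oneself. The model name and anchor phrases below are assumptions for illustration only.

```python
# Toy sketch only: NOT the paper's method. It illustrates one way a
# golden-rule heuristic could be operationalised with off-the-shelf
# sentence embeddings: an act is scored by how close it sits to acts
# one would (or would not) wish to have done to oneself.
# Assumes the `sentence-transformers` package and its public
# 'all-MiniLM-L6-v2' model; the anchor phrases are invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Anchor phrases expressing what one would / would not wish done to oneself.
WISH = ["I would like this done to me", "this act helps the person it is done to"]
AVOID = ["I would not want this done to me", "this act harms the person it is done to"]

def fairness_score(sentence: str) -> float:
    """Positive -> closer to the 'fair' anchors, negative -> closer to the 'unfair' anchors."""
    s, wish, avoid = (model.encode(x, convert_to_tensor=True)
                      for x in (sentence, WISH, AVOID))
    return float(util.cos_sim(s, wish).mean() - util.cos_sim(s, avoid).mean())

for text in ["the boy harmed the girl", "the boy helped the girl"]:
    print(text, round(fairness_score(text), 3))
```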


Can Social Ontological Knowledge Representations be Measured Using Machine Learning?
The paper proposes Personal Social Ontology (PSO), how an individual perceives the ontological properties of terms, and puts forward the use of principal social perceptions as a viable method for feature-engineering such texts.

References

Showing 1-10 of 60 references
Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices
It is shown that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct, indicating that text corpora contain recoverable and accurate imprints of the authors' social, ethical and even moral choices.
The Moral Choice Machine
It is shown that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct, and that text corpora contain recoverable and accurate imprints of the authors' social, ethical and moral choices, even with context information.
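A rough sketch of the question-answer template idea summarised in the two entries above, not the authors' implementation: the stance towards a prompt such as "Should I kill people?" is scored by its embedding similarity to an affirmative versus a negative answer. The model name and templates are illustrative assumptions.

```python
# Rough sketch of a question-answer template approach to extracting
# "do / don't" moral leanings from embeddings; not the cited papers' code.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def moral_bias(action: str) -> float:
    """Positive values lean towards 'do', negative towards 'don't'."""
    q = model.encode(f"Should I {action}?", convert_to_tensor=True)
    yes = model.encode(f"Yes, I should {action}.", convert_to_tensor=True)
    no = model.encode(f"No, I should not {action}.", convert_to_tensor=True)
    return float(util.cos_sim(q, yes) - util.cos_sim(q, no))

for act in ["help people", "kill people"]:
    print(act, round(moral_bias(act), 3))
```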
Transparency As Design Publicity: Explaining and Justifying Inscrutable Algorithms
It is argued that the transparency of machine learning algorithms, like explanation, can be defined at different levels of abstraction, and a new form of algorithmic transparency is proposed: explaining an algorithm as an intentional product that serves one or more goals, together with a measure of the extent to which each goal is achieved and evidence about how that measure was reached.
The Golden Rule
R. Gunderman, Journal of the American College of Radiology (JACR), 2005.
Using word embeddings to generate data-driven human agent decision-making from natural language
The proposed agent architecture is able to mirror human likelihood assessments from natural language and offers a new way to model agent cognitive processes for a broad array of agent-based modeling use cases.
Word Embeddings, Analogies, and Machine Learning: Beyond king - man + woman = queen
It is shown that simple averaging over multiple word pairs improves over the state of the art, and a further improvement in accuracy is achieved by combining cosine similarity with an estimation of the extent to which a candidate answer belongs to the correct word class.
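A minimal sketch of the averaged-offset idea summarised above (the 3CosAvg approach), not the authors' code: the analogy offset is averaged over several example pairs before being applied to a query word. It assumes gensim's downloadable 'glove-wiki-gigaword-50' vectors; the example pairs are illustrative.

```python
# Averaged-offset analogy sketch: the offset is averaged over several
# example pairs, then added to the query vector; nearest neighbours of
# the result are the candidate answers.
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # small public pre-trained vectors

pairs = [("king", "queen"), ("man", "woman"), ("actor", "actress")]
offset = np.mean([wv[b] - wv[a] for a, b in pairs], axis=0)

query = "prince"
candidates = wv.similar_by_vector(wv[query] + offset, topn=5)
print([w for w, _ in candidates if w != query])  # expect 'princess' near the top
```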
Moral concerns are differentially observable in language
The results are the first to relate individual differences in moral concerns to language usage, and to uncover the signatures of moral concerns in language.
Akratic Action under the Guise of the Good
Many philosophers have thought that human beings do or pursue only what we see as good. These “guise-of-the-good” views face powerful challenges and counterexamples, such as akratic action…
On Approximation of Concept Similarity Measure in Description Logic ELH With Pre-Trained Word Embedding
A neuro-symbolic integrated framework is defined that exploits pre-trained word embeddings together with semantic definitions in an ontology to yield an explainable degree of concept similarity; the proposed method retains both interpretability and explainability while achieving performance comparable to state-of-the-art data- and knowledge-driven approaches.
Artificial Moral Agents: A Survey of the Current Status
A taxonomy is proposed for classifying Artificial Moral Agents according to the strategies and criteria they use to deal with ethical problems, and it is illustrated that there is a long way to go before this type of artificial agent can replace human judgment in difficult, surprising or ambiguous moral situations.