Panel: A Debate on Data and Algorithmic Ethics

@article{Stoyanovich2018PanelAD,
  title={Panel: A Debate on Data and Algorithmic Ethics},
  author={Julia Stoyanovich and Bill Howe and Hosagrahar V. Jagadish and Gerome Miklau},
  journal={Proc. VLDB Endow.},
  year={2018},
  volume={11},
  pages={2165-2167}
}
Recently, a movement has begun toward Fairness, Accountability, and Transparency (FAT) in algorithmic decision making, and in data science more broadly. The database community has not been significantly involved in this movement, despite "owning" the models, languages, and systems that produce the (potentially biased) input to machine learning applications. What role should the database community play in this movement? Do the objectives of fairness, accountability, and transparency…

Transparency, Fairness, Data Protection, Neutrality

TLDR
Three recent regulatory frameworks are discussed: the European Union's General Data Protection Regulation (GDPR), the New York City Automated Decision Systems (ADS) Law, and the Net Neutrality principle. All three aim to protect the rights of individuals who are impacted by data collection and analysis.

Who's Learning? Using Demographics in EDM Research

The growing use of machine learning for the data-driven study of social issues and the implementation of data-driven decision processes have required researchers to re-examine the often implicit…

Automated Feature Engineering for Algorithmic Fairness

TLDR
A novel multi-objective feature selection strategy leverages feature construction to generate additional features that lead to both high accuracy and fairness. On three well-known datasets, it achieves higher accuracy than other fairness-aware approaches while maintaining similar or higher fairness.
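The multi-objective aspect can be illustrated with a toy Pareto-front filter: given candidate feature subsets scored on accuracy and a fairness metric (both treated as higher-is-better, an assumed convention), keep only the non-dominated ones. This is a minimal sketch of the general technique, not the paper's actual strategy; all names are hypothetical.

```python
def pareto_front(candidates):
    """Return candidate feature subsets not dominated on the
    (accuracy, fairness) pair; higher is better for both objectives.

    candidates: dict mapping a subset name to an (accuracy, fairness) tuple.
    """
    def dominates(a, b):
        # a dominates b if it is at least as good on both objectives
        # and strictly better on at least one
        return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])

    return {
        name: scores
        for name, scores in candidates.items()
        if not any(dominates(other, scores)
                   for other_name, other in candidates.items()
                   if other_name != name)
    }
```

A fairness-aware search would then choose among the surviving subsets, or iterate by constructing new features and re-filtering.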

Amplifying Domain Expertise in Clinical Data Pipelines (Preprint)

TLDR
A taxonomy of expertise amplification is presented, which can be applied when building systems for domain experts, which includes summarization, guidance, interaction, and acceleration.

Responsible Data Science: Fairness, Accuracy, Confidentiality, and Transparency of Data

Introduction: in the context of Big Data, there arises, as an urgent need, the application of individual and corporate rights and of regulatory norms that safeguard privacy, fairness, and…

Poisoning attack detection using client historical similarity in non-iid environments

TLDR
A federated learning poisoning attack detection method is proposed for identifying malicious clients and ensuring aggregation quality. The method filters out anomalous models by comparing the similarity of clients' historical changes and gradually identifies attacker clients through a reputation mechanism.
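As a rough illustration of the core idea (judging a client by the consistency of its own update history), the sketch below scores each client by the cosine similarity between its successive model updates and decays a reputation score on abrupt changes. The function name, threshold, and decay factor are assumptions for illustration; the paper's actual method will differ.

```python
import numpy as np

def client_reputations(history, sim_threshold=0.5, decay=0.9):
    """Score clients by the self-consistency of their update history.

    history: dict mapping client id -> list of 1-D update vectors.
    A client whose successive updates point in very different directions
    (cosine similarity below sim_threshold) has its reputation decayed;
    low-reputation clients are candidates for exclusion from aggregation.
    """
    reputations = {}
    for client, updates in history.items():
        rep = 1.0
        for prev, curr in zip(updates, updates[1:]):
            denom = np.linalg.norm(prev) * np.linalg.norm(curr) + 1e-12
            cos = float(np.dot(prev, curr)) / denom
            if cos < sim_threshold:   # abrupt change in update direction
                rep *= decay          # penalize reputation
        reputations[client] = rep
    return reputations
```

An aggregator could then drop clients whose reputation falls below a cutoff before averaging their updates.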

References

Fides: Towards a Platform for Responsible Data Science

TLDR
A need is seen for a data sharing and collaborative analytics platform with features that encourage (and in some cases enforce) best practices at all stages of the data science lifecycle. Such a platform, Fides, is described in the context of urban analytics, outlining a systems research agenda in responsible data science.

Data, Responsibly (Dagstuhl Seminar 16291)

TLDR
The goals of the Dagstuhl Seminar "Data, Responsibly" were to assess the state of data analysis in terms of fairness, transparency and diversity, identify new research challenges, and derive an agenda for computer science research and education efforts in responsible data analysis and use.

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

TLDR
The transparency-privacy tradeoff is explored and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.
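The "very little addition of noise" claim rests on a standard differential-privacy mechanism. As a hedged sketch (not the paper's exact construction), per-feature influence scores can be released via the Laplace mechanism with noise scale sensitivity/ε; the function name and interface below are assumptions for illustration.

```python
import numpy as np

def private_report(influences, sensitivity, epsilon, seed=None):
    """Release influence scores with Laplace noise of scale
    sensitivity/epsilon, the standard mechanism for epsilon-differential
    privacy (illustrative sketch only).

    influences: dict mapping feature name -> raw influence score.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {f: v + rng.laplace(0.0, scale) for f, v in influences.items()}
```

Smaller ε gives stronger privacy but noisier reports; the tradeoff result concerns how large ε can stay while the report remains useful.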

Measuring discrimination in algorithmic decision making

TLDR
Various discrimination measures that have been used are reviewed, their performance is analyzed analytically and computationally, and the implications of choosing one measure over another are highlighted, producing a unifying view of performance criteria for developing new algorithms for non-discriminatory predictive modeling.
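Two of the most common such measures can be computed directly from predictions and a protected-group indicator. The sketch below uses the standard definitions of demographic-parity difference and disparate-impact ratio, not this survey's specific notation; the group-coding convention (0 = privileged, 1 = unprivileged) is an assumption.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive rates (unprivileged / privileged); values below
    0.8 violate the 'four-fifths rule' often cited in discrimination analyses."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()
```

Different measures can disagree on the same classifier, which is exactly why a unifying view of the criteria matters.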

The Scored Society: Due Process for Automated Predictions

TLDR
Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems, and regulators should be able to test scoring systems to ensure their fairness and accuracy.

Big Data's Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.

Accountable Algorithms

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police

A Nutritional Label for Rankings

TLDR
Ranking Facts is a Web-based application that generates a "nutritional label" for rankings. It implements the latest research results on fairness, stability, and transparency for rankings, and communicates details of the ranking methodology, or of the output, to the end user.

Auditing black-box models for indirect influence

TLDR
This paper presents a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work.
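One simple way to approximate such an audit (an illustrative proxy, not the paper's actual procedure) is to obscure a single feature by shuffling its column and measure how much the black-box model's predictions change; the function name and interface are assumptions.

```python
import numpy as np

def influence_by_obscuring(predict, X, feature_idx, seed=0):
    """Estimate a feature's influence on a black-box model: shuffle
    (obscure) that column and report the fraction of predictions that flip.

    predict: black-box function mapping an (n, d) array to n labels.
    """
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    X_obscured = X.copy()
    rng.shuffle(X_obscured[:, feature_idx])  # break the feature's signal
    return float(np.mean(predict(X_obscured) != baseline))
```

Under this proxy, a feature whose obscuring leaves every prediction unchanged exerts no influence, direct or indirect, on the model's output.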