Algorithmic Transparency and Accountability


Introduction

Algorithmic transparency and accountability concerns the consequences and risks of algorithms that make decisions affecting humans, and how to address the problems such decisions can create.

“Software and algorithms have come to adjudicate an ever broader swath of our lives, including everything from search engine personalization and advertising systems, to teacher evaluation, banking and finance, political campaigns, and police surveillance. But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner “thoughts” hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes.” (Nick Diakopoulos, 2016, retrieved January 2017)

The USACM statement

In January 2017, the ACM US Public Policy Council (USACM) of the Association for Computing Machinery published its Statement on Algorithmic Transparency and Accountability:

Principles for Algorithmic Transparency and Accountability

1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.

2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.

3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.

4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.

5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.

6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.

7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
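
The sketches below illustrate how some of these principles might translate into code. Everything in them is an assumption made for illustration: the names, fields, weights, and thresholds are invented, and the USACM statement prescribes no particular implementation. Starting with Explanation (principle 4), a simple linear scorer can return, alongside each decision, the per-feature contributions that produced it. A minimal Python sketch with a hypothetical admission-scoring model:

    # Hypothetical linear admission scorer: the weights, feature names,
    # and threshold are invented for this sketch.
    WEIGHTS = {"gpa": 2.0, "test_score": 1.5, "essay_score": 1.0}
    THRESHOLD = 10.0  # assumed cutoff for a positive decision

    def decide_with_explanation(applicant: dict) -> dict:
        """Return a decision together with each feature's contribution."""
        contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
        score = sum(contributions.values())
        return {
            "decision": "admit" if score >= THRESHOLD else "reject",
            "score": score,
            "explanation": contributions,  # which inputs drove the result
        }

    print(decide_with_explanation({"gpa": 3.5, "test_score": 1.8, "essay_score": 0.9}))
    # -> decision "admit", with the weighted contribution of each feature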
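
For Data Provenance (principle 5), a structured record can accompany each training dataset, describing how the data was collected and which biases are already known. A minimal sketch; the field names are one guess at what such a description could contain:

    from dataclasses import dataclass, field

    # Hypothetical provenance record; fields are illustrative assumptions.
    @dataclass
    class DataProvenance:
        dataset_name: str
        collected_by: str
        collection_method: str       # e.g. LMS log export, survey, manual labeling
        collection_period: str
        known_biases: list[str] = field(default_factory=list)

    record = DataProvenance(
        dataset_name="course_outcomes_2016",
        collected_by="registrar's office",
        collection_method="export of LMS activity logs",
        collection_period="2014-2016",
        known_biases=["covers only students who used the LMS regularly"],
    )
    print(record)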
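
For Auditability (principle 6), each decision can be appended to a decision log that records the model version, the inputs, and the outcome, so that individual decisions can be reconstructed when harm is suspected. A minimal sketch using a JSON-lines file; the record fields are assumptions:

    import json
    import time

    def log_decision(path: str, model_version: str, inputs: dict, decision: str) -> None:
        """Append one auditable record per decision, one JSON object per line."""
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_decision("decisions.log", "scorer-1.2", {"gpa": 3.5}, "admit")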
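
For Validation and Testing (principle 7), one widely used screening test for discriminatory harm is the "four-fifths rule" for disparate impact: the selection rate of every group should be at least 80% of the highest group's rate. The statement itself names no specific test; this one is shown only as an example of routine discrimination testing:

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected: bool) pairs."""
        selected = defaultdict(int)
        total = defaultdict(int)
        for group, was_selected in decisions:
            total[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / total[g] for g in total}

    def passes_four_fifths(decisions) -> bool:
        rates = selection_rates(decisions)
        return min(rates.values()) >= 0.8 * max(rates.values())

    # Synthetic example: group B is selected at 30% vs. 50% for group A.
    sample = ([("A", True)] * 50 + [("A", False)] * 50
              + [("B", True)] * 30 + [("B", False)] * 70)
    print(selection_rates(sample))     # {'A': 0.5, 'B': 0.3}
    print(passes_four_fifths(sample))  # False: 0.3 < 0.8 * 0.5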

In education

Areas of particular risk include:

  • Institutional automatic profiling of students with learning analytics
  • Student profiling with "big data" (e.g. social network mining)
  • Automated evaluation and grading of student writing
  • Search engines that serve content the user wants to hear, or content that agencies want to see favored
  • Automated teacher evaluation systems
  • ...

