Arif Kornweitz, 2020

Machine Hubris

The Threat of AI Ethics to Humanitarianism

An incompatibility between AI ethical standards and humanitarian ethics is becoming visible. This schism is epitomized by the principle of ‘first, do no harm’ (non-maleficence), which is at the core of humanitarian ethics and prescribes that, above all, actors should not cause (more) harm while attempting to do good (Pictet, 1979). Non-maleficence has not been operationalized for machine learning, because an algorithm can rarely predict unique anomalous events (Pasquinelli, 2019). Nonetheless, machine learning is becoming part of the toolkit of humanitarianism. The potential effects of this shift are grave: they appear to enable the weaponization of humanitarianism (Weizman, 2012) and may include loss of life (Taylor et al., 2017; Raftree, 2019), an outcome antithetical to the humanitarian principle of ‘do good’.

Framing this issue as a technical design problem to be solved denies the history of intelligent technologies being used to exercise control and produce exclusions (Adam, 1995; O’Neil, 2016; Eubanks, 2018; Costanza-Chock, 2018), falsely representing machine learning algorithms as value-free agents (Cave, 2020). To highlight these dynamics, this research traces the history of machine learning technologies deviating from their designated functions in humanitarian information activities, focusing on unintended consequences, specifically cases of function creep and mission creep. This history functions as a counter-narrative in which the failure of machine learning technologies to adhere to the principle of ‘first, do no harm’ is not a technical exception but the material-discursive modus operandi of a technology that is value-laden and potentially causes loss of life. The research design includes interviews with humanitarians, representatives of so-called target populations, and researchers, as well as experiments with poisoning attacks against machine learning models.
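
To illustrate what such an experiment can involve, the sketch below shows one of the simplest poisoning techniques, label flipping, in which an attacker corrupts a fraction of the training labels before a model is trained. It is a minimal example in Python with scikit-learn on synthetic data, not a reproduction of the experiments described above; the dataset and all parameters are illustrative.

    # Minimal sketch of a label-flipping poisoning attack (illustrative only).
    # Assumes a binary classifier trained on synthetic data; the attacker can
    # corrupt a fraction of the training labels before training.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for an operational dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def flip_labels(y, fraction, rng):
        """Return a copy of y with a given fraction of labels flipped at random."""
        y_poisoned = y.copy()
        n_flip = int(fraction * len(y))
        idx = rng.choice(len(y), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
        return y_poisoned

    for fraction in [0.0, 0.1, 0.3]:
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train, flip_labels(y_train, fraction, rng))
        acc = model.score(X_test, y_test)
        print(f"poisoned fraction: {fraction:.0%}  test accuracy: {acc:.3f}")

As the poisoned fraction grows, test accuracy degrades even though the training pipeline runs without error, which is precisely why such failures can pass unnoticed in deployed systems.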

 

Snow Storm: Hannibal and his Army Crossing the Alps (1812) by J.M.W. Turner

 

Bibliography

Adam, A. (1995). A Feminist Critique of Artificial Intelligence. European Journal of Women’s Studies.

Cave, S. (2020). The Problem with Intelligence: Its Value-Laden History and the Future of AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 29–35.

Costanza-Chock, S. (2018). Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice. Proceedings of the Design Research Society 2018.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

Pasquinelli, M. (2019). How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence. Spheres, 5.

Pictet, J. (1979). The Fundamental Principles of the Red Cross: Commentary. International Federation of the Red Cross and Red Crescent Societies.

Raftree, L. (2019). A discussion on WFP-Palantir and the ethics of humanitarian data sharing. https://lindaraftree.com/2019/03/02/a-discussion-on-wfp-palantir-and-the-ethics-of-humanitarian-data-sharing/

Taylor, L., Floridi, L., & Sloot, B. van der (Eds.). (2017). Group Privacy: New Challenges of Data Technologies. Springer International Publishing.

Weizman, E. (2012). The Least of All Possible Evils: Humanitarian Violence from Arendt to Gaza. Verso.
