Marin Aeschbach

July 18, 2020 / 1107 words / 6 minute read

Farewell to Artificial Humanity: Deep Learning in an Anthropological Context

[Image credit: Rock and Roll Monkey]

Public discourse often prematurely equates new developments in Artificial Intelligence (AI) with a search for a synthetic subject. Understanding AI as a technology embedded in an anthropological-cultural context confronts this imagery with a counternarrative that helps clarify our ethical priorities.

Public debate about Deep Learning – the most promising approach in Artificial Intelligence for a few years now – often seems spellbound by the question of whether a synthetic (or artificial) subject is possible. An artificial subject would be a machine that is cognitively powerful enough to plan and implement a life of its own design (*Lebensentwurf* in German). This ability would then – as the argument goes – assign the same moral status to the machine as we do to humans. It is worth mentioning, though, that this does not reflect the current state of affairs in Artificial Intelligence research. Researchers are not interested in the creation of an artificial human: they consider it highly unlikely to be achieved (or achievable only in the distant future), useless, or even morally reprehensible.

We may interpret this chasm between expert activity and public perception as a reaction to the opacity of the technology at hand. Algorithms that adapt their own weights in response to sample solutions (a structure common to all of Machine Learning, and thus to Deep Learning as a subfield) seem to fire our imagination with their autonomy. The dynamic nature of the algorithm also means that its precise state is not always known to the researcher.
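To make this mechanism concrete, here is a minimal sketch of what "adapting weights in response to sample solutions" means in the simplest case. The linear model, the numbers and the variable names are purely illustrative assumptions of mine, not taken from any particular system:

```python
# Illustrative sketch: a single linear model adapting its weights
# to sample solutions via gradient descent. All values are made up.
import numpy as np

rng = np.random.default_rng(0)

# Sample solutions: inputs x paired with target outputs y (here y = 2x + 1).
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0          # the weights, initially arbitrary
learning_rate = 0.1

for _ in range(200):
    prediction = w * x + b
    error = prediction - y                # distance from the sample solutions
    # Nudge the weights in the direction that reduces the error.
    w -= learning_rate * np.mean(error * x)
    b -= learning_rate * np.mean(error)

print(w, b)  # approaches 2.0 and 1.0: the weights have adapted themselves
```

Even in this toy case, the final values of the weights are a product of the training run rather than something the programmer wrote down, which is the kernel of the "autonomy" that fires the imagination.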

Deep Learning distinguishes itself by extracting features from the input through several layers. In image recognition, for instance, an image is first decomposed into its color values at each pixel coordinate; these are used to identify edges, which in turn are used to identify contours and then shapes. All of this takes place in the software's internals; the process is opaque to the user. The precise state of the algorithm and the deep nature of the architecture are two intransparencies that leave us with plenty of space for speculation.
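The layered extraction can be sketched in a few lines. The following is a hedged illustration using PyTorch; the framework, the layer sizes and the number of classes are my assumptions, not anything described above:

```python
# Illustrative sketch of layered feature extraction in image recognition:
# raw pixels -> edges -> contours -> shapes -> class scores.
import torch
import torch.nn as nn

model = nn.Sequential(
    # Layer 1: combines raw color values of neighbouring pixels;
    # filters at this depth typically respond to edges.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 2: combines edges into longer contours.
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 3: combines contours into shapes.
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Final layer: maps the extracted features to, say, 10 object classes.
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 10),
)

image = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image (random stand-in)
scores = model(image)               # shape (1, 10): one score per class
print(scores.shape)
```

The user only ever sees the input image and the final scores; everything between the first and last line of the model is exactly the internal processing the paragraph above calls opaque.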

Using neural networks that are loosely modeled after biological neurons may suggest that researchers are primarily interested in understanding the human brain. On closer inspection, this is a non sequitur: The architecture of software designed to solve cognitively demanding problems does not necessarily teach us anything about how we solve them.

These unfortunate conflations throw us back to the big philosophical questions – What is consciousness? Mind? A subject? – questions whose meaningfulness I would doubt even if they did not pertain to research in Artificial Intelligence. To move on from these problems, I will briefly test the productivity of a counter-narrative. Its starting point is the historical fact that the learning machine, even if it were to become an autonomous subject, would still have to be made by us. It is a narrative of Artificial Intelligence as a technology embedded in culture.

Embedded here means: Technology is a human-world relation that fulfills human needs with regard to the world by adapting either us or the planet. The technological human makes the world into which they are thrown more comfortable by developing new practices of mutual adaptation.

These ideas may not seem particularly trendy right now, but they can be heard in the statements of many practitioners critical of the public discourse surrounding AI: When Joanna Bryson suggests that the history of Artificial Intelligence begins with the invention of writing, she is thinking of AI as cognitive exteriorisation.

Technology simplifies things (unless it is deployed for entertainment): It is easier to reap wheat with a scythe than by hand. Exteriorisation, a term coined by the French anthropologist André Leroi-Gourhan, is a particular form of facilitation: It denotes the delegation of entire operational processes to a machine such as the weaving loom.

Cognitive exteriorisation is thus the offloading of mental operations to increase our individual and collective capacity. By taking tasks off our hands, technologies like writing (a mnemotechnic), the pocket calculator or Google Maps create space in our heads for other things. For Bryson, all of this is AI to a degree. If machine learning translates languages or interprets x-rays for us, we delegate some of our burden, independently of whether we could use this technology to build a subject.

Exteriorisation as a concept embeds technology in its culturally determined context of use, without which our analysis would be incomplete. It reveals continuities between AI and regular old technology. Thus, classical problems in the ethics of technology come into focus, while the question of the machine's moral status, which dominates the debate about the artificial subject, is marginalised.

One example: Understanding AI as an attempt to improve people's material conditions must lead us to consider distributive justice. From the minor point of how much time an autonomous vehicle frees up for other things to the potentially life-altering effects of personalised medicine – Deep Learning (like most new technology) appears to widen the wealth gap: Those who can afford these things in the first place become more economically productive and are compensated accordingly, while the rest are left behind. It is crucial that these kinds of problems are kept afloat in the rip current around artificial people.