A Hint of Philosophy
Smart and Sensitive
Intelligent technologies are becoming ever more abundant. It is hard to deny their prevalence, their interference in human life, their growing importance - or their profound abilities, all of which makes it necessary to deal philosophically with the status of intelligent machines. Many of them can be considered at least semi-autonomous; there is even a company called "Autonomous" that manufactures flexibly self-acting desks and chairs, allowing furniture to interact with its owners.
But current developments do not stop with smart technologies. Robot toddlers that learn to stand up by imagining the movement and self-steering drones are not the only pressing issues for the philosophy of technology, because emotional machines are entering the stage - and the market. Robots with feelings and opinions, robots that care for and about their human companions, like Jibo the family organiser, and robots whose building processes involve "affective computing", which gives them emotional intelligence and capabilities: they may not yet be as widespread as smartphones and laptops, but they are being built, and they raise the question: what differentiates them from people with opinions and feelings, and what does that tell us about what it means to be a person, human or otherwise?
In philosophy, there have been a variety of attempts to explain technology, technologies and their relevance for human beings, both in terms of a theoretical understanding of what has been called the "essence of technology" and in terms of where the continuing "technification" of almost every aspect of life leads and where this leaves "the Human" as philosophy's frontrunner "Subject". Technologies have been understood as tools to be used by humans, for humans to inscribe their intentions on and apply to the world, but that no longer seems feasible when one is faced with Jibo, or with Pepper, who answers questions like "what would you like to be called if your name wasn't Pepper?", or with robots that learn to stand up as a human child would, and drones that decide for themselves where to fly. One of the problems (and, in my opinion, a crucial one) for common concepts like the instrumental theory seems to be that their way of grasping the world fails to grasp many of the beings that exist in it. In what follows, I would like to demonstrate what I mean by that and suggest how it might be possible to alter the philosophical understanding of technologies and, in the same vein, of humans.
The specific trouble that beings like Pepper seem to cause for conventional philosophy - and that I, as a newcomer to the field of philosophy, would like to spend some portion of my lifetime trying to solve - lies in the way they escape the boundaries of a dichotomous terminology. The robots I hope to examine in my philosophical future form the next chapter in a line of technological development that goes against the paradigms of anthropocentric philosophy, which explains the world in oppositional terms of Culture and Nature, Mind and Body, Human and Animal, Human and Machine, Subject and Object, focussing on humans not only as the centre of its studies, but as the centre of the world: the point of reference, heightened in importance and perceived as exceptional in ability and value. This results in a failure to understand beings that mix, or rather confuse, these categories, or tiptoe around them, standing clearly on neither side of what Bruno Latour, one of my favourite philosophers, calls the "great divide" between Society and Nature.
Donna Haraway, another of my favourite philosophers, has deconstructed the antithetical terms (or dichotomies) above by presenting philosophy with case examples like cyborgs and OncoMice. Cyborgs are technologically altered organisms, which means they are neither products of "Nature" in the traditional sense of the word nor mere "Cultural" artefacts, as they are still living bodies born as living bodies. Thus, the separation of the two domains Nature and Culture, as though they were in fact two distinct domains of reality as traditional thought held, does not provide what is necessary to explain and describe their existence: the dichotomy has been dissolved de facto, by its two opposites coming together. Keeping them apart theoretically is an arbitrary act, and the point of reference for the differentiation has been the question of whether something was made by human hands or not. This anthropocentric account of reality lost its merit when cyborgs appeared - at the latest.
A more specific example of Haraway's is the OncoMouse™, a mouse implanted with a human gene that causes breast cancer. Looking at this animal, this thing, this being, there is no point in trying to deduce where the "Natural" "Animal" ends, where the "Cultural" "creation" begins - or where exactly the "Human" comes in via the gene: as Haraway says, these mice are both "us" humans and "not-us". What is more, the anthropocentrism here takes on an aspect with dire consequences: because these animals are not humans, they are considered "worthless" enough to have their genetic material altered, knowingly doing them harm.
Latour has contributed greatly to the field by developing a way to investigate the roles played by beings formerly known as Objects in actions that anthropocentric philosophy reserves for its Subjects. Latour understands actions as cumulative events caused by various elements, the result of a network of acting instances all contributing to the outcome. To clarify that the active participants do not have to be human, he coined the term "actant". In doing so, he illuminates simple truths, like the fact that a shooter without a gun is not a shooter, and that the gun is causally involved in shooting someone. And he also reveals more complex roles played by what he calls quasi-objects, like keys that force you to lock up after yourself if you wish to retrieve them: these keys act as enforcers of certain behaviour; they are part of an actor-network that establishes specific social rules.
Robots are People T(w/o)o?
So this is the situation: Culture and Nature, Human and Animal have become mixed in reality, and Subjects and Objects are (to put it strongly) revealed to be more or less equal participants in the phenomena known as actions. In addition, robots are on the rise. While Haraway has written a little on artificial intelligences, neither she nor Latour developed the philosophy sketched above at a time when robotics was as advanced as it is now. Haraway, in my opinion, in a way anticipated "affective computing" when she theorised on the importance of affinity in the relations between companion species, but that was directed mainly at species in the biological sense. So an update is still in order, and the question remains: what is philosophical anthropology to do with its "Human" now that essential aspects of what it allegedly "means to be human" are taken over by machines? Now that machines are not only as intelligent as or even more intelligent than humans, but are acquiring emotions? Is there a chance for an anthropology that gives up talk of "the Human"?