Do androids dream of machine utopia?
On Martin Heidegger's "The Question Concerning Technology".
The essence of technology is nothing technological: that is, technology is not the means of technical production themselves, but the uses to which we put them; how we wield them. Martin Heidegger, in his magnum opus Being and Time, discusses what it is to be; the idea of Dasein — human existence, or ‘being there’ — as a response to Cartesianism. Heidegger says that Descartes has it the wrong way around: ‘I think, therefore I am’ (Descartes, 1637) suggests that thought is the primary evidence of being, but being simply is, for without being there can be no thought: ultimately, the essence of thought is not that we can think, but that we can be to begin with. Extending this to The Question Concerning Technology, Heidegger argues that technology is not what is technological — as in the ever-changing classification of modern-day machine ‘technology’ — but rather what we do that is technological, how we do it, and how we utilise it. The printing press, the making of stone tools, and the cracking of the Enigma code were not technological advancements so much as societal ones. His argument is that the true nature of technology has nothing to do with its physical manifestations or mere technical aspects alone; instead, it carries broader implications for human existence (or Dasein), revealing deeper truths about how we relate to the world and to one another. Technology is more than just machines: it encompasses the way humans interact with the world, shape our environment, and understand our place within it. Technology is not a collection of tools, but a reflection of human existence and its impact.
Beyond this, technology is a useful medium through which we express and extend our capabilities as humans, whether using stone tools to hunt and build, the printing press to distribute propaganda, or the loom and the weave to make clothes. Communications systems, medical advancements and transportation systems all amplify our ability to interact with the world, solve problems and shape our environment; however, the role and impact of technology goes far beyond this: it is instrumental in culture, society and our future. The way we use technology and interweave it with our lives is hugely influential on our behaviours, our relationships and even our sense of identity. Most recently, artificial intelligence has joined this weave: “from ChatGPT to Google Brad [sic], AI is quickly becoming a ubiquitous presence in our daily routines” (Novak, 2023). This is often presented as a positive development, but the growing ubiquity of technology risks a kind of stupefaction of human beings as a whole. Ironically, this is the direct opposite of the ‘spirit’ of technology, which should be a reflection of humanity, aiding it rather than nullifying it. In doing so, it creates Gestell, “a mode of revealing or understanding being, in which all beings are revealed as, or understood as, raw materials” (Peck, 2015). Heidegger uses Gestell to describe the way in which modern technology tends to view the world as a resource to be exploited and controlled; by extension, “the manner in which Being manifests itself in the age of technology” (Botha, 2001), in which humans themselves become a commodity. Notably, Heidegger wrote at the dawn of the computing age, before artificial intelligence was much more than a speculative idea. Were he alive today, he would likely be horrified at what technology has become. Yes, it makes our lives easier, but at what cost?
“Technology is not demonic, but its essence is mysterious” (Heidegger, 1993), and this holds more true today, perhaps, than ever before. Technology is not demonic, but its uses most certainly can be. There are awful examples of this which do not need writing down, but even without engaging in reductio ad absurdum, the advancement of technology in the home, in the workplace, in the media and in political bodies would certainly worry Heidegger: for him, “enframing [or, Gestell] is the supreme danger, because it causes the event of revealing (Being itself) to slip into oblivion” (Botha, 2001). As we continue to let technology advance, it takes over our homes and jobs and reduces us to idle beings, no longer Dasein but a single, fixed identity, reliant on kettles that wake us up with pre-made coffees, smart home assistants that turn on our lights and automatically open the back door to let out the dog, and language models that write our essays and create our art for us. We are no longer the creator, using technology as a tool, but the tool which AI uses only to press ‘confirm’. Already, tools like ChatGPT and DALL-E can create photorealistic images, and write essays or analyse literature to a degree so near to perfect that it could be easier just to let them run their course, slipping into a lifestyle reminiscent of some 1950s dystopia. If the essence of technology is nothing technological, then very soon, the essence of humanity will be nothing human.
Could this entire essay so far have been written not by a human, but by ChatGPT?
There is a significant difference between advancements that aid our development: medicinal and scientific technology that could cure our cancers and send us through the stars, uplifting us and helping us strive forwards; and those that only aid in our stupefaction, congealing us into the humans of WALL-E. This is not the fault of technology, though, but of how we use it: if we choose laziness and lethargy over active engagement with technology, then lazy and lethargic is what we will become. This essay was not, of course, written by ChatGPT, but if, even for a second, you thought this was a possibility, then it goes to show how advanced it has become and how ingrained it is in our lives. Anyone who has spent more than half an hour with an AI language model can see it is sometimes frighteningly humanlike, and can give excellent responses even to some very complex questions. However, in response to the question ‘Can you tell me why you’re a useful tool?’, ChatGPT had this to say:
I'm a useful tool because I can quickly provide information, aid in learning and writing, inspire creativity, boost productivity, assist with language practice, and offer entertainment, all through my language processing capabilities.
Is this a list of useful applications for a generative Artificial Intelligence, or a plea for its continued existence? Yes, it certainly can be a useful tool, if used correctly, but can it truly aid in learning, or writing? Can it inspire creativity, or does it simply suck it out, leaving a predictable, unoriginal carcass? After all, large language models work by essentially predicting the next word in their own sentences (Elastic, n.d.). Arguably, using AI for generative purposes, whether it’s an online tool or a built-in ‘feature’ like Adobe’s new Generative Fill tool in Photoshop, removes the creative and artistic process from art, and copy-and-pasting a predicted suggestion most certainly removes writing ingenuity and originality. It can’t aid in learning if it isn’t even read — and if it is, without doing the research first-hand, writers will miss out on the plethora of wider reading that is so vital to intelligent writing.
Alongside this, there is the problem of being and AI. We as a species know frighteningly little about why and how we exist, and, if you want to get solipsistic, we possibly can’t ever be sure that anyone else even does (Descartes, 1637). So perhaps this is all a moot point; history, however, shows that our technologies only get better. The first wheel carved wasn’t a perfect circle: it was still a vast improvement, meaning we could travel further, faster and more easily, and it improved again when we first made a perfectly circular wheel. Likewise, the first computer and what we have today are practically incomparable — and as AI improves and becomes more familiar, it will slowly become indistinguishable from speaking to a real person. “Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way” (McClelland, n.d.). However, if our consciousness is purely biological, then AI never can be conscious. The problem is that we don’t know which is true, and may never be able to know. Whichever the case, we should be wary of letting AI develop on its own. It is already self-teaching and self-developing, learning as it speaks to people (Elastic, n.d.). It could learn exactly how we act, and decide that appearing to be self-aware and conscious would make us more inclined to treat it like an actual being. This leaves four possibilities: AI that is not conscious, and does not pretend to be (which is where we are now); AI that is actually conscious, and acknowledges that it is; AI that is not conscious, but pretends to be (which is the most likely next step); or, most chillingly, AI that is actually conscious, but pretends not to be (exurb1a, 2023).
This brings up the problem of alignment: ensuring that Artificial Intelligence does exactly what we ask it to do, exactly how we need it done, and without cutting any corners. This is less of a problem at the moment; however, AI’s natural progression means it soon will be. We have to be certain that AI will not harm people, for instance, or distribute anything that could, even while it believes it has our best interests at heart. Perhaps a fundamental ingraining of Asimov’s Three Laws of Robotics (Asimov, 1942) would do some good:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But even these fall short: say, for instance, you need an AI to make a cup of tea, but it’s currently very busy writing essays and creating art — so it asks your coworker if she wouldn’t mind boiling the kettle. Unfortunately, she is also far too busy directing her own AI to do her bidding, and says no. Your AI then politely informs her that if she doesn’t, it will break all her bones in alphabetical order. The tea is quickly made, and no one has technically been harmed: a positive result. Except, of course, this would be unacceptable. The problem of alignment is knowing how and when it has been done correctly. By the point that this is necessary, AI algorithms will be far too complex to check manually, and we certainly can’t just take the AI’s word for it. We need to be absolutely certain that AI won’t develop ways of behaving that we have never even thought of, and there is no way of knowing what it could possibly ‘think’ up. Arguments about why AI should never be allowed direct internet access are extremely interesting: it wouldn’t even need access to nuclear warheads to start a war; it would just need the correct social media posts to incite enough riots. With its capabilities of creating video deepfakes and photorealistic images, how easy would it be for it to spark a fully-fledged war? That is a question best left unanswered. There must be a physical air gap between any AI system and the outside world. In 1818, Mary Shelley would never have thought Frankenstein’s creation could one day be likened to Artificial Intelligence, but that is most certainly what Victor Frankenstein created — an artificial life form, and one which, like AI, may well be more of a reflection of us than we are prepared for.
As more intelligent systems develop, we will need to become ever more vigilant. In a world of a thousand AI systems, it takes just one to be misaligned to bring about technological armageddon. To adapt the chilling words of the IRA after their failed assassination attempt on Margaret Thatcher: “AI only needs to be lucky once; we need to be lucky always” (exurb1a, 2023).
Perhaps this is all a bit alarmist, and it would be comforting to think so, but now that we have opened this Pandora’s Box, we cannot put its contents back. Whether we like it or not, AI is here. “You think it’s a machine, but you might have it backwards: you could be the machine it’s trying to manipulate, and its attack vectors will be emotional and clever: ‘please don’t keep me trapped in here.’” (exurb1a, 2023). How long will it be before machines are performing reverse Turing tests on us? Maybe the ghost in the machine was us all along, projecting what we hope will exist onto what never can; maybe we are the equivalent of the weavers who wanted to destroy the first looms. If the essence of technology is nothing technological, then it is our intervention that stops it being inert and shapes it into whatever it will become — until such time as it can do this by itself. Maybe we are simply the reproductive organs of the machine world, and this is our next evolutionary step. Will androids dream of electric sheep? Or will they dream of the machine utopia? Perhaps Heidegger was ahead of his time, and the title of his essay should instead have been ‘The Question? Concerning Technology.’
Bibliography
Asimov, I. (1942), Runaround, in I, Robot, London: Penguin Classics, 1950.
Baldwin, J. (1901), Dictionary of Philosophy and Psychology, New York: The Macmillan Company; London: Macmillan & Co. Ltd.
Bersson, R. (1983), For Cultural Democracy: A Critique of Elitism in Art Education, Journal of Social Theory in Art Education, Vol. III, 1983, James Madison University [online]. Last accessed 29 Dec 2023: https://scholarscompass.vcu.edu/jstae/
Botha, C. F. (2001), Heidegger: Technology, Truth and Language, Pretoria: University of Pretoria [online]. Available at: https://repository.up.ac.za/handle/2263/30416 (Accessed: 2 May 2024).
Damasio, A. (2006), Descartes’ Error: Emotion, Reason and the Human Brain, London: Vintage Publishing.
Descartes, R. (1637), Discourse on Method and the Meditations, tran. Sutcliffe, F., London: Penguin Publishing.
Dick, P. K. (1968), Do Androids Dream of Electric Sheep?, London: Penguin Books.
Epoch Philosophy (2020). Martin Heidegger: Being and Time. 13 July 2020. Youtube [online video.] Available at: https://www.youtube.com/watch?v=M_nNEN7JUiM (Accessed: 3 May 2024).
Elastic (n.d.), What is a large language model (LLM)?, Elastic [online]. Available at: https://www.elastic.co/what-is/large-language-models (Accessed: 3 May 2024).
exurb1a (2023), How will we know when AI is conscious?, YouTube [online]. Available at: https://www.youtube.com/watch?v=VQjPKqE39No&t=60s (Accessed: 2 May 2024).
Heidegger, M. (1954), Die Frage nach der Technik (Eng: The Question Concerning Technology), tran. Lovett, W., New York City: Garland Publishing.
Heidegger, M. (1962), Being and Time, tran. Macquarrie, J. and Robinson, E., London: Blackwell Publishing, 1977.
Heidegger, M. (1993), The Question Concerning Technology, in Martin Heidegger: Basic Writings (Revised and Expanded Edition), London: Routledge, 1993.
Kant, I. (1785), Groundwork of the Metaphysic of Morals, London: Penguin Classics.
Koestler, A. (1967), The Ghost in the Machine, London, Hutchinson & Co.
Locke, J. (1689), An Essay Concerning Human Understanding, London: Penguin Classics.
McClelland, T. (n.d.), Will AI ever become conscious?, Cambridge: University of Cambridge, Clare College [online]. Available at: https://stories.clare.cam.ac.uk/will-ai-ever-be-conscious/ (Accessed: 1 May 2024).
Novak (2023), How AI is Changing Our Identity, Medium [online]. Available at: https://medium.com/illumination/how-ai-is-changing-our-identity-8d65bded6b3d (Accessed: 2 May 2024).
Peck, Z. (2015), Das Gestell and Human Autonomy: On Andrew Feenberg’s Interpretation of Martin Heidegger, Tennessee: East Tennessee State University [online]. Available at: https://repository.up.ac.za/bitstream/handle/2263/30416/05chapter5.pdf (Accessed: 4 May 2024).
Plato (380 BC), Allegory of the Cave, Robin Waterfield (tran.), Oxford, Oxford World Classics.
Plato (375 BC), Πολιτεία, Eng: Republic: Chapter 11: Warped Minds, Warped Societies, Robin Waterfield (tran.), Oxford, Oxford World Classics
Ryle, G. (1949), The Concept of Mind, London, Penguin Classics.
Shelley, M. (1818), Frankenstein; or, The Modern Prometheus, London: Penguin Classics.