The design psychology of having technology do all the thinking.
Why am I ok with being replaced?
Working as a Design academic in the era of AI, I get to experience the lurking fear of the discipline becoming obsolete. Despite my hope that what I teach my students, drawn from decades of reading, writing, processing and debating knowledge, cannot be replaced by AI, I am not sure universities see it that way or will in the foreseeable future. AI is getting better (at doing certain things) and there is growing pressure to concede and let it make my life ‘easier’. The question that always puzzles me when I hear that argument is why I would want a machine to do the ‘thinking’ for me, especially when I don’t know where this ‘thinking’ comes from. The obscurity that surrounds AI is almost metaphysical: I know it works, but I don’t know exactly how, or what the dangers are. My guess is that if I surrender my thoughts, feelings and ideas to AI and they get lost in a black hole never to return to me, they will still be documented as mine. Where does that leave me?
Deskilling and de-thinking are not new phenomena, and they are not only AI-related. A short history of deskilling goes back to the beginning of industrialisation, when people started to rely on machines for goods they were no longer producing themselves. This went a step further with:
a. Bureaucracy, the organisational and logistical compartmentalisation of work, and
b. Disciplines, the epistemic specialisations, disconnected from each other, that came out of breaking down the production of knowledge.
As a result, I am expected not to know how my car, washing machine, computer or phone works. If they break, I can’t repair them. I can’t even open them without being liable [1]. What I do know, though, is that this is rooted in an impromptu design psychology, where a designed condition (technology) makes me lose my skills, triggering the psychological side-effects of helplessness and dependency.
Treating technology as a black box exacerbates a blind faith in its capabilities, ethicalness and harmlessness despite evidence proving otherwise, as in the cases of social media coercion [2], conflict minerals used in computers and phones, and data weaponisation. This did not happen overnight. Centuries of deskilling have normalised the comfort of using something without knowing what it is made of, where it came from or who built it. Being willingly ignorant entails a reluctance to recognise the political and psychosocial implications of technology, as well as the consequent loss of agency to hold it accountable.
The discomfort I experience with AI is an anomaly to the conditioning I have been exposed to, since I am a by-product of bureaucracy and the service economy. What I see around me is a dissociation between technology and its side-effects, intentionally orchestrated by late capitalism. The speed with which feelings, thinking, memories and experiences get replaced by responses from AI leaves no time to process their subsequent externalisation. In this light, all experience becomes mediated and therefore not one’s own. What this removes from human experience is the impact a perception, thought, feeling or event makes on the brain when processed and intertwined with previous experiential imprints. Brain plasticity, no matter the claims made by AI proponents, is still far from being fully decoded. While brain plasticity depends on the various encounters I face daily, if I don’t let my brain respond to them on its own, it will work less [3]. My over-reliance on AI/technology will also prevent my resilience from growing through facing difficult emotions and solving problems on my own.
Not staying with the trouble [4] is not as pleasant as it sounds. All the negative feelings that AI and technology-driven distractions help repress come back as depression, insomnia and anxiety. As seen recently, Large Language Models (LLMs) have been associated with mental health conditions such as AI psychosis. Being in a mental state of esoteric dysfunction gets amply exploited by corporations like Amazon, Block and, recently, Atlassian, who use AI as an excuse to restructure amid global economic pressures and their own pursuit of endless growth. Furthermore, the Verizon CEO’s projection that 20–30% of the American working population will become unemployed presupposes that these people won’t have the agency to fight back, due to being ‘unusable’, a term coined by my good friend and philosopher Anthony H Fry, for all the reasons presented here.
Yes, the challenges are known, at least to some of us, but how do I respond? How do I hold on to my agency, skills and problem-solving capacity? The first thing to acknowledge is that, whether I like it or not, I am posthuman. Technological mediation is part of human life, from energy to pacemakers. What becomes urgent is being aware of how much agency I am, or am not, giving away, and recognising how technology interferes with skills, critical thinking, cognitive growth, emotional resilience and connection with other humans and nature.
What I call intentional design psychology becomes an active effort to confront the impromptu design psychology of dissociation and its ramifications, as described above, by contesting the silos created by modernity between nature, society and technology. This happens through reconfiguring ‘reality’: mapping the socio-technical, ideological, political and economic systems that keep dissociation alive, and stepping outside the restrictions of imposed identities driven by late capitalism. Comprehending the mechanisms behind the obscurity of technology, and the role it plays within a massively disturbed ecosystem, will reveal the extent of human fragility and the ongoing separation of humans from each other, their material life and nature. By reclaiming knowledge and understanding of how technology affects everyday life, the designs that remove human agency and sustain dissociation can be contested. They can then be replaced by new designs generating psychologies of interdependence.
For this to happen, designers, psychologists and all theorists and practitioners contributing to this condition must be repaired by acknowledging the minority superiority (objective, White, male), embodied individualism, techno-dependence, disconnection from past and locality, and lack of skills to cope with the threat of climate change in which they are embedded. The call is to stop losing skills and start acting on what can be saved: species, livelihoods, friendships, resources, life. New designs of living won’t arise from AI but from transforming eco-fear and eco-anxiety into action that enables adaptation to derive from lived experience, localities and collectives. Technology can facilitate this effort as the support act, but not the leading lady.
[1] Perzanowski, A. (2022). The Right to Repair: Reclaiming the Things We Own. Cambridge University Press.
[2] See the work of Jonathan Haidt and Jean Twenge.
[3] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872.
[4] Haraway, D. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.


