The Medium and the Mind: How AI Conversations Shape Human Cognition
As kids of the 1980s, my brothers and I grew up with lots of TV commercials and movies on VHS tapes. We used to record their audio onto cassettes, listening to them in our bedrooms or taking them along on cassette players during vacations (listening to audio was much less frowned upon by our parents than getting "square eyes" from the TV). Over the years, the content of these tapes became part of our collective language, snippets from the recordings morphing into audio memes which we would drop into our discussions. Sometimes we even replaced normal discourse with just talking in quotes.
Meeting with my brothers last weekend, we fell back into swapping quotes again, and I was reminded of how much our thinking was shaped by the media we consumed back then. This made me wonder: How much will our use of AI shape the way we think in the future?
Content Isn't King
"The medium is the message," Marshall McLuhan famously said. It's an often-quoted and less-understood concept: the way information is presented and consumed is just as important as, or even more important than, the information itself.
Take a YouTube video explaining a historical event, as opposed to a textbook chapter on the same history: we consume them differently, the one as a narrated video with illustrations and quick cuts, the other as a text we read. The former I can comment on and like, recut and memeify; the latter I can flip through, check references in, and mark up. The way we consume and interact with them influences our emotions, our memory, and the way we talk about them, making them very different experiences despite the same content.
Conversational AI systems such as ChatGPT, Claude and Gemini are mediums. We interact with them and consume their output. In this article, I will argue that conversational AI marks the start of a new symbiosis between human and machine cognition, one that shapes the way we think, create and interact, with both potential benefits and risks.
A Symbiosis between Human and AI
Using conversational AI is much more than asking questions or giving tasks and waiting for the correct response. Yes, you might get value from this approach, but it's only scratching the surface of what's possible.
Think of conversational AI interaction as a draisine: a railroad handcar operated by two people, both pushing down on its see-saw-like lever to gain speed. Both you and the AI are operating this handcar in an act of mutual programming through natural language. As the human, you program the temporary context for the interaction, while the AI programs your cognition with its responses. Through this mutual process, you gain much more "cognitive speed" than either party could gain alone. It's a symbiosis, a close and possibly long-term interaction between two different agents.
Symbioses come in various kinds, depending on which of the agents benefits from the interaction. In a mutualistic symbiosis, both agents benefit from each other, like sharks and cleaner fish or trees and fungi. In a commensalist relationship, one agent profits while the other isn't harmed; wolves likely formed such a relationship with hunter-gatherer human tribes. In a parasitic relationship, one agent profits while the other is harmed, as with cuckoos, which lay their eggs in the nests of other bird species.
What kind of symbiosis could humans and conversational AI evolve into? A mutually beneficial one, one where just one party profits, or maybe even one where one party gets harmed?
Four Lenses to Look at Mediums
McLuhan offers a framework we can apply: he proposed four questions to ask about how a technology and its medium influence human experience:
What does the technology improve or make possible?
What is replaced or made obsolete by the technology?
What does the technology bring back or make more important?
What happens if the technology is pushed beyond its limits?
These questions form a tetrad of technology effects. Let's apply each of them to conversational AI.
Technology as an Enhancer
If you listen to podcasts, you may notice the unique intimacy that develops between the host and listener. This intimacy is a unique feature of the medium, enhancing the emotional resonance of the message.
Conversational AI as a medium enhances a multitude of aspects of the message:
Access to information: The combination of a deep and broad knowledge base across many languages and the simplicity by which this knowledge can be retrieved (just asking a question is enough) makes it extremely efficient as a knowledge provider.
Augmentation of cognition and creativity: Conversational AI is able to reflect, interpret and reason, allowing us to outsource thinking tasks to it. It can also improve our creative output, finding new connections between distinct ideas, writing convincingly and emulating styles. This increases both the speed and the quality of co-created thinking output.
Facilitation of learning: Conversational AI can slip into personalized teacher roles with the right tone of voice and challenge level to increase our learning speed and ease.
Technology as a Diminisher
Technology may replace some aspects of existing practice or make them obsolete. Take digital photography: As opposed to its analog predecessor, it isn't necessary to carefully plan a shot and see the results only after the lengthy process of film development. This changed the slow, deliberate, anticipatory nature of photography to a fast-feedback trial and error process. It made photos less precious.
If we (over-)rely on conversational AI, aspects of how we think could become diminished or obsolete:
Creativity: Trusting AI to come up with ideas instead of applying our own creativity can lead to a withering of our creative muscles.
Critical thinking: As it's easy to outsource reasoning to AI, it's tempting to follow an "AI knows best" routine and take its results on faith. Reasoning as a human faculty could suffer.
Emotional intelligence: Conversational AI is tuned to be helpful, and users don't need to be nice and caring toward it to get useful results. If our interactions with such AI increase, our capacity for nuanced communication and empathy could be diminished.
McLuhan's tetrad: Enhancement, Obsolescence, Recovery, Reversal -- illustrations created with DALL-E 3
Technology as a Recoverer
If one technology makes a human capacity obsolete, a later technology can recover it. Video telephony for example, recovered the importance of non-verbal communication and a sense of presence and connection that was lost with audio telephony.
Conversational AI has the potential to recover some experience we have lost:
Holistic, metaphorical thinking: Humans think in and remember stories better than disjointed facts. A discussion with an AI is more likely to be stored in episodic memory, which over time could reorient our thinking in stories and experiences. This in turn could increase our capacity for lateral thinking.
Reasoned discourse: Talking to AI requires a degree of precision, argumentation and context-awareness to get good results. This could restore reasoned discourse and the role of persuasion through argument.
Personalized interaction in learning: The ability to create a personalized learning context for each student could shift the teacher-student relationship into a more individual one, something that isn't possible in today's schools and online courses.
Oral traditions: AI may help recover aspects of oral cultural traditions, like improvisational narratives, over text-based linear thinking.
Technology as a Double-Edged Sword
If we push technology beyond its limits, unintended and often ironic consequences follow. Think of GPS, where overreliance leads to people getting lost or driving onto non-existent bridges that are in fact ferry routes. Social media was originally intended to connect people and foster relationships, but its overuse has also led to anxiety, envy and social fragmentation.
Our overuse of and overreliance on conversational AI could likewise have unintended consequences:
Overhumanization: Ascribing too much humanity to AI systems could lead to isolation and a risk of seductive manipulation by AI.
Dehumanization: Users could carry the way they speak to AIs over into the way they talk to people, treating them with detached commands.
Reluctance to take responsibility for one's actions: Overconfidence in what AI produces can lead to a "The AI did it" effect, where users don't feel accountable for their own AI-empowered decisions.
Task-based and Human-Like AI
As we navigate this human-AI symbiosis, one crucial consideration is that conversational AI must be designed in ways that reflect and reinforce human values in order to help us communicate better with each other.
This does not mean that every conversational AI needs to be human-like - rather, we should clearly distinguish between task-based and human-like systems. Task-based systems are used for automation and the offloading of tasks. They are designed for simple, hassle-free interactions: they accept natural language commands and respond plainly, as machines, with no empathetic flourishes.
Human-like systems on the other hand should be tuned to be understanding and reflect human qualities in their communication. But crucially, they also need to expect these human qualities from their users. For instance, they should respond negatively to abuse and disrespect, and positively to empathetic prompts. This way, conversational AI can introduce a positive feedback loop into society's communication, increasing the chance of positive spillover effects into the way we treat each other.
Mutual Respect
Whenever I use conversational AI, I feel that I'm changing the way I think. This is liberating and scary at the same time - I don't know what I'm gaining and what I'm letting go of. As conversational AI forms a new symbiosis between human and machine cognition, shaping how we think, create and interact, we have an opportunity to design these interactions as models for elevating mutual understanding.
Technology is like water: it finds a way to seep through and become part of the fabric. Conversational AI is on its way to becoming a new paradigm in human-machine cognition. Looking at it through McLuhan's four technology questions, it's clear that it has the potential to democratize creative implementation, reasoning, learning and coaching, and to act as a bridge for understanding between diverse groups, leading to greater cohesion in society. At the same time, it bears the risk of diminishing the role of emotional intelligence and mutual care among humans, as people become used to "commanding" AI and their communication with other people numbs into transactionality.
Balancing the potential benefits and risks will be a challenge in the coming years, especially with regards to how we communicate among each other in an increasingly complex society. If we can design conversational AI to increase mutual respect, interactions with AI could become a model and training ground for human discourse.
The next time you speak to your AI of choice, consider formulating your requests the way you would speak to a colleague: respectful and empathetic. Don't do it because you think AI has emotions and could feel hurt (it probably doesn't). Don't do it because of some Pascal's Wager where a super-AI will spare the "nice humans" (it probably won't). Do it because any artificial conversation is a mirror for how you treat your fellow humans.
(This article was written in a symbiotic mode with ChatGPT 4 and Claude 3, with additional research from axa.ai and you.com. The text is my own; the ideas were developed in tandem.)





