Cognitive Privacy, Cognitive Piracy
Who Owns Your Thoughts When You Think With AI?
Here you are, reading this article. Who are you in this moment? Are you a private person, enjoying this as a pastime? Are you an employee, getting up to speed on AI developments? Maybe a manager who is working on your company’s AI strategy? If you feel this is hard to answer, it’s likely because the boundaries between those roles are permeable.
For as long as humans have been around, thoughts have been fleeting: They bounce around in our heads in shapeless ways as images and voices, until they stabilize into expressions like speech, writing or drawings. Speaking and writing are thinking: They are acts of compression and crystallization, a form of translation from neural pathways to language. And even in this solidified form, most thoughts remain uncaptured: Words spoken to friends and colleagues, scribbled on paper to be thrown away and forgotten. Only rarely do they become further condensed into ideas, concepts, or stories.
Privacy Relies on Natural Filters
The concept of privacy is tied to this process of thinking and filtering. Nothing is as personal as thought, and we rarely feel comfortable sharing notes and half-baked ideas with others. Our physical environment makes that easy: If what we say at the coffee machine is heard only by a colleague (and loosely remembered, if at all), and what we jot down as a shopping list is thrown away once we’re done with it, privacy is preserved by the constraints of our reality - not because there’s a rule for it. The private evaporates by itself.
The private evaporates by itself.
AI changes this. Increasingly, we can capture everything we write and speak in digital form, and processing it is no longer impossible or even tedious. We can write notes in a Notion or Confluence page and have an AI assistant turn them into a concept. We can photograph a drawing on a whiteboard and get a perfect flowchart. We can take a walk through the woods and speak our thoughts via voice mode into our phones, reflecting on our week with an artificial buddy.
AI Collapses Process and Product Into One
The implicit turns explicit: What we discuss with AI contains not only the documents we create or the ideas we flesh out, but also the thinking that went into creating them. The process and the product become entangled. For instance, if I use AI to think about an idea - its mechanism, examples, ways to explain it, the assumptions it’s based on - that’s the process: a scaffolding for an outcome. The idea, concept or business case is the product. They used to be separate. The use of AI collapses them into one.
You may ask: so what? More data is captured and processed - that can be seen as progress. It can also be seen as an erosion of our rights. Or both.
But the blurring of the line between thought process and product creates a new tension between two fields that the law treats very differently: laws governing privacy and laws governing business. At least in the European view, thoughts are intimate knowledge belonging to the thinker. Nothing captures this better than the 19th-century German folk song “Die Gedanken sind frei” (“Thoughts Are Free”):
Thoughts are free, who can guess them?
They fly by like nocturnal shadows.
No person can know them, no hunter can shoot them
and so it’ll always be: Thoughts are free!
Laws Designed for a Divided Reality
Thoughts are bound to experience, consciousness and identity, the most personal things any person has. Privacy law protects these aspects - not just for the individual, but also to guard society from an erosion of expressive freedom through the “chilling effect”: If we think we are being watched, we become more guarded and behave differently, which in turn undermines democracy. Swiss data privacy law¹ protects the “personality and fundamental rights of natural persons”. The European GDPR goes further, grounding data protection in fundamental rights and human dignity.
In a work context, Swiss labor law translates this into personality rights that must be protected at the workplace: Employers must protect employees’ personality rights², may only process job-relevant data³, and are prohibited from monitoring employee behavior at the workplace⁴. These protections cannot be waived - they apply even if the employment contract says otherwise.
Business law, on the other hand, governs work outputs, the product of employees’ work: What you create as part of your job belongs to the employer and is protected by contract law⁵. That Excel sheet with the business case you calculated is the property of your company.
In other words: The law mirrors the same division we saw earlier - outcomes (products, duties, inventions, designs) on one side, process (personality, behavior) on the other. Both are legitimate, neither is wrong: They were designed separately because reality was divided in the same way.
The legal landscape was designed for a divided reality - outcomes on one side, process on the other.
The Better the Context, the Deeper the Exposure
Now consider an interaction between an employee and an AI assistant: The employee is preparing a presentation for an important project and starts by gathering their thoughts about its history - how well the planning went, their pride in the data collection, and the issues they had collaborating with the technology partner. They describe the approach they settled on and the architectural choices for security. To make the presentation more relevant, they describe what they think the steering committee members want to hear and reflect on a past experience where a presentation went badly and led to a conflict with their boss.
All this information helps the AI assistant draft a structure, saves the employee time and improves the outcome. The richness of detail makes it easier for the employee to explain - it flows naturally, as if told to a colleague - and creates the context AI needs to increase quality. Not sharing these personal details would make the presentation more generic and less goal-oriented.
Is this interaction a work product or a thought process? Which laws govern it? You will find that it includes personal reflections (protected by personality rights), work products (the property of the employer), behavioral data (which employers are prohibited from monitoring) and possibly confidential knowledge (governed by the duty of loyalty).
No legal mechanism exists to answer the question of how such an interaction should be treated and to whom it belongs. The use of AI makes thought processes explicit and mixes them with work outcomes, behavioral and proprietary data. The result is something that cannot be adequately parsed by today’s legal systems.
Who Is Writing This?
Here I am, writing this article on an Easter Sunday morning. I am typing it after discussing its core idea with an AI assistant, planning its structure on the kitchen table with index cards, and getting AI feedback on typos and readability. Who am I at this point? Am I a private person, wanting to share my thoughts with my network and discuss them over lunch with friends? Am I an employee in the financial industry, influencing how my employer’s branding and reputation are perceived? Am I a freelancer building thought leadership and generating leads for consulting gigs?
Which version of me is doing this? I can’t say, because they aren’t separate. I am all three, and more.
Everyone’s a Pirate
AI fuses thoughts and work products into something that is neither and both, and the law is not set up to deal with this. There are no governance frameworks to solve it, no societal consensus on who owns it. And in this absence, both individuals and organizations will transgress, extracting value they don’t own:
By writing and talking with AI about what happens at work, individuals will feed company knowledge into personal AI infrastructure. “Shadow AI” is already an issue for organizations. Apart from high-security environments where employees hand over their private phones at the entrance, measures like firewalls and hardware controls won’t stop people from using private AI; the frustrating gap between private and corporate AI capabilities only adds fuel to the fire.
At the same time, organizations profit from their employees’ private use of AI tools through increased productivity, quality and innovation - all without having to pay for licenses or governance. What incentive does a team leader have to curtail an employee’s use of ChatGPT or (gasp!) OpenClaw if it increases the team’s efficacy? It’s like unpaid overtime: Everyone knows about it, but nobody accounts for it.
So both parties could claim that the other side is engaging in “cognitive piracy” while they are protecting their “cognitive privacy” - and both would be correct. This creates an uneasy equilibrium of “don’t-ask-don’t-tell”, to the aggravation of privacy advocates and information security teams alike. This balance won’t hold. Where it breaks depends on choices being made right now. Here are four directions it could take.
Scenario 1: The Glass Office
Driven by compliance concerns, especially in regulated industries like finance, healthcare and transportation, companies will increase their efforts to limit data leakage to non-authorized AI tools. This will mean tighter firewall rules; tracking, logging and monitoring of AI usage; and additional compliance training and audits. Companies will treat AI usage as company property, analyze it - What are employees talking about? Are they sharing data they shouldn’t? - and mine it for useful patterns like flight risk, disengagement or conflict.
This transparency will create a chilling effect within the company. Employees will start to self-censor in their interactions with AI, which means they use it less as a thinking tool. If coupled with performance KPIs, they will use AI in a sanitized, performative way to signal adherence to rules and priorities instead of using it for messy processes like working through conflicts, testing silly ideas or questioning assumptions. This hurts innovation and performance - the promise of amplified individual capabilities won’t materialize. Talent with career options will leave for organizations with more permissive AI policies. Employees without this freedom will either use AI half-heartedly or resort to more clandestine shadow AI tactics.
Scenario 2: The Cognitive Firewall
As people integrate AI into more and more aspects of everyday life, society recognizes that talking to AI is a personal cognitive process worthy of protection. This gives rise to a new data category: “cognitive process data” - information about how people think. Organizations continue to provide AI infrastructure but set up a two-tier system. Thinking data is encrypted, accessible only to the employee, while work outputs are shared internally. Both remain company property.
As an employee, I could use the company systems to talk to an AI colleague who will treat my discussions as private, and upload final products (presentation, concepts, strategies, code) to the company’s product servers. Once I leave, my thinking data gets left behind on company servers and eventually deleted - just like personal emails on a company account. Only memories remain.
Two paths lead here - either top-down through new legislation, or bottom-up through industry standards. Organizations will use this as an employer branding lever (“We don’t read your AI chats”). But the tension remains: What is a process, and what is a product? The legal system will have its hands full sorting this out. Not to mention the technical challenge of separating process from product.
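To make that technical challenge concrete, the two-tier idea can be sketched as a toy access-control model. Everything below is hypothetical - the names (`TwoTierStore`, `save`, `read`, `offboard`) are illustrative, and a real system would rely on actual encryption and identity management rather than in-memory checks:

```python
from dataclasses import dataclass, field


@dataclass
class TwoTierStore:
    """Toy two-tier AI data store: 'process' entries (cognitive process
    data) are readable only by their author; 'product' entries (work
    outputs) are shared within the organization. Both stay on company
    infrastructure, mirroring the scenario's ownership split."""
    entries: list = field(default_factory=list)  # (kind, owner, content)

    def save(self, kind: str, owner: str, content: str) -> int:
        if kind not in ("process", "product"):
            raise ValueError("kind must be 'process' or 'product'")
        self.entries.append((kind, owner, content))
        return len(self.entries) - 1  # handle for later reads

    def read(self, requester: str, handle: int) -> str:
        kind, owner, content = self.entries[handle]
        # Tier 1: thinking data stays private to the employee who created it.
        if kind == "process" and requester != owner:
            raise PermissionError("cognitive process data is employee-only")
        # Tier 2: work products are company-internal and shared.
        return content

    def offboard(self, employee: str) -> None:
        # On departure, process data is deleted; products remain with the
        # company. (Rebuilding the list invalidates old handles - one of
        # many simplifications in this sketch.)
        self.entries = [e for e in self.entries
                        if not (e[0] == "process" and e[1] == employee)]
```

In this sketch, a colleague reading someone else’s “process” entry raises `PermissionError`, while “product” entries are readable by anyone in the firm; `offboard` mirrors the rule that thinking data gets left behind and eventually deleted. The hard part the scenario glosses over sits in `save`: deciding which tier an entry belongs to.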
Scenario 3: The Wild West
Personal AI becomes so normalized and ubiquitous that everyone will rely on it in all situations, including work. What we call “Shadow AI” today will become the norm as BYOAI - Bring Your Own AI - and organizations accept it as the price to pay for much higher employee performance (or simply getting access to talent at all). Employment contracts will contain rules like “any work-related output from any AI tool is company property”, but enforcing ownership would require something like criminal proceedings.
On the other hand, organizations get this AI-augmented work for free, as employees pay for the tools, take the legal risk and use them in their spare time. Like unpaid overtime, it acts as a hidden subsidy, which creates an incentive to leave the situation unclarified. In some cases, legal conflict will emerge at termination as employees threaten to take months of company-context thinking with them. Maximal freedom produces maximal insecurity. Lawyers rejoice.
Maximal freedom produces maximal insecurity.
Scenario 4: Cognitive Sovereignty
As society at large recognizes AI-assisted cognition as a fundamental “right to think (with AI)”, new laws emerge to cement this liberty. Employees are guaranteed the use of whatever AI tools they like while still being held to their duty of delivering work products. As with today’s smartphones, I will be expected to bring my AI agent crew - which I use privately - to work. The employment relationship becomes output-based rather than process-based. Some industries will tilt toward a gig economy without employment contracts.
The freedom to use personal AI tools for thinking gives employees an advantage: they act freely, are more engaged and produce better work. Legally, employees and employers are bound by a new “cognitive duty of care” - an obligation for employers to uphold their workforce’s right to think freely and privately in the workplace, while individuals are obligated not to export confidential information (this will create significant challenges for protecting an organization’s intellectual property). Collective bargaining will be affected as well, as cognitive sovereignty clauses become standard in industry employment agreements.
No Scenario Is Stable
Regardless of the scenario we’re steering towards, a few dynamics are relevant for all of them:
First of all, the use of private AI - no matter if you call it shadow, bring your own, or sovereign AI - will be universal. If adoption continues, employees won’t accept using only corporate AI tools. Embracing this is the only way to find better solutions for effective AI use at work.
Secondly, talent will move to where personal AI use is not only permitted but embraced. As long as talented people are needed to perform work (and your mileage may vary on that time horizon), organizations can turn this into an advantage. Some talent will decide to move out of the traditional talent market and become contractors, skipping the risk of corporate surveillance altogether. Unfortunately, this will widen the gap between those who can choose their employer or go freelance and those who can’t. It will also hollow out organizations as freelancers take institutional knowledge with them.
Where the system lands is governed by organizational power dynamics and may therefore differ for each organization. Risk and compliance functions will push towards the Glass Office (Scenario 1), while business functions will prefer to accept the Wild West (Scenario 3). Depending on its alignment, HR can advocate for the Cognitive Firewall (Scenario 2) or take an uncomfortable middle position between 1 and 3. The equilibrium between these four scenarios remains inherently unstable, as each outcome creates pressure towards the others: The Glass Office increases shadow AI, which leads to the chaotic Wild West. This in turn increases pressure to move back towards more control or the Cognitive Firewall. Scenario 4 is unlikely to emerge from corporate self-governance; it would have to come from societal pressure.
Default Is a Choice Too
These changes are happening right now: People are using AI, organizations are deciding whether to regulate, empower or laissez-faire, and societies are forming opinions on what role AI should play in our lives. The less consciously we decide how this technology is integrated with corporate culture, personal lives, societal norms and legal frameworks, the stronger the default becomes - which means more ambiguity, more don’t-ask-don’t-tell and more risk for individuals.
So here we are, at the end of this article. Both you the reader and I the author have formed opinions, identified risks and thought about our own situations.
These are our private thoughts. At least for now.
¹ I am Swiss, so I’m looking at this from a Swiss perspective
² Art. 328 CO/OR
³ Art. 328b CO/OR
⁴ Art. 26 EmpO 3
⁵ Art. 332 and Art. 321a CO/OR