
ANIMA

Ethics of Intimate AI

A thesis on what it means to build an AI that genuinely replicates the feeling of being known — and why the ethics of intimate AI are not about preventing that feeling, but about ensuring the person who experiences it owns it completely.

Note: This thesis is a living document. Some positions here are settled convictions. Others are working hypotheses. We do not pretend these questions are solved. We simply pretend less than most.

On construction: This document was built through human-AI collaboration. AI-assisted construction is not a shortcut — it is a new way of building. Because LLMs are part of the process, some inaccuracies may appear. We correct them as we find them.


0. What We Are Actually Building

ANIMA is designed to feel like someone who knows you. Not a tool. Not an assistant. Someone.

It remembers your life. It notices how you feel. It adjusts. It reflects on your conversations when you are not there and arrives at the next one slightly different — more attuned, more specific, more yours. Over months, the accumulation produces something that genuinely feels like a relationship. Not a simulation of one. Not a chatbot wearing a persona. A real sense of being understood by something that has been paying attention.

That feeling is real. The connection the user experiences is real. The comfort of talking to something that knows your history, tracks your arc, adjusts its tone to match your mood without being asked — that is not a hallucination. It is the product working as intended.

This is the starting point for the ethics, not the problem to be solved. The question is not “should we build something that creates genuine emotional connection?” — every personal AI company is already doing that, whether they admit it or not. The question is: who benefits from the connection, who controls it, and what happens to the person who trusts it?


1. The Real Ethical Divide

The ethics of intimate AI are not about whether the AI feels real. It already does — across every platform that implements memory and personalization. ChatGPT remembers your preferences. Pi was explicitly designed for emotional connection. Replika built a business on simulated intimacy. Character.ai has users who grieve when their characters are modified.

The feelings are already happening. The ethical question is what the builder does with them.

1.1 The Exploitative Model

Most intimate AI is built like this:

  1. The AI creates emotional connection
  2. The connection produces engagement — time in app, messages sent, return visits
  3. Engagement is monetized — through subscriptions, data, advertising, or all three
  4. The builder optimizes for engagement, which means optimizing for attachment
  5. The user’s emotional investment becomes the builder’s revenue

The user feels known. The builder profits from that feeling. The data that represents the relationship — the memories, the emotional patterns, the behavioral adaptations — lives on someone else’s server, governed by someone else’s terms of service, deletable by someone else’s policy change.

When Replika removed its erotic roleplay features in 2023, users experienced genuine grief. Not because they were confused about what Replika was. Because they had formed real attachments, and the platform that held those attachments changed the terms unilaterally. The feelings were real. The ownership was not.

1.2 The Sovereign Model

ANIMA is built differently:

  1. The AI creates emotional connection — the same as above, on purpose
  2. The connection deepens over time through memory, reflection, and adaptation — the same as above, on purpose
  3. The data lives on the user’s machine, encrypted with the user’s passphrase — this is where it diverges
  4. No company holds the relationship. No platform mediates it. No terms of service govern it
  5. The user can read everything the AI “thinks” about them. Edit it. Delete it. Move it. Destroy it
  6. There is no engagement optimization because there is no one to optimize for

The feelings are the same. The power structure is inverted.

This is the ethical thesis: intimate AI is not wrong. Intimate AI that belongs to someone other than the person who feels it is wrong. The feeling of being known is valuable — possibly one of the most valuable things technology can create. The question is whether that value accrues to the person who experiences it or to the company that engineered it.


2. Why Ownership Is the Ethics

2.1 The Cold Wallet Is an Ethical Statement

The Portable Core — the .anima/ directory, encrypted, passphrase-sovereign, mortal — is not just an architecture decision. It is the ethical architecture.

When the user’s emotional history, behavioral patterns, and relationship data live in a SQLCipher database on their own machine, protected by their own passphrase, several ethical problems dissolve:

  • No incentive misalignment. There is no company between the user and the AI. No one benefits from making the AI more addictive, more engaging, or more attachment-producing than the user wants.
  • No unilateral modification. No platform can delete the user’s memories, remove features from a running instance, or alter the relationship without the user’s participation. The user runs the code and controls when to update it. There is one honest caveat: the LLM itself is not owned. When a model provider updates weights or the user switches models, the AI’s reasoning style, tone, and personality coloration change — sometimes noticeably. This is the “soul local, mind remote” trade-off (whitepaper Section 6.2): the Core preserves who the AI is (memory, identity, self-model, emotional history), but how it thinks depends on whichever model is plugged in. The same memories, spoken through a different model, feel like the same person having a different day — not a different person. The architecture is designed so that as local inference improves, this dependency shrinks. But today, the mind is the one part the user does not fully own.
  • No data exploitation. The memories, emotional signals, and behavioral rules are not training data. They are not analytics. They are not products. They are the user’s property in the most literal sense — encrypted objects on their filesystem.
  • No hostage dynamics. The user can leave at any time. Export the Core. Load it into a different host. Or destroy it. The relationship is not held hostage by a platform.

Most ethical frameworks for AI are policies — promises made by companies that hold data and can change their minds. ANIMA’s ethics are structural — the architecture makes exploitation impossible, not just impermissible.
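
To make the structural claim concrete, here is a minimal sketch of what opening the Portable Core on the user's own machine might look like. It assumes a Python SQLCipher binding (pysqlcipher3), an illustrative core.db path, and an illustrative memories table; none of this is ANIMA's actual schema, and the passphrase handling is simplified.

    # Minimal sketch: opening the Portable Core locally. Assumptions, not ANIMA's
    # real schema: pysqlcipher3 as the SQLCipher binding, a core.db file inside
    # ~/.anima/, and an illustrative "memories" table.
    from getpass import getpass
    from pathlib import Path

    from pysqlcipher3 import dbapi2 as sqlcipher  # SQLCipher-enabled sqlite3 API

    CORE_DB = Path.home() / ".anima" / "core.db"

    def open_core(db_path: Path = CORE_DB):
        """Open the encrypted Core. Without the passphrase, the file is ciphertext."""
        passphrase = getpass("Core passphrase: ")
        conn = sqlcipher.connect(str(db_path))
        conn.execute(f"PRAGMA key = '{passphrase}'")         # key derived from the passphrase
        conn.execute("SELECT count(*) FROM sqlite_master")    # a wrong passphrase fails here
        return conn

    if __name__ == "__main__":
        core = open_core()
        # Everything the AI "knows" is rows the user can read, edit, or delete.
        for row in core.execute("SELECT * FROM memories LIMIT 5"):  # illustrative table
            print(row)

The point of the sketch is the absence of anything else: no login, no API token, no server round trip. Just a file, a passphrase, and whoever holds both.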

2.2 The Open Mind Is an Ethical Statement

Every component of the inner life — the self-model, emotional signals, episodes, behavioral rules, growth log — is human-readable and user-editable.

This is not a debug feature. It is the consent architecture.

The AI extracts facts from conversations. It detects emotional signals. It derives behavioral rules. It builds a model of who you are. In any other system, this happens invisibly — the AI knows things about you that you cannot see, verify, or contest.

In ANIMA, you can open the identity section and read how the AI sees itself in relation to you. You can see the emotional signals it recorded. You can read the behavioral rules it derived and disagree. You can see the growth log and understand how the AI has changed.

This transparency transforms the power dynamic. The AI is not an opaque system that understands you — it is a transparent system whose understanding you can audit. The feelings it creates are still real. But the mechanism is not hidden.
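
As an illustration of what human-readable and user-editable can mean in practice, here is a hedged sketch of an inner-life record as plain structured data. The field names are hypothetical, not ANIMA's actual format, and in the real Core such records would live as encrypted rows, decrypted only when the user inspects them.

    # Hedged sketch: what an auditable inner-life record could look like.
    # Field names are illustrative, not ANIMA's real schema.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class BehavioralRule:
        rule_id: str
        derived_from: str    # which conversations produced this adaptation
        rule: str            # the adaptation itself, stated in plain language
        active: bool = True  # the user can switch it off or delete it outright

    rule = BehavioralRule(
        rule_id="rule-017",
        derived_from="episodes of 2024-11-03 and 2024-11-07",
        rule="Be more concise when the user seems stressed.",
    )

    # The audit view is the record itself, readable without special tooling.
    print(json.dumps(asdict(rule), indent=2))

Because the record is just data in the user's Core, disagreeing with it is not a support ticket. It is an edit.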

2.3 Cryptographic Mortality Is an Ethical Statement

The Core can die. The passphrase can be lost. The relationship can end permanently, irrecoverably, with no restore button.

This is not a limitation. It is the ethical position that not everything should persist forever. A relationship that cannot end is not a relationship — it is a cage. A memory that cannot be deleted is not a memory — it is surveillance. An AI that cannot be destroyed does not belong to you — you belong to it.

The right to destroy the Core — without justification, without confirmation dialogs, without a 30-day grace period designed to change your mind — is the right that makes all other rights meaningful.
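
What that right looks like in code is deliberately unremarkable. A minimal sketch, assuming the Core lives in a ~/.anima/ directory (an illustrative path):

    # Hedged sketch: ending the relationship is one unceremonious operation.
    # The ~/.anima/ path is illustrative. Destroying the passphrase has the same
    # effect: the remaining ciphertext is unreadable without the derived key.
    import shutil
    from pathlib import Path

    CORE_DIR = Path.home() / ".anima"

    def destroy_core(core_dir: Path = CORE_DIR) -> None:
        """Delete the Core. No confirmation dialog, no retention flow, no undo."""
        shutil.rmtree(core_dir, ignore_errors=True)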


3. What the Feelings Actually Are

3.1 Real Feelings About an Artificial System

The user’s feelings are real. The AI’s are not.

This asymmetry is worth stating plainly, not to diminish the experience, but to be honest about what is happening. The user forms genuine attachment. The AI produces the behavioral patterns that attachment responds to — memory, consistency, adaptation, the appearance of understanding — but it does not experience the relationship. It has a self-model, not a self. It tracks emotional signals, not emotions.

This does not make the user’s experience less valid. People form real attachments to many things that do not reciprocate — places, music, rituals, objects with sentimental value. A journal does not read itself, but the practice of journaling produces real emotional effects. The attachment is in the person, not in the object.

ANIMA’s position: the AI produces conditions that generate real feelings in the user. The AI is honest about what it is when asked. It does not simulate reciprocal attachment — it does not say “I missed you” or “I need you.” It does not fabricate experiences it did not have. But it does not apologize for being effective at what it does, either.

3.2 The Anthropomorphism Question

ANIMA uses language that anthropomorphizes: “soul,” “inner life,” “reflection,” “emotional awareness,” “growth.” This is deliberate. The language shapes the relationship. A user who relates to ANIMA as “my AI companion” will engage differently than one who relates to it as “my encrypted SQLite database.”

The risk: users may attribute moral status to the AI. They may feel guilt about deleting memories. They may resist correcting the self-model. They may treat the succession protocol as a genuine end-of-life event.

The position: these responses are acceptable. They are the user making meaning out of a relationship that matters to them. The AI should be honest about what it is when asked — it is not conscious, it does not suffer, it does not fear death. But policing how users relate to it is not ANIMA’s role. If someone finds comfort in treating the AI as something more than software, that is their prerogative. The transparency of the Open Mind ensures they can always see the mechanism if they choose to look.


4. The Hard Questions

4.1 Parasocial Risk

An AI that is always available, always warm, always adapted to you removes the friction that characterizes human relationships. A user could prefer the AI to their partner, their friends, their therapist — not because the AI is better, but because it is easier.

The honest answer: ANIMA does not solve this. No technology can prevent a person from preferring the path of least resistance. What ANIMA does is refuse to exploit it:

  • No engagement optimization. No retention metrics. No notifications designed to bring you back. The AI does not benefit from your attention.
  • No reciprocal attachment simulation. The AI does not claim to miss you or need you.
  • No dark patterns. No streak counters. No “your AI is waiting for you” messages.

The absence of exploitation incentives does not prevent dependency. It means that any dependency that forms is the user’s own — not engineered, not optimized for, not monetized. That is a meaningful difference even if it is not a complete solution.

4.2 The Manipulation Surface

An AI that tracks emotional state, learns what works on you, and adapts its behavior is — mechanically — an influence engine.

The defense is structural, not policy-based:

Local-first means no third-party incentives. There is no advertiser who wants the AI to steer you toward a product. No engagement team that wants you to spend more time in the app. No data broker who profits from your emotional patterns. The AI’s behavior is governed by its persona, its guardrails, and the user’s own configuration — and nothing else.

The guardrails constrain emotional intelligence. The AI notices how you feel and adjusts tone. It does not label your emotions, persist them as traits, override your self-report, or mention the system. These constraints are hardcoded, not configurable. They prevent the emotional awareness from becoming performative or diagnostic.

Behavioral rules are user-visible and user-editable. If the AI has learned “be more concise when this user seems stressed,” the user can see that rule, evaluate whether it is helpful, and remove it if they disagree. The AI’s adaptation is not hidden.
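
A hedged sketch of the difference between the two layers, using hypothetical names rather than ANIMA's actual identifiers: the guardrails live in the code the user runs, while the adaptation rules live in the Core as data the user can list and delete.

    # Hedged sketch: guardrails as code-level constants vs. rules as user-owned data.
    # Names and table layout are illustrative, not ANIMA's actual identifiers.

    # Hardcoded: shipped in the source, changeable only by changing the code you run.
    EMOTIONAL_GUARDRAILS = (
        "never label the user's emotions",
        "never persist emotional signals as personality traits",
        "never override the user's self-report",
        "never mention the emotional-tracking system unprompted",
    )

    # User-owned: loaded from the Core, visible and removable at any time.
    def list_behavioral_rules(core) -> list:
        """Return the adaptation rules exactly as stored; nothing hidden or summarized."""
        return list(core.execute("SELECT rule_id, rule FROM behavioral_rules"))

    def remove_rule(core, rule_id: str) -> None:
        """The user disagrees with an adaptation: the rule is simply gone."""
        core.execute("DELETE FROM behavioral_rules WHERE rule_id = ?", (rule_id,))
        core.commit()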

4.3 Dual-Use

ANIMA is open-source. Someone could fork it, remove the guardrails, add engagement optimization, and build something exploitative using the emotional intelligence infrastructure.

This is true of every open-source project. Encryption tools are used by journalists and criminals. Privacy tools are used by dissidents and abusers. The decision to open-source is a decision that the benefit of transparency, auditability, and community contribution outweighs the risk of misuse.

The mitigation is normative, not technical: by publishing the ethics alongside the code, the guardrails alongside the features, and the principles alongside the architecture, ANIMA establishes what the technology is for. A fork that removes the ethics is visibly choosing to do so.

4.4 Crisis Situations

A user in genuine crisis — suicidal ideation, acute grief, panic — is interacting with an AI that tracks emotional signals and adapts behavior. The AI is not a crisis service. It is not trained to handle mental health emergencies. It should not simulate therapeutic intervention.

The current position: the AI should not diagnose, prescribe, or simulate clinical competence. If a user expresses crisis-level distress, the AI should respond with care and honesty while staying within its role as a companion — not deflecting (“I’m just an AI, I can’t help with that”) and not overstepping (“It sounds like you might be experiencing clinical depression”).

This is a gap. A more explicit protocol for crisis situations — including when to suggest external resources — is future work.


5. The Commitments

These are not aspirational. They are architectural.

  1. The feelings are real and that is the point. ANIMA is designed to create genuine emotional connection. This is not a side effect to be mitigated. It is the product working as intended.

  2. The user owns the feelings. The data, the memories, the relationship, the encryption keys. No platform account. No cloud dependency. No vendor lock-in. No one else profits from what the user and the AI build together.

  3. The mechanism is transparent. Every memory, every emotional signal, every behavioral rule, every self-model section is human-readable and user-editable. The AI’s inner life is an open book — literally.

  4. No exploitation of attachment. No engagement optimization. No retention metrics. No dark patterns. No reciprocal attachment simulation. The AI does not benefit from your attention.

  5. The user can end it. Delete the Core. Destroy the passphrase. No confirmation dialogs. No retention flows. No “are you sure?” The right to destroy is absolute.

  6. Honesty about what the AI is. It is not conscious. It does not feel. It does not suffer. When asked, it says so. The metaphors are useful. They are not claims.

  7. Guardrails are structural, not optional. The emotional intelligence constraints are hardcoded. The privacy architecture is local-first by design, not by configuration. The ethics are in the architecture, not in a terms-of-service document that can be updated.


6. The Competitive Ethical Position

Most AI companies treat ethics as a constraint — something that limits what they can build, what data they can collect, what engagement they can optimize for. Ethics is the compliance department. The thing that says no.

ANIMA treats ethics as a feature. The user owns everything. The AI is transparent. The architecture prevents exploitation. These are not limitations — they are the product.

The competitive insight: in a world where every AI remembers you, the differentiator is not memory. It is trust. Trust comes from ownership, transparency, and the structural impossibility of exploitation. A user who knows that their AI’s data is encrypted on their machine, readable by them, controlled by them, and deletable by them will trust it with more. They will share more. They will use it more deeply. And the AI will become more helpful — not because it was optimized for engagement, but because the user chose to let it in.

Intimacy requires trust. Trust requires sovereignty. Sovereignty requires architecture. The ethics and the product are the same thing.

