Mnevra

Privacy

Your memories
are yours.

Mnevra only works if you trust it. So we've built it to be trustworthy by design — not by promise.


The short version

  • Your data is never used to train anyone's AI models.
  • Every request is scoped to you at the database layer. Even a bug in our code can't leak another person's data.
  • You can export everything Mnevra holds about you, in structured form, at any time.
  • You can delete your account and your data will be permanently removed.
  • You can bring your own AI key, and your prompts go directly to that provider under your account.

What we store

What actually lives in Mnevra.

The shortest way to describe it: your conversations, the entities extracted from them, and the files you choose to upload.

Conversations

Every message you send and every response Mnevra gives, with timestamps. These are the raw material your knowledge graph is built from.

The knowledge graph

People, places, events, memories, goals, reminders — and the relationships between them. This is what Mnevra uses to answer questions and write your story.

Photos you attach

Images you upload into chat, plus any EXIF metadata (date, GPS location) Mnevra reads to tag them. Each photo is stored alongside the memory it belongs to.

Files you import

PDFs, Word docs, URLs and other sources you pass to Mnevra for extraction. The extracted entities become part of the graph; the original files are kept unless you delete them.

Your account

Email, name, password hash, avatar, timezone and (optionally) your own AI provider keys. That's it.

Usage logs

Which AI model answered which message, token counts, and cost. Used for billing and debugging. No third party ever sees these.

How we protect it

Privacy, built into the shape of the app.

Promises are cheap. We've tried to bake the important guarantees into the structure so they hold even when we make mistakes.

Row-level security

Every query to the database is scoped to your account ID. Even if a bug somewhere in the app tried to read another person's data, the database would refuse.

No training on user data

Your memories are never used to train Mnevra's models or anyone else's. Full stop. The model providers we call (Anthropic, OpenAI) are configured not to retain prompts.

Encrypted in transit and at rest

All requests use TLS. All stored data — including your conversations, memories, and imported files — sits encrypted at rest.

Face ID app lock

Lock Mnevra behind Face ID (or your device passcode) so your memories stay private even on an unlocked phone.

Private memories

Mark any memory as private and Mnevra uses it in conversation but keeps it out of every export — biographies, data exports, everything.

Bring your own keys

Plug in your own Anthropic or OpenAI key and your prompts go directly to that provider under your own account. Mnevra still stores the messages, but the AI billing is yours.

Your rights

You own your memory graph.

At any point, you can take it with you or get rid of it.

Export everything

One tap in Settings exports all your data — conversations, entities, memories, photos, imports — in structured form. Yours to keep.

Delete anything

Individual memories, imports, conversations or photos can be deleted at any time. Deletion is permanent.

Delete everything

Close your account and your data is removed from our active systems within 30 days and from backups within 90.

Questions about any of this?

Write to us at mnevra@div.nz. Real humans read it.

Our full privacy policy and terms will be published before Mnevra is generally available.

Start remembering.

Mnevra is arriving on iPhone and Mac soon. Be among the first to use it.

Coming Soon on the App Store