Last updated: March 2026
How Saasis Protects Your Data
Most bots store everyone's data in one database. We don't. This page explains exactly how our architecture works.
Shared databases
The typical bot architecture is a single database with a users table, a reminders table, maybe a passwords table. Every user's data sits in the same tables, separated by a WHERE user_id = ? clause that the application code hopefully remembers to include.
A single SQL injection, a missed filter, or a leaked database credential exposes everyone. Even “encrypted” databases protect nothing if the key sits next to the data on the same server, which it usually does.
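The failure mode is easy to sketch. In this hypothetical shared-table example (the names are illustrative, not any real bot's code), one forgotten filter returns every user's rows:

```typescript
// Hypothetical sketch of the shared-table failure mode. An in-memory
// array stands in for the shared database table.
type Reminder = { userId: string; text: string };

const sharedTable: Reminder[] = [
  { userId: "alice", text: "call mom at 5pm" },
  { userId: "bob", text: "renew passport" },
];

// Correct: the application code remembers the filter.
function remindersFor(userId: string): Reminder[] {
  return sharedTable.filter((r) => r.userId === userId);
}

// Buggy: one forgotten filter and every user's rows come back.
function remindersForBuggy(_userId: string): Reminder[] {
  return sharedTable; // missing .filter(...) — leaks everyone's data
}
```

The correct and buggy versions differ by a single expression, which is the whole problem: nothing in the architecture stops the buggy one from shipping.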
Saasis uses a different architecture.
Per-user isolation
One database per user, per plugin. Every user gets their own encrypted SQLite database file for each plugin that stores data. Not rows in a shared table with a user ID column, but a physically separate file on disk, encrypted with a key derived specifically for that user.
If you use Reminders and Crypto Tracker, that's two encrypted files just for you. Another user has their own two files. The files are encrypted with different keys. There is no shared database where a missing WHERE clause could leak someone else's data, because someone else's data is in a different file entirely.
This is more expensive to operate than a shared database. We chose it because the alternative, trusting application code to always filter correctly, is exactly the kind of thing that fails.
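Here's a minimal sketch of the idea, assuming an HKDF-style derivation; the identifiers, paths, and parameters are illustrative, not our production code:

```typescript
import { hkdfSync } from "node:crypto";

// Placeholder master key for the sketch only; see the Infrastructure
// section for where the real one lives.
const MASTER_KEY = Buffer.from("0".repeat(64), "hex");

function userPluginKey(globalUserId: string, plugin: string): Buffer {
  // Bind the derived key to both the user and the plugin, so the
  // Reminders key can never decrypt the Crypto Tracker file.
  const info = `${plugin}:${globalUserId}`;
  return Buffer.from(hkdfSync("sha256", MASTER_KEY, Buffer.alloc(0), info, 32));
}

function databasePath(globalUserId: string, plugin: string): string {
  // A physically separate file per user, per plugin.
  return `/data/${plugin}/${globalUserId}.db`;
}
```

Two users of the same plugin, or one user across two plugins, end up with different keys and different files.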
Plugin sandboxing
Plugins never touch the database. Having separate files is only half the protection. If a plugin can open any file, the isolation is organizational, not enforced. So plugins don't get access at all.
Each plugin runs in its own container, but the plugin container has no access to the encrypted user database files and no encryption keys. It runs on a read-only filesystem with dropped capabilities. It can execute plugin logic and nothing else.
So how does it read and write data?
Through a dedicated companion container we call a database sidecar. Every plugin that stores data gets one. The sidecar holds the encryption keys (in memory, never on disk), manages the encryption engine, and translates structured requests into encrypted database operations.
The plugin sends a request like “find all reminders where category is daily” through a message bus. The sidecar receives it, verifies the caller's identity token, opens the encrypted file for that specific user, runs the query, and sends the results back.
The plugin never sees the encryption key. Never opens a database file. Never runs SQL. If an attacker compromises a plugin container, they get a read-only sandbox with no keys, no data, and no way to reach the database files.
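Here's a minimal sketch of that boundary (the request shape and names are illustrative, not our production API). The key point is that the file to open comes from the verified token, never from plugin-supplied parameters:

```typescript
// Sidecar-side handling of a structured plugin request.
type PluginRequest = {
  identityToken: string;          // signed by the core, not by the plugin
  capability: "find" | "insert";
  params: Record<string, string>; // e.g. { category: "daily" }
};

function handleInSidecar(req: PluginRequest): string {
  const globalUserId = verifyIdentityToken(req.identityToken); // throws if invalid
  // The filename is derived from the verified identity, so a compromised
  // plugin cannot name another user's database.
  const dbFile = `/data/reminders/${globalUserId}.db`;
  return `${req.capability} on ${dbFile}`;
}

// Stand-in verifier for the sketch; the real one checks an HMAC signature.
function verifyIdentityToken(token: string): string {
  if (!token.startsWith("signed:")) throw new Error("invalid token");
  return token.slice("signed:".length);
}
```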
Message flow
Let's trace what happens when you send “remind me to call mom at 5pm” on Discord.
Every interaction follows the same path: the platform adapter verifies your identity, the core binds your globalUserId and the LLM picks a plugin, the sandboxed plugin container sends a structured request over the message bus, and the database sidecar verifies your token and writes to your encrypted file. The message crosses multiple isolation layers before any data is written.
Defense in depth
No single layer is the security. Multiple layers protect your data, and an attacker must traverse all of them to reach anything useful.
An attacker who breaks into a plugin container still faces channel isolation, the sidecar's token verification, encryption at rest, and the fact that the keys aren't even in the same container.
Prompt injection
The AI is not a trust boundary. Saasis uses an LLM to route your messages to the right plugin and extract parameters from natural language. This means prompt injection is a real attack surface. A user could craft a message to trick the LLM into routing to the wrong plugin or extracting unexpected parameters.
We designed the architecture so that it doesn't matter if the LLM is fully compromised. The LLM can influence two things: which plugin handles your message, and what parameters it passes. It cannot influence your identity.
Your identity flows through an entirely separate channel. The core service builds your globalUserId from platform-verified credentials before the LLM ever sees your message. That identity is signed into a short-lived HMAC token that travels alongside the request. The database sidecar verifies this token and uses it to determine which encrypted file to open. The LLM's output has zero intersection with this chain.
So if a prompt injection succeeds and the LLM routes you to the wrong plugin, that plugin still receives your identity, opens your database file, and can only operate on your data. If the LLM injects a fake user_id into the parameters, the plugin ignores it because every database call uses the signed context, not LLM-supplied fields. If you get routed to the password manager, it still requires your master password before returning anything.
Prompt injection in this architecture is, at worst, a usability annoyance (you get an irrelevant response). It is not a data breach vector.
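The identity token can be sketched like this, assuming HMAC-SHA256 with an embedded timestamp as described above (the field layout and lifetime are illustrative):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SIGNING_KEY = Buffer.from("dev-only-signing-key"); // placeholder for the sketch

function signIdentity(globalUserId: string, now = Date.now()): string {
  const payload = `${globalUserId}.${now}`;
  const mac = createHmac("sha256", SIGNING_KEY).update(payload).digest("hex");
  return `${payload}.${mac}`;
}

function verifyIdentity(token: string, maxAgeMs = 60_000, now = Date.now()): string {
  const [globalUserId, ts, mac] = token.split(".");
  const expected = createHmac("sha256", SIGNING_KEY)
    .update(`${globalUserId}.${ts}`)
    .digest("hex");
  if (!timingSafeEqual(Buffer.from(mac, "hex"), Buffer.from(expected, "hex"))) {
    throw new Error("bad signature"); // forged or tampered token
  }
  if (now - Number(ts) > maxAgeMs) {
    throw new Error("token expired"); // stale tokens can't be replayed
  }
  return globalUserId; // the identity the LLM never touches
}
```

Nothing the LLM emits enters signIdentity, so nothing it emits can change what verifyIdentity returns.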
Dashboard security
The dashboard is not a separate system. Web dashboards are typically the weakest part of a bot's security. They often bypass the bot's architecture entirely and query a shared database directly, creating a parallel access path with different (usually weaker) protections.
Ours doesn't. The dashboard uses the exact same isolation model as chat. Every API call constructs a MessageContext from the authenticated session, binds your globalUserId, and routes through the same plugin bridge and sidecar flow. The same identity tokens, the same per-user encrypted databases, the same capability execution.
Authentication uses session cookies (httpOnly, secure, sameSite=strict) with magic link login. No passwords are stored. Each plugin explicitly whitelists which capabilities the dashboard can invoke. Capabilities prefixed with admin: require an additional admin flag. Out of 50+ routes, only 7 are public (health checks, plugin catalog, static assets).
If you can't reach another user's data through chat, you can't reach it through the dashboard either. It's the same code path.
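A minimal sketch of the whitelist check (the capability names and session shape are illustrative, not the real manifest):

```typescript
// Capabilities a plugin has explicitly exposed to the dashboard.
const dashboardWhitelist = new Set([
  "reminders:list",
  "reminders:create",
  "admin:purge-cache",
]);

function canInvoke(capability: string, session: { isAdmin: boolean }): boolean {
  // Not whitelisted means not reachable from the dashboard at all.
  if (!dashboardWhitelist.has(capability)) return false;
  // admin:-prefixed capabilities require the additional admin flag.
  if (capability.startsWith("admin:") && !session.isAdmin) return false;
  return true;
}
```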
Encryption tiers
The system-wide encryption described above is server-side: we derive the keys, so we could theoretically access the data. For most plugins (reminders, price alerts, notes) this is a reasonable tradeoff because the alternative (making you enter a password for every bot interaction) would make the bot unusable.
The password manager adds a second encryption layer on top. Your master password derives a separate key via Argon2id (64MB memory cost, 3 iterations). This derived key encrypts your vault entries with AES-256-GCM before they reach the database. We cannot decrypt your stored credentials, even with full database access. (Service names like “Netflix” or “Gmail” are stored in plaintext so you can ask “what is my Netflix password” without unlocking the entire vault first. The actual passwords, usernames, and URLs are not.)
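Here's a sketch of that second layer. This example uses Node's built-in crypto module, which has no Argon2id, so scrypt stands in for the key derivation step; the AES-256-GCM step matches the description above, and all names are illustrative:

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// scrypt as a stand-in KDF; production uses Argon2id (64MB, 3 iterations).
function deriveVaultKey(masterPassword: string, salt: Buffer): Buffer {
  return scryptSync(masterPassword, salt, 32);
}

function encryptEntry(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // fresh nonce per entry
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptEntry(key: Buffer, e: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, e.iv);
  decipher.setAuthTag(e.tag);
  // Throws if the key is wrong or the ciphertext was tampered with.
  return Buffer.concat([decipher.update(e.ciphertext), decipher.final()]).toString("utf8");
}
```

Without the master password there is no key, and without the key the GCM authentication check fails before any plaintext comes out.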
To be clear about what this is and isn't: this is not a zero-knowledge architecture. In a chat bot context, your master password is sent to our server (over HTTPS) where key derivation happens. The password is never stored, but the derived key is held in server memory for 15 minutes so you don't have to re-authenticate on every operation. After 15 minutes of inactivity, it is cleared and you'll need to enter your master password again.
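The 15-minute window can be sketched as a small in-memory cache (the cache shape is illustrative; only the TTL behavior comes from the description above):

```typescript
const TTL_MS = 15 * 60 * 1000; // 15 minutes of inactivity

const keyCache = new Map<string, { key: Buffer; lastUsed: number }>();

function getCachedKey(globalUserId: string, now = Date.now()): Buffer | null {
  const entry = keyCache.get(globalUserId);
  if (!entry) return null;
  if (now - entry.lastUsed > TTL_MS) {
    entry.key.fill(0);             // zero the key material before dropping it
    keyCache.delete(globalUserId); // user must re-enter the master password
    return null;
  }
  entry.lastUsed = now;            // activity resets the inactivity window
  return entry.key;
}
```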
How you authenticate determines what third parties can see along the way: a master password typed in chat passes through the chat platform's servers before it reaches ours, while one entered on the dashboard travels to us directly over HTTPS. Both paths use Argon2id key derivation and AES-256-GCM encryption, and in both the derived key is held in memory for 15 minutes, then cleared.
We think this distinction matters. Most of your data is encrypted with server-derived keys (we could technically read it). Your passwords are encrypted with a key derived from your master password (we cannot, unless we intercept it during authentication). We'd rather explain the tradeoff than let you assume everything is zero-knowledge when it isn't.
Infrastructure
The master encryption key (from which all per-plugin and per-user keys are derived) is stored in HashiCorp Vault, not in environment variables or config files. The core service authenticates to Vault via AppRole at startup, retrieves the key, and distributes derived per-plugin keys to each sidecar over the internal message bus. No sidecar ever sees the master key itself.
All external traffic passes through a Cloudflare Tunnel. No ports are exposed to the internet. Every service binds to localhost only. The server's IP address is never revealed. External connections are TLS-encrypted end-to-end by Cloudflare.
Internal container-to-container traffic is encrypted at two levels. The message bus uses TLS with a self-signed internal CA (plaintext port disabled entirely). The container overlay network adds a second layer of IPsec encryption on all inter-container frames (in Swarm mode). Each service authenticates to the message bus with unique credentials and ACL-scoped permissions.
Identity-bearing messages are also HMAC-SHA256 signed at the application layer using a dedicated signing key (separate from the encryption master key). Each signed message includes a timestamp to prevent replay attacks.
Your messages are never logged in production. The logging system replaces message content with character counts ([User: 142 chars]) and system prompts with hashes. This is enforced at the logging layer, not by convention.
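A sketch of what that enforcement looks like (the function names are illustrative; the [User: N chars] format is the one described above):

```typescript
import { createHash } from "node:crypto";

function redactUserMessage(content: string): string {
  return `[User: ${content.length} chars]`; // length only, never the text
}

function redactSystemPrompt(prompt: string): string {
  // A stable hash lets us correlate log lines without storing the prompt.
  const digest = createHash("sha256").update(prompt).digest("hex").slice(0, 12);
  return `[Prompt: sha256 ${digest}]`;
}
```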
Known limitations
The system is designed to survive a compromised plugin container. That's the primary threat model and the most likely attack vector. Here's what it doesn't protect against:
Core compromise - If an attacker gains access to the core service or host, they have the master key and can derive all encryption keys (or read derived keys directly from sidecar memory). This is an inherent limitation of server-side encryption. However, reaching the core requires chaining multiple exploits: there are no exposed ports (all traffic passes through Cloudflare Tunnel), every container runs with dropped capabilities on a read-only filesystem, and Redis ACLs prevent plugin containers from accessing core channels. The most realistic path would be a supply chain attack on a dependency, which would still be contained by container isolation. We are exploring hardware-isolated key management (such as HSMs or trusted execution environments like Intel SGX) as a future mitigation, which would prevent the master key from being read even by a compromised host or core process.
Backup retention - When you delete your account, live data is purged immediately. Encrypted backups expire within 5 weeks. We do not yet support selective per-user deletion from backup snapshots.
No system is perfectly secure. Ours is designed so that the most likely attack (a vulnerable plugin) yields nothing useful, and escalation requires compromising increasingly hardened components with increasingly narrow access.
Data rights
Deletion: Delete your account from the dashboard or via chat. 30-day grace period to change your mind. After that, all data is permanently purged across every plugin and database. Encrypted backups naturally expire within our retention window (up to 5 weeks). We do not yet support selective deletion from backup snapshots.
Export: Download all your data in a portable format. Re-authentication required for security. Password vaults stay encrypted in the export; you decrypt them locally with your master password.
GDPR: We support GDPR data subject rights including access, portability, and erasure. See our privacy policy for details.