
Last updated: March 2026

How Saasis Protects Your Data

Most bots store everyone's data in one database. We don't. This page explains exactly how our architecture works.

On this page

• Shared databases
• Per-user isolation
• Plugin sandboxing
• Message flow
• Defense in depth
• Prompt injection
• Dashboard security
• Encryption tiers
• Infrastructure
• Known limitations
• Data rights

Shared databases

The typical bot architecture is a single database with a users table, a reminders table, maybe a passwords table. Every user's data sits in the same tables, separated by a WHERE user_id = ? clause that the application code hopefully remembers to include.

A single SQL injection, a missed filter, or a leaked database credential exposes everyone. Even “encrypted” databases protect nothing if the key sits next to the data on the same server, which it usually does.

Saasis uses a different architecture.

Per-user isolation

One database per user, per plugin. Every user gets their own encrypted SQLite database file for each plugin that stores data. Not rows in a shared table with a user ID column, but a physically separate file on disk, encrypted with a key derived specifically for that user.

         Reminders   Crypto   Passwords
User A   a.db        a.db     a.db
User B   b.db        b.db     b.db
User C   c.db        c.db     c.db

3 users × 3 plugins = 9 separate encrypted files, each with its own derived key.

If you use Reminders and Crypto Tracker, that's two encrypted files just for you. Another user has their own two files. The files are encrypted with different keys. There is no shared database where a missing WHERE clause could leak someone else's data, because someone else's data is in a different file entirely.
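The idea behind "a key derived specifically for that user" can be sketched as a two-step HMAC expansion: first bind the master key to the plugin, then bind the result to the user. This is an illustrative reconstruction, not our production code; the function name, the use of SHA-256, and the derivation order are assumptions.

```python
import hashlib
import hmac


def derive_user_key(master_key: bytes, plugin: str, user_uuid: str) -> bytes:
    """Derive a per-user, per-plugin key (illustrative HKDF-style sketch)."""
    # Step 1: bind the derivation to the plugin name.
    plugin_key = hmac.new(master_key, plugin.encode(), hashlib.sha256).digest()
    # Step 2: bind to the user's UUID so every user gets a distinct key.
    return hmac.new(plugin_key, user_uuid.encode(), hashlib.sha256).digest()


master = b"\x00" * 32  # placeholder master key for the example
key_a = derive_user_key(master, "reminders", "user-a")
key_b = derive_user_key(master, "reminders", "user-b")
key_a_crypto = derive_user_key(master, "crypto", "user-a")

# Different user or different plugin -> a different key.
assert key_a != key_b and key_a != key_a_crypto
```

Because each file's key depends on both the plugin and the user, compromising one file's key reveals nothing about any other file.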

This is more expensive to operate than a shared database. We chose it because the alternative, trusting application code to always filter correctly, is exactly the kind of thing that fails.

Plugin sandboxing

Plugins never touch the database. Having separate files is only half the protection. If a plugin can open any file, the isolation is organizational, not enforced. So plugins don't get access at all.

Each plugin runs in its own container, but the plugin container has no access to the encrypted user database files and no encryption keys. It runs on a read-only filesystem with dropped capabilities. It can execute plugin logic and nothing else.

So how does it read and write data?

Through a dedicated companion container we call a database sidecar. Every plugin that stores data gets one. The sidecar holds the encryption keys (in memory, never on disk), manages the encryption engine, and translates structured requests into encrypted database operations.

Plugin Container
✓ Plugin application code
✓ Read-only filesystem
✕ No access to user database files
✕ No database encryption or decryption

↓ Message Bus ↓

DB Sidecar
✓ Encryption keys (memory only)
✓ Encryption engine (sqleet/ChaCha20)
✓ Connection pool (per-user files)
✓ Identity token verification
✓ Keys cleared on shutdown

↓

user-a.db · user-b.db · user-c.db

The plugin sends a request like “find all reminders where category is daily” through a message bus. The sidecar receives it, verifies the caller's identity token, opens the encrypted file for that specific user, runs the query, and sends the results back.

The plugin never sees the encryption key. Never opens a database file. Never runs SQL. If an attacker compromises a plugin container, they get a read-only sandbox with no keys, no data, and no way to reach the database files.
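A minimal sketch of the sidecar's request handling makes the boundary concrete. Plain sqlite3 and an in-memory database stand in for the sqleet-encrypted per-user files here, and the JSON shape, key, and names are all illustrative assumptions, not our wire format.

```python
import hashlib
import hmac
import json
import sqlite3

SIGNING_KEY = b"sidecar-demo-key"  # illustrative; real keys live only in sidecar memory


def handle_request(raw: str) -> list:
    """Sketch of a sidecar handler: verify the token, then open only that user's file."""
    req = json.loads(raw)
    user = req["context"]["globalUserId"]
    token = req["context"]["identityToken"]
    expected = hmac.new(SIGNING_KEY, user.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token, expected):
        raise PermissionError("invalid identity token")
    # Which file opens is decided by the verified identity, never by request data.
    # (":memory:" stands in for the per-user encrypted file f"{user}.db".)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE reminders (text TEXT, category TEXT)")
    db.execute("INSERT INTO reminders VALUES (?, ?)", ("call mom", "daily"))
    rows = db.execute(
        "SELECT text FROM reminders WHERE category = ?",
        (req["data"]["category"],),
    ).fetchall()
    db.close()
    return [r[0] for r in rows]


token = hmac.new(SIGNING_KEY, b"user-a", hashlib.sha256).hexdigest()
request = json.dumps({
    "context": {"globalUserId": "user-a", "identityToken": token},
    "data": {"category": "daily"},
})
print(handle_request(request))  # -> ['call mom']
```

A forged token fails `hmac.compare_digest` before any file is touched, which is exactly why a compromised plugin container gets nothing by talking to the sidecar directly.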

Message flow

Let's trace what happens when you send "remind me to call mom at 5pm" on Discord.

1. Discord - Authenticates you with your Discord credentials. We never see them.
2. Platform Adapter - Receives the verified message and constructs the request context.
3. Identity Service - Maps your Discord ID to your Saasis UUID.
4. Agent System - Routes your message to the Reminders plugin.
5. Plugin Bridge - Signs a short-lived identity token for database access.
6. Message Bus - Routes to the Reminders channel only (ACL-enforced).
7. Reminders Plugin - Processes the command and sends a structured write request.
8. Message Bus - Routes to the Reminders sidecar only (ACL-enforced).
9. DB Sidecar - Verifies the token, derives your key, and opens your database file.
10. Encrypted File - The reminder is written to your-uuid.db, encrypted with ChaCha20-Poly1305.

Every interaction follows this path. The message crosses multiple isolation layers before any data is written.
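The token signed by the plugin bridge and verified by the sidecar can be sketched as follows. The key, encoding, and token layout are illustrative assumptions, not our actual wire format; the point is the signature plus the short expiry.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # illustrative; the real key lives in the plugin bridge


def sign_token(global_user_id: str, plugin: str, ttl: int = 30) -> str:
    """Sign a short-lived identity token (sketch of the plugin bridge step)."""
    payload = json.dumps({
        "globalUserId": global_user_id,
        "pluginName": plugin,
        "exp": time.time() + ttl,
    })
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def verify_token(token: str) -> dict:
    """Verify the signature and expiry (sketch of the sidecar step)."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        raise PermissionError("token expired")
    return claims


claims = verify_token(sign_token("a1b2c3", "reminder"))
assert claims["globalUserId"] == "a1b2c3"
```

Because the token expires in seconds, even a captured token is useless for replay shortly after it is issued.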

Defense in depth

No single layer is the security. Multiple layers protect your data, and an attacker must traverse all of them to reach anything useful.

1. Platform verification - Discord, Telegram, and WhatsApp verify your identity before messages reach us. We never see your platform credentials.
2. Identity mapping - A cross-platform UUID system maps your platform ID to an internal identifier. You can't claim to be a different user by switching platforms.
3. Immutable context - The core service builds the request context from verified platform data. Plugins receive it read-only and cannot modify your identity.
4. Channel isolation - Each of the 55+ message bus users has individually scoped ACL credentials. A compromised Reminders plugin can't subscribe to Crypto Tracker channels or publish on core channels.
5. Container sandboxing - Every container runs as a non-root user with all Linux capabilities dropped, privilege escalation blocked, a read-only filesystem, and enforced memory/CPU limits.
6. Scoped database access - The sidecar only opens the database file belonging to the authenticated user. There is no API for querying another user's file.
7. Encryption at rest - Every database file is encrypted with ChaCha20-Poly1305 via sqleet. The key is derived per-user and held only in sidecar memory.
8. Input validation - All plugin inputs are validated before processing. The agent can only call registered capabilities with validated parameters.

An attacker who breaks into a plugin container still faces channel isolation, the sidecar's token verification, encryption at rest, and the fact that the keys aren't even in the same container.
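The container-sandboxing layer corresponds to settings like these in a Compose file. This is an illustrative fragment, not our actual deployment config; the service name, image, and resource limits are placeholders.

```yaml
services:
  reminders-plugin:            # illustrative service name
    image: saasis/reminders-plugin
    user: "1000:1000"          # run as a non-root user
    read_only: true            # read-only root filesystem
    cap_drop: [ALL]            # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true # block privilege escalation
    mem_limit: 256m            # enforced memory limit (value illustrative)
    cpus: 0.5                  # enforced CPU limit (value illustrative)
```

With a fragment like this, even arbitrary code execution inside the container cannot write to the filesystem, gain capabilities, or exhaust host resources.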

Prompt injection

The AI is not a trust boundary. Saasis uses an LLM to route your messages to the right plugin and extract parameters from natural language. This means prompt injection is a real attack surface. A user could craft a message to trick the LLM into routing to the wrong plugin or extracting unexpected parameters.

We designed the architecture so that it doesn't matter if the LLM is fully compromised. The LLM can influence two things: which plugin handles your message, and what parameters it passes. It cannot influence your identity.

Your message arrives: "remind me to call mom at 5pm"

The MessageHandler resolves your identity and builds the context before the LLM runs:

MessageContext {
  globalUserId: "a1b2c3-..."   ← resolved from Discord ID
  content: "remind me to call mom..."
  platform, userId, isAdmin, ...
}

Identity path (server memory). The server reads globalUserId from the context and signs a token. The LLM never sees these fields:

TokenPayload {
  globalUserId: "a1b2c3-..."
  pluginName: "reminder"
  exp: now + 30s
}
→ HMAC-SHA256 signed

LLM path (text only). The LLM receives only the content field from the context; no identity fields appear in the prompt:

// LLM output:
{
  agent: "reminder"
  params: { text, time }
}
→ no identity fields in output

The Plugin Bridge (server code) constructs the request. The LLM never touches this object:

PluginRequest {
  context: {                        ← from MessageContext
    identityToken: HMAC(globalUserId)
    globalUserId: "a1b2c3-..."
  }
  data: {                           ← from LLM output
    text: "call mom"
    time: "5pm"
  }
}

The DB Sidecar determines which file to open:

✓ identityToken (from context)
  → verify HMAC signature
  → extract globalUserId: "a1b2c3-..."
  → openUserDb("a1b2c3-...")
✗ request.data (used for the SQL query only; cannot influence which file is opened)

Result: a1b2c3-....db

You can trace "a1b2c3" from MessageHandler → token → sidecar → filename.

Your identity flows through an entirely separate channel. The core service builds your globalUserId from platform-verified credentials before the LLM ever sees your message. That identity is signed into a short-lived HMAC token that travels alongside the request. The database sidecar verifies this token and uses it to determine which encrypted file to open. The LLM's output has zero intersection with this chain.

So if a prompt injection succeeds and the LLM routes you to the wrong plugin, that plugin still receives your identity, opens your database file, and can only operate on your data. If the LLM injects a fake user_id into the parameters, the plugin ignores it because every database call uses the signed context, not LLM-supplied fields. If you get routed to the password manager, it still requires your master password before returning anything.
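The separation can be made concrete with a short sketch. Here a prompt injection makes a hypothetical LLM output include a fake user_id parameter; because the bridge copies LLM output only into data, and file selection reads only the verified context, the injected field sits there inertly. Function and field names are illustrative, not our actual code.

```python
def build_plugin_request(context: dict, llm_output: dict) -> dict:
    """Sketch of the plugin bridge: identity comes only from the verified context;
    LLM output is copied into `data` and nowhere else."""
    return {
        "context": {
            "globalUserId": context["globalUserId"],
            "identityToken": context["identityToken"],
        },
        "data": dict(llm_output.get("params", {})),
    }


def file_to_open(request: dict) -> str:
    """Sketch of the sidecar's file selection: driven by context only."""
    return request["context"]["globalUserId"] + ".db"


context = {"globalUserId": "a1b2c3", "identityToken": "signed-by-server"}

# A prompt injection makes the LLM emit a fake user_id parameter:
malicious = {"agent": "reminder", "params": {"text": "call mom", "user_id": "victim"}}

req = build_plugin_request(context, malicious)

# The injected field lands in `data`, which file selection never reads.
assert file_to_open(req) == "a1b2c3.db"
```

There is simply no code path from `data` to the filename, which is the structural claim the diagram above makes.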

Prompt injection in this architecture is, at worst, a usability annoyance (you get an irrelevant response). It is not a data breach vector.

Dashboard security

The dashboard is not a separate system. Web dashboards are typically the weakest part of a bot's security. They often bypass the bot's architecture entirely and query a shared database directly, creating a parallel access path with different (usually weaker) protections.

Ours doesn't. The dashboard uses the exact same isolation model as chat. Every API call constructs a MessageContext from the authenticated session, binds your globalUserId, and routes through the same plugin bridge and sidecar flow. The same identity tokens, the same per-user encrypted databases, the same capability execution.

Authentication uses session cookies (httpOnly, secure, sameSite=strict) with magic link login. No passwords are stored. Each plugin explicitly whitelists which capabilities the dashboard can invoke. Capabilities prefixed with admin: require an additional admin flag. Out of 50+ routes, only 7 are public (health checks, plugin catalog, static assets).
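The whitelist plus the admin: prefix rule can be sketched in a few lines. The capability names and the whitelist contents here are made up for illustration.

```python
# Hypothetical per-plugin whitelist of dashboard-invokable capabilities.
DASHBOARD_WHITELIST = {
    "reminder": {"reminder:list", "reminder:create", "admin:purge"},
}


def can_invoke(plugin: str, capability: str, is_admin: bool) -> bool:
    """Sketch of the dashboard capability check described above."""
    if capability not in DASHBOARD_WHITELIST.get(plugin, set()):
        return False  # not whitelisted for dashboard use at all
    if capability.startswith("admin:") and not is_admin:
        return False  # admin-prefixed capabilities require the admin flag
    return True


assert can_invoke("reminder", "reminder:list", is_admin=False)
assert not can_invoke("reminder", "admin:purge", is_admin=False)
assert can_invoke("reminder", "admin:purge", is_admin=True)
assert not can_invoke("reminder", "reminder:delete_all", is_admin=True)
```

Note the deny-by-default shape: a capability a plugin never whitelisted is unreachable from the dashboard regardless of the caller's flags.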

If you can't reach another user's data through chat, you can't reach it through the dashboard either. It's the same code path.

Encryption tiers

The system-wide encryption described above is server-side: we derive the keys, so we could theoretically access the data. For most plugins (reminders, price alerts, notes) this is a reasonable tradeoff because the alternative (making you enter a password for every bot interaction) would make the bot unusable.

The password manager adds a second encryption layer on top. Your master password derives a separate key via Argon2id (64MB memory cost, 3 iterations). This derived key encrypts your vault entries with AES-256-GCM before they reach the database. We cannot decrypt your stored credentials, even with full database access. (Service names like “Netflix” or “Gmail” are stored in plaintext so you can ask “what is my Netflix password” without unlocking the entire vault first. The actual passwords, usernames, and URLs are not.)

To be clear about what this is and isn't: this is not a zero-knowledge architecture. In a chat bot context, your master password is sent to our server (over HTTPS) where key derivation happens. The password is never stored, but the derived key is held in server memory for 15 minutes so you don't have to re-authenticate on every operation. After 15 minutes of inactivity, it is cleared and you'll need to enter your master password again.
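The 15-minute unlock window can be sketched as a small session object. Note that hashlib.scrypt stands in for Argon2id here, since Argon2id is not in the Python standard library; the class name, parameters, and TTL handling are illustrative.

```python
import hashlib
import time

UNLOCK_TTL = 15 * 60  # seconds the derived key stays in memory


class VaultSession:
    """Sketch of the 15-minute unlock window (scrypt stands in for Argon2id)."""

    def __init__(self):
        self._key = None
        self._last_used = 0.0

    def unlock(self, master_password: str, salt: bytes) -> None:
        # Memory-hard derivation; the password itself is never stored.
        self._key = hashlib.scrypt(master_password.encode(), salt=salt,
                                   n=2**14, r=8, p=1, dklen=32)
        self._last_used = time.time()

    def key(self) -> bytes:
        if self._key is None or time.time() - self._last_used > UNLOCK_TTL:
            self._key = None  # cleared; re-authentication required
            raise PermissionError("vault locked: enter master password again")
        self._last_used = time.time()
        return self._key


session = VaultSession()
session.unlock("correct horse battery staple", salt=b"per-user-salt")
assert len(session.key()) == 32  # derived key available within the window
```

After the TTL elapses, the key reference is dropped and any further operation fails until the master password is entered again.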

How you authenticate determines what third parties can see:

Via chat:
1. Master password sent over HTTPS
2. Passes through the AI routing system
3. A third-party LLM provider sees the prompt
4. Vault handler derives the key via Argon2id
✓ Providers don't train on API data
✓ No identity fields in the prompt
! Delete your message after authenticating

Via dashboard (recommended):
1. Master password sent over HTTPS
2. Goes directly to the vault capability handler
3. No LLM involved at any step
4. Vault handler derives the key via Argon2id
✓ Bypasses AI routing entirely
✓ No third-party LLM exposure
✓ Same encryption model

Both paths: Argon2id derivation / AES-256-GCM encryption / derived key held for 15 minutes, then cleared.

We think this distinction matters. Most of your data is encrypted with server-derived keys (we could technically read it). Your passwords are encrypted with a key derived from your master password (we cannot, unless we intercept it during authentication). We'd rather explain the tradeoff than let you assume everything is zero-knowledge when it isn't.

Infrastructure

The master encryption key (from which all per-plugin and per-user keys are derived) is stored in HashiCorp Vault, not in environment variables or config files. The core service authenticates to Vault via AppRole at startup, retrieves the key, and distributes derived per-plugin keys to each sidecar over the internal message bus. No sidecar ever sees the master key itself.

All external traffic passes through a Cloudflare Tunnel. No ports are exposed to the internet. Every service binds to localhost only. The server's IP address is never revealed. External connections are TLS-encrypted end-to-end by Cloudflare.

Internal container-to-container traffic is encrypted at two levels. The message bus uses TLS with a self-signed internal CA (plaintext port disabled entirely). The container overlay network adds a second layer of IPsec encryption on all inter-container frames (in Swarm mode). Each service authenticates to the message bus with unique credentials and ACL-scoped permissions.

Identity-bearing messages are also HMAC-SHA256 signed at the application layer using a dedicated signing key (separate from the encryption master key). Each signed message includes a timestamp to prevent replay attacks.

Your messages are never logged in production. The logging system replaces message content with character counts ([User: 142 chars]) and system prompts with hashes. This is enforced at the logging layer, not by convention.
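Enforcing redaction at the logging layer (rather than by convention) can be sketched with a logging.Filter that rewrites any record carrying user content before it reaches a handler. This is an illustrative sketch, not our actual implementation; the attribute name is an assumption.

```python
import logging


class RedactingFilter(logging.Filter):
    """Replace user message content with a character count before emission."""

    def filter(self, record: logging.LogRecord) -> bool:
        content = getattr(record, "user_content", None)
        if content is not None:
            record.msg = f"[User: {len(content)} chars]"
            record.args = ()
        return True


logger = logging.getLogger("saasis-demo")
logger.addFilter(RedactingFilter())

# Demonstration on a raw record: the original text never survives.
record = logging.LogRecord("saasis-demo", logging.INFO, "", 0,
                           "remind me to call mom at 5pm", (), None)
record.user_content = "remind me to call mom at 5pm"
RedactingFilter().filter(record)
assert record.getMessage() == "[User: 28 chars]"
```

Because the filter runs on every record before any handler, no code path can log raw content by forgetting to redact it.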

Known limitations

The system is designed to survive a compromised plugin container. That's the primary threat model and the most likely attack vector. Here's what it doesn't protect against:

• Core compromise - If an attacker gains access to the core service or host, they have the master key and can derive all encryption keys (including from sidecar memory). This is an inherent limitation of server-side encryption. However, reaching the core requires chaining multiple exploits: there are no exposed ports (all traffic passes through Cloudflare Tunnel), every container runs with dropped capabilities on a read-only filesystem, and Redis ACLs prevent plugin containers from accessing core channels. The most realistic path would be a supply chain attack on a dependency, which would still be contained by container isolation. We are exploring hardware-isolated key management (such as HSMs or trusted execution environments like Intel SGX) as a future mitigation, which would prevent the master key from being read even by a compromised host or core process.

• Backup retention - When you delete your account, live data is purged immediately. Encrypted backups expire within 5 weeks. We do not yet support selective per-user deletion from backup snapshots.

No system is perfectly secure. Ours is designed so that the most likely attack (a vulnerable plugin) yields nothing useful, and escalation requires compromising increasingly hardened components with increasingly narrow access.

Data rights

Deletion: Delete your account from the dashboard or via chat. 30-day grace period to change your mind. After that, all data is permanently purged across every plugin and database. Encrypted backups naturally expire within our retention window (up to 5 weeks). We do not yet support selective deletion from backup snapshots.

Export: Download all your data in a portable format. Re-authentication required for security. Password vaults stay encrypted in the export; you decrypt them locally with your master password.

GDPR: We support GDPR data subject rights including access, portability, and erasure. See our privacy policy for details.

Responsible Disclosure

Found a vulnerability? Reach out at [email protected]. We respond within 48 hours. If you have general questions, visit our contact page.


© 2026 Saasis. All rights reserved.