Building Laminae: From Personal AI Projects to an Open-Source SDK
This is how Laminae happened.
Not from a whiteboard session. Not from a startup pitch deck. From years of building AI systems that kept running into the same walls, and finally deciding to tear those walls down.
# The Projects That Came Before
Before Laminae, I spent years building real AI products.
Orelion.AI was a privacy-first AI growth engine for X.com. It learns how you write by extracting your voice signature across 7 dimensions (tone, humor, vocabulary, formality, perspective, emotional style, narrative preference), then helps you reply in your own voice. Not generic AI slop. It runs entirely on local LLMs via Ollama, so your data never leaves your machine.

It has an algorithm scorer that evaluates output against 19 X engagement signals, a room vibe analyzer that reads the emotional temperature of a thread before generating, and an edit learning system that watches how you change its suggestions and adapts. Building it forced me to solve voice extraction, algorithm scoring, saturation detection, and taboo awareness as separate systems that all needed to talk to each other.
Then came Orellius, my personal AI assistant for macOS. This one went way deeper. A secure, local-first system running as a menu bar app with a JARVIS-inspired dashboard, powered by Claude CLI and local Ollama models. It has a PsycheEngine with a Freudian multi-agent architecture (Id/Ego/Superego) for creative yet safe responses. A Shadow engine doing Jungian red-teaming that silently audits every interaction for vulnerabilities through static analysis, LLM review, and sandboxed execution. A Poltergeist module for vision-guided GUI automation with a full safety guard (forbidden zones, rate limiting, kill switch). Glassbox containment enforced in Rust to prevent the AI from going rogue. Plus a mobile companion app, messaging integration, morning briefings, pattern learning, and a script marketplace with anti-malware scanning. The whole thing is locked down with a capability-based permission engine, tamper-proof audit logging with SHA-256 chain hashing, and SQLCipher encryption.
Every single one of those systems (Psyche, Shadow, Glassbox, the persona engine, the learning pipeline) was bespoke. Written from scratch for Orellius specifically. Incompatible with anything else. And when I wanted to use the same red-teaming from Orellius inside Orelion.AI? I had to wire two separate codebases together with prayer and string formatting.
That's when it clicked.
# The Missing Layer
The AI ecosystem has two extremes:
- Raw LLM APIs where you get text in, text out. Everything else is your problem.
- Monolithic frameworks like LangChain, AutoGen, and CrewAI. They try to do everything and end up as a black box you can't control.
What's missing is the middle layer. Independent, composable modules that each solve one problem well:
- Give an AI a consistent personality → Psyche
- Extract and enforce a voice → Persona
- Red-team it adversarially → Shadow
- Contain its I/O → Glassbox
- Sandbox its processes → Ironclad
- Let it learn from interactions → Cortex
Each one independent. Each one optional. Use what you need, ignore the rest. The same modules I built by hand for Orellius and Orelion.AI, extracted, generalized, and made available to everyone. Unix philosophy, applied to AI.
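To make the "use what you need, ignore the rest" idea concrete, here is a hypothetical sketch (not the actual Laminae API; all names are illustrative) of how independent modules can compose behind one small trait, so each layer is optional and order is up to the caller:

```rust
// Illustrative only: each module implements one trait, and a pipeline
// stacks only the layers the caller actually wants.
trait Layer {
    fn process(&self, input: String) -> String;
}

// Stand-in for a Glassbox-style I/O containment layer.
struct Redact;

impl Layer for Redact {
    fn process(&self, input: String) -> String {
        // A real containment layer would scan for PII, secrets, etc.
        input.replace("SECRET", "[redacted]")
    }
}

// A pipeline is just an ordered list of layers; omit any you don't need.
struct Pipeline(Vec<Box<dyn Layer>>);

impl Pipeline {
    fn run(&self, mut s: String) -> String {
        for layer in &self.0 {
            s = layer.process(s);
        }
        s
    }
}

fn main() {
    let p = Pipeline(vec![Box::new(Redact)]);
    println!("{}", p.run("token=SECRET".into())); // prints "token=[redacted]"
}
```

The point of the shape, not the names: because every module speaks the same small interface, dropping a layer is deleting one line, not forking a framework.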
# Why Rust, Not Python
I built the earlier projects with Python and TypeScript. Fast to prototype, but every time I needed real safety (not "please don't do bad things" in a system prompt, but actual process-level containment) they fell apart.
You can't call prctl(PR_SET_NO_NEW_PRIVS) from Python without dropping to ctypes or spawning a subprocess. You can't set up Linux namespaces without shelling out. And you can't get zero-cost abstractions when your runtime carries a garbage collector.
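For contrast, here is what that same call looks like in Rust: a plain FFI call into the kernel, no subprocess, no ctypes. This is a Linux-only sketch; the constant value comes from linux/prctl.h, and the extern declaration stands in for what the libc crate would normally provide:

```rust
// Linux-only sketch: set PR_SET_NO_NEW_PRIVS directly via prctl(2).
// Once set, execve() can never grant this process or its children
// new privileges (e.g. through setuid binaries), and it cannot be unset.
const PR_SET_NO_NEW_PRIVS: i32 = 38; // from <linux/prctl.h>

extern "C" {
    fn prctl(option: i32, arg2: u64, arg3: u64, arg4: u64, arg5: u64) -> i32;
}

fn no_new_privs() -> std::io::Result<()> {
    let rc = unsafe { prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) };
    if rc == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}

fn main() {
    no_new_privs().expect("prctl failed");
    println!("no_new_privs set");
}
```

Nothing exotic: the safety-critical call is one unsafe block around one syscall, which is exactly the kind of thing a GC'd runtime makes awkward.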
The Glassbox containment in Orellius was already in Rust because it had to be. That was the proof of concept. If the safety-critical layer has to be Rust anyway, why not build the whole SDK in Rust?
Rust gives you:
- Compile-time guarantees: if a safety invariant is in the type system, it can't be violated
- Direct syscall access: Ironclad talks to the kernel, not through a shell
- No runtime overhead: the cognitive pipeline in Psyche runs with zero heap allocations in the hot path
- Cross-platform: the same SandboxProvider trait works on macOS (Seatbelt) and Linux (namespaces + seccomp)
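To illustrate that last point, a cross-platform sandbox abstraction might have a shape like the following. This is a hypothetical sketch, not the actual Ironclad API: Policy, SandboxError, and the NoopSandbox backend are all invented for illustration.

```rust
// Illustrative policy: a real one would cover filesystem paths,
// syscall allowlists, resource limits, and so on.
struct Policy {
    allow_network: bool,
}

#[derive(Debug)]
struct SandboxError(String);

// Every OS backend implements the same trait, so callers never
// branch on the platform themselves.
trait SandboxProvider {
    fn name(&self) -> &'static str;
    fn apply(&self, policy: &Policy) -> Result<(), SandboxError>;
}

// Stand-in backend; real ones would wrap Seatbelt on macOS or
// namespaces + seccomp on Linux.
struct NoopSandbox;

impl SandboxProvider for NoopSandbox {
    fn name(&self) -> &'static str {
        "noop"
    }
    fn apply(&self, policy: &Policy) -> Result<(), SandboxError> {
        // A real backend translates the policy into kernel-enforced rules.
        if policy.allow_network { /* would permit sockets here */ }
        Ok(())
    }
}

fn main() {
    let provider: Box<dyn SandboxProvider> = Box::new(NoopSandbox);
    provider
        .apply(&Policy { allow_network: false })
        .expect("sandbox apply failed");
    println!("applied via {} backend", provider.name());
}
```

The design choice this shape buys you: application code depends on the trait, and the platform-specific unsafe syscall work stays sealed inside each backend.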
The tradeoff is slower development. But when you're building safety infrastructure, "move fast and break things" is the opposite of what you want.
An LLM can't reason its way out of a syscall filter.
That's the thesis. That's why Rust.
# 10 Crates, One Workspace
Laminae ships as a Rust workspace with 10 independent crates:
- laminae: the meta-crate that ties it all together
- laminae-psyche: multi-agent cognitive pipeline
- laminae-persona: voice extraction and enforcement
- laminae-shadow: adversarial red-teaming
- laminae-glassbox: I/O containment
- laminae-ironclad: process sandboxing
- laminae-cortex: self-improving learning
- laminae-ollama: Ollama integration
- laminae-anthropic: Claude backend
- laminae-openai: OpenAI-compatible backend (also works with Groq, Together, DeepSeek)
Each one has its own tests, benchmarks, and documentation. You can cargo add laminae-glassbox without pulling in the entire SDK.
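For example, a project that only needs I/O containment would declare a single dependency; nothing else in the workspace comes along for the ride. (The version number below is illustrative, not a pin to any real release.)

```toml
[package]
name = "contained-agent"
version = "0.1.0"
edition = "2021"

[dependencies]
# Only the containment layer; no other Laminae crates are pulled in.
laminae-glassbox = "0.1" # illustrative version
```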
# What I Learned
Building Orelion.AI and Orellius taught me something you can't learn from reading papers:
The hard problems in AI aren't intelligence. They're containment, consistency, and trust.
An LLM can write poetry. But can you guarantee it won't leak PII? Can you ensure its personality stays consistent across 10,000 interactions? Can you red-team it systematically instead of hoping your eval suite catches everything?
I solved all of these problems twice, once for each project, with incompatible implementations. Laminae is the third time, done right. Extracted from real production code, not designed in a vacuum.
Laminae is open source under Apache 2.0. Every crate is on crates.io, every line is on GitHub.
If you're building AI systems that need to be more than a chatbot wrapper, take a look.