# Laminae v0.3: Everything That Was Missing
Orel Ohayon
Seven features. One release. Zero excuses left.
When I shipped v0.2, I wrote a section called "What's Not Done Yet" — a public list of everything Laminae was missing. Today, every item on that list is crossed off.
## What shipped
### Persona extraction quality scoring
The persona extractor now computes a composite confidence score for every extraction. Three signals feed into it:
- Sample count (30% weight) — More samples mean more signal. Below 3, extraction is refused entirely.
- Vocabulary diversity (40% weight) — Measured via Type-Token Ratio (TTR), normalized for corpus size. Identical samples get flagged.
- Useful-sample ratio (30% weight) — Short or empty samples drag the score down.
The result is an ExtractionQuality struct attached to every PersonaMeta:
```rust
pub struct ExtractionQuality {
    pub confidence: f64,       // 0.0–1.0 composite
    pub diversity_score: f64,  // TTR-based
    pub avg_sample_length: f64,
    pub short_samples: usize,
    pub warnings: Vec<String>,
}
```

If you pass 3 identical samples, you'll get a confidence around 0.3 and a warning about low diversity. Pass 10 varied writing samples and you'll land above 0.8. The scoring is deterministic — no LLM is involved in the quality assessment itself.
### Published benchmarks
Every crate now has Criterion.rs benchmarks with real numbers from CI. Highlights:
| Operation | Time |
|-----------|------|
| Glassbox validate_input | ~150ns |
| Glassbox validate_command | ~200ns |
| Ironclad process spawn + kill | ~5ms |
| Voice filter check | ~2μs |
| Edit pattern detection (100 edits) | ~50μs |
| Shadow threat analysis | ~15μs |
The Glassbox numbers matter most: 150 nanoseconds to validate input means the containment layer adds effectively zero overhead to any LLM call. CI now compiles all benchmarks on every push to catch regressions.
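For a sense of scale, overhead in that range is easy to sanity-check even without the full Criterion harness. Here is a stdlib-only timing sketch, with a stand-in denylist validator rather than Glassbox's real one:

```rust
use std::time::Instant;
use std::hint::black_box;

// Stand-in validator with roughly the shape of work a call like
// validate_command does (a denylist scan). Not the real Glassbox code.
fn validate_command(cmd: &str) -> bool {
    const DENYLIST: [&str; 3] = ["rm -rf", "mkfs", ":(){"];
    !DENYLIST.iter().any(|p| cmd.contains(p))
}

// Average nanoseconds per call over `iters` iterations.
fn ns_per_call(iters: u32) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        black_box(validate_command(black_box("ls -la /tmp")));
    }
    start.elapsed().as_nanos() / iters as u128
}
```

The published numbers come from Criterion.rs in CI, which layers warm-up, outlier rejection, and statistical analysis on top of a loop like this.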
### Shadow sandbox execution
Shadow's sandbox manager went from a stub to a real implementation. It detects Docker or Podman at runtime, then executes code blocks in ephemeral containers with:
- `--network=none` — No network access
- `--memory=128m` — Memory cap
- `--cpus=0.5` — CPU limit
- `--read-only` — Read-only filesystem
- `--cap-drop=ALL` — Drop all Linux capabilities
- `--security-opt=no-new-privileges:true` — No privilege escalation
After execution, output is analyzed for network access patterns, filesystem escape attempts, and privilege escalation signatures. The container is destroyed immediately after.
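Concretely, that invocation presumably reduces to a `docker run` built along these lines. This is a sketch only: the image tag, mount path, and function name are placeholders, not Shadow's actual code.

```rust
use std::process::Command;

// Build the hardened `docker run` invocation described above.
// `--rm` destroys the container as soon as the code block finishes.
fn sandboxed_run(code_dir: &str) -> Command {
    let mut cmd = Command::new("docker");
    cmd.args([
        "run",
        "--rm",
        "--network=none",                        // no network access
        "--memory=128m",                         // memory cap
        "--cpus=0.5",                            // CPU limit
        "--read-only",                           // read-only filesystem
        "--cap-drop=ALL",                        // drop all Linux capabilities
        "--security-opt=no-new-privileges:true", // no privilege escalation
    ]);
    // Placeholder mount and image; Shadow's real values will differ.
    cmd.args(["-v", &format!("{code_dir}:/code:ro"), "sandbox-image:latest"]);
    cmd
}
```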
### Windows support
Ironclad now works on Windows. A new WindowsSandboxProvider uses Windows Job Objects for process containment — working directory restriction plus environment variable scrubbing. Process management is cross-platform: ps/kill on Unix, wmic/taskkill on Windows. CI now runs the full test suite on windows-latest.
### WASM compilation
The browser-safe crates — Glassbox, Persona (voice filter), and Cortex — now compile to wasm32-unknown-unknown. Native-only modules (Psyche, Shadow, Ironclad, Ollama, Anthropic, OpenAI) are gated behind cfg(not(target_arch = "wasm32")).
This means you can run voice filtering and I/O containment directly in the browser. The meta-crate laminae automatically includes only WASM-compatible layers when targeting WebAssembly.
### Python bindings
A new laminae-python crate exposes the three browser-compatible layers to Python via PyO3:
```python
from laminae import Glassbox, VoiceFilter, Cortex

# I/O containment
gb = Glassbox()
gb.validate_input("Hello")       # OK
gb.validate_command("rm -rf /")  # raises ValueError

# Voice filtering
f = VoiceFilter()
result = f.check("It's important to note that...")
print(result.passed)      # False
print(result.violations)  # ["AI vocabulary detected: ..."]

# Edit tracking
c = Cortex()
c.track_edit("It's worth noting X.", "X.")
patterns = c.detect_patterns()
```

Build from source with `maturin develop` in `crates/laminae-python` (PyPI package coming soon). The bindings are thin — all computation happens in Rust.
### Documentation site
A full mdBook documentation site covering:
- Getting started — Installation, quick start, concepts
- Layer guides — Deep dives into each of the 8 crates
- Backend guides — Ollama, Anthropic, OpenAI integration
- Recipes — Building a safe chatbot, voice-matched content pipelines, red-teaming setups
- Reference — Configuration, error handling
## What changed architecturally
The WASM work forced a clean separation between "pure computation" crates and "I/O-dependent" crates. This is now explicit in the dependency graph:
Browser-safe (always available):

- `laminae-glassbox` — regex + string matching
- `laminae-persona` — voice filter (no extractor on WASM)
- `laminae-cortex` — edit tracking + pattern detection

Native-only (gated on non-WASM):

- `laminae-psyche` — multi-agent pipeline (needs async runtime)
- `laminae-shadow` — red-teaming (needs process execution)
- `laminae-ironclad` — sandbox (needs OS-level process control)
- `laminae-ollama` — HTTP client for Ollama
- `laminae-anthropic` — HTTP client for Anthropic
- `laminae-openai` — HTTP client for OpenAI
This separation is enforced at compile time via cfg gates — not a runtime check, not a feature flag. If you target WASM and try to import Shadow, it won't compile. That's the point.
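A minimal sketch of that gating pattern, with stub module bodies standing in for the real crates:

```rust
// Always compiled: pure computation, no I/O.
pub mod glassbox {
    // Stub: the real validator runs regex + string matching.
    pub fn validate_input(input: &str) -> bool {
        !input.contains('\0')
    }
}

// Compiled only on native targets. On wasm32 this module simply
// does not exist, so `use ironclad::...` fails at compile time.
#[cfg(not(target_arch = "wasm32"))]
pub mod ironclad {
    // Stub: the real crate wraps OS-level process control.
    pub fn spawn_sandboxed(cmd: &str) -> std::io::Result<std::process::Child> {
        std::process::Command::new(cmd).spawn()
    }
}
```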
## The numbers
- 10 crates published
- 175+ tests passing across the workspace
- 3 platforms — macOS, Linux, Windows
- 2 compilation targets — native + WASM
- 1 language binding — Python (more coming)
## What's next
The "not done yet" list is empty for the first time. That doesn't mean Laminae is finished — it means the foundation is solid. Next priorities:
- TypeScript/WASM package on npm for browser usage
- More language bindings — Go, Ruby
- Psyche pipeline visualization — debug tool for multi-agent flows
- Laminae Cloud — hosted version for teams that don't want to self-host
If you're building AI applications and want safety that's enforced in Rust, not in prompts: get started with Laminae.
New here? Start with the origin story, then read the deep technical dive.