Writing

Technical depth on AI governance, MCP security, and the architecture decisions behind Neurelay.

Mar 31, 2026

We Scaled the Benchmark to 200 Prompts. The Model Wasn't the Problem.

Expanding from 60 prompts across 3 domains to 200 across 5 regulated verticals confirmed what we suspected: right-sizing the model matters, but the governance layer is what makes any of it production-safe.

SLM Benchmark

Mar 17, 2026

Your AI Agent Has Access to 47 Tools. Who Approved That?

MCP gives AI agents a standard way to discover and call tools. But discovery without governance means every agent has access to everything. The tool boundary is the next compliance gap — and it needs the same layered thinking we applied to data.

MCP AI Governance

Mar 10, 2026

When an SLM Routes Every Request, PII Recall Drops to Zero — Why Layered Architecture Wins for Enterprise AI

A 1.5B model classified credit card numbers as 'not sensitive.' The same model, used only for ambiguous cases behind a deterministic layer, improved routing accuracy to 95%. Right-sizing isn't just about model size — it's about knowing where each layer belongs.

SLM Prompt Routing

Mar 3, 2026

How We Pushed PII Recall from 76% to 98% — Right-Sized Models, No Fine-Tuning, No LLMs

Statistical NER + five pattern recognizers + one threshold change. No fine-tuning. No GPU. No data leaving your perimeter unmasked. A practical guide to enterprise PII detection under GDPR.

PII Detection GDPR

Sep 10, 2025

The Philosophy of Secure AI: Let LLMs Think, Let Tools Execute

For the AI-native enterprise, the real security challenge isn't keeping bad actors out — it's keeping powerful AI from making dangerous moves by accident.

AI Native Enterprise AI

Sep 1, 2025

A New Security Playbook for the AI-Native Enterprise

Unlocking enterprise data with AI requires a new approach to security and governance. This is the playbook for enabling innovation, safely.

AI Native Enterprise AI