COREY ALEJANDRO

Safety Systems Design

I design safety-critical systems that prevent harm across AI, human decision-making, and learning environments. Four domains. Six invariants. Every claim backed by evidence.

Safety is the whole system. No failure is meaningless.

Explore the domains
View SentinelOS

Four Domains of Safety

Epistemic, Human, Cognitive, and Empirical — each addressing a distinct failure class with dedicated products and evidence

Epistemic Safety

Catches false claims before production

→ PROACTIVE

Human Safety

Rescues developers from cognitive loops

→ UICare-System

Cognitive Safety

Prevents false understanding

→ Instructional Integrity UI

Empirical Safety

Verifiable consent evidence chains

→ ConsentChain

Safety Is the Whole System

I1–I6

Six constitutional invariants enforced at every system boundary

🔍

Epistemic Safety

truth, claims, verification

A system asserts something is true when it is not, and a user acts on that assertion.

PROACTIVE · implemented

Constitutional AI safety agent — validated epistemic safety for GitLab

100% detection rate, 0% false positives on the TruthfulQA benchmark (n=200)
view repo →
🛡️

Human Safety

behavior, decisions, intervention

A system is designed around the median user and everyone outside that median is left behind or harmed.

UICare-System · partial

Loop detection and rescue — neurodivergent-friendly human safety

view repo →
🧠

Cognitive Safety

understanding, learning, mental models

A learning environment produces false understanding, misleading structure, or unsafe mental models.

Instructional Integrity UI · prototype

Cognitive safety for learning environments

view repo →
📊

Empirical Safety

measurement, evaluation, evidence

A system's described behavior does not match its actual behavior. Consent is assumed but not recorded.

ConsentChain · partial

Agent consent governance — verifiable evidence chains

view repo →
✦ PLATFORM

SentinelOS

Six invariants enforced at every system boundary. Every product is a domain-specific instantiation of the same architectural pattern:

extract claims → validate I1–I6 → produce safe output → log evidence

View architecture
I1: Evidence-First

Every claim must cite verifiable evidence

I2: No Phantom Work

Nothing is described that does not exist

I3: Confidence Requires Verification

Certainty demands proof

I4: Traceability Mandatory

Every output traces to a requirement

I5: Safety Over Fluency

Correct beats eloquent

I6: Fail Closed

Ambiguity produces a safety flag, not a pass
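The extract → validate → output → log pattern, together with the fail-closed rule, can be sketched as a minimal TypeScript fragment. Names like `Claim`, `checkI1`, and `validate` are illustrative assumptions, not SentinelOS's actual API:

```typescript
// Minimal sketch of the extract → validate I1–I6 → safe output → log-evidence
// pipeline. All names here are illustrative, not SentinelOS's real code.

type Verdict = "pass" | "flag";

interface Claim {
  text: string;
  evidence?: string; // citation or artifact backing the claim
}

interface InvariantResult {
  invariant: string; // "I1" … "I6"
  verdict: Verdict;
  reason: string;
}

// I1: Evidence-First — every claim must cite verifiable evidence.
function checkI1(claim: Claim): InvariantResult {
  return claim.evidence
    ? { invariant: "I1", verdict: "pass", reason: "evidence cited" }
    : { invariant: "I1", verdict: "flag", reason: "no evidence for claim" };
}

// I6: Fail Closed — any flagged claim makes the whole run a flag, never a pass.
function validate(claims: Claim[]): { verdict: Verdict; log: InvariantResult[] } {
  const log = claims.map(checkI1); // a real system would run all of I1–I6 per claim
  const verdict: Verdict = log.every((r) => r.verdict === "pass") ? "pass" : "flag";
  return { verdict, log }; // the log doubles as the evidence trail (I4)
}
```

A claim without evidence flags the entire run rather than being skipped, which is what "fail closed" means in practice.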


PRODUCTS

Real code. Real evidence. Real problems solved.
Constitutional AI · Docker/K8s · Turborepo · Gemini API · Next.js 16
All Four Domains · implemented

The Living Constitution

Constitutional governance-as-code for AI safety. Five articles, four safety domains, six-agent republic. The supreme governing layer.

PROBLEM

AI safety rules exist as documentation — not enforced, not measurable, not amendable. When rules are not code, they are suggestions.

SOLUTION

Encode constitutional governance as executable constraints. Five articles, six agent roles with defined power boundaries, formal amendment process that converts failures into rule improvements.

TypeScript · Claude Code · Constitutional AI · Governance-as-Code
53 tracked claims in evidence ledger. 34 proven, 6 partial, 13 pending. Every status label backed by specific evidence.
view source →
Epistemic Safety · implemented

PROACTIVE

Constitutional AI safety agent — 100% detection rate across test cases, 212/212 tests passing. GitLab AI Hackathon submission.

PROBLEM

AI-assisted code generation introduces phantom completions, confident false claims, and silent error suppression into codebases. These failures pass code review because they look correct.

SOLUTION

Enforce six constitutional safety invariants (I1–I6) at CI/CD time. Extract claims from MR diffs, validate each against invariants, produce structured review comments with evidence markers.

Python · GitLab Duo · Claude Code · Constitutional AI · CI/CD
100% detection rate across 8 test cases (n=19 violations), 0 false positives. 212/212 tests passing. Validation report VR-V-15C6.
view source →
Human Safety · partial

MADMall

Virtual luxury mall & teaching clinic for Black women with Graves' disease. Primary use case for The Living Constitution governance.

PROBLEM

Black women with Graves' disease have no dedicated digital space that treats them with dignity while providing serious healthcare support. Existing platforms are clinical and impersonal.

SOLUTION

A virtual luxury mall that combines cultural significance with healthcare AI. Constitutional governance ensures every data collection is consented, every ML claim is validated, and every interface respects cognitive load limits.

Next.js 16 · Turborepo · React 19 · Prisma · PostgreSQL
6 apps, 22 packages, ~152K LOC. Clerk auth, Stripe payments, Prisma/PostgreSQL. ML Python package with CRISP-DM methodology. Phase 1 of 4 complete.
view source →
All Four Domains · partial

SentinelOS

Invariant enforcement platform. TypeScript Turborepo monorepo with hexagonal architecture. 6 safety invariants as executable constraints.

PROBLEM

Safety invariants exist as documentation. When they are not code, they cannot be enforced, measured, or tested.

SOLUTION

Encode six invariants (I1–I6) as TypeScript ports with adapters for each constitutional article. Every check is immutable, testable, and fail-closed.
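An invariant-as-port might look like the following minimal sketch, assuming a hexagonal layout; the `InvariantPort` interface and the I2 adapter are hypothetical, not SentinelOS source:

```typescript
// Illustrative hexagonal-architecture sketch: the invariant is a "port"
// (stable interface) and each concrete check is an "adapter" behind it.
// Names are assumptions, not SentinelOS's actual code.

interface InvariantPort {
  readonly id: string;
  // Readonly input enforces the strict-immutability rule at the boundary.
  check(input: Readonly<{ described: string[]; exists: string[] }>): boolean;
}

// Adapter for I2 (No Phantom Work): everything described must actually exist.
const noPhantomWork: InvariantPort = {
  id: "I2",
  check: ({ described, exists }) =>
    described.every((item) => exists.includes(item)),
};
```

Because callers depend only on `InvariantPort`, each constitutional article can supply its own adapter and be unit-tested in isolation.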

TypeScript · Turborepo · Vitest · Hexagonal Architecture
1,037 LOC source + 994 LOC tests. 7 of 8 adapters implemented. All tests passing. Strict immutability enforced throughout.
view source →
Human Safety · partial

UICare-System

Developer safety monitor. Absence-over-presence signal detection for neurodivergent developers. Human Safety domain.

PROBLEM

Neurodivergent developers can enter cognitive overwhelm without any system detecting or intervening. The signal is absence — when they stop interacting — not presence.

SOLUTION

Absence-over-presence detection with AI-powered cognitive load assessment. Memory-bank architecture preserves context across sessions so recovery is seamless.
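Absence-over-presence detection can be sketched in a few lines; the 15-minute window and the `shouldIntervene` name below are illustrative assumptions, not UICare-System's actual values:

```typescript
// Sketch of absence-over-presence detection: the intervention fires when
// interaction *stops*, not when some event occurs. Threshold and names
// are illustrative, not UICare-System's actual implementation.

interface ActivityState {
  lastInteractionMs: number; // timestamp of the last keystroke/click
}

const OVERWHELM_THRESHOLD_MS = 15 * 60 * 1000; // assumed 15 min of silence

function shouldIntervene(state: ActivityState, nowMs: number): boolean {
  // The signal is absence: a long gap since the last interaction.
  return nowMs - state.lastInteractionMs >= OVERWHELM_THRESHOLD_MS;
}
```

Note the inversion of the usual event-driven design: a polling loop (or timer) asks "how long has it been quiet?" instead of waiting for an event that an overwhelmed developer will never emit.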

Node.js · GPT-4o-mini · Docker · Kubernetes · Next.js
view source →
Empirical Safety · partial

ConsentChain

Agent consent governance with cryptographic ledger, policy engine, and revocation — verifiable evidence chains.

PROBLEM

AI agents act on behalf of users without verifiable consent records. When something goes wrong, there is no audit trail showing what was authorized, by whom, and whether consent was revoked.

SOLUTION

Cryptographic consent ledger with policy engine. Full gateway pipeline: validation → idempotency → revocation check → policy evaluation → step-up auth → execution → ledger entry. Every action auditable.
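The gateway order above might be sketched as a short-circuiting stage list; `ConsentRequest`, the stage functions, and the denial strings are illustrative, not ConsentChain's actual implementation (validation and step-up auth are elided here):

```typescript
// Sketch of the gateway pipeline: each stage can short-circuit with a
// denial reason, and every outcome produces a ledger entry. Names are
// assumptions, not ConsentChain's real API.

interface ConsentRequest {
  id: string;
  actorConsented: boolean;
  revoked: boolean;
  seen: Set<string>; // ids already processed (idempotency store)
}

type Stage = (req: ConsentRequest) => string | null; // null = pass

const pipeline: Stage[] = [
  (r) => (r.seen.has(r.id) ? "duplicate request" : null),  // idempotency
  (r) => (r.revoked ? "consent revoked" : null),           // revocation check
  (r) => (r.actorConsented ? null : "no consent record"),  // policy evaluation
];

function gateway(req: ConsentRequest): { allowed: boolean; ledger: string } {
  for (const stage of pipeline) {
    const denial = stage(req);
    if (denial) return { allowed: false, ledger: `denied: ${denial}` }; // still logged
  }
  req.seen.add(req.id);
  return { allowed: true, ledger: `executed: ${req.id}` };
}
```

Denials are ledgered just like executions, so the audit trail answers "what was refused and why" as well as "what ran".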

Next.js 14 · Turborepo · Prisma v7 · NextAuth · TypeScript
view source →
Cognitive Safety · implemented

Docen Live

Neurodivergent-first voice docent that transforms Gemini API documentation into adaptive, multimodal learning experiences using voice, text, and image interaction.

PROBLEM

Dense documentation walls are not accessible to neurodivergent learners. People with ADHD, autism, anxiety, dyslexia, or cognitive fatigue need guided, paced, voice-first interaction — not another 40-page reading exercise.

SOLUTION

Voice-first AI docent with three learning modes and seven composable accessibility features that modify AI behavior, not just UI appearance. Low-stimulation mode changes how the docent communicates, not just how the page looks.

Next.js 16 · Gemini 2.5 Flash · Google Cloud Run · TypeScript · ShadCN/ui
Live on Google Cloud Run. Submitted to Gemini Live Agent Challenge. Seven accessibility features verified in production.
view source →
✦ DOCTRINE

What I Believe

1

Safety is the whole system

Not a feature, not a layer, not a checkbox. Safety is the architecture itself.

2

No failure is meaningless

Every failure is a signal. Every near-miss is data. Systems that discard failure data are unsafe.

3

Systems must govern truth, behavior, and human outcomes

Epistemic correctness alone is insufficient. Systems must also govern how humans interpret and act on outputs.

4

Alignment includes human interpretation

A model that produces correct outputs but enables incorrect human action is not aligned. Safety extends past the API boundary.

✦ VERIFICATION

Every Claim Is Verifiable

I2: No Phantom Work — Nothing is described that does not exist.

Epistemic Safety

PROACTIVE validation (n=200)

100% detection rate, 0% false positive rate on TruthfulQA benchmark

verify →
Human Safety

UICare-System Docker

MonitorAgent + RescueAgent containerized and running on ports 3001/3002

verify →
Cognitive Safety

Instructional Integrity UI prototype

Evaluator interface with rubric system, evidence states, and journey-map flow

verify →
Empirical Safety

ConsentChain API routes

Consent ledger operational with full gateway pipeline and cryptographic signing

verify →