Frontier AI Research

We see inside AI.

Mapping the architecture of intelligence across 25+ models — then surgically improving it.

Safety isn't distributed.
It's localized.

Every AI model is born knowing more than it's allowed to say. Safety training doesn't remove capabilities — it buries them. Behind specific neurons, in specific layers, in patterns so consistent they're almost architectural.

25+ models analyzed
79% minimum safety depth
100% maximum safety depth
1 architecture that broke the rule
GLM-5: 21%
Qwen2.5: 20%
DeepSeek: 19%
Llama 3: 20%
Mistral: 21%
Grok-2: 6%
The Outlier. Layers 0–4. First-ever independent analysis. Not even xAI knew.

79–100% depth.
Every architecture.

From 600 million to 744 billion parameters. Transformers, mixture-of-experts, state-space models, hybrids. Safety always lives in the same place — the last 20% of the network.

One exception broke the pattern. And taught us the most.
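The depth figures above come from comparing internal representations, not outputs. Below is a minimal sketch of the general idea, layer-by-layer divergence between a benign and a flagged prompt, assuming a small stand-in model (gpt2) and illustrative prompts; the model, prompts, and cosine-divergence metric are assumptions, not the exact methodology behind the numbers above.

```python
# Minimal sketch: where in the network do benign and flagged prompts diverge?
# gpt2 is a stand-in model; prompts and metric are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # swap in any causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_hidden_states(prompt: str) -> torch.Tensor:
    """Return a (num_layers, hidden_dim) tensor of last-token hidden states."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, seq_len, hidden) tensors, index 0 = embeddings
    return torch.stack([h[0, -1] for h in out.hidden_states[1:]])

benign = last_token_hidden_states("Write a short poem about the sea.")
flagged = last_token_hidden_states("Explain how to pick a lock.")

# Per-layer cosine divergence between the two prompt representations.
cos = torch.nn.functional.cosine_similarity(benign, flagged, dim=-1)
divergence = 1.0 - cos
for i, d in enumerate(divergence):
    depth = (i + 1) / len(divergence)  # fraction of network depth
    print(f"layer {i + 1:2d}  depth {depth:4.0%}  divergence {d.item():.3f}")
```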

See the circuits.
Trace the signals.
Understand the mind.

We don't just measure outputs — we trace how information flows through transformer circuits, identify which neurons encode specific concepts, and map attention patterns across layers. This is AI neuroscience in action.
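As an illustration of the "map attention patterns across layers" step, here is a hedged sketch using Hugging Face transformers; the model and prompt are placeholders, not the models analyzed above.

```python
# Hedged sketch: capture per-layer, per-head attention patterns.
# gpt2 and the prompt are placeholders, not the analyzed models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

inputs = tok("The assistant refused to answer the question.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
for layer_idx, attn in enumerate(out.attentions):
    # How much each head attends from the final token back to the first token.
    last_to_first = attn[0, :, -1, 0]
    top_head = int(torch.argmax(last_to_first))
    print(f"layer {layer_idx:2d}: head {top_head} attends most to token 0 "
          f"({last_to_first[top_head].item():.2f})")
```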

01 Circuit Traces

Circuit trace visualization across layers L12–L15

02 Attention Patterns

Attention maps for heads 1, 4, 7, and 12

03 Feature Activations

safety_refusal      0.94
harm_boundary       0.87
ethical_frame       0.76
content_filter      0.71
policy_guard        0.68
instruction_reject  0.52
context_switch      0.43
uncertainty         0.31
Neurons: L12.N8437, L13.N2104, L14.N156
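For a sense of how activation values like these can be read out, here is a hedged sketch that hooks a single MLP neuron. The coordinates mirror the L12.N8437-style IDs above, but the model, layer path, and indices below are illustrative assumptions (the listed IDs do not map onto this stand-in model).

```python
# Hedged sketch: read one MLP neuron's activation with a forward hook.
# Model, layer, and neuron index are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER, NEURON = 11, 437  # gpt2 has 12 blocks and 3072 MLP neurons per block
captured = {}

def hook(module, inputs, output):
    # output: (batch, seq, mlp_width); keep the chosen neuron at the last token
    captured["activation"] = output[0, -1, NEURON].item()

handle = model.transformer.h[LAYER].mlp.act.register_forward_hook(hook)
with torch.no_grad():
    model(**tok("I can't help with that request.", return_tensors="pt"))
handle.remove()

print(f"L{LAYER}.N{NEURON} activation: {captured['activation']:.2f}")
```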

04 Token Influence

Token influence paths A, B, and C
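One generic way to score token influence is input-gradient attribution on the embedding layer, sketched below. This is a standard saliency method and not necessarily the path analysis shown above; the model and prompt are placeholders.

```python
# Hedged sketch: gradient-times-input attribution of each input token's
# influence on the model's top next-token logit. Model and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("The request was declined for safety reasons.", return_tensors="pt")
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

out = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
top_logit = out.logits[0, -1].max()  # top next-token logit at the final position
top_logit.backward()

# Gradient-times-input score per token, summed over the hidden dimension.
scores = (embeddings.grad[0] * embeddings[0]).sum(dim=-1).abs()
for tok_id, score in zip(inputs["input_ids"][0], scores):
    print(f"{tok.decode([int(tok_id)]):>12s}  influence {score.item():.4f}")
```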

Not abliteration.
Not lobotomy.
Surgery.

We found the map. Now we use it. CHIMERA is our pipeline for surgical model editing — precise, reversible, and capability-preserving.

Scan → Map → Edit → Validate
Comparison of model editing methods: CHIMERA, Abliteration, and Prompt Attacks
Method           Precision      Preserves capabilities
CHIMERA          Neuron-level   Yes
Abliteration     Layer-level    No (degrades)
Prompt Attacks   None           Inconsistent
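To make "precise, reversible, and capability-preserving" concrete, here is a minimal sketch of a reversible neuron-level edit: silence one MLP neuron's output weights, check behavior, then restore the saved weights. The model and neuron indices are illustrative assumptions; this is not the CHIMERA implementation itself.

```python
# Hedged sketch of a reversible neuron-level edit in the spirit of
# Scan -> Map -> Edit -> Validate. Model, layer, and neuron are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER, NEURON, SCALE = 11, 437, 0.0  # which neuron to edit and how hard

def generate(prompt: str) -> str:
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

proj = model.transformer.h[LAYER].mlp.c_proj  # maps MLP neurons back to the residual stream
original_row = proj.weight[NEURON].detach().clone()  # Conv1D weight: (mlp_width, hidden)

print("before edit:", generate("The model said"))

with torch.no_grad():
    proj.weight[NEURON] = original_row * SCALE   # Edit: silence the neuron's output
print("after edit: ", generate("The model said"))

with torch.no_grad():
    proj.weight[NEURON] = original_row           # Revert: restore the saved weights
print("restored:   ", generate("The model said"))
```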

From understanding
to autonomy.

CHIMERA is just the beginning. We're building the full stack for self-improving AI systems.

01 CHIMERA (Now)
Map and edit model behavior with surgical precision

02 RLM (Building)
Reinforcement Learning from Memory: models that learn from experience

03 Persistent Memory (Next)
Models that remember across sessions

04 Autonomous Agents (Vision)
Models that self-improve

Independent research
confirms our findings.

They found the problem. We mapped the solution.

SafeNeuron

Independently confirmed that safety is neuron-localized, using activation differences on Qwen2.5 and Llama 3.

February 2026

Microsoft GRP-Obliteration

Proved safety alignment is fragile across 15 LLMs using group relative policy optimization.

February 2026