AI User Safety Initiative

The SAFE AI Use Sprint — Waitlist

Most AI workshops teach you to build faster.
This one teaches you to build well.

Two live sessions. One workflow built on your real work. A method for critical evaluation that stays with you long after the Sprint ends.

There is a version of AI adoption that looks productive on the surface and quietly costs you something underneath. You automate tasks you shouldn't have delegated. You accept outputs you didn't critically evaluate. You get faster at the wrong things — and you don't notice until the wrong thing has already happened. In a client deliverable. A decision. A document that went out with your name on it.

I've watched people adopt AI the way they adopted every other tool: copy what the fast movers are doing, optimise for output, ask questions later. And I understand the pressure. When everyone around you seems to be moving, standing still feels like falling behind.

"The moments where AI made things worse were never the moments where I moved slowly. They were the moments where I moved without a filter."

No criteria for what task to hand over. No method for evaluating what came back. No sense of where thinking needed to stay in the loop. The problem was never the tool. It was the absence of a system around the tool.

That's the gap the SAFE AI Use Sprint was built to close. Not just "here is how to build a workflow" — but here is how to build one you can actually trust. One you understand well enough to explain, defend, and improve.

Built on the SAFE Framework

S: Structure. Define what AI should handle — and what it shouldn't.
A: Accelerate. Use AI to enhance what you already do well.
F: Filter. Never accept AI output at face value — build the evaluation instinct.
E: Empower. Build capability, not dependency — own it completely.

What you walk away with

A complete AI workflow selected using a proven four-criteria framework, mapped to your real work
A live automation running without you clicking — built in the session, connected to your actual tools
The Specify / Redirect / Calibrate method — a diagnostic framework for making AI output consistently usable
AI as a thinking partner — the skill almost nobody teaches: using AI to stress-test decisions, not just execute tasks
No code. No prerequisites. Just a way of building that doesn't ask you to switch off your judgment to get results

If you've been waiting for an AI workshop that takes your thinking seriously —

Join the waitlist. You'll be first to know when the next Sprint opens, and you'll receive subscriber recognition pricing when it does.

Join the Waitlist

Waitlist members receive priority access + subscriber pricing. No spam. Ever.