Shipyard · Workshop

Your team uses AI.
They just don't use it together.

Shipyard is a two-day workshop where you and your product team build four AI agents that cover your entire development lifecycle — from idea validation to deployment.

In-person. At your office. Led by Fame.

The real bottleneck

AI didn't make your team faster.
It made the chaos harder to see.

Every developer on your team uses Copilot to some degree, and everyone uses it differently. There are no shared standards for AI-assisted code. No review process. No deployment safety checks. And everyone in the wider team wants to build and contribute.

The founder is still the de facto technical decision-maker — triaging ideas, reviewing architecture, catching mistakes. That was fine at five people. It does not scale to fifteen.

The problem isn't that your team lacks AI tools. The problem is that nobody agreed on how to use them.

A mid-level developer costs £400–600 per day. If inconsistent AI usage causes even one day of rework per week across your team, that is 52 lost days a year, or £24,000+ in wasted capacity at those rates. And for most teams, one day per week is conservative.

Most AI training teaches individuals to use tools. Shipyard is different: your whole team, in the same room, building agents that connect into a single pipeline. Not a course. Not a video series. Not one person learning and hoping the knowledge spreads.

Four agents. One pipeline. Your whole SDLC.

In two days, your team builds the AI agents that run your development lifecycle.

Idea Validator → PM Agent → Dev Agent → QA Agent → Deployable
01

Idea Validator

Should we build this?

Any team member (designer, PM, founder) validates whether an idea is worth engineering time before it touches the backlog. Kills bad ideas early. Protects your sprint.

02

PM Agent

Scope it. Spec it. Ship it.

Takes validated ideas and breaks them into scoped tickets with acceptance criteria. Flags dependencies, risks, and unknowns. Your PMs stop writing tickets from scratch — they review and refine what the agent drafts.

03

Dev Agent

Every PR gets a consistent review.

Reviews pull requests against your team's own standards. Validates architecture decisions before they ship — not just the ones the senior dev has time for. Your team defines the standards. The agent enforces them consistently.

Example: A developer opens a PR that introduces a new API endpoint without input validation or rate limiting. The Dev Agent flags both omissions, references your team's agreed security checklist, and suggests specific fixes — before any human reviewer spends time on it.

04

QA Agent

Nothing ships without a second opinion.

Generates test cases from acceptance criteria. Flags edge cases your team would miss under deadline pressure. Runs pre-deploy safety checks against your team's own release criteria.

Nobody gets left behind watching from the sidelines. Everyone has hands on the keyboard. Everyone builds.

This is the part where most teams say "we should do something like this."
Book a call and find out if Shipyard is the right fit.

Book a discovery call

The format

Two days. Your office. Your team's real problems.

This is not a course. There is no slide deck and no homework. It is a working session led by a fractional CTO, run at your office, with the people who build and ship your product.

The workshop teaches your team to build agents using scenarios from your world. Implementation against your codebase is the natural next step — and most teams start the following week.

Day 1

Map and build

We audit how your team currently uses AI tools. We surface the gaps, the inconsistencies, the unspoken standards. Then we build the first two agents — Idea Validator and PM Agent — using the ideas you surface in your pre-workshop questionnaire, not abstract scenarios. By mid-morning, everyone has a prompt running that does something useful. The abstract becomes concrete fast.

Day 2

Ship and commit

We build the Dev Agent and QA Agent. We run a full pipeline scenario based on your product domain. We test, iterate, and pressure-test every agent against realistic work. Then we lock in ownership — who runs what, where it lives, and what success looks like at 30 days.

The deliverables

Everything your team needs to keep shipping on Monday.

Working tools

  • 4 working agent prompts — tested and iterated, not theoretical templates
  • Workflow diagram — your team's process with every agent touchpoint mapped

Real output

  • 3–5 tickets scoped and reviewed through the pipeline
  • One complete feature spec — run through the full pipeline from idea to deployable

Team standards

  • Ticket templates, test plan templates, and validation checklists
  • Team standards document — surfaced during the Dev Agent session

Adoption plan

  • Adoption grid — agent, owner, trigger, where it lives, 30-day signal
  • Individual commitment cards and signed team decision sheet
  • Monday actions — one specific, named action per person

Adoption is built into the format. By Day 2, every person has named what they will use, when they will start, and who checks in with them. The signed commitment sheet is not a formality — it is the mechanism that prevents "interesting workshop, back to normal on Monday."

Plus a post-workshop check-in call — scheduled for 30 days out. By then, your team will have used the agents on real work. The call is a diagnostic: what stuck, what did not, and what to do about it. The goal is simple: at 30 days, every agent should have a named owner running it on live work. That is what adoption looks like. That is what the check-in is for.

Your facilitator

I kept seeing the same problem. So I built the fix.

Shipyard is led by Fame Razak, a London-based fractional CTO who works with product teams at the point where individual AI tool adoption needs to become a shared, structured workflow.

I built this workshop because the same problem kept appearing across every team I embedded with: they had the tools, but nobody had agreed on how to use them together. The result was always the same — inconsistent output, rework, and a founder stuck as the bottleneck.

Shipyard is built on the same technical leadership approach I bring to fractional CTO work — compressed into two focused days with your whole team in the room.

From teams I have worked with

"Fame introduced us to enterprise-level solutions and connected us with a credible outsourced technical partner."

— Managing Director, London Digital Agency

"Fame's architectural decisions and technical leadership helped us scale from MVP to a platform handling thousands of daily active users."

— Funded Startup Founder, London

Is this for your team?

Shipyard works for two kinds of teams.

Agency teams (10–50 people)

You have lost pitches to agencies half your size because they move faster. Clients are asking about AI in RFPs and your answer is "we are exploring it." Enterprise prospects want to see structured AI workflows before they sign. Your developers use AI tools, but there is no repeatable standard across client work.

You do not need to become technical. You need your team to build something together that you can point to in the next pitch and say: this is how we work now.

Startup teams (2–20 people)

The founder is the only person who knows the architecture. Your developers adopted AI tools individually — each one using them differently, with no shared process. You are shipping fast but accumulating decisions nobody documented. AI-generated code ships without proper review.

You need a structured pipeline that gives your team shared standards without slowing them down, built in days rather than quarters. You leave with a shared technical standard your team will actually follow, because they built it, not because you mandated it.

Pricing

Two tiers. No hidden costs.

£5,000

Up to 6 people

For early-stage teams building their first shared AI development workflow. Includes two full days, all deliverables, and a post-workshop check-in call.

That is £833 per person — against the £24k+ your team wastes every year in inconsistent AI-assisted rework.

Book a discovery call

£8,000

Up to 10 people

For teams who need more seats at the table — developers, PMs, designers, and leadership. Includes two full days, all deliverables, and a post-workshop check-in call.

That is £800 per person — against the £24k+ your team wastes every year in inconsistent AI-assisted rework.

Book a discovery call

Based on one day of rework per week across the team at mid-market day rates — conservative for most teams.

Both tiers include the same workshop, the same deliverables, and the same post-workshop support. The only difference is team size.

Questions

Common questions

What if our team is not very technical?

Shipyard covers the full development lifecycle, not just code. The Idea Validator and PM Agent are built for non-developers — PMs, designers, founders. Nobody sits in the corner watching. Everyone builds.

What tools do the agents run on?

The primary tool is Claude Code. The principles and patterns transfer directly to Cursor, Copilot, Codex, and other AI development tools. You are not locked into a single platform.

What do we need to prepare?

Each attendee completes a 20-minute questionnaire before the workshop — real ideas from your backlog, current AI usage, and workflow frustrations. Laptops need Claude Code installed and tested. We run a tech check call the week before.

Do the agents connect to our codebase during the workshop?

The workshop teaches your team to build agents using scenarios relevant to your product. We do not connect to your production codebase during the two days — this is deliberate: your team learns transferable patterns they can apply to any codebase, any tool, any future project. Your team can then apply those patterns to their own codebase, and most start the following week.

Can the agents be updated after the workshop — and who maintains them?

Yes. The agents are yours — plain prompts that live in your tools, with no vendor dependency and no subscription. Your team can modify, extend, and improve them as your standards evolve. That is the point: you own the system, not me. The adoption grid assigns a named owner to each agent — so maintenance is someone's job, not everyone's afterthought. The post-workshop check-in covers troubleshooting anything that has surfaced in the first few weeks.

What if our team is fully remote or distributed?

The workshop is designed for in-person because the format depends on shared physical space — whiteboard sessions, paired exercises, and wall artifacts that the team references throughout both days. If your team is distributed, the discovery call is the right place to discuss what an adapted format could look like.

Is there a minimum team size?

The exercises work best with at least 3 people. For teams of 2, the discovery call is the right place to discuss whether Shipyard or a different format is the better fit.

What is a discovery call — am I committing to anything?

No commitment. It is a 30-minute conversation about your team's current workflow, where the gaps are, and whether Shipyard is the right fit. No pitch deck. No pressure.

Your team is already using AI.
The question is whether they are using it together.

One discovery call. We will talk through your team's current workflow, where the gaps are, and whether Shipyard is the right fit. No pitch deck. No pressure.