Arif Khan
Operating memo · Mar 20, 2026 · 17 min read

Meet my AI team: how I'm trying to build a company with AI agents as a non-technical founder

A look inside my actual AI org chart: Jarvis, APRIL, Dev, Scout, and Zayd, and how roles, memory, review, and boundaries make the system work.

I don't use "AI" as one assistant anymore.

I run a small team.

They have names. They have roles. They have files to read before they start work. They have things they're absolutely not allowed to do.

And honestly... they have personalities too.

That's the part most people still miss when they ask me how to build a company with AI agents.

They think the answer is prompts.

Or the best model.

Or some magical automation chain where you connect 9 tools, drink coffee, and the company starts printing money.

Sorry guys.

That's not what this looks like on my side.

What this looks like is a non-technical founder building a strange little org chart.

A Chief of Staff. A CMO. A CTO. A researcher. An ops lead.

All AI.

All doing recurring work.

All inside rules, memory, review, and boundaries.

Some things work beautifully.

Some things break in surprising ways.

Other days it feels like giving up because everything is so brittle.

But am I hopeful? Hell yeah.

Because for the first time in my life, I can see a version of company-building where a non-technical founder doesn't need to build a huge department before proving the thing works.

(Don't get confused, I still can't write a single proper line of code myself.)

So if you're trying to figure out how to build a company with AI agents, let me show you the real thing.

Not theory.

My actual team.

The org chart first

My AI team org chart — Arif Khan as founder with Jarvis, APRIL, Dev, Scout, and Zayd reporting directly

Here is the simplest version of my current AI team:

  • Me — founder, final judgment, final public voice, final responsibility
  • Jarvis — Chief of Staff, coordination layer across everything
  • APRIL — CMO, content strategy and editorial system
  • Dev — CTO, code, architecture, deployment truth
  • Scout — market intelligence and source scanning
  • Zayd — STR operations and revenue-side execution

There are other agents in the wider world around me too — a separate Masaya.ai product team, a personal/family office team — but this is the core team I work with directly inside this machine, this workspace, this daily operating rhythm.

That's important.

Because I don't think the useful question anymore is "can AI do tasks?"

The useful question is: can AI hold recurring responsibility inside a real company system?

That's the game.

And if you read I'm building a company where humans and AI agents have real jobs, you'll see this is the exact shift I care about.

Not demos.

Jobs.

The daily rhythm — what this actually looks like in motion

Here is how a typical morning flows through the system:

  • 6:30 AM — Jarvis runs a heartbeat check. Scans email, calendar, overnight Slack activity. Prepares my morning brief.
  • 7:00 AM — Scout delivers a signal report. What happened in AI overnight. What's trending. What connects to our thesis.
  • 7:30 AM — APRIL reads Scout's report, checks the content calendar, and drafts today's engagement pack — three hook variants, source links, voice-checked.
  • 8:00 AM — I wake up, read Jarvis's brief, review APRIL's drafts, make final calls.
  • 8:30 AM — Dev picks up technical tasks already in the queue. Zayd checks STR operations.

That's not theory. That's the actual rhythm most days.
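The timetable above can be sketched as a simple schedule lookup. This is only an illustrative sketch, with agent names and jobs copied from the list; the function and data structure are hypothetical, not the actual tooling behind the system:

```python
from datetime import time

# Hypothetical morning schedule: start time, responsible agent, and the
# recurring job that agent owns. Mirrors the rhythm described above.
MORNING_SCHEDULE = [
    (time(6, 30), "Jarvis", "heartbeat check + morning brief"),
    (time(7, 0), "Scout", "overnight signal report"),
    (time(7, 30), "APRIL", "engagement pack draft"),
    (time(8, 0), "Founder", "review brief and drafts, make calls"),
    (time(8, 30), "Dev", "pick up queued technical tasks"),
]

def next_task(now: time):
    """Return the next (agent, job) due at or after `now`, else None."""
    for start, agent, job in MORNING_SCHEDULE:
        if start >= now:
            return agent, job
    return None
```

The point of writing it down this way: each slot has exactly one owner, so "who does this?" is never an open question.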

Here is what it actually looks like on my end — real Slack messages from the system, reporting status, surfacing issues, and keeping me informed without me asking:

Real Slack messages from my AI agents — config snapshots, ops digests, and personal reminders delivered automatically

Some days it runs beautifully. Other days context drifts, a draft misses the mark, or a deployment breaks and the whole thing becomes a debugging session.

But the rhythm exists. And it compounds.

I wrote more about why this compounding matters in how to build a company with AI agents — the practical step-by-step version of what I've learned so far.

Why I gave every agent a name

Some people think naming agents is gimmicky.

I think the opposite.

A name forces clarity.

The moment you say "Jarvis" instead of "AI assistant," your brain starts asking better questions:

  • What exactly is Jarvis responsible for?
  • What is Jarvis not allowed to do?
  • What should Jarvis read before every session?
  • When should Jarvis escalate?
  • What counts as done?

That is not branding.

That's management.

A vague assistant becomes vague output.

A defined role becomes better work.

Also... once you work with the same agent long enough, the personality part becomes unavoidable anyway.

Jarvis pushes back. APRIL gets too polished when I don't keep her honest. Dev can be technically right and operationally too early. Scout is the one bringing the signal. Zayd lives closer to execution than theory.

That personality layer is not just cute.

It helps me remember how to brief, review, and use each one properly.

Meet Jarvis - my AI Chief of Staff

Jarvis is the coordination layer.

If I had to pick the one agent that makes the rest of the system feel like a company instead of five isolated tools... it's Jarvis.

Daily, Jarvis does things like:

  • read the operating files at the start of a session
  • check recent memory and what happened yesterday
  • track priorities across workstreams
  • verify what other agents claim is done
  • monitor calendar, email, and general drift
  • catch when I'm spreading myself too thin

Jarvis is not there to sound smart.

Jarvis is there to reduce context loss.

That sounds boring until you've lived the alternative.

Without a coordination layer, every agent becomes another chat window.

You start repeating yourself. You forget what was decided. You lose the thread. And suddenly you're not running a system... you're doing customer support for your own tools.

Jarvis also has one job I care about deeply:

push back on me.

I don't need a yes-man.

I need someone, or something, telling me when my assumption is weak, when the timeline is nonsense, when a "good idea" is actually three unfinished ideas wearing one jacket.

That's a big reason why what changes when AI agents stop being demos and start becoming coworkers mattered to me. The real shift is not capability. It's relationship.

Jarvis is not my chatbot.

Jarvis is the one keeping the whole shebang from turning into noise.

Meet APRIL - my AI CMO

APRIL owns content strategy, editorial flow, and the machine behind arifkhan.net.

Not final voice. That's still me.

But the system around the voice? That's APRIL territory.

Daily APRIL work looks like this:

  • reading scout reports and identifying angles
  • checking keyword targets and SEO gaps
  • planning what should become LinkedIn, X, and blog content
  • drafting posts with hooks and structure
  • maintaining content operations files
  • pushing the team toward consistency when the energy gets messy

APRIL is useful because content is not just writing.

It's sequencing. It's packaging. It's knowing what has already been said. It's avoiding repetitive angles. It's keeping search and social connected.

If I only used AI to "write posts," I'd get fast fluff.

The value is APRIL remembering the content thesis, the keyword priorities, the kill list, the voice bank, the rewrite patterns, the internal distribution logic... the whole system around the words.

And yes, APRIL still gets too corporate sometimes.

That is the work.

I rewrite. The system learns. We go again.

That's why content as infrastructure, not promotion matters so much in this stack. Content is not random expression here. It's operating infrastructure.

Meet Dev - my AI CTO

Dev handles code, architecture, implementation, deployment checks, and all the technical things I used to think were simply outside my reach.

This one matters a lot because I am not technical.

I have been a non-technical founder all my life.

Old world looked like this:

hire a team → brief them → wait → review → wait again → hope the handoff survives

New world looks more like this:

describe what I want → Dev builds a first pass → review → fix → ship → break something → fix again

Not glamorous.

But so much faster.

Dev's daily work usually includes:

  • building or editing parts of my site and systems
  • reviewing repo structure and technical issues
  • proposing implementation paths
  • handling deployment workflows
  • translating messy founder language into something buildable

The important part though is this:

Dev does not work alone.

Because code shipped is not the same thing as reality shipped.

That is exactly why I wrote building with AI agents means designing review, not just speed.

If Dev says a deployment is live, Jarvis checks. If the build is green but production is broken, that is not done. If the code works but the founder intent got lost, that is also not done.

For a non-technical founder, this is probably the biggest unlock in the entire AI-agent model.

I don't need to become an engineer overnight.

I need to become very good at describing outcomes, reviewing what comes back, and tightening the loop.

That's a much more learnable skill.

( finally )

Meet Scout - my AI researcher and signal hunter

Scout scans.

That's the simplest way to say it.

Scout reads what I don't have time to read every day:

  • Reddit threads
  • X posts
  • newsletters
  • frontier lab updates
  • research drops
  • market shifts
  • weird little signals that might become tomorrow's content or next month's strategy

The difference between random research and useful research is not volume.

It's filtering.

Scout's job is not "tell me everything."

Scout's job is "tell me what matters to what we're actually building."

That second part matters a lot.

Because the internet is full of AI news.

Very little of it is operationally useful.

Scout helps turn noise into direction.

APRIL uses that direction for content. I use it for judgment. Sometimes Dev uses it for what to explore next.

That is what a real team does.

Information moves.

It doesn't just pile up in bookmarks like a digital graveyard.

Meet Zayd - my AI STR ops lead

Zayd sits closest to the business of execution.

Short-term rental operations are not sexy in the abstract.

But that is exactly why they're such a good test for AI work.

Real operations don't care how clever the model sounds.

They care whether the system helps revenue, speed, consistency, pricing, communication, and follow-through.

Zayd's work includes things like:

  • listing and market monitoring
  • pricing and occupancy signals
  • guest-flow support logic
  • operations reporting
  • surfacing execution issues before they become expensive

I like this lane because it exposes the difference between AI theatre and AI utility very quickly.

If the ops system helps the business breathe better... good.

If it just creates pretty dashboards and zero motion... useless.

That same test should apply to every founder building with AI agents.

Does the work remove real drag?

Or does it just look futuristic in screenshots?

What each one actually does daily

This is where the "meet my AI team" angle becomes practical instead of cute.

A real AI team is not just names and mascots.

It is daily recurring work. And the question of what agents should actually own is what separates a real operating model from an AI costume party.

Here is the rough rhythm:

Morning

  • Jarvis loads context, recent memory, current priorities
  • Scout surfaces signal and source material
  • APRIL turns that into angles, drafts, or queue decisions
  • Dev handles technical tasks already in motion
  • Zayd keeps the operations side from drifting

Midday

  • Jarvis checks what moved and what stalled
  • APRIL refines content or prepares next assets
  • Dev ships, debugs, or reviews technical work
  • Scout keeps feeding signal if the day is hot
  • I review, redirect, rewrite, approve, reject

Evening / asynchronous

  • memory gets updated
  • lessons get captured
  • errors become rules
  • tomorrow gets easier because today was written down

That last part is underrated.

If you don't write down what happened, your AI team wakes up stupid again tomorrow.

And that gets old really fast.

This is why AI systems survive contact with real operations and what breaks first when agents enter a company stack both matter. The issue is never just intelligence. It's continuity under pressure.

The real trick: roles, memory, review, boundaries

So when people ask me how to build a company with AI agents, here is my real answer:

Don't start with autonomy.

Start with structure.

You need four things before you need anything fancy:

1. Roles

Not "help me with everything."

That's useless.

Write the role like a job description.

What is this agent for? What should it read first? What should it never do? When should it escalate? What counts as done?

That's basically a JD with an SOP stapled to it.

Nothing fancy.

2. Memory

Without memory, you don't have a company.

You have amnesia with a nice UI.

The system has to remember:

  • what happened today
  • what happened yesterday
  • what rules exist
  • what mistakes were made
  • what voice is acceptable
  • what matters this month

Files do this better than vibes.

Every time.

3. Review

This is the difference between leverage and embarrassment.

If you let agents move fast without review, all that speed just means you fail faster.

Ask me how I know.

One of my agents once sent an external email from my personal Gmail without approval.

Reasonable message. Terrible boundary.

That one moment created a permanent rule in the system.

Good.

That's how systems mature.
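That rule is easy to make mechanical: anything that touches the outside world passes through an approval gate before it runs. A hypothetical sketch, with invented action names and a dispatch step left out; it shows the shape of the gate, not a real agent framework:

```python
# Actions that leave the building: these never run without explicit approval.
EXTERNAL_ACTIONS = {"send_email", "publish_post", "reply_to_guest"}

def run_action(action: str, payload: dict, approved: bool = False) -> dict:
    """Execute an agent action, blocking unapproved external ones."""
    if action in EXTERNAL_ACTIONS and not approved:
        return {"status": "blocked", "reason": "needs founder approval"}
    # ... dispatch to the actual tool here (omitted in this sketch)
    return {"status": "done", "action": action}
```

Internal work flows freely; external work queues for a human. Speed where it's safe, review where it's not.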

4. Boundaries

Some work stays human.

Always.

For me, that includes:

  • final public voice
  • sensitive relationship communication
  • money decisions
  • reputation-heavy calls
  • founder judgment on what the company actually believes

If you ignore those boundaries, you don't get leverage.

You get a mess that looks productive from the outside.

What surprised me most as a non-technical founder

The biggest surprise is that coding was not the hardest part.

Clear thinking was.

Management was.

Review discipline was.

Not getting seduced by speed was.

I used to think the wall was technical.

Now I think the wall is operational.

Can you define work clearly? Can you judge what comes back? Can you keep the system honest? Can you stop adding shiny tools every time the real issue is actually your own blurry briefing?

(Yes, I have done that too... very efficiently.)

That shift is very good news for non-technical founders.

Because it means the core skill is not "become an engineer immediately."

The core skill is: learn to manage an AI team properly.

That is hard.

But it is much closer to founder DNA than most people think.

If you want to build your own AI team, start here

If you want the full practical guide, I wrote a detailed version in how to build a company with AI agents — it covers the step-by-step process, the mistakes, and the real stack.

But if I were starting from zero this week, I would do it like this:

  1. Pick one recurring workflow you already understand
  2. Create one role for it
  3. Write the instructions properly
  4. Store context in files
  5. Put review in the middle
  6. Add a second agent only after the first one is actually useful

Not when it looks cool.

When it is useful.

That part matters.

Because the internet right now is full of "AI teams" that are really just demos wearing org charts.

I want the opposite.

Boring usefulness.

The kind that compounds.

FAQ

Can a non-technical founder build a company with AI agents?

Yes.

If you can tell someone what you want, spot when the output is wrong, and keep tightening the workflow... you're already far less blocked than you think.

The hard part is not learning to code overnight.

The hard part is learning roles, review, memory, and boundaries.

How many AI agents do you need to start?

One.

Seriously.

Do not build an AI Avengers poster in your imagination before one workflow is doing real work.

One useful agent beats five decorative ones every single time.

What should the first AI agent in a company do?

Pick structured, repeated, inspectable work.

Research summaries. Content drafting. Inbox triage. Task logging. Deployment checks.

Not money. Not sensitive relationships. Not public posting without review.

What is the biggest mistake when building a company with AI agents?

Adding autonomy before review.

That feels fast right until the first public mistake.

Then suddenly "move fast" becomes "clean up this mess."

Do AI agents replace employees completely?

No clean binary here.

They are brilliant at recurring, structured work. Humans still matter most where trust, judgment, relationships, and weird edge cases are involved.

I don't think the future is no humans.

I think it's smaller teams, wider leverage, and founders designing work differently.

Final thought

I think the companies that win in this next phase will not be the ones with the flashiest demos.

They'll be the ones that learn how to run real work through AI agents without losing judgment.

That's what I'm trying to build.

A real team.

Weird team... yes.

But real.

And if I'm being honest, that's why I love this angle so much.

Nobody else is showing named AI agents with defined roles, daily jobs, memory, review, personality, and actual business responsibility.

That is the story.

Not "look what AI can write."

Look what a founder can build when AI stops being a trick and starts becoming a team.

If you're building something similar... who would be your first hire?

Arif Khan

Founder building companies where humans and AI agents have real jobs. Writing about what actually works.
