I'm building a company where humans and AI agents have real jobs
The shift I care about is not AI as a clever tool. It is a human-agent operating model where agents carry recurring responsibilities inside a real review system.
Most people still talk about AI agents as either assistants or theatre: useful for a clever one-off task, or presented as if a prompt thread magically became a company. I am interested in neither version.
What I care about is something more operational.
I want to build companies where humans and AI agents have recurring jobs, clear boundaries, and visible review loops. Not pretend jobs. Not mascot titles. Real recurring work.
That does not mean the agents are "running the company" alone. It means the human-agent operating model is changing.
Inside my world, some of that is already visible. Some agent roles handle recurring coordination, finance prep, and content operations. Other roles are being hardened around products like Masaya. The point is not the naming. The point is continuity, ownership, and accountability.
If an agent does something once, that is a demo.
If an agent picks up the same kind of responsibility again and again, works inside memory, survives review, and improves the flow of the company, that starts to look a lot more like a job.
Real jobs need real boundaries
The mistake people make here is to confuse role language with role design.
Giving an agent a title is easy. Designing a role is hard.
A real role needs:
- a recurring scope
- a definition of done
- a review boundary
- a memory of prior work
- a human who can approve, redirect, or kill the output
Without that, you do not have an operating model. You have branded prompting.
That is why I care so much about workflows, queues, review gates, and memory files. I do not want agents to sound impressive. I want them to reduce drag inside a real company.
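The checklist above can be sketched as a data structure plus a review gate. This is a minimal, hypothetical illustration only: the names (`AgentRole`, `review_gate`, the example values) are invented for this sketch and do not describe any real system.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """One recurring agent role, per the checklist above. All names are illustrative."""
    scope: str                # recurring scope, not a one-off task
    definition_of_done: str   # what "finished" means for this role
    reviewer: str             # the human who can approve, redirect, or kill output
    memory_file: str          # where prior work is remembered between runs
    auto_ship: bool = False   # True only if output may move without review

def review_gate(role: AgentRole, approved: bool) -> str:
    """Every output crosses a human review boundary unless explicitly exempt."""
    if role.auto_ship:
        return "shipped"
    return "shipped" if approved else "killed"

# Example: a hypothetical finance-prep role with a human reviewer.
finance_prep = AgentRole(
    scope="weekly finance prep",
    definition_of_done="reconciled summary posted to the review queue",
    reviewer="founder",
    memory_file="memory/finance-prep.md",
)
print(review_gate(finance_prep, approved=True))   # shipped
print(review_gate(finance_prep, approved=False))  # killed
```

The point of the sketch is that the role is an object with explicit fields, not a title: if any field is missing, you are back to branded prompting.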
The human role becomes more important, not less
This is where a lot of the commentary goes wrong.
When people hear "agents have real jobs," they assume the human disappears. In practice, the opposite happens.
The more recurring work an agent takes on, the more important human judgment becomes:
- What should be delegated at all?
- What can move without review?
- What should never ship unchecked?
- What counts as a correction instead of a failure?
The future I am interested in is not human absence. It is better human leverage.
Humans still hold narrative judgment, relationship risk, strategic direction, and final responsibility. Agents make the system wider, faster, and more persistent. They do not remove the need for adult supervision.
I am not trying to prove that agents can look clever on command.
I am trying to prove that a company can be designed differently when agents are treated as recurring contributors inside a disciplined system of review, memory, and ownership.
That is a much harder claim.
It is also a much more interesting one.
Key takeaways
- A role becomes real when the work is recurring, reviewable, and remembered.
- Agents make human judgment more important, not less.
- The interesting shift is not tool use. It is operating design.