
Tiny Teams Using AI Are Suddenly Competing With Companies 100x Their Size

The Playing Field Just Changed for Small Companies and Startups

Small companies competing with industry giants used to sound like a fairy tale.

You needed money, people, infrastructure, and years of runway just to get close.

But something shifted in 2025 and carried hard into 2026 — and it changed everything about what a tiny team can actually produce.

Garry Tan, the CEO of Y Combinator, one of the most respected startup accelerators in the world, recently shared something that made the entire tech community stop and pay attention.

He had not written code in 13 years.

Then he opened Claude Code, an AI coding agent built by Anthropic, and in a matter of months, he was shipping hundreds of thousands of lines of production-ready code — all while running Y Combinator full-time.

He did this with a $200-per-month Claude Max subscription and roughly five days of focused building.

The result was a fully featured blog platform with agentic research capabilities, deep retrieval systems, and real-time crawling of the web — the kind of product that previously required a team of six or seven people, $4 million in funding, and over a year of development time.

That is not a metaphor or a thought experiment.

That is exactly what happened, and it is the clearest picture yet of how small companies competing with enterprise-level organizations are doing it — by borrowing millions of hours of machine intelligence and using it like rocket fuel.

We strongly recommend that you check out our guide on how to take advantage of AI in today’s passive income economy.

Why Small Companies Competing With Large Organizations Is Now Realistic

The Cost of Building Has Collapsed

Three years ago, building a sophisticated software product meant hiring a team of engineers, paying for expensive infrastructure, and burning through months of development cycles.

Today, a single founder with a Claude Code subscription and a clear idea of what they want to build can ship features at a pace that was physically impossible for any human team.

Garry Tan described building his platform Gary’s List — a politically focused publishing and research tool — three separate times over his career.

The first time, it cost $4 million, six or seven people, and a year and a half of work.

The second time, he rebuilt it with his co-founder Brett Gibson for around $100,000 over three months.

The third time, in early 2025, he rebuilt the entire platform in five days for $200.

That compression in cost and time is not just impressive.

It is one of the most important economic shifts happening in technology right now, and small companies competing with larger, better-funded rivals are the ones who stand to gain the most from it.

Token Maxing Is the New Competitive Advantage for Small Companies

One of the most counterintuitive ideas Garry Tan shared is what he calls “token maxing.”

The idea is simple — do not try to minimize your AI usage to save money.

Instead, throw as much compute as you can at every problem, because the output quality compounds dramatically when you do.

He compared it to San Francisco rent.

It sounds expensive until you realize that not paying it costs you even more in missed opportunity, slower growth, and weaker connections.

For small companies competing with large operations that have entire research departments, armies of analysts, and deep content production pipelines, token maxing is how you close the gap.

Instead of one analyst reading ten articles, you build an agentic system that ingests dozens of sources, cross-references conflicting claims, surfaces key quotes, and writes a structured research report — all for the cost of a few dollars in API calls.

Gary’s List, his California-focused civic publishing platform, does exactly this.

It produces two to three fully sourced, deeply researched articles per day using a backend powered by Perplexity’s API for deep research, X’s Grok API for real-time social data, and vector embedding systems built on PostgreSQL with pgvector.

For the equivalent of five to ten dollars in AI model calls, the platform does the work that would take a trained investigative journalist an entire month of painstaking manual research.
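To make the shape of such a system concrete, here is a minimal, hypothetical sketch of the cross-referencing step. The source names and claims are stand-ins, not the actual Gary’s List pipeline:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SourceClaim:
    source: str     # e.g. "perplexity" or "grok" (stand-in names)
    topic: str      # normalized topic the claim is about
    statement: str  # the claim itself

def cross_reference(claims):
    """Group claims by topic and flag topics where sources disagree,
    producing the skeleton of a structured research report."""
    by_topic = defaultdict(list)
    for claim in claims:
        by_topic[claim.topic].append(claim)
    report = {}
    for topic, group in by_topic.items():
        statements = sorted({c.statement for c in group})
        report[topic] = {
            "sources": sorted({c.source for c in group}),
            "consensus": len(statements) == 1,  # every source agrees
            "statements": statements,
        }
    return report

# Stub claims standing in for real API responses.
claims = [
    SourceClaim("perplexity", "vote", "The measure passed 8-3."),
    SourceClaim("grok", "vote", "The measure passed 8-3."),
    SourceClaim("grok", "turnout", "Turnout was 41%."),
    SourceClaim("local-paper", "turnout", "Turnout was 38%."),
]
report = cross_reference(claims)
```

The point of the structure is that disagreement between sources is surfaced explicitly instead of being silently averaged away, which is exactly the judgment call a human editor wants flagged.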

That is what small companies competing against large newsrooms and research firms are now doing.

The Tools Small Companies Are Using to Punch Far Above Their Weight

Claude Code — The AI Agent That Rewired How Builders Think

Claude Code, developed by Anthropic, is not just an autocomplete tool.

It is a fully agentic coding environment that can plan, write, test, debug, and deploy code without requiring the developer to manually copy and paste between windows.

Garry Tan compared the experience of using it to driving a Ferrari.

Exhilarating, fast, capable of things you would never think a machine could do — but also something that requires you to be your own mechanic when it breaks down on the side of the road.

That is one of the most honest descriptions of where AI tooling is right now in 2026.

It is extraordinary about 95% of the time, and it requires a human with taste and judgment to catch the remaining 5%.

But even that 5% is manageable when you understand that you can use one AI agent to check the work of another.

Garry built a system called GStack — a collection of Claude Code skills that automates his entire development workflow, from product ideation to architecture review to QA testing using Microsoft Playwright, an open-source browser automation framework.

Small companies competing with large engineering teams can now run that entire pipeline with a single operator directing multiple AI agents simultaneously.

OpenHands, GStack, and Multi-Agent Workflows

One of the most revealing things Garry Tan shared in his conversation on the Lightcone podcast was that nearly half of his development time has shifted from Claude Code to OpenHands, an open-source AI coding agent.

OpenHands allows developers to run local AI agents that operate inside their own infrastructure, with full control over their data, prompts, and integrations.

This is a critical distinction for small companies competing in industries where data privacy, brand voice, and proprietary workflow matter enormously.

Instead of depending on a centralized platform to define how the AI behaves, you write your own prompts, you control the context, and you decide what the agent knows and how it responds.

Garry’s GStack structures his agents into roles — a CEO for product strategy, a designer for UI feedback, a developer experience reviewer, and a QA agent that uses Microsoft Playwright to visually test every new feature against real browser behavior.

He also integrated OpenAI’s Codex — now accessible through the OpenAI API — as what he calls the “200 IQ nearly nonverbal CTO.”

When a problem is complex enough that his main coding agent struggles, he calls in Codex to analyze the entire repository, find bugs and structural issues, and report back to Claude Code for resolution.

That kind of multi-agent collaboration is what small companies competing with large engineering organizations can now build and run on a laptop.
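That escalation pattern can be sketched in a few lines, with toy functions standing in for the real Claude Code and Codex calls:

```python
def coding_agent(task):
    """Stand-in for the primary coding agent.
    Toy behavior: "stuck" on tricky tasks unless given analysis hints."""
    if "tricky" in task and "hints" not in task:
        return None  # signal: the agent could not solve it alone
    return f"patch: {task}"

def cto_agent(task):
    """Stand-in for a Codex-style whole-repository analysis pass."""
    return f"root-cause analysis of {task}"

def run_task(task):
    """Try the main agent first; on failure, escalate to the CTO agent
    and hand its analysis back to the main agent as extra context."""
    result = coding_agent(task)
    if result is None:
        analysis = cto_agent(task)
        result = coding_agent(f"{task} with hints from: {analysis}")
    return result
```

The design choice worth noting is that the expensive, slower analyst is only invoked when the cheap, fast agent signals failure, which keeps token spend proportional to problem difficulty.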

What This Actually Looks Like in Practice for Small Companies

Building Gary’s List — A One-Person Newsroom With Enterprise Research Capability

Gary’s List at garyslist.org is the clearest working example of what an AI-powered small company competing with large media operations actually looks like in the real world.

The platform is a 501(c)(4) and 501(c)(3) civic organization focused on California policy issues — from math education access in public schools to city governance in San Francisco and Los Angeles.

On the surface, it looks like a blog.

Under the hood, it is an agentic newsroom.

The backend system ingests every relevant tweet, crawls linked sources recursively, pulls deep research from Perplexity’s API, cross-references data from Grok’s API running on X’s infrastructure, and feeds all of that context into a structured prompt that produces a fully sourced long-form article.

The research quality is comparable to what a team of journalists would produce working over days — because the system does not settle for one source when it can analyze twenty, and it does not accept a headline when it can read the full text, the cited studies, and the counter-arguments simultaneously.
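The recursive crawl step can be sketched as a bounded breadth-first walk over linked sources. The link graph here is a hypothetical stand-in for real fetched pages:

```python
from collections import deque

# Hypothetical link graph standing in for real crawled pages:
# a seed tweet linking to articles, which link onward to sources.
LINKS = {
    "tweet:123": ["https://a.example", "https://b.example"],
    "https://a.example": ["https://c.example"],
    "https://b.example": [],
    "https://c.example": [],
}

def crawl(seed, max_depth=2):
    """Breadth-first crawl of linked sources, bounded by max_depth
    so a single tweet cannot fan out into the entire web."""
    seen = {seed}
    queue = deque([(seed, 0)])
    order = []
    while queue:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # depth budget spent: record, but do not expand
        for link in LINKS.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order
```

Every page collected this way becomes context for the article-writing prompt, which is why the system can read the cited studies and counter-arguments rather than just the headline.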

This is what Garry calls boiling the ocean — and it is one of the defining strategies for small companies competing in knowledge-intensive industries right now.

The Role of Human Judgment in AI-Driven Small Companies

One of the most important things Garry Tan emphasized is that AI does not replace the human.

It replaces the parts of the work the human does not want to do.

The machine does not decide that algebra education inequality in San Francisco public schools is worth fighting for.

A human has to care about something deeply enough to build a system around it.

The machine does not know that 80 to 90% test coverage is the right balance between speed and stability.

A human with product experience has to set that standard and enforce it across the codebase.

Small companies competing with large teams are not winning because the AI is smarter than their competitors’ employees.

They are winning because one person with clear vision, strong taste, and a willingness to direct multiple AI agents simultaneously can now produce output that previously required ten, twenty, or fifty people.

Garry described shipping 13 pull requests in a 48-hour period — not by writing every line himself, but by queuing up features, reviewing plans, approving agent output, and manually testing the results against real user flows.

He called it being a time billionaire — not because he has more hours in the day, but because he is borrowing the processing power of machines running in parallel to compress what would otherwise take months into days.

The Philosophical Shift Happening Inside Small Companies Right Now

Fat Skills Over Thin Harnesses — The New Architecture of AI-Driven Work

One of the most useful frameworks Garry Tan introduced is the distinction between what he calls fat skills and thin harnesses.

A harness is the core technical infrastructure that takes a user input, sends it to a language model, processes the model’s output, and loops through tool calls and actions.

Building a custom harness from scratch is expensive, time-consuming, and largely unnecessary because open-source options like OpenHands already handle the hard parts.
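In code, a thin harness is little more than a loop. This sketch uses a fake model and a toy tool as illustrative stand-ins for a real language model and real tool integrations:

```python
def harness(user_input, model, tools, max_steps=5):
    """Minimal agent harness: pass context to a model, execute any
    tool call it requests, feed the result back in, and loop until
    the model emits a final answer (or the step budget runs out)."""
    context = [user_input]
    for _ in range(max_steps):
        action = model(context)
        if action["type"] == "final":
            return action["text"]
        tool_output = tools[action["tool"]](action["args"])
        context.append(tool_output)  # loop the tool result back in
    return "stopped: step limit reached"

# Illustrative stand-ins for a real model and tool.
def fake_model(context):
    if len(context) == 1:  # first pass: request a tool call
        return {"type": "tool", "tool": "add", "args": (2, 3)}
    return {"type": "final", "text": f"the answer is {context[-1]}"}

tools = {"add": lambda args: args[0] + args[1]}
```

Everything interesting about an agent lives outside this loop, in the instructions the model is given, which is exactly the fat-skills argument.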

A skill is the markdown-based set of instructions that tells the agent what to do, how to think, what to prioritize, and how to handle edge cases.

Skills are where all the value lives.

They are the product knowledge, the taste, the domain expertise, the understanding of what a user actually wants — written in plain language that the AI can reason over.

Garry compared writing a skill to writing a checklist for an event planner.

If you were going to hand off a wedding to someone else, you would not write code to describe it — you would write it in plain English, covering every decision, every contingency, every standard.

That document is a skill, and it is what small companies competing in any knowledge-intensive field can build to encode their expertise and leverage it at AI scale.
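Mechanically, composing skills into an agent’s context can be as simple as concatenating markdown blocks. The skill names and instructions below are hypothetical examples, not Garry’s actual GStack files:

```python
def build_system_prompt(skills):
    """skills: mapping of skill name -> markdown instructions.
    Concatenate them into one block a harness can prepend to its
    system prompt, in stable (alphabetical) order so prompts are
    reproducible across runs."""
    return "\n\n".join(
        f"## Skill: {name}\n{text.strip()}"
        for name, text in sorted(skills.items())
    )

# Hypothetical skills written in plain language.
prompt = build_system_prompt({
    "qa": "Open every new page in Playwright and screenshot it.",
    "ceo": "Prioritize features by user impact, not novelty.",
})
```

Because skills are plain markdown, they can be versioned, reviewed, and improved like any other document, without touching the harness at all.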

Will You Control Your Tools or Will Your Tools Control You?

The most powerful question Garry Tan raised in his Lightcone conversation is one that every founder, creator, and operator building with AI needs to sit with right now.

Will you have control over your own tools, or will your tools have control over you?

Large companies have entire teams of product managers, machine learning engineers, and infrastructure specialists who define how their AI systems behave.

Small companies competing in that same landscape cannot afford to outsource that thinking to a third-party platform whose incentives may not align with theirs.

The solution is not to avoid AI.

The solution is to understand it deeply enough to write your own prompts, define your own skills, run your own agents, and build systems that reflect your specific goals, your specific users, and your specific definition of quality.

Garry’s GStack skill system is a working example of this.

His CEO skill, his plan-review skill, his QA automation with Microsoft Playwright — none of these came from a product manager at Anthropic or OpenAI.

They came from a builder who sat with the tools long enough to understand their failure modes and then wrote markdown-based instructions that addressed each one specifically.

That is the advantage available to small companies competing at the highest levels right now — and it compounds over time as the skills get better, the agents get smarter, and the cost of compute continues to fall.

What 2026 Looks Like for Small Companies Competing With the Giants

The moment we are living through in 2026 is what Garry Tan compared to the Homebrew Computer Club era — the period in the mid-1970s when the Apple I, built by Steve Wozniak and Steve Jobs inside a wooden case with nails and duct tape, made personal computing real for the first time.

It was rough, it required skill to operate, and it broke down constantly.

But the people who pushed through and learned to fix it themselves gained access to something that changed everything.

AI coding agents, agentic research systems, multi-agent developer workflows, and open-source local AI infrastructure are at that same Homebrew stage right now.

They are extraordinary 95% of the time, and they require a mechanic the other 5%.

But the teams who learn to be that mechanic — who learn to write the skills, direct the agents, token max with intention, and maintain quality through rigorous testing — are the ones who will be 400x more productive than their competitors within the next few years.

Small companies competing with large organizations do not need to match headcount.

They need to master leverage.

And the leverage available through AI agent orchestration, token maxing, and skill-based agentic workflows in 2026 is the most powerful productivity force that has ever existed for independent builders.

The gap between a two-person team and a two-hundred-person company is not gone.

But for the first time in history, it is small enough to cross.
