Lab Notes · Stall Series

Scaling Activity Instead of Throughput

An accelerator reports that it has supported 300 startups over five years. Applications are rising. Events are well-attended. But when asked how many of those startups have reached repeat revenue or secured follow-on funding, the accelerator has no answer: the data is not systematically tracked.

Andrew Barrie · March 2026 · 7 min read
88% frequency across 75+ diagnostics · Third most common stall identified

Motion is not the same as compounding. This distinction is obvious when stated. It is remarkably easy to lose sight of when you are inside a system that has become very good at producing motion.

S8 is the stall where what can be counted — programmes, events, participants, partnerships, reports — expands reliably, while what converts into durable outcomes is structurally harder to observe. The system is not idle. It is busy in a particular way that does not accumulate.

What the stall actually is

S8 · Substitution pattern
X-side · What expands
Programme launches. Event attendance. Cohort sizes. Participant counts. Partnership announcements. Reports published. Initiatives started.
Y-side · What doesn't
Conversion to repeat demand. Follow-on funding rates. 24-month survival. Commercial relationships that persist without the programme. Learning that accumulates.
The asymmetry is structural, not accidental. Activity metrics are generated as a byproduct of doing things. Throughput metrics require deliberate tracking infrastructure and the willingness to publish unflattering numbers.

The stall is not that the activity is worthless. Some of it produces real outcomes. The stall is that the system has learned to treat activity as a proxy for throughput — and that proxy has never been tested. Once a programme is well-attended and well-reviewed, its continuation is treated as justified. What it converts into is a separate question that is rarely asked with the same rigour.

Why the proxy holds for so long

Activity metrics have a structural advantage over throughput metrics: they are fast, abundant, and flattering. A programme launch generates visible outputs immediately. Whether it produces anything durable takes 18–24 months to know, involves ambiguous attribution, and frequently produces numbers that are harder to defend.

The political economy of this asymmetry matters. The people responsible for running programmes are also responsible for reporting their success. Throughput metrics — survival rates, conversion rates, revenue at exit — are expensive to track, slow to arrive, and frequently unflattering. Activity metrics are cheap, immediate, and can always be framed positively. The system learns which kind of evidence is rewarded, and produces more of it.

What can be counted dominates what is known. What matters most — commitment, exclusion, redesign, the accumulation of consequence — often leaves little formal trace.

Ecosystem Stewardship · Chapter 2

This is not cynicism. It is a straightforward description of rational behaviour under the incentives that most ecosystem programmes face. Funders want evidence of progress. Progress is demonstrated through activity. Activity is therefore what gets measured and reported. The loop closes without anyone deciding to be dishonest about outcomes.

What the distinction looks like in practice

Activity metric → Throughput equivalent
300 startups supported → How many reached £500k revenue within 24 months?
12 corporate partnerships announced → How many led to a commercial contract or repeat engagement?
450 participants in accelerator programme → What proportion secured follow-on funding outside the programme?
6 research-industry collaboration events → How many resulted in a funded joint project or IP agreement?
Ecosystem headcount grew 18% → What is the net retention rate of high-TRL ventures at 36 months?

None of the throughput questions are unfair. They are simply harder to answer — and harder to answer well. The first time you ask them seriously, the numbers are usually worse than the activity picture suggests. That is exactly the information a steward needs, and exactly why the system tends not to generate it.
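The measurement discipline the throughput questions demand can be made concrete. The sketch below is a minimal, hypothetical example — the `Venture` record, field names, and cohort data are all invented for illustration, not drawn from any real diagnostic. The one design choice worth noting is in the denominator: ventures that have not been tracked for the full window are excluded rather than counted as failures, so the rate is a genuine conversion figure, not an activity count in disguise.

```python
from dataclasses import dataclass

@dataclass
class Venture:
    name: str
    months_tracked: int       # how long outcomes have been observed
    repeat_revenue: bool      # reached repeat revenue
    follow_on_funding: bool   # secured funding outside the programme

def conversion_rate(cohort, outcome, min_months=24):
    """Share of sufficiently-tracked ventures satisfying `outcome`.

    Ventures tracked for fewer than `min_months` are excluded from the
    denominator rather than counted as failures — the window simply
    has not elapsed for them yet.
    """
    eligible = [v for v in cohort if v.months_tracked >= min_months]
    if not eligible:
        return None  # no basis yet for a throughput claim
    return sum(outcome(v) for v in eligible) / len(eligible)

cohort = [
    Venture("A", 30, True, True),
    Venture("B", 26, False, True),
    Venture("C", 24, False, False),
    Venture("D", 10, True, False),   # too recent: excluded
]

print(conversion_rate(cohort, lambda v: v.repeat_revenue))     # 1 of 3
print(conversion_rate(cohort, lambda v: v.follow_on_funding))  # 2 of 3
```

Note what the sketch makes visible: the activity metric here is "4 ventures supported", while the throughput answer rests on only 3 eligible records — and that gap between headline count and evidential base is exactly the asymmetry the table describes.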

The relationship with S7

S8 and S7 — Narrating Instead of Testing — are the two most common stall partners in the database, forming the Narrative × Activity Stack. The mechanism is not coincidental. Activity provides narrative with fresh material. Narrative frames activity as evidence of strategic progress. Neither requires outcomes to sustain the other.

Understanding S8 in isolation is useful. Understanding it as part of that stack is what reveals why single interventions — introducing conversion metrics, requiring outcome reporting — so often fail to shift the system. The narrative adapts to incorporate the new data. Activity is reframed as early-stage investment. The stack absorbs the intervention.

Where leverage exists

The leverage move for S8 is narrow and specific: require one high-status programme — the flagship accelerator, the anchor partnership initiative, the main funding vehicle — to surface throughput data alongside activity counts in its next renewal document. Not as additional context. As a condition of renewal.

The resistance this produces is diagnostic. "These metrics are unfair to early-stage programmes." "Market conditions vary." "Our cohorts need more time." These are not wrong objections. They are signals about what the activity metrics have been doing — protecting the programme from the question of whether it is producing what it claims.

The steward does not need to win the argument. They need to hold the condition. The programme can still make its case. It simply has to make it with throughput data on the table.

Epistemic note

S8 has high X-side observability — activity outputs are public, abundant, and well-documented. The Y-side challenge is structural: throughput data is often genuinely absent rather than hidden. The stall is real even when the absence of conversion data reflects tracking failure rather than deliberate suppression.

Confidence in S8 increases when activity expansion is disproportionate to evidence base growth, when programme renewals do not reference conversion data, and when the absence of throughput metrics is treated as normal rather than as a gap. Where conversion tracking exists but data is unflattering, confidence is higher still.

S8 does not imply that activity is valueless — only that the system has not tested whether it is converting. That test is the Y-side. Where it has never been conducted, the stall claim is warranted regardless of whether the activity is producing outcomes.

S8 is present in 88% of diagnostics — usually alongside S7. Find out whether your cluster's activity is converting, or cycling.
