An accelerator reports that it has supported 300 startups over five years. Applications are rising. Events are well-attended. When asked how many startups have reached repeat revenue or secured follow-on funding, the data is not systematically tracked.
Motion is not compounding. This distinction is obvious when stated. It is remarkably easy to lose sight of when you are inside a system that has become very good at producing motion.
S8 is the stall where what can be counted — programmes, events, participants, partnerships, reports — expands reliably, while what converts into durable outcomes is structurally harder to observe. The system is not idle. It is busy in a particular way that does not accumulate.
The stall is not that the activity is worthless. Some of it produces real outcomes. The stall is that the system has learned to treat activity as a proxy for throughput — and that proxy has never been tested. Once a programme is well-attended and well-reviewed, its continuation is treated as justified. What it converts into is a separate question that is rarely asked with the same rigour.
Activity metrics have a structural advantage over throughput metrics: they are fast, abundant, and flattering. A programme launch generates visible outputs immediately. Whether it produces anything durable takes 18–24 months to know, involves ambiguous attribution, and frequently produces numbers that are harder to defend.
The political economy of this asymmetry matters. The people responsible for running programmes are also responsible for reporting their success. Throughput metrics — survival rates, conversion rates, revenue at exit — are expensive to track, slow to arrive, and frequently unflattering. Activity metrics are cheap, immediate, and can always be framed positively. The system learns which kind of evidence is rewarded, and produces more of it.
What can be counted dominates what is known. What matters most — commitment, exclusion, redesign, the accumulation of consequence — often leaves little formal trace.
Ecosystem Stewardship · Chapter 2

This is not cynicism. It is a straightforward description of rational behaviour under the incentives that most ecosystem programmes face. Funders want evidence of progress. Progress is demonstrated through activity. Activity is therefore what gets measured and reported. The loop closes without anyone deciding to be dishonest about outcomes.
None of the throughput questions are unfair. They are simply harder to answer — and harder to answer well. The first time you ask them seriously, the numbers are usually worse than the activity picture suggests. That is exactly the information a steward needs, and exactly why the system tends not to generate it.
S8 and S7 — Narrating Instead of Testing — are the two most common stall partners in the database, forming the Narrative × Activity Stack. The mechanism is not coincidental. Activity provides narrative with fresh material. Narrative frames activity as evidence of strategic progress. Neither requires outcomes to sustain the other.
Understanding S8 in isolation is useful. Understanding it as part of that stack is what reveals why single interventions — introducing conversion metrics, requiring outcome reporting — so often fail to shift the system. The narrative adapts to incorporate the new data. Activity is reframed as early-stage investment. The stack absorbs the intervention.
The leverage move for S8 is narrow and specific: require one high-status programme — the flagship accelerator, the anchor partnership initiative, the main funding vehicle — to surface throughput data alongside activity counts in its next renewal document. Not as additional context. As a condition of renewal.
The resistance this produces is diagnostic. "These metrics are unfair to early-stage programmes." "Market conditions vary." "Our cohorts need more time." These are not wrong objections. They are signals about what the activity metrics have been doing — protecting the programme from the question of whether it is producing what it claims.
The steward does not need to win the argument. They need to hold the condition. The programme can still make its case. It simply has to make it with throughput data on the table.
S8 has high X-side observability — activity outputs are public, abundant, and well-documented. The Y-side challenge is structural: throughput data is often genuinely absent rather than hidden. The stall is real even when the absence of conversion data reflects tracking failure rather than deliberate suppression.
Confidence in S8 increases when activity expansion is disproportionate to evidence base growth, when programme renewals do not reference conversion data, and when the absence of throughput metrics is treated as normal rather than as a gap. Where conversion tracking exists but data is unflattering, confidence is higher still.
S8 does not imply that activity is valueless — only that the system has not tested whether it is converting. That test is the Y-side. Where it has never been conducted, the stall claim is warranted regardless of whether the activity happens, in fact, to be producing outcomes.
S8 is present in 88% of diagnostics — usually alongside S7. Find out whether your cluster's activity is converting, or cycling.
Request a Diagnostic →