After a diagnosis, people expect leverage to follow from insight. They assume that naming the stall reveals a control point. It does not. Mature ecosystems are robust precisely because they have learned to absorb change. Even well-designed reforms are metabolised by the very behaviours they were meant to challenge.
New funding becomes more activity. New governance becomes more coordination. New metrics become more reporting. This is not resistance in the conventional sense. It is competence. The system is very good at remaining the same while appearing to change.
Genuine leverage points are uncommon. When they exist, they are rarely dramatic. They do not look like transformation. They look like a slight increase in discomfort — a small failure of the system's usual protective responses.
The most important reframing is this: leverage is not the ability to force the system to change. It is the ability to make it slightly harder for the system to continue not changing.
Stacks persist because they protect things that are genuinely worth protecting — legitimacy, relationships, careers, coalitions. A leverage move does not destroy those protections. It withdraws one of them, in one bounded domain, for long enough to see whether the system learns something it would not otherwise have learned.
Leverage is not a bigger push. It is a small withdrawal of protection — a perturbation that makes it slightly harder for the system to continue stabilising around the same equilibrium.
Ecosystem Stewardship · Chapter 9

This matters practically. A steward who tries to dismantle a stack directly will encounter resistance proportional to what the stack protects. A steward who withdraws one protection in one domain, and observes what happens, is doing something the stack cannot easily absorb — because the withdrawal is specific enough to produce a signal, rather than general enough to be managed away.
Before any leverage move, there is a prior question: is the system actually exposed to consequence? Where it is not, leverage does not yet exist. The correct move is observation — improving visibility before attempting change.
Seven questions diagnose exposure. Where does feedback land — at the decision point, or in an intermediary forum where it can be narrativised? What is the latency — do signals arrive in weeks, or only years after the actors have moved on? Does renewal happen by default, or must it be argued for? Is exit credible? Which commitments actually become binding? Is blame concentrated or diffused? Do weak signals survive long enough to reach authority?

When most of these questions come back unfavourably, the system is not exposed. Where it is not, leverage does not yet exist, and intervening produces activity, not learning.
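The exposure check above can be sketched as a simple scoring structure. This is an illustrative paraphrase only: the field names and the majority threshold are assumptions introduced here, not terminology or scoring rules from the diagnostic itself.

```python
from dataclasses import dataclass, fields

@dataclass
class ExposureCheck:
    """One boolean per diagnostic question; True means the system is
    exposed to consequence on that axis. Field names are illustrative
    paraphrases of the seven questions, not official terminology."""
    feedback_lands_at_decision_point: bool
    latency_is_short: bool
    renewal_requires_a_decision: bool
    exit_is_credible: bool
    commitments_become_binding: bool
    blame_is_concentrated: bool
    weak_signals_reach_authority: bool

    def is_exposed(self, threshold: int = 4) -> bool:
        # "Most of these" is modelled here as a simple majority of
        # favourable answers; the threshold is an assumption.
        yes = sum(getattr(self, f.name) for f in fields(self))
        return yes >= threshold

# A system where most answers come back unfavourably: not yet exposed,
# so the correct move is observation, not intervention.
check = ExposureCheck(
    feedback_lands_at_decision_point=False,
    latency_is_short=False,
    renewal_requires_a_decision=True,
    exit_is_credible=False,
    commitments_become_binding=False,
    blame_is_concentrated=False,
    weak_signals_reach_authority=True,
)
```

The point of writing the check down this way is that the verdict is forced by the answers, not by whoever is summarising them.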
These are not prescriptions. They are patterns that appear across ecosystems where stewards have successfully shifted regimes. Each is a hypothesis for testing in a specific context — not a guaranteed outcome.
In most ecosystems, activity is public while outcomes are private, lagged, or diffuse. Require one high-status artefact — a flagship programme review, a funding renewal — to surface downstream conversion data alongside participation counts. Not as additional context. As a condition of continuation. The fund resists: these metrics are unfair to early-stage work. The steward does not argue. The requirement stands. This does not dictate performance. It removes the ability to rely on activity as a permanent proxy for success.
Stacks persist because renewal is safer than removal. Time-bound one coordination structure, or place a real sunset on one programme. Require explicit renewal criteria that are harder to meet than continued existence. The group protests: alignment takes time, relationships are fragile. The steward does not debate. The clock starts. This does not mandate a specific outcome. It removes the option of continuing indefinitely without having to prove value.
Narrative regimes reduce discrimination — everything becomes evidence of progress. Reintroduce contrast: compare independent and incumbent-integrated pathways, separate outcomes across programmes rather than aggregating them, place local performance against external benchmarks that cannot be explained away. The goal is not competition. It is consequence. The next report includes programme-level data. This does not prescribe which pathway wins. It prevents the system from avoiding comparison by aggregating everything into a single success story.
In systems where explanation absorbs scrutiny, restraint can be leverage. Pause amplification in one bounded domain until behavioural evidence appears. A sector strategy claims regional leadership in a specific technology. Ask: what observable evidence supports this? Early-stage activity. Strong research. Growing interest. Respond: I will not reference this claim publicly until we can show repeat commercial adoption. This does not forbid the strategy. It removes the automatic conversion of articulation into legitimacy. Where the system continues without the story, confidence hardens. Where it does not, learning accelerates.
Where leverage exists, resistance follows. More time is needed. The metrics are unfair. Conditions are exceptional. Reputations are at risk. Cohesion is fragile.
These responses are commonly interpreted as obstacles to overcome. They are better understood as diagnostic signals. They indicate what the system is protecting, which actors bear risk under exposure, and which regimes are load-bearing rather than peripheral.
Resistance does not mean the intervention is wrong. It means it has touched something consequential. This is one of the few ways stewards learn whether a regime is substitutive or necessary — by observing what happens when its protection weakens, even slightly.
In these conditions, the steward's responsibility is not reform. It is observation. Making specific signals visible so that future judgement is possible. This is not passivity — it is sequencing. Many ecosystem failures are caused by premature action in systems that had not yet revealed how they actually learn.
Leverage hypotheses are testable perturbations, not prescriptions. The framework is explicit on this point: identifying a leverage point does not tell you whether an intervention will work, and that uncertainty is irreducible. What the diagnostic provides is an evidence-based hypothesis about where the system is exposed to consequence and where small moves are most likely to produce learning rather than absorption.
Success metrics for leverage moves should be defined in advance: what would constitute evidence that the move is working, what would constitute evidence that it is being absorbed, and what would constitute evidence that the stall claim was wrong. Without pre-defined criteria, post-hoc rationalisation of results is structurally unavoidable.
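Pre-defining those three kinds of evidence amounts to writing a small, frozen record before the move begins. A minimal sketch follows; the record type and every field value are hypothetical examples, not prescribed wording from the framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: criteria cannot be edited after the fact
class LeverageMovePreRegistration:
    """Success criteria fixed in advance of a leverage move, so that
    results cannot be rationalised post hoc. All contents illustrative."""
    move: str
    evidence_of_working: str
    evidence_of_absorption: str
    evidence_stall_claim_wrong: str

# Hypothetical pre-registration for the sunset move described earlier.
sunset_move = LeverageMovePreRegistration(
    move="Place a hard sunset on one coordination structure",
    evidence_of_working="Renewal case cites outcomes, not continued existence",
    evidence_of_absorption="Structure is renamed or merged before the sunset date",
    evidence_stall_claim_wrong="Outcome data improves with no behavioural change",
)
```

Because the record is frozen, the only honest way to revise the criteria is to retire the move and register a new one, which is exactly the discipline the pre-definition is meant to enforce.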
The ClusterOS diagnostic produces leverage hypotheses grounded in your specific stack configuration — not generic recommendations. That is where intervention design begins.
Request a Diagnostic →