Lucky Knowledge

A venture capitalist backs three B2B SaaS companies over six years. All three exit successfully. The investor develops a reputation for understanding B2B SaaS. LPs cite the track record. Founders seek them out. The investor writes a blog post about what they’ve learned: the importance of net revenue retention, the signals that distinguish good enterprise sales motions from bad ones, the founder archetypes that succeed in the space.

Here is a question that almost nobody asks: how would you know if that investor was wrong about all of it?

Not wrong about the outcomes. The outcomes are real. Three exits, real returns, real money. Wrong about the connection between what they believe they know and why those outcomes happened. Wrong in a way that the track record not only fails to reveal but actively conceals.

This is not a hypothetical problem. It is a structural feature of how the venture industry generates and validates knowledge. And I think it deserves more scrutiny than it gets.

The feedback loop that isn’t

Knowledge, in most domains, gets tested. A doctor prescribes a treatment and observes whether the patient improves. A structural engineer designs a bridge and it either holds or it doesn’t. The feedback is direct, relatively fast, and causally tight enough that beliefs get corrected over time. Wrong beliefs produce observable consequences that force revision.

Venture capital has almost none of this. The feedback loop is structurally broken in at least four ways.

First, outcomes are rare. A typical VC makes twenty to thirty investments per fund. Of those, maybe three to five produce meaningful returns. The sample size for learning is tiny, and the base rate for any specific pattern is too low to distinguish signal from noise.

Second, cycles are long. The time between an investment decision and its outcome is seven to ten years. By the time you know whether a bet was right, the market conditions, the competitive landscape, and the technology stack have all changed. The “lesson” from a 2015 investment is being applied to a 2025 decision in a different world.

Third, confounders are everywhere. A successful investment is the product of founder quality, market timing, competitive dynamics, macroeconomic conditions, hiring luck, regulatory shifts, and dozens of other variables that interact in ways nobody can fully decompose. Attributing the outcome to any single factor, let alone to the investor’s judgment, requires a causal confidence that the evidence cannot support.

Fourth, the industry rewards confident attribution regardless of whether the attribution is correct. LPs want a narrative. Founders want conviction. Media wants a story. The person who says “I backed three winners and I’m not sure I know why” does not get profiled in Forbes. The person who constructs a compelling post-hoc narrative does.

Under those conditions, beliefs about what works don’t get tested. They get reinforced.
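The sample-size point can be made concrete with a small simulation. The hit rates below are made-up assumptions, not industry data: suppose a "skilled" investor converts 18% of deals into meaningful returns and a no-edge investor converts 12%, each over a 25-investment fund. The question is how often pure base rate produces a fund that looks great anyway.

```python
import random

random.seed(0)

def fund_hits(hit_rate, n_investments=25):
    """Count how many of n_investments succeed at a given per-deal hit rate."""
    return sum(random.random() < hit_rate for _ in range(n_investments))

# Illustrative, assumed hit rates: a real edge vs. pure base rate.
SKILLED, LUCKY = 0.18, 0.12
TRIALS = 100_000

skilled = [fund_hits(SKILLED) for _ in range(TRIALS)]
lucky = [fund_hits(LUCKY) for _ in range(TRIALS)]

# How often does each investor land 4+ hits, i.e. a "great" fund?
great_lucky = sum(h >= 4 for h in lucky) / TRIALS
great_skilled = sum(h >= 4 for h in skilled) / TRIALS
print(f"P(4+ hits | no edge):   {great_lucky:.2f}")
print(f"P(4+ hits | real edge): {great_skilled:.2f}")
```

Under these assumptions, roughly a third of no-edge funds post a "great" track record, which is exactly why a single fund's outcomes can't separate the two populations.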

Accidentally correct

There’s a problem in epistemology that illuminates this. In 1963, Edmund Gettier published a three-page paper that dismantled the classical definition of knowledge as “justified true belief.” His argument was deceptively simple: you can have a belief that is justified (you have good reasons for holding it), and true (it corresponds to reality), and still not have knowledge, because the justification and the truth are connected by accident rather than by the right kind of causal relationship.

Apply this to venture. An investor believes a company will return 10x. The belief is justified: the retention metrics are strong, revenue is growing, the NPS is high. The belief turns out to be true: the company does return 10x. But the return came from an acqui-hire. A larger company wanted the engineering team, not the product. The retention metrics, the revenue growth, the NPS, none of it mattered to the actual outcome. The investor was right, but right for reasons that had nothing to do with their justification.

Under standard epistemology, that’s not knowledge. It’s accidentally correct belief. But the investor’s track record records it as a win. Their “pattern recognition” absorbs it as confirmation. The next time they see strong retention metrics, they feel more confident, not less. The accidental correctness compounds into false certainty.

I want to be careful about scope here. I’m not arguing that all venture knowledge is illusory. Many successful investments are probably roughly correct in thesis. The founder really was exceptional. The market really was ready. The product really was differentiated. But “probably roughly correct” is doing a lot of work in that sentence, and the industry has no mechanism to distinguish the cases where the thesis was genuinely right from the cases where the outcome was right for unrelated reasons. The absence of that mechanism is the epistemological problem. Not the frequency of any specific type of error.

Why confidence makes it worse

If the only consequence of lucky knowledge were occasional misattribution, it would be a minor academic curiosity. What makes it consequential is that the venture industry actively selects for confident attribution.

The selection operates at every level. LPs allocate to managers who can articulate what they know and why it works. Founders choose investors who “get” their space. Partners within a firm defer to colleagues with domain track records. Media amplifies pattern-recognition narratives. At no point in this chain does anyone have an incentive to say “the sample size is too small to know whether this pattern is real.”

A Bayesian would describe it this way: the prior belief (this investor understands B2B SaaS) gets updated by observed outcomes (their investments made money). But the likelihood function, the probability of observing those outcomes given that the belief is true versus given that the belief is false, is extraordinarily noisy. Three successes in thirty investments, with seven-to-ten-year feedback cycles and pervasive confounders, barely moves the posterior. The evidence is consistent with genuine expertise. It’s also consistent with a moderately lucky portfolio in a rising market. The data can’t distinguish the two, and nobody has an incentive to point that out.
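The "barely moves the posterior" claim can be checked with arithmetic. The hit rates here are illustrative assumptions: say a genuine expert converts 15% of deals and an average investor converts 10%, and we observe four meaningful wins out of thirty investments, starting from an agnostic prior.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Assumed hit rates: a genuine expert vs. an average investor.
P_EXPERT, P_AVERAGE = 0.15, 0.10
prior = 0.5            # start agnostic about whether this investor has an edge
k, n = 4, 30           # observed: 4 meaningful wins out of 30 investments

like_expert = binom_pmf(k, n, P_EXPERT)
like_average = binom_pmf(k, n, P_AVERAGE)
posterior = (like_expert * prior) / (
    like_expert * prior + like_average * (1 - prior)
)
print(f"posterior P(expert | 4 wins in 30) = {posterior:.2f}")
```

Under these assumptions the posterior lands near 0.53: a career-defining track record shifts an agnostic prior by a few percentage points. Everything else in the reputation is narrative.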

The result is an industry where confidence accumulates faster than evidence, and where the gap between the two is structurally invisible.

What would fix this

I don’t think there’s a clean answer, and I’m wary of the essay that diagnoses an intractable epistemic problem and then offers a tidy solution in the closing paragraphs. That would itself be a kind of unjustified confidence, so you won’t find it here.

But I think the question is worth holding: if the core problem is that the feedback loop between belief and outcome is too long, too noisy, and too confounded to generate reliable knowledge, is there any institutional structure that shortens the loop, reduces the noise, or makes the confounders more visible?

The beginnings of an answer might look like structured decision points that make the justification explicit and testable at each investment stage rather than waiting a decade for an outcome that may have nothing to do with the original thesis. A system where you record what you believed, why you believed it, what evidence would change your mind, and then actually check. Not once, at exit, but repeatedly, at every point where a meaningful decision is made.
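As a sketch of what such a record might contain, here is a minimal, hypothetical structure (all field names and the example entry are invented for illustration, not a description of any existing system):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in a hypothetical decision journal for an investment thesis."""
    decision: str                # what was decided
    belief: str                  # what we believed at the time
    justification: str           # why we believed it
    disconfirming_evidence: str  # what observation would change our mind
    recorded: date = field(default_factory=date.today)
    reviews: list = field(default_factory=list)

    def review(self, observed: str, thesis_held: bool) -> None:
        """Log a check: did the justification survive contact with reality?"""
        self.reviews.append({
            "date": date.today(),
            "observed": observed,
            "thesis_held": thesis_held,
        })

# Invented example entry.
record = DecisionRecord(
    decision="Invest at Series A",
    belief="Net revenue retention above 120% predicts durable growth",
    justification="NRR at 128%, driven by expansion within existing accounts",
    disconfirming_evidence="NRR below 110% for two consecutive quarters",
)
record.review(observed="NRR held at 125% through year one", thesis_held=True)
```

The structure matters less than the discipline: the disconfirming evidence is written down before the outcome, so the post-hoc narrative has something fixed to answer to.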

Whether that’s achievable at scale is a different question. Whether it maps cleanly onto venture is another. But the fact that almost nobody in the industry is asking it tells you something about how comfortable the current arrangement is for the people inside it.

I include myself in that observation. I hold beliefs about what makes venture creation work. I (think I) have justifications for those beliefs. Some of those beliefs will turn out to be true; others won’t, and I don’t get to choose which. The honest question, the one I don’t think this industry asks often enough, is whether I’ll be able to tell the difference between the ones I got right and the ones I got lucky on.

I suspect the answer, for most of us, is no. Not because we’re not thoughtful. Because the conditions don’t allow it. And acknowledging that seems like a prerequisite for building something better.