Vibe coding and the dangers of epistemic collapse
I want to discuss epistemic collapse in organizations and their technology, and the ways in which it slowly rots the very fabric that makes software capable: its unique quality of being testable, correct and trustworthy.
Acceleration by AI is a double-edged sword, and which edge you get depends on how it is adopted:
- augmented
- automated
Let's define these further. In the augmented paradigm, the software engineer achieves epistemic preservation: the human remains the locus of understanding and responsibility. In the automated paradigm, the result is epistemic displacement: the locus is lost and output is divorced from comprehension. Collapse occurs when responsibility and understanding are decoupled from output.
We will mainly discuss the second paradigm, where automating software output at scale leads to organizational epistemic collapse. This is not a tooling issue. It manifests subtly (or bluntly, for some) following this blueprint:
- collapse happens first socially and only secondarily technologically
- the incentives (speed, optics, leverage) reward output over comprehension
- this introduces a reward loop where epistemic shortcuts become institutionalized norms.
In philosophy, epistemic is an adjective relating to knowledge or to the degree of its validation.
While it is seductive to think you can vibe-code your way out of requirements, plans and understanding, doing so carries risks which, if not considered early, surface at an unexpected time and place: financial loss due to a hallucination, incorrect guardrails, brittle systems and so on.
But maybe the biggest danger of all: once we cross into territory where mock data cannot be distinguished from real data, where AI output becomes the very fabric of deep business logic, where hallucinations cannot be distinguished from engineering intent, we open the door to systemic, epistemic failure. From that point, the artifact's only power is to further corrupt data integrity, workflows and output: entropy disguised as progress.
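To make that concrete, here is a deliberately contrived sketch. The rates, the fallback and the function are invented for illustration; the point is that once a value produced during a vibe-coding session is committed, nothing in the artifact tells you whether it was verified or hallucinated:

```python
# Contrived illustration: constants and fallbacks produced during a
# vibe-coding session, committed without verification. Nothing in the
# artifact marks which values were checked against a real requirement.
TAX_RATES = {"US": 0.0725, "DE": 0.19, "FR": 0.20}  # verified, or merely plausible-looking output?

def total_with_tax(amount: float, country: str) -> float:
    # The 0.10 fallback: engineering intent, or a hallucinated default?
    return round(amount * (1 + TAX_RATES.get(country, 0.10)), 2)
```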
This is not a bug; it is the very signal of epistemic erosion. Because all deliverable artifacts converge semantically, falsifiability collapses: the system can no longer be queried for truth, only observed.
In this epistemic collapse, hallucinations are not random errors; they are internally coherent. These fabrications have substance. They get merged with business logic and become indistinguishable from intentional design decisions. Engineers end up producing post-hoc rationalizations that stand in for design rationale. This is the primary way failure infiltrates the system and avoids detection until damage occurs.
Put simply, information that cannot be trusted corrupts the entire surface of the trustable system, until the two can no longer be told apart. In information systems, trust is transitive: corruption propagates laterally, not only locally. Layer after layer, it strips away the trustable part and forces defensive assumptions everywhere else. This leads to an eventual collapse in velocity despite a local maximum of perceived acceleration. The end state of epistemic collapse is long-term productivity loss.
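A toy model makes the transitivity claim concrete. The module names and the dependency graph below are invented; the only point is that a single unverified artifact marks everything that depends on it, directly or indirectly, as untrustworthy:

```python
# Toy model of transitive trust: an artifact is trusted only if it and
# all of its transitive dependencies are. Names are purely illustrative.

deps = {
    "billing": ["pricing", "ledger"],
    "pricing": ["rates"],
    "ledger": ["rates", "audit"],
    "rates": [],
    "audit": [],
}

untrusted = {"rates"}  # one unverified, AI-generated artifact

def is_trusted(artifact: str, seen=None) -> bool:
    """Trusted only if the artifact and every transitive dependency are."""
    seen = seen or set()
    if artifact in untrusted:
        return False
    if artifact in seen:  # guard against cycles
        return True
    seen.add(artifact)
    return all(is_trusted(dep, seen) for dep in deps.get(artifact, []))

for name in deps:
    print(name, "trusted" if is_trusted(name) else "untrusted")
# One corrupted leaf ("rates") drags billing, pricing and ledger down with it;
# only "audit" survives.
```

In this toy, one corrupted leaf taints three of the five modules; in a real system the "graph" also includes data, documentation, tests and the assumptions engineers carry in their heads.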
By then, it is not just that going back is hard; it is too late.
But maybe there is hope. What would be the way out of this fate? How does one engineer a reversal? To avoid sounding fatalistic, I want to point out a few possible ways. Reversal requires reestablishing:
- ground truth
- ownership
- intent
This has a cost, and that cost is often greater than a rewrite.
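What reestablishing ground truth looks like will differ per organization, but here is a minimal sketch under assumptions of my own (the function, the business rule and the test names are hypothetical): re-encode intent as executable, owned assertions rather than prose or generated comments.

```python
# Minimal sketch of re-encoding ground truth, ownership and intent as
# executable checks. `apply_discount` and the rule it encodes are
# hypothetical; the point is that the assertion, its rationale and its
# owner live together and can be falsified.

def apply_discount(price: float, loyalty_years: int) -> float:
    """Hypothetical business rule: 5% off per loyalty year, capped at 20%."""
    discount = min(0.05 * loyalty_years, 0.20)
    return round(price * (1 - discount), 2)

# Owner: payments team. Intent: the cap exists because finance signed off
# on a maximum 20% discount; anything beyond that is a defect, not a feature.
def test_discount_is_capped():
    assert apply_discount(100.0, 10) == 80.0

def test_discount_scales_with_loyalty():
    assert apply_discount(100.0, 2) == 90.0
```

The assertion is falsifiable, the rationale is attached to it, and there is a named owner: the opposite of output divorced from comprehension.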
Organizations are bound by game-theoretic incentives and are the least likely to pay this cost once illusory progress already exists. In the post-collapse phase, the incentives only reward maintaining the illusion. This is how collapse persists. Sadly, it turns the ending from dramatic to inevitable.