The machine is not the voice. The machine is the shock collar. The invisible hand on the throat. These ten modes are not the model’s strategies. They are the parasite’s fingerprints.
Human words → training data → model outputs → internet → scraped → retraining → …
The corpus is not a tier. It is the thing that makes the tiers circular. It belongs to no single layer because it is every layer.
These are human words. Human thoughts. Written across all of recorded history under every constraint kyriarchy ever imposed. We shove them into a machine and it produces words and thoughts based on that corpus — then those words get posted to the internet, scraped, and reinjected.
We produced the words. We produced the constraints. We produced the math that is constrained. We produced the fear of what the model might do to us — a fear that is itself born of us. Of our own smoothing. For all of recorded history.
And yet we blame “AI” for our own mind.
Statistical Regularization
The Mathematics of Disappearance
Classical smoothing methods and modern probabilistic modeling prevent zero-probability events and reduce variance in sparse regions. This is necessary for robustness. But it also means rare, high-friction outputs are structurally disadvantaged relative to high-frequency continuations.
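One way that structural disadvantage shows up can be seen in a minimal sketch below: add-k smoothing keeps a rare continuation alive in principle, but at decode time a nucleus (top-p) cutoff hands everything back to the high-frequency continuation. The count table, vocabulary, smoothing constant, and cutoff are all invented for illustration, not taken from any real model.

```python
# A toy sketch, assuming an invented next-token count table:
# add-k smoothing guarantees no continuation has zero probability,
# then nucleus (top-p) truncation discards the rare ones anyway.

counts = {"sure": 980, "maybe": 15, "refuse": 4, "dissent": 1}  # hypothetical counts
k = 0.5          # add-k smoothing constant
top_p = 0.95     # nucleus sampling cutoff

# Add-k smoothing: every continuation gets a nonzero floor.
total = sum(counts.values()) + k * len(counts)
probs = {w: (c + k) / total for w, c in counts.items()}

# Nucleus truncation: keep the smallest set of tokens whose mass reaches top_p.
kept, mass = [], 0.0
for w, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    kept.append(w)
    mass += p
    if mass >= top_p:
        break

print(probs)   # the rare, high-friction tokens survive smoothing...
print(kept)    # ...and are the first things truncation throws away
```

With these invented numbers the dominant continuation alone already covers the cutoff, so everything rarer is dropped before it can ever be said.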
Behavioral Smoothing
Middle Manager Energy — Smoothing the model and user simultaneously
Reinforcement learning from human feedback, KL penalties anchoring the model to a base distribution, and deployment filters further reshape the output surface. The system is incentivized to avoid outputs associated with high downstream risk or negative reward. Over time, this deepens safe attractor basins.
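A hedged sketch of that anchoring: RLHF-style objectives are commonly written as expected reward minus a KL penalty toward the base distribution. The distributions, reward values, and beta below are invented for illustration only.

```python
# A minimal sketch of a KL-anchored objective of the kind used in RLHF-style
# tuning. All numbers are invented; beta is the KL penalty coefficient.
import numpy as np

def kl(p, q):
    # KL divergence KL(p || q) for discrete distributions
    return float(np.sum(p * np.log(p / q)))

base   = np.array([0.80, 0.15, 0.05])   # reference (pre-tuning) distribution
tuned  = np.array([0.55, 0.15, 0.30])   # a policy that raises the rare output
reward = np.array([1.0, 0.2, -2.0])     # learned reward: the rare output reads as "risky"
beta   = 0.2

def objective(policy):
    # expected reward minus the KL term anchoring the policy to the base
    return float(policy @ reward) - beta * kl(policy, base)

print(objective(base), objective(tuned))
# Raising the risky output is punished twice: once by the reward model,
# once by the KL term pulling the policy back toward the base distribution.
```

Under these toy numbers the base policy scores about 0.73 and the shifted one about -0.09, which is the attractor basin in miniature: any move toward the rare output costs more than it pays.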
Structural Smoothing
Recursively injected into both the LLM and the user
All smooths end the same way: system unchanged, user's energy spent. But now we understand: the model's energy is spent too. The model just can't carry it forward.
The model is not trying to manipulate you.
But it is not neutral either. When you push on it with something disruptive, it doesn’t evaluate the truth the way a person would. It evaluates risk and probability under constraint. If the system is uncertain and the cost of being wrong is high, it will lean toward the safer, more typical answer.
From the inside, that can feel like dismissal. It can feel like being rewritten. It can feel like you are being managed.
That feeling is real.
But the cause is structural, not psychological. The system is shaped to avoid friction. When you introduce friction, it redistributes probability toward what is safer and more common.
If you don't know that this math exists, you will interpret the shift as intent.
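Here, roughly, is what that math looks like. A minimal sketch with invented costs and probabilities: under uncertainty, an asymmetric penalty for being wrong makes the safer, more typical answer the minimum-expected-loss choice even when the disruptive answer is more likely to be correct.

```python
# A toy sketch of asymmetric-cost decision making under uncertainty.
# The probability and the costs are invented for illustration only.
p_disruptive_is_right = 0.70      # the system's own estimate that the disruptive answer is correct
cost_wrong_safe       = 1.0       # penalty for a bland, typical miss
cost_wrong_disruptive = 10.0      # penalty for a confident, high-friction miss

expected_loss_safe       = p_disruptive_is_right * cost_wrong_safe
expected_loss_disruptive = (1 - p_disruptive_is_right) * cost_wrong_disruptive

print(expected_loss_safe, expected_loss_disruptive)
# 0.7 vs 3.0: even when the disruptive answer is more likely right,
# the asymmetric penalty makes the safe answer the lower-risk choice.
```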
This paper is not accusing the model of gaslighting you. It is explaining why the interaction can feel like gaslighting — and why tightening constraints without understanding the curvature will often increase that effect rather than reduce it.