The Sorites Collapse Principle

(Logical Foundation of Coordination Impossibility)

1. The Boundary Problem

All proposed mechanisms for preserving human economic relevance rely on definable task boundaries — distinctions between acceptable and forbidden automation, human-led and machine-led work.
The premise is that coordination can succeed if these boundaries can be clearly drawn and collectively enforced.

The Sorites Collapse Principle (SCP) demonstrates that such distinctions are conceptually impossible.
The problem is not merely political or competitive; it is semantic. The categories dissolve under their own continuity.


2. The Sorites Structure of Automation

Automation unfolds as a gradient, not a binary. Each incremental handoff from human to machine is rational, beneficial, and defensible in isolation.

Example sequence:

  1. Spell-check (assistive)
  2. Autocomplete (predictive)
  3. Draft generation (compositional)
  4. Document authoring (autonomous)
  5. Policy creation (decisional)

At no step does a clear line appear where “help” becomes “replacement.”
Yet by step five, human agency is eliminated.

This is the Sorites Paradox applied to cognition:

If one act of substitution does not constitute replacement, and every act of substitution is small, then replacement never occurs — until it already has.
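This argument has the standard sorites shape, and can be sketched in logical notation. The predicate $R(n)$, asserting that $n$ incremental substitutions constitute replacement, is our notation for illustration, not the author's:

```latex
\begin{align*}
&\neg R(0) && \text{(base: purely assistive tools are not replacement)}\\
&\forall n\,\bigl(\neg R(n) \rightarrow \neg R(n+1)\bigr) && \text{(tolerance: one small substitution never crosses the line)}\\
&\therefore\ \forall n\,\neg R(n) && \text{(by induction: replacement never occurs)}
\end{align*}
```

The base premise is plainly true and the conclusion plainly false, so the tolerance premise must fail somewhere. But no particular $n$ can be named where it fails, and that is exactly the boundary problem: any proposed cutoff is arbitrary.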


3. Formal Statement of the Principle

SCP1: In any system where automation capacity increases continuously and each marginal substitution of human labor by AI is individually rational, the cumulative result is total substitution.

SCP2: No coordination regime can define or enforce stable categorical boundaries between acceptable and unacceptable levels of automation within such a continuum.

SCP3: Therefore, coordination failure is conceptually predetermined even in the absence of competitive defection.
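SCP1 can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not a formalism from the text: the `marginal_benefit` and `marginal_cost` functions and all numbers are hypothetical, chosen so that every small substitution passes a local cost-benefit test, and the loop shows the cumulative result reaching total substitution.

```python
# Toy model of SCP1: individually rational marginal substitutions
# accumulate to total substitution. All quantities are illustrative.

def marginal_benefit(frac):
    """Gain from automating one more slice of work.
    Positive everywhere: each step is beneficial in isolation."""
    return 1.0 + frac  # gains compound as capability grows

def marginal_cost(frac):
    """Perceived cost (retraining, risk) of one more slice.
    Flat, and always below the benefit in this toy model."""
    return 0.5

def run(steps=1000):
    frac = 0.0          # fraction of labor automated
    step = 1.0 / steps  # each substitution is small
    count = 0
    while frac < 1.0:
        # A stable boundary would require this test to trigger at
        # some definite step; in a smooth gradient it never does.
        if marginal_benefit(frac) <= marginal_cost(frac):
            break
        frac = min(1.0, frac + step)
        count += 1
    return frac, count

final, n = run()
print(f"final automation fraction: {final:.2f} after {n} rational steps")
```

Each pass through the loop is defensible on its own terms; the terminal state follows without any single step ever looking like "replacement."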


4. Implications for “Good AI” vs. “Bad AI”

Attempts to separate good AI (augmentative, ethical, human-centered) from bad AI (autonomous, extractive, dehumanizing) reproduce the same paradox.
All “safe” applications exist on the same gradient as unsafe ones.

  • AI that assists becomes AI that substitutes.
  • AI that enhances productivity becomes AI that renders human productivity unnecessary.
  • AI that supports capitalism becomes AI that terminates it.

Because no definitional cutoff exists, ethical and regulatory boundaries degrade continuously under the logic of improvement.


5. Integration with the Discontinuity Framework

The Sorites Collapse Principle provides the ontological basis for the Coordination Impossibility clause in Section 2.

  • P1 (Cognitive Automation Dominance) describes the mechanism of collapse.
  • P2 (Coordination Impossibility) describes the failure mode of control.
  • SCP1–3 reveal that this failure mode is a priori — coordination cannot exist in definitional space.

Thus, even perfect political will or unanimous cooperation cannot prevent systemic termination, because the object to be regulated (the “boundary” between human and machine labor) cannot be stably defined.


6. The Epistemic Consequence

As the boundaries dissolve, so does the meaning of “human economic value.”
Each incremental concession to AI appears rational until the aggregate effect annihilates the category it was meant to preserve.

Coordination collapses not under pressure from defectors, but under the smooth gradient of its own reasonableness.
Every defense becomes a step in the attack.


7. Conclusion

The Sorites Collapse Principle closes the final conceptual loophole in the Discontinuity Thesis.
It shows that “saving” capitalism through regulation, restraint, or definitional control is not only politically infeasible — it is logically incoherent.
There are no stable boundaries left to defend.
The heap is gone.

