AI-Induced Economic Discontinuity Under Unit Cost Dominance: A Comparative Analysis of Systemic Risk Frameworks
Abstract
This article presents a comparative evaluation of the dominant artificial intelligence risk frameworks and argues that economic discontinuity caused by AI-driven unit cost dominance over human cognitive labour constitutes the earliest and most structurally binding systemic failure mode. Unlike alignment-based, political-regulatory, and long-tail catastrophic risk models, the Discontinuity Thesis identifies a near-term breakdown of wage-mediated demand as a consequence of competitive adoption of low-cost artificial cognition. The analysis formalises the underlying mechanism, contrasts it with the competing frameworks, and specifies explicit falsifiability conditions.
1. Scope and Exclusions
This analysis explicitly excludes:
- Claims regarding artificial general intelligence (AGI)
- Claims regarding machine consciousness or intentionality
- Claims regarding human extinction
- Claims regarding singular catastrophic events
- Claims regarding precise timelines
The analysis is restricted to economic systems behaviour under AI cost asymmetry.
2. Core Definitions
Cognitive Labour
Economically productive information processing activities, including but not limited to analysis, synthesis, writing, coding, modelling, planning, and decision support.
Unit Cost Dominance
A condition in which AI systems produce usable cognitive output at a lower total cost per unit than human labour, where total cost includes the human supervision and error correction the AI output requires.
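The condition can be stated as a per-unit cost inequality. A minimal formalisation follows; the notation is introduced here for precision and is not part of the original definition:

```latex
% Notation introduced for illustration (not in the source text):
%   c_AI(t) : marginal AI cost per unit of usable output on task t
%   s(t)    : per-unit human supervision and error-correction cost
%   w(t)    : fully loaded human labour cost per unit of equivalent output
c_{\mathrm{AI}}(t) + s(t) \;<\; w(t)
```

Dominance is a property of total delivered cost, so a fall in raw inference cost alone does not establish it while the supervision overhead s(t) remains high.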
Wage-Mediated Demand
An economic distribution system in which wages constitute the primary mechanism for allocating purchasing power to households.
Economic Discontinuity
A non-linear structural transition in which incremental technological adoption results in a sudden systemic failure of an existing economic equilibrium.
3. Overview of Dominant AI Risk Frameworks
Contemporary AI risk discourse is dominated by three analytical frameworks:
- Alignment Risk Frameworks – focus on misaligned objectives in highly capable AI systems.
- Political–Regulatory Risk Frameworks – focus on unemployment, inequality, and governance failure.
- Long-Tail Catastrophic Risk Frameworks – focus on low-probability, high-impact end states (e.g. bioweapons, totalitarian control, existential collapse).
Each framework identifies a legitimate class of risk. However, each implicitly assumes continued macroeconomic stability during AI capability scaling.
4. Unit Cost Dominance as the Primary Failure Mechanism
Unlike prior automation technologies, artificial intelligence targets cognitive labour directly, not peripheral physical or routine tasks.
Once unit cost dominance is achieved across a sufficiently large share of cognitive labour:
- Human labour demand declines structurally rather than cyclically
- Wage compression occurs across both skilled and semi-skilled occupations
- Labour market reabsorption mechanisms fail, because substitution operates across domains simultaneously and leaves displaced workers no adjacent occupations to retrain into
This constitutes a qualitative break from historical automation patterns.
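A minimal simulation sketch of this mechanism, with purely illustrative parameters (the task-cost distribution and the AI cost path are assumptions, not estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task economy: 10,000 cognitive tasks with heterogeneous
# human unit costs (the lognormal spread is an assumption for illustration).
human_cost = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

# Assumed AI unit cost path: starts above most human costs and falls by
# a fixed fraction per period (supervision overhead folded into the figure).
ai_cost = 60.0
decline = 0.85  # cost multiplier per period

for period in range(12):
    dominated = ai_cost < human_cost        # tasks where AI wins on unit cost
    human_share = 1.0 - dominated.mean()    # share of tasks humans retain
    print(f"period {period:2d}  ai_cost {ai_cost:6.1f}  human task share {human_share:6.1%}")
    ai_cost *= decline
```

The retained-task share erodes slowly while AI cost sits above the bulk of the distribution, then collapses as the cost path crosses its dense centre: a structural decline with no cyclical recovery term.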
5. Competitive Selection and Coordination Failure
Economic actors operate under competitive selection:
- Firms that fail to adopt lower-cost cognition lose market share
- Jurisdictions that constrain AI adoption lose productive capacity
- Capital reallocates toward AI-intensive production functions
This generates a multi-level Prisoner’s Dilemma in which individually rational adoption produces collectively destabilising outcomes.
Political–regulatory solutions would require global coordination at a scale and speed for which there is no historical precedent, least of all under competitive pressure.
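The coordination failure can be made concrete with a stylised two-firm adoption game; the payoffs below are illustrative orderings, not measurements:

```python
# Stylised adoption game (hypothetical payoffs, chosen only for their ordering).
# Each firm chooses to ADOPT low-cost AI cognition or ABSTAIN. Adoption cuts
# costs and wins market share, but universal adoption erodes aggregate demand.
ADOPT, ABSTAIN = "adopt", "abstain"

payoffs = {
    (ADOPT,   ADOPT):   (2, 2),  # both cut costs; wage-mediated demand shrinks
    (ADOPT,   ABSTAIN): (8, 0),  # adopter takes the abstainer's market share
    (ABSTAIN, ADOPT):   (0, 8),
    (ABSTAIN, ABSTAIN): (5, 5),  # demand preserved, but each firm is tempted
}

def best_response(opponent_move: str) -> str:
    """Move that maximises a firm's own payoff given the opponent's move."""
    return max((ADOPT, ABSTAIN), key=lambda move: payoffs[(move, opponent_move)][0])

# Adoption strictly dominates against either opponent move ...
assert best_response(ADOPT) == ADOPT and best_response(ABSTAIN) == ADOPT
# ... so (adopt, adopt) is the unique equilibrium, despite (5, 5) > (2, 2).
print("equilibrium:", (best_response(ADOPT), best_response(ADOPT)))
```

The same payoff structure recurs at the jurisdictional level, which is why unilateral restraint is selected against rather than rewarded.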
6. Incremental Substitution and Delayed Recognition
Cognitive labour substitution occurs incrementally via task decomposition rather than occupational elimination.
This Sorites-style dynamic, in which no single substituted task is decisive but their accumulation is, produces:
- Local optimisation incentives
- Delayed aggregate visibility
- Policy response lag
By the time displacement is statistically unambiguous, wage-mediated demand erosion is already advanced.
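The lag between task-level erosion and occupational statistics can be sketched directly; all rates and thresholds below are assumptions for illustration:

```python
import numpy as np

# Occupations are bundles of tasks; AI absorbs tasks incrementally.
# Headline statistics count an occupation as displaced only once most
# of its tasks are gone, so measurement trails the wage-relevant erosion.
rng = np.random.default_rng(1)
n_occupations, n_tasks = 50, 20
remaining = np.ones((n_occupations, n_tasks), dtype=bool)

for quarter in range(1, 13):
    # Assumed 8% chance per quarter that a remaining task crosses the
    # unit-cost-dominance threshold and is automated.
    remaining &= rng.random(remaining.shape) > 0.08
    task_share = remaining.mean()                      # true wage-relevant exposure
    displaced = (remaining.mean(axis=1) < 0.5).mean()  # what occupation counts register
    print(f"Q{quarter:2d}  task share retained {task_share:6.1%}  "
          f"occupations counted displaced {displaced:6.1%}")
```

In this sketch the wage-relevant task base erodes every quarter, while the displacement statistic stays near zero for years and then jumps, which is precisely the recognition lag described above.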
7. Comparison With Alignment Risk Models
Alignment frameworks focus on goal mis-specification, instrumental convergence, and loss of control.
These risks presuppose:
- Continued frontier development
- Long-term institutional coherence
- Sustained economic surplus
Economic discontinuity under unit cost dominance undermines these prerequisites.
Alignment risk is therefore subordinate to economic collapse risk in temporal ordering: the economic failure mode binds first.
8. Comparison With Political and Redistribution Models
Political risk frameworks propose mitigation through:
- Regulation
- Universal Basic Income
- Public employment
- Redistribution
These mechanisms face three structural constraints:
- Deployment speed relative to AI substitution velocity
- Fiscal sustainability under shrinking wage bases
- Competitive defection by non-participating actors
Absent pre-emptive, globally coordinated implementation, these measures are reactive rather than preventative.
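The fiscal constraint in particular reduces to simple arithmetic. The sketch below considers only the wage-tax channel named above, with hypothetical figures; taxing AI-generated output is a separate question outside this illustration:

```python
# Back-of-envelope fiscal constraint (all figures hypothetical).
# A wage-funded transfer must replace lost wage income out of a tax
# base that shrinks with the very displacement it is meant to offset.
initial_wage_bill = 10_000.0  # aggregate wage bill, arbitrary units
tax_rate = 0.30               # assumed effective tax rate on wages
erosion = 0.05                # assumed wage-bill erosion per year

wage_bill = initial_wage_bill
for year in range(1, 9):
    wage_bill *= 1 - erosion
    replacement_need = initial_wage_bill - wage_bill  # income to be replaced
    revenue = tax_rate * wage_bill                    # revenue from the shrinking base
    status = "SHORTFALL" if replacement_need > revenue else "covered"
    print(f"year {year}  revenue {revenue:7.1f}  need {replacement_need:7.1f}  {status}")
```

Under these assumptions the scheme becomes self-defeating within a few years: the replacement requirement grows exactly as the base that funds it shrinks.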
9. Comparison With Long-Tail Catastrophic Risk Models
Long-tail risk models require:
- Stable institutional governance
- Predictable economic coordination
- Long-term capital allocation
Economic discontinuity increases instability, fragmentation, and short-termism, reducing the probability that such scenarios unfold in their canonical form.
10. Formal Statement of the Discontinuity Thesis
If artificial intelligence achieves sustained unit cost dominance over human cognitive labour across a majority of economically valuable tasks, and if wage-mediated demand remains the primary distribution mechanism for purchasing power, with no alternative channel re-inflating household purchasing power at sufficient scale, then the existing economic system undergoes a non-linear collapse.
This outcome does not depend on AI intent, autonomy, or hostility.
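For precision, the thesis can be restated symbolically; the notation is introduced here and is not part of the original statement:

```latex
% Symbols introduced for this restatement (not in the source text):
%   \alpha(t) : share of economically valuable cognitive tasks under
%               sustained unit cost dominance at time t
%   \lambda   : share of household purchasing power supplied through wages
%   W(t)      : aggregate wage bill;  D(t) : aggregate household demand
\exists\, \alpha^{*} \in (0,1):\quad
\bigl[\alpha(t) > \alpha^{*}\bigr] \wedge \bigl[\lambda \approx 1\bigr]
\;\Longrightarrow\;
\dot{W}(t) < 0 \text{ persistently},\;\; D(t) \text{ contracts non-linearly}.
```

The conditional form makes the claim's independence from AI intent explicit: nothing on the left-hand side refers to goals, autonomy, or hostility.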
11. Falsifiability Conditions
The Discontinuity Thesis is falsified if any of the following conditions is satisfied at scale:
- AI fails to sustain unit cost dominance in cognitive labour
- Wage-mediated demand is replaced by a stable alternative distribution mechanism
- New human-only labour domains emerge at sufficient scale and durability
- Ownership of AI productive capacity becomes broadly distributed
12. Conclusion
Economic discontinuity driven by AI unit cost dominance constitutes the earliest and most structurally binding AI risk. Alignment failures, political governance failures, and long-tail catastrophic scenarios are contingent upon a degree of economic continuity that unit cost dominance itself erodes.
Consequently, economic collapse under AI substitution should be treated as the primary baseline risk against which other AI risk models are evaluated.