Methodology
A methodology for quantifying the enshittification of major companies, grounded in Cory Doctorow's three-stage theory of platform decay.
The Score: 0–100
Every product receives a composite enshittification score from 0 (no evidence of decay) to 100 (fully captured platform). The score is the sum of ten dimension scores, each rated 0–10. The overall score maps to a classification:
| Score Range | Classification |
|---|---|
| 0–15 | Healthy |
| 16–30 | Early Warning |
| 31–50 | Actively Enshittifying |
| 51–70 | Severely Enshittified |
| 71–100 | Terminally Enshittified |
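As a minimal sketch of the scoring arithmetic (the bounds and bands come from the table above; the function and variable names are illustrative, not part of the project's tooling):

```python
# Classification bands from the table above (inclusive ranges).
BANDS = [
    (0, 15, "Healthy"),
    (16, 30, "Early Warning"),
    (31, 50, "Actively Enshittifying"),
    (51, 70, "Severely Enshittified"),
    (71, 100, "Terminally Enshittified"),
]

def composite_score(dimension_scores: dict[str, int]) -> int:
    """Sum the ten dimension scores, each expected to be in 0-10."""
    if len(dimension_scores) != 10:
        raise ValueError("expected exactly ten dimension scores")
    if any(not 0 <= s <= 10 for s in dimension_scores.values()):
        raise ValueError("each dimension score must be between 0 and 10")
    return sum(dimension_scores.values())

def classify(score: int) -> str:
    """Map a composite score (0-100) to its classification band."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError("score out of range")

# Example: a product scoring 4 on every dimension totals 40, "Actively Enshittifying".
scores = {f"D{i}": 4 for i in range(1, 11)}
print(composite_score(scores), classify(composite_score(scores)))
```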
The Ten Dimensions
Each product is evaluated across ten dimensions that capture the full spectrum of platform decay. Together they cover user experience, business relationships, governance, and competitive behavior.
- D1. User Value Erosion: How much has the core product degraded for end users?
- D2. Business Customer Exploitation: How does the platform treat sellers, advertisers, creators, and developers?
- D3. Shareholder Extraction: To what degree is the company prioritizing shareholder value over users?
- D4. Lock-in & Switching Costs: How difficult is it to leave the platform?
- D5. Twiddling & Algorithmic Opacity: Does the platform make opaque, unilateral changes that affect users and business customers?
- D6. Dark Patterns: Does the platform employ deceptive or manipulative design practices?
- D7. Advertising & Monetization Pressure: How aggressively does the platform monetize user attention and data?
- D8. Competitive Conduct: Does the platform act to eliminate competition through acquisitions, bundling, or exclusionary practices?
- D9. Labor & Governance: How does the company treat its workers? Is governance structured as a check on extraction, or does it enable it?
- D10. Regulatory & Legal Posture: Does the company resist, undermine, or evade regulation designed to protect users and competition?
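One way to represent a product's assessment in code, purely as an illustration (the dataclass and field names are assumptions, not the project's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    """One of the ten dimensions (D1-D10), scored 0-10 with supporting evidence."""
    dimension_id: str                                   # e.g. "D4"
    name: str                                           # e.g. "Lock-in & Switching Costs"
    score: int                                          # 0 (no decay) to 10 (severe)
    evidence: list[str] = field(default_factory=list)   # citations or source URLs

@dataclass
class ProductAssessment:
    """A product plus its ten dimension scores; the composite is their sum."""
    product: str
    dimensions: list[DimensionScore]

    @property
    def composite(self) -> int:
        return sum(d.score for d in self.dimensions)
```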
The Three Stages
Doctorow's model describes how platforms progress through three stages as they shift value away from users and toward shareholders: in Stage 1 the platform is good to its users; in Stage 2 it degrades the user experience to shift value toward business customers; in Stage 3 it squeezes both to extract value for shareholders. Each product is classified into the stage that best matches its current behavior, using the dimension profiles below.
| Dimension | Stage 1 | Stage 2 | Stage 3 |
|---|---|---|---|
| D1. User Value Erosion | Low | Rising | High |
| D2. Business Exploitation | Low | Low | High |
| D3. Shareholder Extraction | Low | Rising | High |
| D4. Lock-in & Switching | Building | Moderate | High |
| D5. Twiddling & Opacity | Low | Rising | High |
| D6. Dark Patterns | Low | Moderate | High |
| D7. Monetization Pressure | Low | Rising | High |
| D8. Competitive Conduct | Moderate | Moderate | High |
| D9. Labor & Governance | Low | Rising | High |
| D10. Regulatory Posture | Low | Moderate | High |
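The exact matching logic is not formally specified here; a simple nearest-profile heuristic, with assumed numeric anchors for the qualitative labels (Low ≈ 2, Building/Moderate/Rising ≈ 5, High ≈ 8), might look like this:

```python
# Assumed numeric anchors for the qualitative labels in the table above.
LEVEL = {"Low": 2, "Building": 5, "Moderate": 5, "Rising": 5, "High": 8}

# Stage profiles transcribed from the table, D1 through D10 in order.
STAGE_PROFILES = {
    "Stage 1": ["Low", "Low", "Low", "Building", "Low", "Low", "Low", "Moderate", "Low", "Low"],
    "Stage 2": ["Rising", "Low", "Rising", "Moderate", "Rising", "Moderate", "Rising", "Moderate", "Rising", "Moderate"],
    "Stage 3": ["High"] * 10,
}

def nearest_stage(dimension_scores: list[int]) -> str:
    """Pick the stage whose profile is closest to the scores by absolute distance."""
    def distance(profile: list[str]) -> int:
        return sum(abs(s - LEVEL[label]) for s, label in zip(dimension_scores, profile))
    return min(STAGE_PROFILES, key=lambda stage: distance(STAGE_PROFILES[stage]))

# Example: uniformly moderate scores (all 5s) sit closest to the Stage 2 profile.
print(nearest_stage([5] * 10))  # Stage 2
```

A distance-based match like this is only an approximation of the qualitative judgment described above.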
Evidence Standards
The framework uses a tiered system to categorize evidence by reliability. AI agents are instructed to prioritize higher-tier sources when researching each product. Automated validation checks that every dimension has at least 2 evidence items, but tier quality is not formally audited for every score.
- T1: Official company filings, earnings calls, and regulatory documents
- T2: Peer-reviewed research and academic studies
- T3: Investigative journalism from established outlets
- T4: Industry analysis and expert commentary
- T5: User reports, community documentation, and crowdsourced evidence
- T6: Anecdotal evidence and social media posts
The intended standard: a dimension score above 3 should cite at least one T1–T3 source, and scores above 7 should cite multiple independent sources across tiers. Lower-tier evidence (T5–T6) should support but not solely justify a high score. These thresholds are written into the AI agent prompts but have not been formally audited against every current score.
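As a sketch of how these checks could be automated (the function, warning strings, and evidence representation are illustrative assumptions, not the project's validation code):

```python
# Hypothetical evidence record: (tier, source), where tier 1 is strongest and 6 weakest.
Evidence = tuple[int, str]

def validate_dimension(score: int, evidence: list[Evidence]) -> list[str]:
    """Return warnings where a dimension score falls short of the intended standard."""
    warnings = []
    if len(evidence) < 2:
        warnings.append("fewer than two evidence items")
    if score > 3 and not any(tier <= 3 for tier, _ in evidence):
        warnings.append("score above 3 without a T1-T3 source")
    if score > 7 and len({source for _, source in evidence}) < 2:
        warnings.append("score above 7 without multiple independent sources")
    if score > 3 and evidence and all(tier >= 5 for tier, _ in evidence):
        warnings.append("high score supported only by T5-T6 evidence")
    return warnings

# Example: a score of 8 backed by a single forum thread trips every check.
print(validate_dimension(8, [(6, "forum-thread")]))
```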
Transparency
This project aims to be honest about what it is and how it works.
- Public methodology — the scoring framework is summarized on this page.
- Cited evidence — each product page links to the sources used.
- AI-generated analysis — both the scoring framework and the product analyses were produced using AI agents. The project maintainer reviews and publishes the results but does not independently verify every claim.
- Open to correction — if you find errors or disagree with a score, a public contact method is coming soon.