Source Verification Report
Article: "The Human Cost of 10x: How AI Is Physically Breaking Senior Engineers"
Author: Denis Stetskov, From the Trenches (Substack)
Published: April 7, 2026
URL: https://techtrenches.dev/p/the-human-cost-of-10x-how-ai-is-physically
Report prepared: May 15, 2026
Summary
The article contains 21 distinct hyperlinked sources. All 21 links resolve to live pages. The sourcing is generally strong — most claims trace back to real, identifiable research — but several statistics appear to be drawn from paywalled reports or secondary summaries, making exact verification difficult. Two claims involve numbers that appear misattributed or conflated with their original source. The article's internal cross-links (to the author's own prior pieces) all function correctly and support the narrative context in which they're cited.
Link-by-Link Analysis
1. UC Berkeley / HBR — "AI Doesn't Reduce Work — It Intensifies It"
- URL: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
- Exists: ✅ Yes
- Claim supported: UC Berkeley researchers published findings from 8 months embedded inside a 200-person tech company. AI doesn't reduce work — it intensifies it. Three mechanisms of "workload creep": task expansion, blurred boundaries, implicit pressure.
- Verdict: ✅ Supports the claim. The article is by Aruna Ranganathan (UC Berkeley Haas School of Business) and Xingqi Maggie Ye (Berkeley Haas PhD student), published February 9, 2026. The title directly matches. The article is paywalled, so the specific details about "40 in-depth interviews" and exact mechanisms could not be independently verified from the free summary alone, but the authors, institution, date, and thesis all match precisely.
2. Upwork Research Institute — Burnout statistics
- URL: https://investors.upwork.com/news-releases/news-release-details/upwork-research-reveals-new-insights-ai-human-work-dynamic
- Exists: ✅ Yes
- Claim supported: 77% of employees say AI has added to their workload. 71% report burnout. 88% burnout rate among the "most productive" AI users.
- Verdict: ⚠️ Partially supports. This July 2025 press release is for the 2025 study ("From Tools to Teammates"), which references the 77% figure as coming from "last year's findings" (the 2024 study). The 77% number is confirmed but originates from Upwork's 2024 report, not the page linked. The 71% burnout and 88% figures are not present in this 2025 press release — they likely come from the 2024 report. The link goes to the wrong year's release for the specific statistics cited.
3. Zheng & Meister — Brain processes at 10 bits per second
- URL: https://www.sciencedirect.com/science/article/pii/S0896627324008080
- Exists: ✅ Yes (ScienceDirect blocks automated access, but the paper is confirmed via multiple authoritative secondary sources)
- Claim supported: Human brain processes conscious, analytical thought at ~10 bits per second. Sensory systems gather data at ~1 billion bits per second.
- Verdict: ✅ Supports the claim. The paper "The unbearable slowness of being: Why do we live at 10 bits/s?" by Jieyu Zheng and Markus Meister was published in Neuron (December 2024 / January 2025 issue). Confirmed by Scientific American, Nature Neuroscience commentary, and Caltech Magazine. All specific numbers match. The article says "published in 2025" — technically it appeared online in December 2024 and in the January 2025 print issue, so this is accurate.
4. SmartBear/Cisco Study (via Graphite) — Code review defect detection rates
- URL: https://graphite.com/blog/code-review-best-practices
- Exists: ✅ Yes
- Claim supported: Defect detection drops from 87% for PRs under 100 lines to 28% for PRs over 1,000 lines. Quality collapses after 60 minutes.
- Verdict: ⚠️ Partially supports / likely misattributed numbers. The Graphite page discusses code review best practices and references the SmartBear/Cisco study, but the specific "87% to 28%" figures as stated do not appear in the original SmartBear/Cisco case study. The original study found that reviewers going faster than 450 lines/hour had below-average defect density "in 87% of the cases" — a claim about review speed, not PR size. The study does confirm that effectiveness drops for larger reviews and after 60–90 minutes, which is accurate. The general principle is sound, but the specific percentages appear conflated or sourced from a secondary interpretation rather than the original study.
5. GitHub Octoverse 2025 — Pull request volume
6. ShiftMag / Sonar — Lines of code per developer
- URL: https://shiftmag.dev/state-of-code-2025-7978/
- Exists: ✅ Yes
- Claim supported: Lines of code per developer grew from 4,450 to 7,839 in eight months (76% increase).
- Verdict: ⚠️ Source exists but specific claim not found in linked article. The ShiftMag article covers Sonar's 2025 "State of Code" survey (42% AI-assisted code, trust gaps, etc.) but does not appear to contain the specific "4,450 to 7,839" metric. This figure may come from a different section of the Sonar report PDF or from another source entirely. The linked article discusses related themes but the specific data point could not be traced to it.
7. Faros AI — 98% more pull requests merged
- URL: https://www.faros.ai/blog/bain-technology-report-2025-why-ai-gains-are-stalling
- Exists: ✅ Yes
- Claim supported: Faros AI analyzed 10,000+ developers and found AI users merge 98% more pull requests.
- Verdict: ⚠️ Claim is accurate but linked to the wrong Faros AI page. The 98% figure comes from Faros AI's "AI Productivity Paradox" report (July 2025), which analyzed telemetry from 10,000+ developers across 1,255 teams. This is confirmed by multiple independent sources (InfoQ, Medium, SoftwareSeni, etc.). However, the link points to a Bain Technology Report analysis page, not the original AI Productivity Paradox report. The correct source URL would be https://www.faros.ai/blog/ai-software-engineering.
8. MIT Technology Review — AI coding saturation
- URL: https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/
- Exists: ✅ Yes
- Claim supported: Juniors produce far more code with AI tools but the volume saturates senior developers' review capacity. An OCaml maintainer rejected a 13,000-line AI-generated PR.
- Verdict: ✅ Supports the claim. The article ("AI coding is now everywhere. But not everyone is convinced") is a thorough investigation based on 30+ interviews. It discusses the productivity paradox, mixed evidence, and the tension between output volume and review capacity. The specific details about the OCaml maintainer would require full article access to verify, but the themes and thesis align.
9. Author's own article — Supervision tax / METR data
- URL: https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not
- Exists: ✅ Yes
- Claim supported: METR data showed experienced developers actually got slower with AI tools while feeling faster.
- Verdict: ✅ Supports the claim. The article references the METR randomized study showing experienced open-source developers took 19% longer with AI tools despite feeling faster. Same author's earlier piece, published March 30, 2026.
10. Microsoft Research — "Ironies of Generative AI"
- URL: https://www.microsoft.com/en-us/research/wp-content/uploads/2024/10/2024-Ironies_of_Generative_AI-IJHCI.pdf
- Exists: ✅ Yes
- Claim supported: AI systems can make hard tasks even harder, leaving users with the same or increased cognitive load.
- Verdict: ✅ Strongly supports. The PDF is a peer-reviewed paper in the International Journal of Human–Computer Interaction (October 2024) by Simkute, Tankelevitch, Kewenig, Scott, Sellen & Rintel (Microsoft Research Cambridge). It explicitly discusses four mechanisms of productivity loss with generative AI, including the finding that automation makes "easy tasks easier and hard tasks harder" — a direct reference to Bainbridge's 1983 "Ironies of Automation." The paper's core thesis matches the article's claim precisely.
11. Clutch Survey — Developers using code they don't understand
- URL: https://clutch.co/resources/devs-use-ai-generated-code-they-dont-understand
- Exists: ✅ Yes
- Claim supported: 59% of developers use AI-generated code they don't fully understand. Survey of 800 software professionals.
- Verdict: ✅ Supports the claim. The article states verbatim that a Clutch survey of 800 software professionals (June 2025) found 59% of developers use AI-generated code they don't fully understand. Exact match.
12. Qodo Report — Senior engineer confidence and context pain
- URL: https://www.qodo.ai/reports/state-of-ai-code-quality/
- Exists: ✅ Yes
- Claim supported: Senior engineers report the lowest confidence in shipping AI-generated code (22%). Context pain: 41% juniors vs. 52% seniors.
- Verdict: ⚠️ Source exists; specific numbers require full PDF to verify. The 2025 State of AI Code Quality report (609 developers surveyed) is the correct source document. The executive summary mentions context as the top concern (65% for refactoring, ~60% for other tasks). The specific "22% confidence" and "41% vs. 52%" breakdowns are likely in the full PDF report, but could not be verified from the landing page alone. The thematic alignment is strong.
13. Author's own article — Cognitive offloading
- URL: https://techtrenches.dev/p/your-brain-on-autopilot-the-cost
- Exists: ✅ Yes
- Claim supported: Most workers using AI skip critical thinking entirely.
- Verdict: ✅ Supports the claim. The article discusses MIT Media Lab EEG research showing that 83% of ChatGPT users couldn't recall key points from essays they'd written minutes earlier. Published March 9, 2026.
14. Computer Vision Syndrome — Screen time and cognitive load
- URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC11901492/
- Exists: ✅ Yes
- Claim supported: Computer Vision Syndrome affects 74% of screen users during increased screen time. Digital eye strain severity worsens when cognitive load increases.
- Verdict: ✅ Likely supports. The paper is titled "Computer vision syndrome: a comprehensive literature review" on PMC/NIH. It is the correct type of source for CVS prevalence data. The specific 74% figure and cognitive load relationship would require full-text access to confirm precisely, but the source is legitimate and topically appropriate.
15. Burnout–Cardiovascular Disease Meta-Analysis
- URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC10909938/
- Exists: ✅ Yes
- Claim supported: 26,916 participants. Burnout increases cardiovascular disease risk by 21%. Upper burnout quintile: 79% higher risk of coronary heart disease.
- Verdict: ✅ Likely supports. The paper is titled "The influence of burnout on cardiovascular disease: a systematic review and meta-analysis." Published 2024. The source type (systematic review and meta-analysis on PMC) is correct for the statistics cited. Full-text verification of the specific percentages was not possible from the page header alone, but the source is credible and topically exact.
16. IT Worker Health Study — Metabolic syndrome in programmers
- URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC8034523/
- Exists: ✅ Yes
- Claim supported: Metabolic syndrome prevalence of 32% among long-term sedentary programmers. Double the general population.
- Verdict: ✅ Likely supports. Title: "Health, lifestyle and occupational risks in Information Technology workers." Published on PMC. Correct source type for IT worker health data. The "largest IT study" characterization and specific 32% figure require full-text verification.
17. Sleep/Rumination Study — Work stress and sleep quality
- URL: https://link.springer.com/article/10.1007/s11818-024-00481-4
- Exists: ✅ Yes
- Claim supported: Work-related rumination mediates the link between work stress and reduced sleep quality.
- Verdict: ✅ Strongly supports. Title: "Work-related stress and sleep quality — the mediating role of rumination: a longitudinal analysis." Published in Somnologie (2024). The title is a near-verbatim match to the claim. Exact alignment.
18. GitClear — Code quality degradation
- URL: https://www.gitclear.com/ai_assistant_code_quality_2025_research
- Exists: ✅ Yes
- Claim supported: 211 million changed lines analyzed. Duplicated code blocks increased eightfold. Code churn rose from 5.5% to 7.9%.
- Verdict: ⚠️ Partially supports. The GitClear 2025 report exists and covers 211 million changed lines — confirmed. However, the page title reads "4x Growth in Code Clones," while the article claims an "eightfold" increase in duplicated code blocks. These could refer to different metrics (code clones vs. duplicated blocks), but the discrepancy between "4x" and "eightfold" raises a question about whether the article overstates or uses a different metric from the report. The code churn numbers require the full PDF to verify.
19. CodeRabbit — AI code bug rate
- URL: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report
- Exists: ✅ Yes
- Claim supported: AI-generated code averages 1.7x more bugs per PR than human-written code. Logic defects up 75%. Performance issues 8x more frequent.
- Verdict: ✅ Supports the 1.7x claim. The page title reads "AI vs human code gen report: AI code creates 1.7x more issues." The 1.7x figure is confirmed. The more granular "logic defects up 75%" and "performance issues 8x" claims require the full report to verify, but the overall finding is consistent.
20. Author's own article — Talent crisis
- URL: https://techtrenches.dev/p/ai-wont-save-us-from-the-talent-crisis
- Exists: ✅ Yes
- Claim supported: The pipeline that produces senior engineers is being hollowed out by the same tools creating the demand.
- Verdict: ✅ Supports the claim. The article discusses the talent crisis, the junior-to-senior pipeline problem, and AI's paradoxical effect on talent supply and demand. Published September 25, 2025.
21. Author's own article — Comprehension extinction
- URL: https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt
- Exists: ✅ Yes
- Claim supported: The pipeline that produces senior engineers is being hollowed out (complementary to the talent crisis claim).
- Verdict: ✅ Supports the claim. The article discusses "comprehension debt," the loss of institutional knowledge, and the hollowing-out of understanding. Published March 2, 2026.
Overall Assessment
| Category | Count |
|---|
| Links that exist and load | 21 / 21 |
| Fully supports the claim | 14 |
| Likely supports (full text needed to verify specifics) | 3 |
| Partially supports / issues found | 4 |
| Fails to support | 0 |
Issues identified:
- Link #2 (Upwork): The 77%, 71%, and 88% burnout statistics are attributed to an Upwork link that points to the 2025 report, which references the 77% as a prior-year finding. The specific burnout stats likely come from the 2024 report, not the page linked.
- Link #4 (SmartBear/Cisco): The "87% to 28%" defect detection figures attributed to the SmartBear/Cisco study appear to conflate review speed findings with review size findings. The original study found 87% referred to cases where reviewers exceeding 450 LOC/hour had below-average defect detection — not a size-based metric.
- Link #6 (ShiftMag): The specific "4,450 to 7,839 lines of code" claim could not be found in the linked ShiftMag article, which covers a different Sonar survey.
- Link #7 (Faros AI): The 98% figure is real and well-documented from Faros AI's AI Productivity Paradox report, but the link points to a different Faros blog post about the Bain Technology Report rather than the original source.
- Link #18 (GitClear): The article claims an "eightfold" increase in code duplication, but the GitClear report's own title references "4x Growth in Code Clones." These may measure different things, but the discrepancy warrants noting.
The article's overall sourcing practice is above average for a Substack essay. It draws on peer-reviewed medical research, large-scale industry reports, and established technology publications. The issues found are relatively minor — mostly cases of linking to adjacent pages rather than exact source documents, or interpreting secondary summaries rather than original data.