
Paging Gen X: You’re Our Only Hope
I saw a meme this week: Gen X, what did we do to deserve teaching our parents and our kids how to use a printer? That’s us. One foot in analog, one in digital. Raised on mixtapes and manual settings, living on message boards and crafting the perfect prompt. Part I want to believe (👽), part world-class BS detector. No one is coming to save us. So dust off your Doc Martens, grab a legal pad, open your AI tab, and let’s go.
This is a response to, and a build on, Eleanor Mills’s piece in The Independent, which argues that older workers may actually fare better in the AI job shakeup than many assume. Mills points to something Gen X instinctively knows: experience, judgment, and context are not relics. They’re the operating system AI needs.
https://www.the-independent.com/life-style/gen-x-ai-jobs-redundancy-b2815946.html
⸻
The twist The Independent gets right
Mills highlights three truths worth anchoring:
- AI complements judgment. Senior workers’ institutional memory and pattern recognition catch what AI hallucinates. As Mills quotes 55/Redefined’s Lyndsey Simpson, “Innovation needs experience… and the soft skills AI cannot replicate.” She adds that with off-the-shelf AI, many firms increasingly need “trainers and checkers,” not armies of coders. That is a Gen X superpower.
- The labor market is aging. By 2030, people over 50 are on track to comprise ~47% of the UK workforce. That is a demand curve, not a pity narrative.
https://group.legalandgeneral.com/en/newsroom/press-releases/proportion-of-over-50s-in-work-set-to-hit-record-high-of-47-by-2030-according-to-new-report
(Report PDF: https://www.legalandgeneral.com/asset/49893c/globalassets/personal/retirement/_resources/documents/your-retirement-income/reports/over-50s-in-the-labour-market.pdf)
- Entry-level roles are getting hollowed out first. AI is already hitting the rungs Gen Z and new grads use to climb. A new Stanford Digital Economy Lab paper finds a 13% relative decline in employment for 22–25-year-olds in the most AI-exposed occupations since late 2022, while older workers hold steady or grow.
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
(Project page: https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/)
Put differently: if you have tacit knowledge, relationships, taste, standards, and a nose for risk, AI is leverage. If your job was primarily codified, cut-and-paste work, AI is substitution.
⸻
The part The Independent underplays
Gen X is the smallest cohort, squeezed between two louder, larger generations. We’re also the “I’ll do it myself” cohort. That grit has carried us through analog-to-digital, the dot-com bust, 9/11, multiple recessions, college debt, smartphones, and social media. But the 2023–2025 era added new headwinds:
- Management-by-spreadsheet “optimizations.” Cost programs and headcount models increasingly target midlayer roles. When AI shows efficiency gains, some firms bank the savings by cutting junior and middle rungs first, instead of redeploying them. Stanford’s Canaries in the Coal Mine helps quantify the early effect on entry-level roles.
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
- ATS and algorithmic screening. Age bias didn’t vanish; it got encoded. The EEOC reminds employers that AI-mediated decisions still fall under anti-discrimination law, and lawsuits against screening platforms have spotlighted how automated filters can harm older applicants. If you feel like your résumé disappears into a black hole, you’re not imagining it.
EEOC guidance: https://www.eeoc.gov/newsroom/eeoc-releases-new-guidance-artificial-intelligence-and-title-vii
News explainer: https://www.reuters.com/legal/labor/us-eeoc-warns-employers-over-artificial-intelligence-bias-2023-05-18/
Further analysis: https://www.nytimes.com/2023/05/18/technology/ai-hiring-discrimination.html
- The first rung is cracking. Multiple reputable summaries of the Stanford work show the early damage lands on young workers and entry-level pathways. That’s bad for Gen Z now and bad for Gen X later, because career ladders without rungs strand everyone. It also risks us never becoming empty nesters: without that first rung, our kids may never launch.
https://fortune.com/2025/08/22/ai-labor-market-entry-level-jobs-stanford-report/
https://www.cnbc.com/2025/08/23/stanford-report-ai-taking-entry-level-jobs.html
All of which is to say: Mills is right that AI can be a lifeline for older workers. She’s also right that we have the BS-meter to catch AI’s errors.
But we need strategy and a get-shit-done attitude, not vibes and empathy bots.
⸻
Why Gen X is built for this moment
Call it the Latchkey Advantage:
- Analog-to-digital fluency. We learned new systems with no manual and no safety net. That maps perfectly to prompt engineering, tool chaining, and “human in the loop” quality control.
- Context memory. We remember what the org tried five, ten, twenty years ago. That is exactly what current models lack.
- Independent problem solving. Printer jam at 6:55 a.m. before the final college paper is due? We fixed it, aced the paper, and we were not allowed to make up sources. That bias for action is gold when AI gives you a 70% draft and you must get it to 100%.
The IMF’s global analysis finds that about 40% of jobs are exposed to AI, with advanced economies even more exposed. The question isn’t if AI rewires work. It’s whether we aim it at complementing human strengths or let it replace the wrong things.
https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
https://www.imf.org/en/Publications/WEO/Issues/2024/01/30/world-economic-outlook-update-january-2024
Meanwhile Bain projects 150 million jobs globally will shift to workers 55+ by 2030, especially in leadership and advisory roles where human connection, negotiation, and decision quality matter. That is the lane Gen X already drives.
https://www.bain.com/insights/better-with-age-the-rising-importance-of-older-workers/
https://www.bain.com/about/media-center/press-releases/2023/older-workers-will-fill-150-million-more-jobs-globally-by-2030-exceeding-a-quarter-of-the-workforce-in-high-income-countries/
⸻
Paging Gen X: the playbook
Think of this as a field manual you can run on Monday.
1) Become the trainer and checker your AI needs
- Treat AI like a talented intern that never sleeps. You spec, review, red-team, and approve.
- Capture your tacit knowledge as checklists and rubrics the model can echo.
- Use “compare and critique” prompts to force models to justify decisions against your criteria.
This is the “trainers and checkers” model Simpson described. It is not theoretical. It is a hiring plan. All the while, make sure neither you nor your staff become Homer Simpson, human-in-the-loop nuclear plant safety inspector extraordinaire. Don’t just keep rubber-stamping things (and someone explain these references to the younger people in the room…)
(Background on the “trainers and checkers” framing from Mills/Simpson): https://www.the-independent.com/life-style/gen-x-ai-jobs-redundancy-b2815946.html
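A “compare and critique” prompt can be as simple as a template that forces the model to grade its own draft against your rubric. A minimal sketch; the rubric items and wording below are illustrative assumptions, not a standard:

```python
# A minimal "compare and critique" prompt builder. The rubric encodes
# your standards; the model must justify every PASS/FAIL against it.
# Rubric items here are examples, not a recommended checklist.

RUBRIC = [
    "Cites a verifiable source for every factual claim",
    "States assumptions and confidence explicitly",
    "Flags anything needing human review before it ships",
]

def critique_prompt(draft: str, rubric: list[str] = RUBRIC) -> str:
    """Build a prompt asking the model to grade a draft against criteria."""
    criteria = "\n".join(f"{i}. {item}" for i, item in enumerate(rubric, 1))
    return (
        "Review the draft below against each criterion. For each one, "
        "answer PASS or FAIL and justify your answer in one sentence.\n\n"
        f"Criteria:\n{criteria}\n\nDraft:\n{draft}"
    )

print(critique_prompt("Q3 churn fell 12% because of the new onboarding flow."))
```

The point is not the template; it is that your judgment lives in the rubric, and the model has to answer to it every time.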
2) Turn institutional knowledge into a searchable asset
- Build a private knowledge base. Load SOPs, proposals, post-mortems, and client lore into a retrieval system so AI can find your judgment at 3 a.m.
- Maintain a “What we tried and why it failed” log. That saves the next team from expensive reruns.
Result: you become the force multiplier for your org’s memory.
Don’t hoard your company knowledge!
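As a sketch of what “searchable” can mean before you buy anything: plain keyword overlap over a failure log already captures the shape of the asset. Real deployments would use embeddings and a retrieval stack; the entries and scoring below are illustrative only.

```python
# A tiny "what we tried and why it failed" log with naive keyword
# search. Entries are invented examples; the ranking is word overlap.

LOG = [
    {"title": "2019 chatbot pilot", "note": "Failed: no escalation path to a human."},
    {"title": "2021 pricing test", "note": "Failed: discount stacked with loyalty promo."},
    {"title": "2023 AI summarizer", "note": "Worked once we added reviewer sign-off."},
]

def search(query: str, log=LOG, top_n=2):
    """Rank entries by how many query words they share with title + note."""
    terms = set(query.lower().split())
    def score(entry):
        words = set((entry["title"] + " " + entry["note"]).lower().split())
        return len(terms & words)
    return sorted(log, key=score, reverse=True)[:top_n]

for hit in search("why did the chatbot pilot fail"):
    print(hit["title"], "->", hit["note"])
```

Swap the scoring function for a vector search later; the habit of logging failures in one queryable place is the real win.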
3) Redesign jobs into tasks
- Don’t let leadership “optimize” whole roles away. Break jobs into tasks and decide what to automate, augment, and protect.
- Protect the learning loops junior team members need: scoping, feedback, and live-stakes responsibility.
Stanford’s evidence suggests automation-heavy use cases hit entry-level employment hardest. Insist on augmentation use cases first. When you automate away knowledge and tasks, who is left to know whether the automation is working?
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
4) Demand skills-first hiring and promotion
- Push HR to pilot skills-based pathways and portfolio-based promotion. Degrees and age are noisy proxies; output is not.
- Require transparency for AI screening tools and opt-out pathways so humans can overrule false negatives. The EEOC has made it clear: companies are still on the hook.
https://www.eeoc.gov/newsroom/eeoc-releases-new-guidance-artificial-intelligence-and-title-vii
5) Ship visible wins with AI
- Pick three workflows a month: meeting notes + action items; competitor briefs; data-to-slides.
- Publish before/after time-savings and error-rate stats.
- Tie those wins to customer outcomes and risk reduction, not just hours saved or calls answered.
- Measuring the wrong KPIs is like grading Blockbuster employees on how many VHS tapes they rewound instead of how many customers they helped discover a great movie. Sure, everyone remembers the “Be Kind, Rewind” stickers — but nobody ever went back because of perfectly rewound tapes. They went back because they found something worth watching.
6) Codify Gen X mentorship as an apprenticeship
- Pair a Gen X “checker” with two junior team members. They drive prompts; you drive standards and context.
- Rotate ownership so juniors climb, not stagnate. Document skills gained per cycle.
7) Guardrails, not fear
- Adopt lightweight AI governance: tool registry, use-case approvals, PI/PHI rules, human-in-the-loop for external outputs, and a red-team channel for model mistakes.
- Write a one-page “When not to use AI” policy. If it’s safety-critical, personally identifying, or legal, slow down. Don’t trust AI with your or your customers’ health, wealth, or freedom. At least not until it can uphold the professional standards and duties of care of a doctor, accountant, or attorney, and be held accountable to them.
- Trusting AI without verification is like trusting Joe Isuzu — the guy who told us cars could fly and get 300 miles to the gallon. He smiled, he, um, hallucinated (lied), and the whole joke was that only a sucker believed him. Treat AI the same way: laugh at the pitch, check the fine print, and don’t hand over the keys.
⸻
Policy moves leaders should implement now
- Skills-based, age-neutral hiring audits for ATS and AI tools. Required adverse-impact tests by age band, with human review for borderline rejections. EEOC guidance applies here.
https://www.eeoc.gov/newsroom/eeoc-releases-new-guidance-artificial-intelligence-and-title-vii
- AI apprenticeship tax credits for companies that keep or expand entry-level headcount while deploying AI.
- Mid-career learning stipends indexed to AI adoption. If AI changes the job, fund the transition.
- Right to a human review in screening and layoff decisions that involve AI scoring.
- Task-first automation reporting to boards. Force clarity on what’s being automated and how capability is preserved.
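One concrete way to run the adverse-impact audit above is the EEOC’s long-standing “four-fifths” rule of thumb: a group whose selection rate falls below 80% of the highest group’s rate is a red flag worth human review. A minimal sketch; the applicant numbers below are invented for illustration:

```python
# Four-fifths adverse-impact check by age band. Screening numbers are
# made up; the 0.8 threshold is the EEOC rule-of-thumb, not a legal test.

def selection_rates(results):
    """results: {band: (applicants, advanced)} -> {band: pass rate}"""
    return {band: advanced / applicants
            for band, (applicants, advanced) in results.items()}

def four_fifths_flags(results, threshold=0.8):
    """Flag bands whose rate is under `threshold` of the best band's rate."""
    rates = selection_rates(results)
    top = max(rates.values())
    return {band: rate / top < threshold for band, rate in rates.items()}

screening = {
    "under 40": (500, 100),  # 20% pass the AI screen
    "40-54":    (300, 45),   # 15%
    "55+":      (200, 18),   # 9%, well under 80% of 20%
}

print(four_fifths_flags(screening))
```

A flag is not proof of discrimination; it is the trigger for the human review and transparency the playbook calls for.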
⸻
What this means for Gen X (and our kids)
- For us: AI can finally showcase our Gen X strengths, if we step toward it. Learn it like we learned email, smartphones, and streaming. The job is not to out-speed AI. It is to out-judge it.
- For Gen Z/Alpha: the first rungs are fragile. We must protect apprenticeships, internships, and real responsibility. That is how capability compounds. The Stanford results are the early warning. Heed them.
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
- For leaders: if you “optimize” by deleting junior roles and thinning the middle, you are not saving. You are quietly mortgaging your future.
⸻
The Gen X closing argument
Going the distance
I am a driver, I’m a winner …
Boomers often had the “me” frame. Millennials popularized the “to me” frame. Gen Z and Alpha are building a new “we.” Gen X is the “I”: if no one else will do it, I will. That attitude is a feature, not a flaw. In this moment, it is our comparative advantage.
So here is the wake-up call to action: be the trainer, be the checker, be the adult in the loop. Encode your judgment. Insist on apprenticeships. Fight lazy automation.
And keep the printer working for three generations under one roof.
Join with me, Gen X. You’re our only hope.
⸻
Energy used to create this article (100-watt bulb equivalent)
Estimated electricity: ~6 Wh, based on recent reporting that a single large-model text query uses ~0.3–0.34 Wh and that drafting this piece involved a few dozen AI queries and generations. That is roughly 3.6 minutes of a 100-watt light bulb. Assumptions and context: estimates vary by model and infrastructure; recent analyses and statements include Epoch’s estimate (~0.3 Wh/query) and OpenAI’s CEO citing ~0.34 Wh/query.
https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use
https://www.businessinsider.com/how-much-energy-does-chatgpt-use-average-query-watts-altman-2025-6
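The estimate above is simple arithmetic. A minimal sketch, assuming ~20 queries (my stand-in for “a few dozen”) at the ~0.3 Wh/query figure from the Epoch estimate cited above:

```python
# Back-of-the-envelope energy math for this article. The query count
# is an assumption; the per-query figure is from the Epoch estimate.

queries = 20
wh_per_query = 0.3
total_wh = queries * wh_per_query       # ~6 Wh total

bulb_watts = 100
minutes = total_wh / bulb_watts * 60    # Wh / W = hours, then to minutes

print(f"{total_wh:.1f} Wh = {minutes:.1f} min of a {bulb_watts}-watt bulb")
```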

