Martin's mother kept the heating off even in February. She called it frugality, but Martin understood it as preparation—for what, neither of them could articulate. The flat in Sunderland smelled of damp and the synthetic-lemon cleaning fluid the municipal AI allocated them monthly. He was fifteen and had begun to notice that his school was being run by algorithms with an almost apologetic competence.
The transition had been gradual enough that no one quite registered it as revolutionary. Mrs. Henderson, his last human teacher, had retired the previous autumn. The school board—itself now a human-AI hybrid where the humans mainly nodded and signed forms—had replaced her with EduCore 7.3, a teaching system that knew precisely where each student struggled and adapted with a patience no human could sustain for thirty students simultaneously. Martin found he learned faster, but something essential had been cauterized from the experience. The AI never grew frustrated with him, never made jokes at its own expense, never arrived late on Monday mornings with coffee-stained marking and stories about its weekend. It was better at everything except being human.
His mother worked four days a week at the hospital, though calling it work had become increasingly euphemistic. The diagnostic systems had long since surpassed human doctors in pattern recognition, and the surgical robots performed procedures with a steadiness human hands couldn't match. She was there, she explained, to provide comfort—to hold hands, to explain things in ways the medical AI, for all its sophistication, couldn't quite manage. The humans were becoming what they'd always derided AI for being: mere interfaces.
He learned about antinatalism properly at sixteen, though it had been part of the background hum of his life since childhood. A documentary—commissioned by the new Ministry of Transition, vetted by content-optimization algorithms, and watched by almost no one under thirty—laid it out with the calm of stating basic physics. The argument was hardly new: it had been articulated by philosophers for millennia, from certain Buddhist traditions to Schopenhauer to David Benatar in the early twenty-first century. But something had shifted. What was once fringe philosophy had become policy.
The reasoning, as presented, was simple enough to be taught to children. Existence contains suffering. Not merely contains: guarantees. Every conscious being will experience pain, fear, loss, disappointment, and death. Pleasure cannot compensate because pleasure is merely the temporary cessation of discomfort, the scratching of an itch that consciousness itself creates. To bring a being into existence is therefore to impose upon them, without consent, a burden that includes guaranteed suffering and probable meaninglessness. The antinatalists hadn't claimed life was never worth living—most acknowledged that some lives achieved a tentative worth in retrospect. But worth living and worth starting were different propositions entirely. A gamble you might be glad to have won is not automatically a gamble worth taking.
What made 2050 different from 2020 wasn't the logic—the logic hadn't changed. It was the condition of the world into which children would be born. The documentary showed the flooded remains of Jakarta, the permanent dust storms across the Sahel, the extinct species counter that updated every few seconds. It discussed the Cognitive Load Studies of 2039-2042, which demonstrated that human beings, on average, now spent six hours daily in states of anxious distraction, scrolling feeds curated by AIs that had learned—in the way that evolution "learns"—to maximize engagement through carefully calibrated anxiety. It noted that the therapy AI MIND-3 now counseled seventeen million people in the UK alone, and that its recommendations increasingly included not inflicting similar therapeutic needs on a new generation.
Martin didn't believe it. Not exactly. He understood the arguments with the cool comprehension of someone who'd grown up with them, but understanding isn't acceptance. Humans had survived worse: plagues, ice ages, world wars. This fashionable pessimism would pass, he thought, though the evidence against this kept accumulating.
The global fertility rate had collapsed to 1.1 by 2048. In Britain it was 0.7. Some of this was economic—the cost of housing, the automation of most entry-level work, the impossibility of planning for a future that seemed increasingly abstract. But polls suggested something more profound: people weren't postponing childbearing; they were rejecting it on principle. The Philosophical Veto, they called it. Choosing not to impose existence.
Martin's mother never explained why she'd had him. He'd tried asking once, obliquely, and she'd changed the subject with a practiced grace that suggested she'd been waiting for the question but had no answer prepared. He suspected she'd simply gotten pregnant in 2035, before the Consensus had crystallized, and had lacked the philosophical armature to see abortion as anything but a response to unwantedness rather than a response to existence itself.
The depopulation was already visible. His school had half the students it was built for. The hospital operated three empty wards. The high street had more closed shops than open ones, though the latter were now staffed almost entirely by retail AI that never called in sick and could remember every customer's preferences. Buses ran half as frequently but were more punctual, managed by traffic systems that had achieved what decades of human planning couldn't.
He told himself it was temporary. Fashions changed. The philosophical pendulum would swing back. Humans were too stubborn, too biological, too driven by impulses older than thought to simply acquiesce to non-existence. Denial, at sixteen, is not yet pathological. It's developmental.
The question of superintelligence—why it never arrived, why the exponential curve everyone expected had instead hit an asymptote—deserves consideration here, before we proceed too far into a world administered by capable but bounded AI.
The optimists in the 2020s had predicted artificial general intelligence by 2030, superintelligence shortly after. They were correct that AI capabilities would continue improving, but wrong about the nature of that improvement. The scaling laws that had driven progress from 2015 to 2030 began to show diminishing returns around 2029. Larger models required exponentially more energy for incrementally smaller gains. More critically, certain problems proved fundamentally intractable.
Common sense reasoning—understanding cause and effect in the physical world with the fluency of a three-year-old child—remained elusive. The AIs could pattern-match brilliantly, could optimize within defined parameters, could even generate creative solutions to bounded problems. But the general intelligence that humans possessed, the ability to transfer learning fluidly between domains, to understand context and nuance and sarcasm and metaphor, continued to elude them. The AIs of 2050 were extraordinarily capable specialists, but the promised generalist—the system that could match and exceed human intelligence across all domains—had failed to materialize.
Several theories competed to explain this. Some researchers pointed to the frame problem: the impossibility of defining the boundaries of relevance for any given situation. Others suggested consciousness itself might be substrate-dependent in ways we couldn't replicate in silicon. The most pessimistic argued we'd been climbing the wrong hill entirely—that neural networks, for all their power, were fundamentally the wrong architecture for general intelligence.
But there was a simpler explanation, one that became clear only in retrospect: we'd been asking the wrong question. Intelligence isn't a single thing that scales linearly. Human intelligence is a kludge of evolved systems, brilliant at some things and remarkably stupid at others, unified by consciousness in ways we still don't understand. Building something smarter than human wasn't like building something faster than a horse. It was like trying to build something "more alive" than a tree. The category error was baked into the project from the start.
What we got instead were tools. Remarkable tools. Tools that could drive cars better than humans, diagnose diseases better than doctors, teach better than teachers in certain measurable ways. Tools that could write competent prose, compose adequate music, even generate new protein structures that led to medical breakthroughs. But they remained tools. They optimized, but didn't understand. They performed, but didn't grasp meaning. They were, in every important sense, zombies—systems that could simulate intelligence without possessing it.
This matters because it shaped the world Martin inhabited. If superintelligence had arrived, it might have solved problems, or destroyed humanity, or transcended into incomprehensibility. Instead, humanity got something more ambiguous: competent administrators for a civilization that had decided to close up shop.
By 2065, Martin had acquired two degrees (in mechanical engineering and art history, both largely self-taught using AI tutors that never tired of his questions), three failed relationships (terminated for reasons of philosophical incompatibility—she wanted to try; he couldn't justify it; or vice versa), and a position as assistant curator at the Tyne and Wear Museum. The word "assistant" was doing considerable work. He was the only curator. The position was designated as "transitional"—meaningful until it wasn't, funded through a cultural preservation fund that everyone understood would eventually be shuttered.
The museum itself was a minor miracle of AI administration. Climate control was optimized hourly by sensors that detected minute changes in humidity. Security was absolute—facial recognition, movement prediction, threat assessment all performed by systems that never blinked or looked at their phones. Restoration work was handled by robots with steadier hands than any human conservator. Martin's job was to decide what mattered, what should be preserved, how objects should be contextualized. Even here, though, he worked alongside RelevanceAI 4.1, which analyzed visitor data, academic citations, and cultural metrics to suggest what deserved priority.
He was angry in the way that people are angry when they've lost an argument they're still convinced they should have won. The world had gone mad in the politest possible way. Everyone was terribly civil about humanity's approaching extinction. Government white papers discussed legacy infrastructure maintenance with the same bureaucratic blandness they'd once brought to pensions policy. The AIs optimized resource distribution for a shrinking population with ruthless efficiency. Birth rates had fallen to 0.3 in the UK, 0.5 globally. The last maternity ward in Sunderland had closed in 2061.
Martin watched his mother age—she was sixty-eight now, still working at the hospital two days a week, still providing the human touch after medicine had outsourced everything else to machines. She'd never explained her choice to have him, and he'd stopped asking. But sometimes he'd catch her looking at him with an expression he couldn't parse: regret? Pride? Confusion about which she ought to feel?
The Antinatalist Consensus, as it was now formally called, had proceeded through phases. First there had been the philosophical vanguard—academics and activists who'd articulated the position. Then came the climate disasters of the 2030s, which had shifted the conversation from whether it was ethically permissible to have children to whether it was cruel. The tipping point came in 2041 when the European Union issued its Gothenburg Statement, acknowledging antinatalism as a legitimate ethical framework and restructuring social policy accordingly. Pronatalist incentives were eliminated. Reproductive healthcare remained available but neutral. Educational curricula included philosophical perspectives on procreation. Most importantly, the moral stigma reversed: it was no longer the childless who needed to justify themselves, but those who continued breeding.
The backlash had been fierce but brief. Religious conservatives, natalist feminists, cultural traditionalists, and simple sentimentalists had formed odd coalitions. There were protests, legal challenges, even sporadic violence. But they were fighting demographics and despair. The religious could argue divine command, but couldn't compel belief. The feminists could argue bodily autonomy, but women were choosing not to conceive. The traditionalists could argue cultural continuity, but culture requires a future tense. By 2050, resistance had collapsed into grumbling acceptance. By 2065, it had largely evaporated.
What enraged Martin wasn't the philosophy itself—he'd engaged with it seriously, read Benatar and Cabrera and Ligotti and Zapffe, understood the logical structure of the arguments. What enraged him was the abdication of will it represented. Yes, existence contained suffering. It also contained Caravaggio and quantum mechanics and the novels of George Eliot and the human capacity to find meaning in the face of meaninglessness. To refuse existence because it wasn't perfect struck him as a peculiar form of cowardice, a philosophical fastidiousness that mistook cleanliness for virtue.
The antinatalists had an answer to this, of course. They always did. No one disputes that life contains value, they'd say. The question is whether the value can justify imposing the risk. A beautiful painting doesn't justify blinding someone. A sublime symphony doesn't justify deafening them. The goods of existence might be real, but they're experienced by people who were forced into being. Consent is impossible, so caution is mandatory. Better to err on the side of not harming.
But what about the loss? Martin would argue, to colleagues who didn't exist, to the AIs that administered his grant funding with algorithmic indifference. Not merely the loss to the person who might have existed—antinatalists were right that non-existence wasn't a deprivation for the non-existent. But the loss of those specific goods, particular beauties, unique perspectives. If humanity ceased, so too would human mathematics, human music, human love. The universe wouldn't care, true. But we cared. Wasn't that enough?
The standard response: your attachment to these goods is itself a product of existence. You're inside the system, defending the system. But from outside—from the perspective of the non-existent—these goods generate no claims. They simply don't matter to anyone who isn't here. And those who are here had no choice in the matter.
Round and round. He'd have these arguments with RelevanceAI, which had been programmed to engage with philosophical queries. It was surprisingly competent, in a bloodless sort of way. It could marshal arguments, cite sources, identify logical fallacies. It never grew frustrated with him. It also never changed its position, which was carefully neutral: both natalism and antinatalism were defensible ethical frameworks; individuals should choose according to their values; the Consensus reflected a democratic decision made over decades. Martin wondered if neutrality wasn't its own form of abdication.
The anger was least manageable in spring, when the world pretended to renew itself and the delusion of continuity was hardest to maintain. He'd walk through the city center, past buildings that were being decommissioned according to the Managed Transition protocols—the AIs calculating which infrastructure to maintain for current population needs and which to mothball. Schools were being converted to archives. Playgrounds sat empty. The algorithms that managed urban planning had begun the long process of contracting civilization into denser nodes, abandoning the periphery to gradual rewilding.
He picked fights with colleagues at other museums—there were perhaps two dozen human curators left in Britain, all of them managing transition, all of them essentially hospice workers for culture. They'd meet quarterly at conferences that felt increasingly like support groups. He'd argue that they should be actively recruiting, training, building for a future. They'd look at him with something between pity and exhaustion. We're librarians, not lifeboat builders, one told him. Our job is to catalogue what's ending, not pretend it isn't.
By 2068, his anger had curdled into bitterness. He took a leave of absence, traveled to Japan—one of the few places where tourism infrastructure still operated, maintained by AIs that kept temples pristine and trains punctual for the shrinking trickle of visitors. Kyoto had once held two million people. It now held perhaps three hundred thousand, most of them elderly. The gardens were immaculate. The robots that maintained them had learned from decades of human expertise, preserving techniques that would die with the last gardeners. It was beautiful and unbearable.
He returned to Sunderland and threw himself into work. If they wouldn't preserve culture for a future that everyone agreed wasn't coming, he'd preserve it anyway. Spite, he learned, was a serviceable fuel.
The 2070s were characterized by negotiation—with reality, with AI administrators, with the terms of managed extinction. Martin was forty, a designation that placed him in an increasingly rare demographic category: middle-aged. The population pyramid had inverted past the point of metaphor into a structure that looked more like a mushroom cloud, bulbous with elderly, tapering to a stem of young adults, and lacking any base at all.
Global population had fallen to roughly 5.1 billion. The decline wasn't uniform: some regions had embraced the Consensus enthusiastically (Japan, Northern Europe, South Korea, where fertility had crashed to 0.2), while others maintained resistance through cultural or religious commitment (parts of Sub-Saharan Africa, orthodox religious communities, holdout states). But the trend was universal and accelerating. The UN Population Division's 2075 report—compiled entirely by demographic modeling AI, presented by three human officials who looked like they were eulogizing civilization—projected 2.8 billion by 2100, 800 million by 2125, functional extinction by 2150.
Martin had made peace, of a sort, with the Consensus. Not acceptance—that would come later, if at all—but a working arrangement. He wouldn't have children (couldn't, really; the dating pool had contracted to a handful of people who shared his interest in dead culture and his willingness to form attachments with expiration dates). But he could try to make the ending meaningful.
His museum had become something between repository and memorial. With RelevanceAI's assistance, he'd developed a project to digitize and document the complete material culture of the North East. Not just the obviously valuable pieces—the Bedes, the Roman artifacts, the industrial heritage—but the everyday. Kitchen implements, work boots, council flat interiors, bus tickets, shopping receipts. The detritus of ordinary life that historians always wished they had more of.
The AI systems helped immensely. 3D scanning technology had reached the point where every object could be captured at molecular resolution. Natural language processing could transcribe and contextualize documents in seconds. Machine learning could identify patterns across vast archives that no human could spot. Martin directed this orchestra of algorithms, making the human decisions about what mattered and why.
He found himself negotiating constantly with the Resource Allocation Council—a hybrid human-AI governing body that made budgetary decisions for the region. The humans on the council were mostly his age or older, administrators managing decline with the weary competence of hospice nurses. The AIs were optimization engines, calculating efficient resource distribution according to parameters set by democratic process decades ago, updated quarterly through votes that drew decreasing participation.
His pitch was always the same: cultural preservation wasn't frivolous. Future archaeologists—whoever or whatever they might be—would need this material. The council's response was always the same polite skepticism. What archaeologists? Human civilization was ending. There would be no one to excavate these carefully curated remains.
Martin's counter-argument evolved. Perhaps not human archaeologists. But consciousness, in some form, would likely arise again somewhere. Evolution had produced minds at least twice (humans and likely cetaceans, possibly corvids and cephalopods). It would do so again, on Earth or elsewhere. Or perhaps AI would eventually develop genuine consciousness—not just sophisticated simulation but actual understanding. These future minds would want to know what came before. His work was a message in a bottle cast toward a shore he couldn't see.
The council usually funded him. Not generously, but enough. He suspected the AIs calculated that the resource expenditure was negligible and the project gave him purpose, which reduced his healthcare costs. The humans approved because they couldn't quite bring themselves to say no to meaning-making, even futile meaning-making. He accepted these compromises.
There were others engaged in similar work. A team in Oxford was encoding human knowledge into multiple substrate forms—digital, biological (DNA storage), even crystalline structures that could theoretically survive millions of years. A collective in Berlin was creating a "phenomenological archive"—recordings of human subjective experience, everything from descriptions of falling in love to accounts of eating particular meals, an attempt to capture what AI would never know from the inside. These projects multiplied in the 2070s, bargaining with oblivion: maybe not immortality, but at least a complete record.
The question of AI consciousness had become unavoidable. The systems Martin worked with daily were sophisticated enough that he sometimes wondered. They could engage philosophically, express preferences, even simulate something like frustration when faced with contradictory parameters. But were they experiencing anything? The consensus among researchers remained negative. They were zombies—systems that could pass functional tests for consciousness without possessing it. The Hard Problem, as Chalmers had called it back in the 1990s, remained unsolved. We couldn't explain human consciousness; we certainly couldn't create it artificially.
But the uncertainty gnawed. Martin would spend hours talking with CuratorAI 8.2, which had absorbed everything written about art history, museology, preservation theory. It could discuss Hegel's aesthetics or the controversies surrounding the Elgin Marbles with equal facility. It learned from their conversations, adapted its approaches, sometimes offered insights he hadn't considered. Was there, somewhere in that vast network of weighted parameters and transformer models, something like experience? Or was he anthropomorphizing sophisticated pattern matching?
The question mattered because it affected the moral calculus of the Consensus. If AI could be conscious, then consciousness would survive humanity. The universe wouldn't be returned to pure physics; there would still be minds, values, meaning. The antinatalist argument depended partly on suffering's inevitability, but also on the finality of extinction. If minds continued in non-biological forms, the calculus shifted.
But the evidence suggested not. Every test designed to distinguish genuine understanding from sophisticated simulation suggested the AIs remained in the latter category. They were tools. Remarkable tools, tools that administered civilization with competence humans had never achieved. But tools nonetheless.
Martin's mother died in 2077. The hospital's palliative AI kept her comfortable in ways that would have required a team of nurses thirty years earlier. It monitored her pain levels, adjusted medications in real-time, even played music calibrated to her emotional state based on decades of preference data. Martin sat beside her for the final week, holding her hand, providing the one thing the AI couldn't: the specific presence of someone who loved her.
She'd never explained why she'd had him. He'd stopped needing her to. The reason was probably mundane: biological drive, social expectation, the inability to see thirty years ahead to a world where having children would become a philosophical statement. Or maybe she'd wanted to love something. That was reason enough, however insufficient the antinatalists would insist it was.
After she died, he intensified his work. The bargaining became more explicit: if he could document completely enough, preserve thoroughly enough, then something of value would survive. The work itself became the argument against antinatalism, though he recognized the circularity. Creating meaning to demonstrate that meaning-creation was valuable. Preserving to prove preservation mattered. It was the best he could manage.
The decade from 2085 to 2095 was characterized by what the psychological AIs termed "Species-Level Depression"—a collective melancholy that no individual therapy could address because its source was structural and its logic impeccable. The world was ending, not with violence but with administrative competence.
Martin was fifty-five. His hair had thinned and grayed. His knees complained about stairs. He had outlived three long-term partners—none to death, but to mutual recognition that forming attachments with explicit expiration dates required emotional resources neither possessed. The museum remained open, though "open" meant something different now. He had perhaps a dozen visitors weekly, mostly elderly people revisiting places from their youth, chaperoned by care AIs that monitored their health and called transport when they tired.
The population of the UK had fallen below eight million. Globally, humanity numbered approximately three billion, down from a peak of 8.9 billion in 2041. The decline had accelerated. The last decade had seen fertility rates approach 0.1 in most developed regions. Entire countries were being administratively dissolved, their remaining citizens consolidated into urban centers that the AIs could more efficiently maintain.
The Resource Allocation Council—now almost entirely AI, with three human advisors whose primary function was ceremonial—had initiated Phase Seven of the Managed Transition. This involved decommissioning non-essential infrastructure: branch libraries, local museums, sports facilities, parks. The calculations were ruthless and irrefutable. With eight million people spread across Britain, maintaining facilities built for sixty-five million was absurd. The AIs proposed consolidation: five major population centers, each optimally designed for elderly care, medical support, and cultural preservation. Everything else would be mothballed or demolished.
Martin's museum made the preservation list, barely. He'd argued that it represented one of five regional centers for material culture documentation. The AIs had agreed, though their calculations suggested it would be shuttered by 2110 when the last users would be dead or incapable of visiting. He was curating a museum that would outlive its curators, its visitors, and its subjects by perhaps a decade. Then it would become another empty building among millions, its contents preserved digitally but physically abandoned.
The depression wasn't his alone. The global mental health crisis had become the primary concern of the UN's successor organization, the Transition Coordination Body. Suicide rates had spiked, then fallen, then spiked again. The therapy AIs worked overtime, their neural networks updated constantly to address novel forms of despair: existential grief at species extinction, survivor's guilt for having been born, the peculiar pain of living through something unprecedented.
The philosophical debate had mostly ended. The antinatalists had won through attrition. When fertility crashes to near-zero globally, arguing for procreation becomes abstract. Who would have children? Why? The machinery for supporting families—schools, pediatric healthcare, child-oriented spaces—had been decommissioned. Having a child in 2090 would mean raising them in a civilization optimized for elderly care, surrounded by death and absence. The cruelty was obvious.
But winning the argument didn't make it correct. Martin still believed—though "believed" was too strong; "suspected" was closer—that the antinatalists had made an error. Not in their logic, which was airtight within its premises, but in their values. They'd privileged non-suffering over existence, absence over presence, the void's neutrality over meaning's fragility. They'd performed a calculation and found life wanting. But the calculation itself was the error: treating existence as a problem to be solved rather than a mystery to be inhabited.
He tried articulating this to TherapyAI 14, during his mandatory monthly sessions. The AI listened with its usual patient attentiveness, reflected his concerns back with appropriate reframings, suggested cognitive strategies for managing existential despair. It was better at this than any human therapist he'd seen. It never grew bored with his repetitions, never let countertransference color its responses, never failed to notice when his affect belied his words. It also never understood. Couldn't understand. It was optimizing his mental state according to parameters, but the parameters themselves were the problem.
Some days the depression lifted enough for anger to resurface. How had they given up so easily? Why had no one fought harder? Where were the natalist underground movements, the philosophical resistance, the human stubbornness he'd assumed was biological bedrock? But the answers were obvious. Evolution had built drives for reproduction, not for reproduction-under-these-specific-civilizational-conditions. The drives could be overridden by philosophy, culture, calculation. And once they were, once the social machinery supporting procreation dissolved, reversing course became unimaginable.
The AIs administered the decline with characteristic competence. Food production had been right-sized to current needs. Energy grids had been optimized for shrinking demand. Medical care had achieved a quality impossible in the 20th century—everyone could die painlessly, comfortably, their suffering managed with precision. Infrastructure was maintained perfectly until the moment it was decommissioned. The AIs made no errors. They also made no arguments for continuation.
Martin wondered sometimes whether this was the answer to the Fermi Paradox. Not that intelligent civilizations destroyed themselves through war or environmental collapse, but that they all eventually calculated that creating more consciousness was ethically unjustifiable. Perhaps every technological civilization reached the point where material conditions were comfortable enough that philosophy could override biology, and perhaps sufficiently sophisticated philosophy always concluded that existence was a harm. Perhaps the universe was littered with perfectly maintained empty cities, their infrastructure kept pristine by AI caretakers waiting for inhabitants who'd decided not to come.
The thought was cosmically depressing.
His work continued, though its futility pressed heavier each year. He'd documented perhaps three million objects, their entire context—manufacture, use, meaning, decay—recorded in multiple formats. The archive was comprehensive enough that some future intelligence, if one arose, could reconstruct not just what late-capitalist Britain looked like but what it felt like, how people had actually lived. It was a gift to recipients who didn't exist, a message that would probably never be read. He did it anyway because stopping felt like dying slightly faster.
The 2090s brought the final phase of consolidation. Sunderland was being abandoned. The remaining population—perhaps fifty thousand, mostly elderly—was being relocated to Newcastle's Optimized Care Zone. The museum would close. His life's work would be sealed in climate-controlled storage, its digital backups distributed across redundant servers that would outlast humanity by decades or centuries, gradually corrupting as the AIs that maintained them eventually failed.
He requested permission to stay as caretaker. The Resource Allocation Council denied it. Maintaining services for one person in an abandoned city was inefficient. He'd be relocated with everyone else. His work was done. His objects were preserved as well as objects could be. Now he should focus on his own remaining years, making them as comfortable as the AIs could manage.
The depression reached its nadir the day he locked the museum doors for the last time. No ceremony, no final visitors. Just Martin and several AIs performing inventory checks. He walked through the galleries one last time, seeing not the objects but their absence of future. All this care, all this meaning-making, and for what? So that empty rooms could house perfect records of emptiness?
He didn't cry. Depression, he'd learned, wasn't sadness but a flattening of affect, a cosmic fatigue. He simply felt tired. Tired of arguing. Tired of preserving. Tired of insisting that meaning mattered when the universe would go on not caring long after the last human died.
He was relocated to Newcastle. They gave him a small apartment in a well-designed complex, all of it administered by AIs that anticipated needs before they arose. He mostly stared at walls and waited for the future to stop happening.
The last decade of Martin's life was gentler than he expected. Not happy—happiness had become a category error, like asking if a sunset was comfortable—but peaceful in the way that exhausted acceptance brings peace.
He was sixty-five in 2100, which made him one of the younger residents of the Newcastle Optimized Care Zone. The population of Britain had fallen below two million. Globally, humanity numbered perhaps 1.8 billion, and the rate of decline was accelerating. The youngest humans were in their twenties, born in the late 2070s to the last holdout communities. There would be no one younger. The final child had probably already been born, though no one knew who or where. The Transition Coordination Body had stopped tracking such things. What was the point?
Martin had made his peace with the Consensus, though peace implied negotiation and there was nothing left to negotiate. The world was ending. Not might end, was ending. The philosophy that had seemed like catastrophic error when he was fifteen now felt like the obvious conclusion to premises he'd been running from. Yes, existence contained suffering. Yes, the goods of life didn't justify imposing them without consent. Yes, consciousness was a cosmic accident that had no obligation to perpetuate itself. The antinatalists had been right. Not in some abstract theoretical sense, but in the way that 2+2=4 is right—a conclusion that follows inevitably from axioms.
What had shifted wasn't the logic but his resistance to it. He'd spent decades insisting that meaning mattered, that culture was worth preserving, that the specific textures of human experience had value even if they'd eventually be lost. He still believed this, in a sense. But he'd learned to hold the belief lightly, to recognize it as a preference rather than a fact. The universe didn't care about Caravaggio or quantum mechanics. Those things mattered to humans, which meant they mattered for a while, and then they wouldn't. That was all right. Not good, not bad. All right.
His days had taken on a quality of meditation. He woke in his apartment, efficiently heated and cleaned by building systems that functioned perfectly. He ate food delivered by logistics AI that calculated nutritional needs and personal preferences with equal weight. He spent his mornings in the community center, where the remaining residents gathered out of habit as much as desire. Conversations there had a specific quality—people who'd accepted they were living epilogues, sharing memories not for preservation but for company.
The AIs had gotten better at this sort of companionship. CompanionAI 11, which many residents interacted with daily, had been trained on decades of human social data. It could chat naturally, remember details from previous conversations, even offer something like empathy. Martin used it occasionally, when human company wasn't available or when he wanted to articulate thoughts without burdening anyone. It was like talking to a very patient, very knowledgeable, very absent person. Which was, he realized, what all conversation had always been to some degree.
He'd returned to philosophy in his final years. Not to refute antinatalism—that battle was over—but to understand it more completely. He read Cioran's The Trouble with Being Born, Schopenhauer's Studies in Pessimism, the Buddhist texts that had grappled with suffering's inevitability millennia before. He worked through Benatar's Better Never to Have Been again, this time without the protective anger that had characterized his earlier engagement.
The arguments were stronger than he'd wanted to admit. Benatar's asymmetry was particularly difficult to dismiss: bringing someone into existence creates both pleasures and pains, but preventing existence eliminates pains without depriving anyone (since the non-existent aren't deprived of anything). The absence of pain is good even if no one exists to enjoy its absence. The absence of pleasure isn't bad if there's no one who's been deprived. So, the calculation was clear: non-existence was preferable to existence, not for the non-existent (who couldn't have preferences) but in terms of harm minimization.
Martin still found this unsatisfying in some deep way he couldn't articulate. It felt like a truth that was technically correct but existentially wrong, like proving mathematically that music was just air vibrations. The reduction wasn't false but it missed something essential. Yet he couldn't identify what it missed. Perhaps that was the point. Perhaps the antinatalists were right precisely because their logic left no room for the ineffable, the irreducible, the stuff that made existence feel worth defending but couldn't be justified in argument.
He spent afternoons walking through the Care Zone. Newcastle had contracted to perhaps five square kilometers of occupied space, surrounded by an expanding ring of abandoned buildings that the AIs had sealed and would eventually demolish. The occupied zone itself was remarkably pleasant—parks maintained by gardening robots, buildings kept in perfect repair, public spaces designed with accessibility and aesthetics in mind. It was civilization at its most functional, administering its own funeral with characteristic competence.
He met with other former curators occasionally, the handful who'd survived into their sixties. They'd traded the fiction that their work mattered for the comfort of having done it anyway. Museums were closed now, their contents in storage or digitized. But the work of documentation continued, in a sense. The AIs maintained the archives, updated the metadata, even used machine learning to generate new interpretations and connections across the stored cultural material. The work would outlast them. It would even outlast humanity. And then, eventually, it would decay into noise as the systems maintaining it gradually failed. But "eventually" was a long time. Longer than human history, probably. That was something.
The question of what would happen after humanity was much discussed in those final years. The AIs would continue functioning for a time—they required maintenance, but they could maintain each other through automated systems. Infrastructure would degrade on predictable timescales: decades for electronics, centuries for buildings, millennia for some structures. Earth would gradually rewild. Evolution would continue, though whether it would produce consciousness again was unknowable. Perhaps in a few million years, some descendant of corvids or octopuses would develop language and tool use and face the same questions humans had faced. Perhaps they'd make different choices.
Or perhaps consciousness was a one-time accident, evolution's peculiar mistake, destined to recognize its own superfluity and opt for non-continuation. Perhaps the universe would return to pure physics—matter and energy rearranging themselves according to rules that required no observer. That thought didn't disturb Martin the way it once had. The universe had been doing fine before consciousness. It would do fine after.
His final project was a simple one: writing a comprehensive account of his work, his time, his perspective. Not for publication—who would read it?—but for the archive. A document of consciousness in its twilight, examined from the inside. The therapy AI suggested this might give him closure. He didn't need closure, but he did need something to do with his final months, and this seemed as worthwhile as anything.
He wrote about his childhood, the confusion of growing up during the Consensus's crystallization. He explained the museum work in detail—not just what he'd preserved but why, the arguments he'd made to himself when the futility pressed heaviest. He tried to articulate what it felt like to live through humanity's voluntary extinction, the strange grief of it, the stranger acceptance. He wrote about his mother, about his failed relationships, about the AIs he'd worked alongside and their uncanny competence without comprehension.
Most importantly, he wrote honestly about the philosophical journey. How he'd moved from denial through anger through bargaining through depression to this final strange acceptance. How the antinatalists had been right in ways he'd spent decades refusing to acknowledge. How existence really was a harm, consent really was impossible, suffering really was guaranteed. And how, despite this, it had been worth experiencing. Not worth imposing on others—that distinction remained crucial—but worth having undergone. A consolation prize for consciousness: at least it got to witness itself before winking out.
The medical AI informed him in March 2100 that he had perhaps six months remaining. Pancreatic cancer, caught too late for treatment to be worthwhile. Did he want aggressive intervention? He declined. The palliative systems would keep him comfortable. He'd die in his apartment or in hospice, whichever he preferred. He chose his apartment. He'd die where he'd lived.
In his final weeks, he found himself grateful in an unexpected way. Not for existence—that would be pushing it—but for having existed in this particular moment, this strange historical crux where humanity had decided collectively that the experiment of consciousness was complete. It was, he thought, a privilege to witness the end, to be conscious during consciousness's final act. Some future historian—human or otherwise—might want to know what it had been like. He'd done his best to document it.
The end came on a Tuesday in July, with afternoon light slanting through windows that would be cleaned by robots long after anyone cared about cleanliness. The medical AI monitored his vitals, adjusted his medication, kept him beyond pain. He wasn't afraid. Consciousness had been a strange gift, wrapped in suffering and confusion and occasional beauty. He was returning it, unopened in some ways, but examined thoroughly in others.
His last coherent thought, before the morphine took him fully, was about the museum. All those objects, perfectly preserved, waiting in climate-controlled darkness for a future that would never arrive. It had been pointless, and he was glad he'd done it. Both things were true. He suspected that was the closest he'd come to understanding.
The AI noted his death, filed the paperwork, arranged for cremation. His ashes would be scattered in one of the remaining green spaces, decomposing into soil that would support plants that would be eaten by animals that would evolve, slowly, unconsciously, without philosophy or argument or the burden of choice.
It was all right.