The Age of Anxiety
What We Can Do About AI’s Gravest Risks
Anxiety has become culturally resonant in 2026. Many movies in the last several years (see “Uncut Gems”) seem to traffic less in suspense and shock than in sustained unease, the feeling that something is off, that events are accelerating, that no one is really in control.
That feels familiar to me. I walk around most days with the same sensation in the pit of my stomach, not because of a fictional character’s bad luck, but because humanity is racing into an AI future whose largest risks we are nowhere close to managing.
Daniel Roher’s soon-to-be-released documentary, The AI Doc: Or How I Became an Apocaloptimist, wrestles with that same sensation that something quite spectacularly consequential is happening. It is worth watching not because it offers tidy conclusions, but because it captures a talented filmmaker trying, in real time, to make sense of a rapidly changing world. In the end he asks the only question that really matters: is there anything useful to do?
That is what this essay is about. I want to describe the risks as clearly as I can and offer a few thoughts on what can be done. None of those ideas will be fully satisfying, because there are no satisfying answers here. But the absence of perfect answers is not an excuse for passivity. We still have to do what we can. What is more, doing what we can keeps us active, and that activity will inevitably lessen our anxiety. In part, that conviction is why I founded SAIF, where we fund and advise startups building AI guardrails.
AI is not coming. It is here. Increasingly it will be woven into nearly everything, which means its risks will be equally broad. Some of those risks are real but manageable. Others are far more serious, even existential. I think of the gravest among them as four horsemen:
X-Risk: the creation of superintelligent AI more capable than we are, pursuing goals we cannot reliably understand or constrain. In this scenario, we (permanently) lose control.
Engineered pandemics: AI dramatically lowering the barriers to designing or deploying catastrophic biological agents.
Authoritarian lock-in: states using AI to monitor, predict, and control human behavior at unprecedented scale and potentially with an inescapable permanence.
Social collapse: the breakdown of trust through deepfakes, persuasion systems, automated fraud, epistemic chaos, job loss, and AI-enabled political violence.
These are very different dangers. Each is large, complex, and hard even to think clearly about, much less solve. Still, the practical question remains: what can any of us do? The answers are incomplete, but they are not nonexistent. And those answers vary by what each of us brings to the table. Not everyone can fund or help AI safety startups, for example. But all of us have influence to bring to bear. All of us can do something.
Naturally, no one, not I and not the CEOs of the companies building the frontier AI models, can claim this will be sufficient to ward off the horsemen. The first honest response to AI anxiety must be that these risks are too large to be solved by personal lifestyle choices or better digital hygiene. No one is going to avert catastrophe by meditating more, deleting a few apps, or asking a chatbot to be nicer. These actions can be useful if they make us feel better, but the fact is that these are civilizational problems, born from a combustible mix of a new technology of extraordinary power, a time of institutional weakness, geopolitical rivalry, and, sadly, plain old human folly.
And yet, in the face of these sometimes overwhelming challenges, we are not powerless. We are citizens, parents, scientists, founders, teachers, investors, engineers, writers, voters. We help determine what gets built, what gets funded, what gets normalized, what gets regulated, who gets elected, and what gets resisted. Those choices, when multiplied by millions, will shape the future as much as Elon Musk’s whims.
The question is not whether any one of us can “solve AI.” We cannot. The question is whether we can help make the world harder to destroy and easier to defend. And perhaps, in a moment like this, that is the real test: did we do what we could with the leverage we had?
Most people will not work directly on alignment or interpretability or evaluations. But everyone can help shift the culture. We can stop treating speed as a virtue in itself. We can stop confusing capability with wisdom. We can stop rewarding the posture that says, in effect, “yes, this may destroy us, but think of the market opportunity.” A great deal of harm in the world begins when caution is mocked as cowardice and seriousness is dismissed as naiveté. We should not participate in that. Today, you can see this playing out in local and national politics. Let your representatives know how you feel about safety and the role the government should play in helping ensure we all remain safe. Sign the Pro-Human AI Declaration to let the world know where you stand.
Those with technical talent can choose to work on control, monitoring, robustness, and governance rather than simply on making the engines larger and faster. Those with capital can support the people and companies building guardrails rather than only the people tearing them down in the name of progress. And we should all insist, publicly and privately, that powerful systems be tested before they are trusted. This should not be controversial. We test bridges before driving over them. We test airplanes before filling them with passengers. The fact that we are even debating whether civilization-shaping AI should be subjected to meaningful oversight tells you something has gone badly wrong.
The second horseman, AI-enabled biological catastrophe, is in some ways more concrete, and therefore more actionable: the threat of pandemics has existed throughout history. If advanced AI systems lower the barriers to creating dangerous pathogens or helping bad actors navigate biology, then our answer cannot be a vague hope that decent people will prevail. We need defenses. Real ones. The world needs something like an immune system: vastly better screening of DNA synthesis, much better early detection, faster sequencing, broader monitoring, stronger public health infrastructure, better countermeasure platforms, and institutions that can actually respond at speed.
Here at least the path is visible, even if it is hard. Scientists can build the tools. Governments can set standards and fund preparedness. Philanthropists can underwrite the public goods that markets neglect. Investors can back companies building actual defensive capacity. Universities and labs can adopt norms that take dual-use risk seriously. And ordinary citizens can do something that sounds banal but matters immensely: they can support leaders and institutions that value competence, prevention, and public health capacity before the next emergency arrives, rather than after.
This matters because one of the most dangerous features of modern life is that we wait for proof in blood. We delay. We debate. We reassure ourselves. We ask whether the threat is really so bad, whether now is really the moment, whether perhaps this all sounds a bit alarmist. Then the thing happens, and suddenly everyone becomes a prophet of the obvious. We have seen this movie before. We should not insist on watching it again.
The third horseman, authoritarian lock-in, requires a different kind of defense because it is not primarily about extinction; it is about the permanent loss of human freedom. AI makes possible a form of surveillance and behavioral control that past tyrants could only dream of: every movement tracked, every communication filtered, every association mapped, every deviation flagged, every dissenter legible. Not gulags, perhaps. Something cleaner than that. More frictionless. More automated. More total.
The response to this cannot be merely technical. It has to be political, legal, and moral. We need bright lines around what democratic governments are permitted to do with these tools, and even brighter lines around what authoritarian governments should never be allowed to normalize without consequence. The recent clash between Anthropic and the U.S. Government is, at least in part, about precisely this danger: whether AI companies should be pressured to relax safeguards against uses such as mass domestic surveillance and autonomous weapons. We should be defending privacy, encryption, due process, freedom of thought, and limits on surveillance not because we are nostalgic for some earlier internet idealism, but because those protections may prove to be among the last barriers between open societies and digitally enforced obedience.
Even at the level of everyday life, there are choices to be made. We can become more suspicious of the tiny convenience for which we are perpetually being asked to surrender autonomy. We can oppose the normalization of ubiquitous biometric tracking. We can support institutions that defend civil liberties. We can teach our children that freedom includes not merely the right to speak, but the right not to be monitored all the time. We can resist the comforting fantasy that powerful tools will remain in benevolent hands forever. History suggests otherwise.
The final horseman may be the most diffuse and in some ways the most familiar: the disintegration of trust itself. A world in which images, voices, documents, identities, and narratives can all be manufactured at trivial cost and immediately disseminated on social media is a world in which reality itself becomes easier to contest. The damage here may not come as one dramatic event. It may come instead as erosion: a slow wearing away of shared belief, institutional legitimacy, and social cohesion until democratic life begins to feel impossible.
This, too, has technical dimensions. We need provenance systems, authentication tools, better fraud defenses, more resilient journalism, better election security, and platforms that stop subsidizing deception. But the problem is not only technical, because trust is not only technical. Trust is cultural. Moral. Civic. It lives in habits.
And so one of the most old-fashioned answers in this essay turns out to be one of the most important: be a better steward of truth. Resist emotional manipulation. Support institutions that are trying, however imperfectly, to uphold standards of evidence and verification. Teach younger people that virality and truth have never been the same thing, and now may be less related than ever. One practical habit matters more than ever: verify before amplifying. Check multiple sources. Compare claims against trustworthy reporting, primary documents, and, where useful, multiple AI tools. In an age of synthetic media, truth will increasingly require active maintenance.
None of this is glamorous. None of it will earn you likes. The work of holding a society together rarely feels cinematic. It is slow, unfashionable, and often frustrating. But once trust collapses, it is extraordinarily hard to rebuild.
Across all four horsemen, a few principles recur.
First: Act. In the face of anxiety, action is the right answer, but it must be serious, thoughtful, and real.
Second: Favor Defensive Acceleration. Direct your life, in whatever sphere you inhabit, toward defense rather than acceleration for its own sake. If you are building something, ask what it protects as well as what it enables. If you are funding something, ask not only whether it will win, but what victory would mean. If you have influence, use it to strengthen institutions rather than merely sneer at them. Every one of these risks gets worse in a world where the state is hollow, science is distrusted, journalism is broken, and public life is little more than spectacle.
Third: Speak. Do not be silent and do not be afraid. We must all cultivate the moral courage to reveal our concerns and fears. There will be enormous pressure in the years ahead to treat concern as unserious, restraint as anti-innovation, and precaution as weakness. It is not unusual to hear, even today, that concerns about safety are dangerous, anti-American, or anti-business. We should reject any such framing. The mature response to world-changing technology is neither panic nor boosterism: it is responsibility.
Fourth: Use Your Leverage. Remember that while we do not all have equal leverage, we all have some. A president has more power than a parent, a frontier model researcher more direct influence than a schoolteacher, a billionaire more reach than a voter. Of course. But that does not absolve anyone of the question. The question is always the same: what am I helping to normalize, to build, to fund, to tolerate, or to resist?
This is not a complete answer to AI anxiety. There is no complete answer. The fear is justified and the risks are real. Some of us may one day be in a position similar to that of Stanislav Petrov in 1983, when the Soviet lieutenant colonel saved the world from nuclear Armageddon. Most of us can at minimum make our voices heard. Anxiety is difficult to grapple with, but action, not despair, is the answer.
The point is not that each of us can save the world. The point is that the world will, in fact, be shaped by what millions of us decide to do next. The task, then, is not merely to feel the dread of this moment, though many of us do. It is to convert dread into seriousness, seriousness into work, and work into some non-zero chance that we not only survive, but someday thrive.

