Kyle Harrison

Boom: Bubbles & The End of Stagnation

Byrne Hobart, Tobias Huber
Read 2025

Key Takeaways

Under Consideration — to be added.

Interconnections

Under Consideration — to be added.

Highlights

  • On December 7, 1941, an unprepared United States was attacked at Pearl Harbor, crippling much of the US fleet. Less than four years later, the US deployed a new and unimaginably destructive weapon—a weapon that, just a few years before, had been considered science fiction. On May 25, 1961, John F. Kennedy promised that a man would walk on the Moon by the end of the decade. In July 1969, it happened. Many who witnessed this giant leap for mankind had been born before the invention of airplanes. These two events were outliers, but suggestive ones. They represent a century during which humans routinely invented and deployed new and transformative technologies at breakneck speed. By virtue of these technologies—which included the radio, plastics, mass-produced automobiles, indoor plumbing, television, and most major appliances—life near the end of the 20th century would have been barely recognizable to those who lived at its beginning.
  • By comparison, ours is an age of stagnation.
  • While our slick screens and the virtual worlds they depict suggest an era of technological abundance, progress in the world of atoms, not bits, is in freefall.
  • There are also worrying symptoms of scientific stagnation. Fields ranging from physics and neurobiology to cognitive psychology and quantitative finance have recently experienced a reproducibility crisis, wherein their findings have been impossible to replicate. Worse, multiple studies have shown that despite exponential increases in funding, nearly every scientific field is simply producing fewer breakthroughs. We’ve reached a point of diminishing returns, in which many areas of research produce noise but no signal and the cost of even minor innovations rises with no end in sight.
  • The philosopher Jean Baudrillard referred to this category as technologies of the “hyper-real”: simulacra that lack a referent and become substitutes for reality. In other words, instead of building the future, we are becoming better at developing increasingly realistic simulations of it.
  • Culturally, the trend of decelerating technological progress and the slowdown in economic growth is reflected in the rise of dystopian sci-fi, which feeds a nihilistic sentiment of existential malaise and civilizational doom.
  • In a series of case studies, we reverse-engineer how transformative progress arises from small groups with a unified vision, vast funding, and surprisingly poor accountability.
  • Across these disparate cases, we find that step-function improvements follow a J-shaped graph. To everyone but true believers, the initial stages of the endeavor look like wasted effort, but these early attempts ultimately deliver outsized gains.
  • Perhaps even more surprisingly, we find that technological breakthroughs and scientific megaprojects share an underlying dynamic with financial bubbles in one very specific sense: they coordinate behavior to build a complex future. Against the standard view in economics and finance, which holds that speculative financial bubbles are intrinsically negative phenomena, we develop a model of bubbles as innovation accelerators.
  • While conventional economic wisdom holds that bubbles are wasteful and full of delusion, over the following chapters we build the case that the opposite is true—that in fact, only innovation-accelerating bubbles can prevent the apocalypse.
  • Technological innovation is more driven by excess, exuberance, and irrationality than by cost-benefit analyses, rational calculation, and careful and deliberate planning. Reality-bending delusions are underrated drivers of techno-economic progress. In other words, a necessary enabling condition for technological progress, which ultimately fuels human flourishing, is what the Ancient Greeks called thymos, which often gets translated as “spiritedness”—a relentless drive to transcend the limitations of a listless present.
  • From the Dutch tulip mania of 1637 and the South Sea Bubble of 1720 through the great stock-market crash of October 1929, 1987’s Black Monday, and the 2007–2008 financial crisis, speculative bubbles and market crashes are seared in our collective memory. Yet not all bubbles are wealth- and value-destroying events. By generating positive feedback cycles of excessive enthusiasm and investment, certain financial bubbles mobilize the capital necessary to fund disruptive technologies at the frontier of innovation and accelerate breakthroughs in science, technology, and engineering. Crucially, such bubbles decouple investment from purely rational, backward-looking expectations of economic return, which correspondingly reduces risk aversion. Therein lies our escape from the Great Stagnation.
  • Meanwhile, we’ve developed an obsession with existential risks, from climate change to the rise of general artificial intelligence. In Silicon Valley in particular, AI safetyism has become so dominant that the obsession with alignment between humans and AI could, by inhibiting accelerated progress in the field, become an existential risk in itself. There’s a long list of examples of skepticism about technological progress, from the pseudo-environmental policy reactions against nuclear energy, to a backlash against fracking that implicitly treats some tons of CO₂ as worth tracking and others as worth ignoring, to fears that space exploration represents a form of escapism that denies (and saps funding from) more important terrestrial problems. These critiques demonstrate how the ideology of safetyism has itself become a civilizational danger.
  • Without trivializing recent technological breakthroughs, we believe more evidence is needed to support the hypothesis that we’ve returned to a trendline of accelerated progress. Systematic surveys across fields still support the idea that there has been a general decline in the rate of technological advance.
  • Financial bubbles, technological developments, and even the rise of governments and religions demonstrate that social systems are inherently reflexive—that predictions, like self-fulfilling prophecies, can affect or create the reality they try to predict.
  • It’s telling that right after the Moon landing the use of the word “progress” started to decline and use of the term “innovation” took off, reflecting a linguistic narrowing that refers almost exclusively to developments in software and information technologies.
  • Against the self-aggrandizing claims of “innovation”—often simply employed to sell a slightly improved beauty care formula, the latest software-as-a-service product, or any startup pitched as “X for Y”—we need to reclaim the meaning of innovation as a truly singular inflection point, a development that reorders the present and impacts the trajectory of the future.
  • ARPA–Energy’s attempts to pick a winner in so-called cleantech by subsidizing a select few startups (including the solar panel manufacturer Solyndra, which imploded spectacularly in 2011 after the cleantech bubble burst).
  • Over a decade ago, the visionary sci-fi writer Neal Stephenson lamented that “believing we have all the technology we’ll ever need, we seek to draw attention to its destructive side effects.”
  • the majority of the techno-scientific breakthroughs of the last hundred years—including nuclear energy, the space program, AI, genetic engineering, and Bitcoin—were, to a large extent, driven and shaped by deeply ideological, spiritual, and religious beliefs.
  • Innovation-accelerating bubbles—which emerge around a definite, optimistic vision of the future—induce meaning, as participants in a bubble share the conviction that their pursuit, while uncertain, promises to realize something that transcends the present. A bubble is therefore not simply a collective delusion but an expression of a future that is radically different from now.
  • Even when humans have achieved artificial general intelligence, escaped the limits of mortality, solved nuclear fusion, and colonized Mars, it will remain a moral imperative to warn about the civilizational dangers of stagnation and look past the present to construct a future that is radically better.
  • The increasing frequency and magnitude of financial bubbles since the 1970s can also be interpreted as a symptom of stagnation. The absence of techno-scientific progress, coupled with an abundance of capital enabled by experimental monetary policies and a general scarcity of vision, has fueled many of the bubbles we’ve witnessed in the past few decades. But, as we’ll show, these bubbles, which are largely driven by financialization, low yields, and the lack of productive investment opportunities, need to be distinguished from innovation-accelerating bubbles.
  • “Safetyism” could here be defined as the West’s cultural bias toward suppression of risk-taking and submission to a quasi-absolute form of safety that attempts to regulate and control every sphere of social interaction and techno-economic relation.
  • Benoît Godin, Innovation Contested: The Idea of Innovation Over the Centuries
  • Neal Stephenson, “Innovation Starvation,” Wired, October 27, 2011, https://www.wired.com/2011/10/stephenson-innovation-starvation/.
  • Futurists and science-fiction authors once prophesied an era of abundant energy due to nuclear fission, the arrival of full automation, the colonization of the solar system, the end of poverty, and the attainment of immortality. In contrast, futurists today ask questions about how soon and how catastrophically civilization will collapse.
  • Growth in US total factor productivity, which averaged almost 2 percent per year from 1920 to 1970, has averaged less than 1 percent per year since then. Based strictly on this measure, the pace of technological improvement has been cut in half.
  • The West has also become more nihilistic. Based on an analysis of the Google Books Ngram corpus, the use of terms related to progress and the future has decreased by about 25 percent since the 1960s, while those related to threats, risks, and worries have become several times more common.
  • Despite increased funding and the efforts of a growing population of researchers to churn out more papers than ever before, science is suffering from diminishing returns. Hyper-specialization and bureaucratization are slowing down scientific progress such that instead of scientific breakthroughs, we get more grant proposals and press releases. Science and technology advances both in leaps and steady increments, but right now, we’re getting too much refinement and not enough revolution.
  • In his 2011 book The Great Stagnation, economist Tyler Cowen identifies three pivotal developments that accelerated technological progress: the cultivation of land, the exploitation of 18th- and 19th-century scientific breakthroughs, and the increasing value of education. In his 2016 work The Rise and Fall of American Growth, economist Robert Gordon argues that the technological innovations that fueled economic growth from 1870 to 1970, such as electricity, sanitation, medical inventions, the internal combustion engine, and the telephone, are unrepeatable one-offs. In other words, the singularity has already happened—the fundamental transformations of the 19th and 20th centuries that unlocked nature’s secrets through scientific breakthroughs in physics, engineering, biology, and chemistry and enabled one-off productivity boosts were outliers. According to this view, the rate of progress is now slowing down and mean-reverting to historical norms. We can call this the natural theory of stagnation.
  • As economist Dietrich Vollrath writes in his 2020 book Fully Grown, “slow growth, it turns out, is the optimal response to massive economic success.”
  • A civilization that’s rich enough to enjoy resting on its laurels is also rich enough to invest in more growth.
  • What’s behind this collective risk intolerance? While there isn’t a single root cause, we can identify at least three intertwined factors that contribute to the risk aversion currently paralyzing the West. First is the onset of total financialization, which resulted from the termination of the Bretton Woods system in 1971.
  • Instead of saving and planning for the long term, investors were forced to hunt for constant returns, which spawned an ever-intensifying sequence of financially destructive and socially destabilizing bubbles and crashes. Thus risk aversion has had the perverse effect of subjecting economies to escalating systemic threats.
  • In 2019, there were more than 700 million people aged 65 and older around the world. This number is expected to double to 1.5 billion by 2050. Meanwhile, fertility rates are collapsing. The global total fertility rate declined from five children per woman in 1960 to 2.3 in 2020.
  • “The current state of reproductive affairs can’t continue much longer without threatening human survival,” warns leading epidemiologist Shanna H. Swan.
  • Declining testosterone levels, falling fertility rates, and a rapidly aging population have compounded to create a self-reinforcing feedback loop that reduces societal risk tolerance.
  • As the economist Julian Simon famously argued in his anti-Malthusian response to two bestselling and highly influential environmental doomer manifestos—Paul Ehrlich’s The Population Bomb and the Club of Rome’s The Limits to Growth—despite limited or finite physical resources, continuous population growth and human ingenuity represent the ultimate resource for sustained technological progress. In less technical terms: More children and more families lead to more creativity and more novel ideas.
  • Synchronous with the abandonment of Bretton Woods, the increase in bureaucratization—reflected in our ever-expanding corpus of tax and legal codes—coincides with the invention of word processing in the 1970s, which dissolved the human limitations of copying longform texts. Whereas the US Tax Code and the Code of Federal Regulations (CFR) were around 3.5 million words combined in the 1970s, the count has exploded to more than 10 million words in the intervening decades.
  • Greatness often doesn’t follow predefined milestones. Given that markets and technological systems are too complex and dynamic to fully quantify, top-down centralization is often doomed to fail, because information is more often distributed from the bottom up and at the edges of the system, as the canonical example of the free market demonstrates.
  • the Apollo program and the Manhattan Project were simultaneously decentralized and centralized, and benefited both from highly individualistic and collectivist cultures. But it’s vital to avoid at all costs the emergence of what philosopher Alexandre Kojève called the “universal and homogeneous state.”
  • It’s no wonder, then, that a recent paper found that the larger a scientific field gets, the more progress in the field slows down.
  • As a consequence, well-cited papers disproportionately attract even more citations, which decreases the probability that novel or disruptive papers will become highly cited. This dynamic is inextricably linked to the rise of hyper-bureaucratization. Hyper-bureaucratization, which appears in many domains, can be generalized as meta-Malthusianism: Success in any domain tends to make people repeat the process that worked before until they run into resource constraints.
  • This led to a sequence of spectacular bubbles, including the savings-and-loans crisis of the 1980s, the crash of October 1987, the bursting of the Japanese real estate and stock market bubbles in 1991, the emerging-markets bubbles and crashes of 1994 and 1997, the collapse of the US hedge fund Long-Term Capital Management in 1998, the implosion of the dot-com bubble in 2000, the 2007–2008 financial crisis (which was initially driven by a global real estate bubble), and, more recently, the financial turmoil that followed the global Covid-19 shutdowns.
  • Superficially, there seems to be no issue here given that the past five decades have seen enormous gains in stock values. But these stock gains haven’t translated into productivity gains. After 1971, the divergence between hyper-financialized markets and the real economy accelerated, ushering in an age of artificial wealth and value fueled by exploding debt and leverage. For example, the United States’ national debt increased from $427 billion in 1972 to more than $32 trillion in mid-2023. If interest rates and deficits don’t decline, it’s estimated that the cost of servicing America’s existing debt could reach $1 trillion by 2029, or 3.1 percent of projected GDP—more than will be spent on defense. And this is one of the more optimistic forecasts.
  • For instance, the Fed responded to the dot-com crash of the early 2000s by lowering the Federal Funds Rate from 6.5 percent in 2000 to 1 percent in 2003 and 2004. This decision contributed to the explosion of financial derivatives like mortgage-backed securities and collateralized-debt obligations, which in turn inflated the real-estate bubble that burst in 2007.
  • Instead of incentivizing risk-taking and long-term investment in the deep future, enabled by savings that are protected against monetary debasement, the future gets progressively discounted and investing turns into a present-oriented form of day trading. These higher time preferences manifest in the simultaneous abundance of capital and scarcity of vision.
  • What does accelerate progress is a concentration of effective people working on adjacent problems. Real growth is woven from specific threads.
  • Markets dominated by passive investing also reinforce concentrated ownership by the largest megafirms. Perversely, while based on the assumption that markets are highly efficient, the rise of passive investing makes markets less efficient and more sensitive to extreme financial events.
  • While “creative destruction” has become another empty buzzword, Schumpeter’s term refers not to an incrementally upgraded chatbot or social media app, as is commonly assumed, but to the violent destruction of entire industries, infrastructures, occupational categories, and financial systems. Creative destruction can render industrial capital instantly obsolete. These disruptive events of massive capital destruction, which are characteristic of technological revolutions, register as episodes of drastic destruction and social turmoil.
  • Consequently, “creative destruction without destruction,” “capitalism without bankruptcy,” and “risk without consequences” essentially amount to Christianity without Hell.
  • Financial markets have entered the age of the simulacrum, which substitutes the “signs of the real for the real.”
  • Spreadsheets and machine-learning algorithms, trained on what has come before, dictate what gets made based on what has worked in the past. Whereas the last century brought radically different and almost incommensurate phases in art, architecture, literature, and film, the past three decades have been characterized by a “recession of novelty” and a period of increasing homogeneity.
  • Virtual surrogates of risk-taking have replaced actual risk-taking. Empire-building is restricted to strategy games, romantic conquests are substituted with virtual-reality porn, and the drive for greatness and heroism is passively sublimated into the latest Marvel movie.
  • As the anthropologist David Graeber noted, instead of engineering the future depicted in science fiction, technologists have built devices that better simulate the future.
  • Instead of making a sacrifice, you can simply restart the simulation.
  • Immediately after the Moon landing in 1969, Woodstock happened, and the Space Age gave way to the New Age. This striking observation, by the venture capitalist Peter Thiel, represents the broader turn to subjective spirituality and interiority that took place in the 1970s and continues to this day. This shift coincided with a general movement toward escapism and a loss of ambition and hope for the future. The technological sublime that the Apollo mission represented was replaced by meditation, yoga, and individualistic forms of spirituality.
  • With no definitive vision to guide and structure action on an individual or societal level—with perpetual self-optimization replacing the hope for transcendent redemption and the promise of salvation—this shift toward interiority means there is no exit from an “eternal present.”
  • A large-scale study of 14 million works of literature published over the past 125 years in English, Spanish, and German found that over the last two decades, textual analogs of cognitive distortions, including disorders such as depression and anxiety, have surged well above historical levels—including during World Wars I and II—after declining or stabilizing for most of the 20th century. It is perhaps unsurprising, then, that the use of antidepressants and the practice of self-medicating via hallucinogens and pacifying drugs like cannabis are steadily on the rise.
  • Contrary to early techno-utopian visions and cypherpunk ideals, the internet, under the rule of an oligarchy of monopolistic tech firms, has become a conformity-generating machine of control.
  • A 15th-century visitor to Constantinople, the ancient capital of a dying empire, remarked that the citizens were not obsessed with the existential threats to their lives but with obscure theological debates: “If you ask a man to change money he will tell you how the Son differs from the Father. If you ask the price of a loaf he will argue that the Son is less than the Father. If you want to know if the bath is ready you are told that the Son was made out of nothing.” A visit to a modern college campus might produce a similar observation. Higher education is now the site of an irresistible drive to moralize and politicize everything, which in turn imposes self-censorship and a risk-averse culture. The polarization and partisanship that characterize politics today render every dimension of existence, from science to education to sexuality, into a zone of irresolvable ideological conflict, subjecting society to constant moralizing resentment. These domains are important, and just about everyone has an opinion on them. But social media makes it simultaneously easier to be exposed to the dumbest version of any opinion one disagrees with and harder to find reasonable arguments. In theory, everyone wants moderation, but in practice, people actually want a moderate-sounding voice that aligns with their views; it’s worth noting that both Vox and Fox News have used branding that is explicitly centrist but implicitly partisan.
  • Even before they get absorbed into the apparatus of higher education, many students have already been subjected to a clinical version of imposed conformity. Behavior that deviates from the normative structure of the education system becomes pathologized, often subsumed under an abstract diagnosis and treated with medication.
  • In contrast, optionality—often sustained by what Debord has called the “false choices offered by spectacular abundance”—conceals a deep risk intolerance. Instead of taking bold risks, it’s safer to follow the pre-programmed career trajectory that leads from Harvard Business School to McKinsey or a big tech firm.
  • The exponential growth in scientific knowledge, and the myriad technological innovations it has spawned over the past two centuries, has given rise to the expectation that scientific progress will continue to accelerate. Superficially, this remains the case—there have never been more journals, papers, and scientists. But a deeper analysis reveals that even science exhibits the symptoms of techno-economic stagnation.
  • The cost of developing novel drugs doubles every nine years, an observation referred to as Eroom’s law. In essence, the forces that govern scientific progress invert the dynamics that gave rise to Moore’s law in the semiconductor industry.
  • Scientific hyper-specialization is, to some extent, an inevitable consequence of the success of scientific progress. Due to the exponential accumulation of scientific knowledge over the past three centuries, specialization has become a practical necessity because it reduces the cognitive load that researchers in any given scientific field face.
  • Kuhn posits that scientific innovation progresses through revolutions, or “paradigm shifts.” In this framework, the steady and continuous process of scientific development—that is, normal science—is sporadically disrupted by scientific revolutions. These are catalyzed by the emergence of “anomalies,” which are incompatible with the existing paradigm that dominates normal science. A scientific revolution follows a crisis when a novel paradigm, which is “incommensurable” with the practice of normal science, supersedes the previous paradigm. For Kuhn, it is entirely possible that science could enter a state of eternal normalcy in which the scientific decoding of the “book of nature” has exhausted itself. In this scenario, there will be no further revolutions and revelations.
  • The projects that receive funding, get published, and attract citations are incremental “one to many” improvements rather than radically novel “zero to one” innovations. Why? Again, we can trace this slowdown in scientific progress to a system that stimulates and rewards risk aversion.
  • High-risk, exploratory science gets less attention and less funding because it is less certain to lead to publishable results. Reducing science to a popularity contest is a good way to ensure that breakthroughs never happen. When they do happen, it is often in defiance of risk-averse, citation-counting bureaucrats. For example, the discovery of clustered regularly interspaced short palindromic repeats, better known as CRISPR, began as an area of basic research, only later becoming the basis of a technology that can be used to edit genes. It took more than 20 years for the world to recognize CRISPR’s promise. For a long time, research on the subject didn’t attract many citations. As recently as 10 years ago, leading scientific journals rejected papers on CRISPR that would ultimately help win its discoverers the 2020 Nobel Prize in Chemistry. A major scientific breakthrough essentially occurred while no one was looking.
  • Investment in the exploration of radical ideas, which are often high-risk and without apparent and immediate application, is critical for enabling future breakthroughs.
  • A recent large-scale study that examined 1.8 billion citations in 90 million papers across 241 scientific subjects found that scientific progress slowed as scientific fields grew.
  • Specialization may be effective for reducing cognitive overload, but it also causes scientific expertise to become myopic. If you’re mastering a domain within a subfield of your discipline, how can you keep up with what’s going on outside your subfield, let alone outside your discipline? The consequent diminishing of the frontier marks a regression to epistemic nihilism, which abandons any attempt to construct a universal and synoptic knowledge of nature. After all, a project like that would never result in publication, citations, funding, or a tenure-track position.
  • A large study analyzing more than 244 million scholars who contributed to 241 million articles over the last two centuries found that as scientists age, they are less likely to disrupt the state of science. Instead, they become more resistant to novel ideas and are more likely to criticize emerging research.
  • The Faculty Workload Survey, for example, estimated in 2012 that researchers spend 42.3 percent of their time on “tasks related to research requirements (rather than actively conducting research).”
  • The rigid, hyper-regulated, and highly formalized bureaucracy that controls scientific funding not only degrades scientific performance but also reveals a deep bias against creativity, novelty, and risk-taking. For example, when He Jiankui, a Chinese biophysics researcher, announced in 2018 that he had created babies genetically edited to have HIV immunity, the global scientific community reacted with moral outrage. The eminent geneticist George Church, one of the few prominent scientists to defend He Jiankui, stated that the “most serious thing I’ve heard is that he didn’t do the paperwork right. He wouldn’t be the first person who got the paperwork wrong.” While this particular example is polarizing, ethically charged, and far from unambiguous, the extreme reaction can be understood as a symptom of the dominance of the “global bureaucratic empire of science,” which seems to privilege regulatory compliance over scientific novelty.
  • But another fundamental source of stasis might be the scientific method itself. Over the past decade and a half, we have witnessed the eruption of a reproducibility crisis, starting with the 2005 publication of John Ioannidis’s landmark paper “Why Most Published Research Findings Are False.” The paper showed that when the design and publication of a study is biased toward positive results—which is almost invariably the case—most of the results that get published are false. The culprit? “P-hacking,” whereby data is manipulated to make patterns appear statistically significant. The reproducibility crisis identified in Ioannidis’s paper has since been confirmed by myriad empirical studies. Almost every scientific field has been affected, from clinical trials in medicine to research in bioinformatics, neuroimaging, cognitive science, epidemiology, economics, political science, psychiatry, education, sociology, computer science, machine learning, and AI. But it’s not just the social sciences that are affected by the reproducibility crisis—even the so-called hard sciences are infected by it. Two of the most hyped results in physics, the supposed discoveries of primordial gravitational waves and superluminal neutrinos, were quietly retracted in the early 2010s.
  • While scientific journals now retract about 1,500 articles each year—up almost 40-fold since 2000—the number of replicable papers has not substantially increased.
  • The reproducibility crisis again has its roots in bureaucracy. The field’s obsession with citation-based metrics to measure scientific productivity has spawned a plethora of peer-reviewed journals, some of which are of low quality.
  • The scientific method itself has become a metaphysical abstraction that figures as an almost mystical source of epistemic authority—a process that is believed to automatically generate truth, understanding, and control.
  • The ritualistic quality of the scientific method is reflected in the cultural hegemony science has acquired over the past decade: We are told to “trust the science,” “believe the science,” “follow the science.” While it makes sense to trust science and engineering at a micro level—indeed, most of us do so daily—when “follow the science” becomes a hardened theology that no longer allows scientific discovery to progress, we no longer leave room for heterodox scientific theories to advance or for novel discoveries to emerge.
  • But this is quite separate from the politically charged cultural perception that science has a monopoly on truth—one either believes reflexively or is accused of being a denier.
  • This dogmatic view of the scientific method, which has been referred to as “scientism,” is at odds with a conception of science as an activity geared toward the radical unknown and the truly novel, which cannot be routinized, rationalized, or ritualized.
  • One of the most radical proposals is philosopher of science Paul Feyerabend’s appeal to “epistemic anarchy.” As Feyerabend argued in his 1975 book Against Method, “the events, procedures and results that constitute the sciences have no common structure.” For him, “the success of ‘science’ cannot be used as an argument for treating as yet unsolved problems in a standardized way.” Feyerabend argues that the history of science is so complex that it cannot be reduced to a general methodology; asserting a general method will inevitably inhibit scientific progress, as any unifying and static method would enforce restrictive conditions on new theories. Epistemic anarchy would therefore represent a radical alternative that might liberate us from the tyranny of the scientific method.
  • If the present is similar to the past, the possibility—at once frightening and liberating—remains that the epistemological and methodological foundations of what we call “science” might be less stable than they appear.
  • In April 2023, as SpaceX prepared the experimental launch of Starship, its flagship rocket, SpaceX CEO Elon Musk lamented the “soul-sucking” process of getting all of the safety reviews and requirements that the dozen or so regulatory agencies demanded. Holding his head, Musk said, “I’m trying to figure out how we get humanity to Mars with all this bullshit.” Later, he added, “This is how civilizations decline. They quit taking risks. And when they quit taking risks, their arteries harden. Every year there are more referees and fewer doers. That’s why America could no longer build things like high-speed rail or rockets that go to the Moon. When you’ve had success for too long, you lose the desire to take risks.”
  • The line comes from Bruce Gibney’s Founders Fund manifesto but is most often attributed to Peter Thiel. Bruce Gibney, “What Happened to the Future?” Founders Fund, January 1, 2017, https://foundersfund.com/the-future/.
  • Tyler Cowen, “The New Tesla Is Great, But It Isn’t Progress,” Bloomberg, August 1, 2017, https://www.bloomberg.com/view/articles/2017-08-01/the-new-tesla-is-great-but-it-isn-t-progress. Cowen notes that even if we move away from energy-dense fossil fuels, the building of solar panels, wind turbines, and hydroelectric plants and the production of electric vehicles and storage batteries, which use massive amounts of rare-earth elements and critical metals, can offset any carbon emissions savings. Short of developing radically novel energy sources (such as nuclear fusion) or other novel technological breakthroughs, large-scale decarbonization and the physical trade-offs it involves will require massive investment and innovation just to maintain the status quo or achieve incremental growth. See Vaclav Smil, Invention and Innovation: A Brief History of Hype and Failure (Cambridge, MA: MIT Press, 2023).
  • In fact, testosterone, which has been called the “fuel of exuberance,” has been identified as a key contributor to the trader behavior that leads to speculative market bubbles and crashes. See John M. Coates and Joe Herbert, “Endogenous Steroids and Financial Risk Taking on a London Trading Floor,” Proceedings of the National Academy of Sciences 105 (2008): 6167–72.
  • Jordan Castro, “Testocalypse,” Pirate Wires, September 24, 2023, https://www.piratewires.com/p/testosterone-emergency.
  • Jeffrey Lonsdale, “A Country for Old Men,” Clarium Capital Management Letter, July 2009.
  • Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk (New York: Wiley, 1996).
  • Kenneth O. Stanley and Joel Lehman, Why Greatness Cannot Be Planned: The Myth of the Objective (Cham: Springer, 2015).
  • Almost a century ago, the architect, philosopher, and futurist R. Buckminster Fuller referred to our ability to “do more and more with less and less until eventually you can do everything with nothing” as “ephemeralization.” R. Buckminster Fuller, Nine Chains to the Moon (Carbondale: Southern Illinois University Press, 1963), 272–3.
  • Interestingly, the onset of cultural exhaustion and stagnation appears to coincide with the termination of Bretton Woods. The increase of collective time preferences induced by the post-Bretton Woods economic regime might have also reduced cultural risk tolerance, resulting in the dominance of franchises. What gets funded is what has worked.
  • Jason Farago, “Why Culture Has Come to a Standstill,” The New York Times, October 10, 2023, https://www.nytimes.com/2023/10/10/magazine/stale-culture.html.
  • Relative to the past, in which the techno-economic singularity has already happened in the form of previous industrial revolutions, we might regress and become gradually more primitive. Baudrillard, “America After Utopia,” New Perspectives Quarterly 26, no. 4 (2009): 96–99.
  • The trend toward interiority and dopamine-inducing simulation—in 2021, a Chinese state media outlet referred to social media, porn, and video games as “spiritual opium”—is reflected in a survey that found that the majority of Western children aged 8 to 12 aspired to become social media influencers while their Chinese peers wished to become astronauts. Eric Berger, “American Kids Would Much Rather Be YouTubers Than Astronauts,” Ars Technica, July 16, 2019, https://arstechnica.com/science/2019/07/american-kids-would-much-rather-be-youtubers-than-astronauts.
  • According to a 2023 report, the average internet user spends over six hours online each day, and of that, over two hours on social media. Simon Kemp, “Digital 2023: Global Overview Report,” DataReportal, January 26, 2023, https://datareportal.com/reports/digital-2023-global-overview-report.
  • For Heidegger, the high-speed machinery of communication is the primary driver of the amalgamation of distance and the homogenization of difference. In his 1960 essay “The Thing,” he follows his critique of technology’s essence to the radical and provocative conclusion that mass media essentially poses the same threat as mass destruction from atomic weapons: “Man stares at what the explosion of the atom bomb could bring with it. He does not see that the atom bomb and its explosion are the mere final emission of what has long since taken place, has already happened… What is this helpless anxiety still waiting for, if the terrible has already happened?” Here, “the terrible” refers to the techno-economic and scientific uniformity that Heidegger identifies as a fundamental feature of modernity. Of course, he is not claiming that thermonuclear annihilation is morally equivalent to binge-watching Netflix. But we should understand that nuclear holocaust and doomscrolling on social media are metaphysically (“in essence”) the same. According to Heidegger, the calculative rationality of modern science, with its reductive drive for quantification, objectivization, and mechanization, “already had annihilated things as things long before the atom bomb exploded.” Heidegger, Poetry, Language, Thought, 166–70.
  • It’s important to note that prevailing orthodoxy is usually approximately correct. However, increased accuracy is impossible when the band of accepted debate is too narrow. Podcasts are not a great source of health care information relative to the CDC on average, but the niche podcasts obsessing over the novel coronavirus in January 2020 turned out to be much better sources than the establishment media outlets they challenged. The big question to ask in open debate is where the relative risks lie.
  • Social constructivism and epistemic relativism, which have started to dominate the social sciences and humanities over the past three decades, can result in a fear of knowledge, or “cognophobia,” as described by philosopher Paul Boghossian. Similarly, the culture of collective risk aversion and safetyism can be characterized in terms of a general “riskphobia.” Epistemic relativism and social constructivism threaten to demolish any stable points of reference or firm framework by undermining any form of authority, be it scientific or traditional. Boghossian, Fear of Knowledge: Against Relativism and Constructivism (Oxford: Oxford University Press, 2006).
  • The exponential increase in computing power captured by Moore’s law is sustained by an explosion of ever-increasing costs. Since the 1970s, research efforts focused on improving semiconductors have increased by a factor of 18, while productivity has decreased by the same factor. This finding implies that it’s now 18 times harder to sustain Moore’s law than it was half a century ago.
  • Indeed, a recent survey found that 81 percent of researchers would shift their research focus if they could deploy their grant money however they liked, while more than 62 percent said they would pursue work outside their field of specialization and against the norms of NIH. These findings speak to the ways in which the funding process often determines the trajectory of scientific research, even when it is at odds with the preferences and interests of researchers. Patrick Collison, Tyler Cowen, and Patrick Hsu, “What We Learned Doing Fast Grants,” Future, June 15, 2021, https://future.com/what-we-learned-doing-fast-grants/.
  • Not only is peer review costly—by one estimate, scientists collectively spend 15,000 years reviewing papers each year, an effort that cost $1.5 billion in 2020 for US-based researchers alone—but its benefits are not obvious. In fact, many of history’s most notable scientific breakthroughs were not peer reviewed, including Isaac Newton’s 1687 Principia Mathematica, Albert Einstein’s 1905 paper on relativity, and James Watson and Francis Crick’s 1953 Nature paper on the structure of DNA.
  • Seventy percent of peer reviewers for the prestigious British Medical Journal failed to reject a deliberately flawed paper that contained errors in research design, methodology, and data analysis and interpretation. Sara Schroter et al., “What Errors Do Peer Reviewers Detect, and Does Training Improve Their Ability to Detect Them?” Journal of the Royal Society of Medicine 101, no. 10 (October 1, 2008): 507–14.
  • The scientific method is not immune to politicization. A 2021 paper analyzing NSF grant abstracts across all scientific fields documented a large and consistent increase in terms related to identity politics over the last three decades. As of 2020, 30.4 percent of all grants had one of seven politicized terms, up from 2.9 percent in 1990. See Leif Rasmussen, “Increasing Politicization and Homogeneity in Scientific Funding: An Analysis of NSF Grants, 1990–2020,” Center for the Study of Partisanship and Ideology, November 16, 2021, https://www.cspicenter.com/p/increasing-politicization-and-homogeneity-in-scientific-funding-an-analysis-of-nsf-grants-1990-2020.
  • Long-term funding for individual researchers instead of short-term funding for discrete projects could accelerate the frequency of scientific breakthroughs. An increase in small grants for early-career scientists and less established research institutions—even grants to researchers outside academia—could stimulate more scientific risk-taking.
  • Literature-based discovery, or LBD, which uses ChatGPT-style models to analyze massive amounts of scientific data and literature, promises to unlock novel discoveries. In fact, it already has, for example in the case of the antibiotics halicin and abaucin, which MIT-affiliated teams identified in 2020 and 2023, respectively, using machine-learning algorithms trained on the chemical structures of thousands of known antibiotics. Rather than pushing the epistemic edges, these AI-guided systems tend to excel at interpolating between existing data, not extrapolating to the radically novel and unknown. But at the very least, this silicon-based science could help reduce the epistemic load for carbon-based scientists and let them catch up to the frontier faster. And while cyborgs have yet to take over the world’s research labs, by fully automating replications to verify published findings, AI could help solve the reproducibility crisis.
  • This might seem counterintuitive at first, since bubbles have funded all kinds of gadgets, fads, and technologies that faded into irrelevance after they burst. But bubbles are among the most powerful mechanisms we have to boost risk tolerance. They foster precisely the kinds of futuristic and optimistic visions we need in order to arrest and reverse stasis and decline. Under the right conditions, they are ideal exit ramps from the Great Stagnation.
  • Almost $6 trillion evaporated after the dot-com collapse in the early 2000s, and the global economic system crashed when the housing bubble burst in 2008.
  • The standard view in economics and finance holds that speculative financial bubbles form when unrealistic expectations about future cash flows decouple prices temporarily from fundamental valuations. In this view, popularized by a 19th-century study on crowd psychology, bubbles are the result of “popular delusions,” the “madness of crowds,” or “irrational exuberance.”
  • Yet not all bubbles destroy wealth and value. Some can be understood as important catalysts for techno-scientific progress. Most novel technology doesn’t just appear ex nihilo, entering the world fully formed and all at once. Rather, it builds on previous false starts, failures, iterations, and historical path dependencies.
  • By generating positive feedback cycles of enthusiasm and investment, bubbles can be net beneficial. Optimism can be a self-fulfilling prophecy. Speculation provides the massive financing needed to fund highly risky and exploratory projects; what appears in the short term to be excessive enthusiasm or just bad investing turns out to be essential for bootstrapping social and technological innovations.
  • Like bubbles, FOMO tends to have a bad reputation, but it’s sometimes a healthy instinct. After all, none of us wants to miss out on a once-in-a-lifetime chance to build the future.
  • In his 2007 book Pop, Daniel Gross runs through a list of historical bubbles, including telegraphs, railroads, dot-coms, and housing, that had undeniable upsides. Even if the earliest investors in transatlantic cable lost their shirts, cables did get laid, and the world grew more connected. Thanks to overbuilding, a quarter of the entire US railroad system was in bankruptcy by 1894, but the tracks were still there, and cheap transportation remained available. To this day, the US has some of the world’s best freight rail infrastructure thanks to what in the 19th century was excess capacity. And since trains are the most cost-effective way to move many goods over land, US consumers and companies continue to benefit from lower prices. Meanwhile, this cheap capacity led to some of the first national brands, and when globalization started accelerating in the 1970s, dominant American brands evolved into dominant global ones. All of this was made possible by a bubble.
  • Railway bubbles made time more granular, since a railroad’s timetable only works if people can show up at the station at a precise time. The Manhattan Project gave us nuclear power and a good reason to develop better rockets. And the Apollo program subsidized the early stages of the transistor, arguably leading to the modern computer revolution.
  • Bubbles literally add new dimensions to the way we view the world, and lead to more precise measurements within those dimensions.
  • The contrarian case for bubbles is imperfect, however. It’s a numerator without a denominator. Yes, bubbles have long-term benefits, but they also have upfront costs. Sometimes, those costs are tolerable—as it turns out, all that money AOL spent on mailing disks to prospective customers helped create a well-wired country that could support more sustainable internet businesses, even if AOL itself wasn’t one of them. But in other cases, the result of a bubble is pure malinvestment: empty subdivisions in cities where the only viable industries were building houses and selling mortgages, or renewable energy sources that were economically viable only because of subsidies. Therefore, we need to think more carefully about bubbles so that we can discern useful bubbles from destructive ones.
  • Looking at bubbles more closely, we can differentiate between two kinds of bubbles, although they are less distinct than they might initially seem. One kind, the classic speculative financial bubble, involves speculators egging one another on, pushing prices higher and higher, until the only justification for asset prices is the expectation that someone else will be willing to pay even more. The other kind of bubble is a filter bubble. Participants in filter bubbles wall themselves off from opinions they disagree with and become increasingly convinced that their viewpoints reflect the one true way to understand the world. Filter bubbles are generally seen as dangerous; indeed, they can be a source of conspiracy theories, misinformation, and pathological behavior. But they also represent a filtering out of the noise that can distract people from their mission.
  • Consider a more generous interpretation of the financial bubble as a bet on a particular version of the future, especially one that differs meaningfully from the present.
  • The sheer volume of information in the world means that we need some kind of filter bubble just to keep things coherent. The real question is whether we’re filtering out good information or bad.
  • Both kinds of bubbles—the classic speculative financial bubble and the filter bubble—can lead to good and bad outcomes. Which outcome arises often depends on whether the bubble in question is a mean-reversion bubble or an inflection bubble.
  • The most value-destructive phase of a mean-reversion bubble arrives at the moment when belief and reality have diverged yet belief is still able to drive perceptions of reality.
  • When enough people believe it, they affect the future, ensuring the future won’t look like the past. Prudent extrapolation is still valuable, of course—history is mean-reverting in many ways.
  • In an inflection-driven bubble, investors decide that the future will be meaningfully different from the past and trade accordingly. Amazon was not a better Barnes & Noble; it was a store with unlimited shelf space and the data necessary to make personalized recommendations to every reader. Yahoo wasn’t a bigger library; it was a directory and search engine that made online information accessible to anyone. Priceline didn’t want to be a travel agent; it aspired to change the way people bought everything, starting with plane tickets. If a mean-reversion bubble is about the numbers after the decimal point, an inflection bubble is about orders of magnitude. A website, a PC, a car, a smartphone—these aren’t 5 percent better than the nearest alternative. On some dimensions, they’re incomparably better.
  • When investors debated the merits of Uber, the question was whether the company would be a niche taxi service worth perhaps a billion dollars or a transportation revolution that would create and capture trillions in value. Because these visions of the future are so radically different, there is no middle ground.
  • Since there is no middle ground during an inflection bubble, prices are set by trades between the most optimistic investor and the most committed short seller. In theory, different market participants can reach a middle ground because they debate the probabilities—perhaps one investor thinks fully autonomous cars run by Uber have a 5 percent chance of happening and another thinks the odds are more like 10 percent. But in practice, bubble-driven parallelization strikes again: Many different things have to happen for the thesis to be correct, and their odds of happening are correlated because of the bubble dynamics. So the probability calculation ends up being that one side thinks five things have to go right and there’s a 90 percent chance that they happen, so the odds of success are about 60 percent, and another side thinks that there’s only a 10 percent chance that each of those five things will happen, putting the odds of all of them working out at 0.001 percent.
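    The arithmetic in this highlight can be sketched in a few lines (an illustrative snippet of my own, not from the book; the function name and framing are assumptions):

    ```python
    # Each side believes five correlated things must all go right, and assigns
    # the SAME per-step probability to every step, so the estimates compound.

    def compound_odds(per_step_probability: float, steps: int = 5) -> float:
        """Probability that all `steps` requirements go right, given a shared
        per-step probability (the correlated, bubble-driven case)."""
        return per_step_probability ** steps

    optimist = compound_odds(0.90)   # 0.9^5 ≈ 0.59, i.e. "about 60 percent"
    pessimist = compound_odds(0.10)  # 0.1^5 = 1e-05, i.e. "0.001 percent"

    print(f"Optimist:  {optimist:.2f}")   # prints 0.59
    print(f"Pessimist: {pessimist:.3%}")  # prints 0.001%
    ```

    Because each side compounds its own per-step belief, the two estimates end up four orders of magnitude apart, which is why the text says there is no middle ground.
    
    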
  • If the buyer is correct, then the seller is living in a fantasy about the past. If the seller is correct, then the buyer is living in a fantasy about the future. Either way, someone is living in fantasyland.
  • For most lenders, fantasyland is a place to avoid, since they merely earn interest on a success while being on the hook for all of the downsides in case of a failure. This makes inflection bubbles much less leverage-driven. Instead, they are fueled by equity. As a result, inflection bubbles tend not to cause major financial crises.
  • The fundamental utility of inflection bubbles comes from their role as coordinating mechanisms. When one group makes investments predicated on a particular vision of the future, it reduces the risk for others seeking to build parts of that vision. For instance, the existence of internet service providers and search engines made e-commerce sites a better idea; e-commerce sites then encouraged more ad-dependent business models that could profit from directing consumers. Ad-dependent businesses then created more free content, which gave the ISPs a better product to sell. Each sector grew as part of a virtuous circle.
  • Companies and founders have an incentive not to scare people with their audacity. A firm might have a product idea that, if realized, would change the world, but the world is sometimes reluctant to be changed. Microsoft used the internal slogan “A computer on every desk and in every home, running Microsoft software” early on (although it later abandoned the latter part of this dream—as it turns out, Microsoft and the Justice Department had a disagreement over whether Microsoft software should be running on every single computer). Google has also publicly distanced itself from the full scope of its ambition. Google’s mission, to “organize the world’s information and make it universally accessible and useful,” certainly isn’t modest. And yet the company aligns itself with inoffensive goals like providing search and mapping tools and laboriously scanning and digitizing every book in the world. What goes unstated is that Google also organizes information about who is interested in which products, information it uses to serve up targeted ads. If the goal were expressed as “Predict every thought that might eventually lead to a purchase decision,” Google would sound both greedy and dystopian.
  • In fact, one advantage of the bubble dynamic is that it can bring together people with wildly varied motives. Some are in it because of their commitment to a vision, others because they want the opportunity to do unfettered research. Some want to climb to the top of a hierarchy and view the bubble as an opportunity to skip a longer slog, while others just want a paycheck. A surprising number of early participants in the semiconductor industry moved to the West Coast for the weather. From the outside, and even from the inside, it can be hard to discern that something revolutionary is afoot.
  • Importantly, bad bubbles are usually not the result of wild speculation in nonsensical assets. It can seem this way because sometimes people do invest in nonsense and then lose out. But when such bubbles burst, only the investors get hurt. The price of Beanie Babies experienced a rapid ascent and decline in the 1990s, but the collapse was never that consequential in broader terms. It didn’t even appreciably hurt eBay, the platform on which the Beanie Babies pricing speculation occurred. Instead, the categories of activity prone to bad bubbles that result in the widespread destruction of value tend to be more commonplace because they have a clearly identifiable use. Conglomerates in the 1960s, leveraged buyouts in the 1980s, and mortgage-backed securities in the 2000s were all sources of utility, just not nearly enough to justify the level of economic activity they engendered.
  • The inverse pattern is that many good bubbles start out looking trivial because their unforeseen effects are hard to estimate. Even well-placed observers will ask, “People are really getting excited over this?” These dynamics create a challenging paradox in which the most productive bubbles are the ones that can readily be dismissed as either pointless or crazy, while the bubbles that serious people buy into are often the most destructive.
  • A classic symptom of bubbles, both good and bad, is the fear of missing out. As J. P. Morgan is alleged to have said, “Nothing so undermines your financial judgment as the sight of your neighbor getting rich.”
  • The inverse of FOMO is betting against bubbles. This approach has a mixed record. There are stories of people who correctly called the peak, but there are also plenty of people who placed their bets too early to make a profit. For example, one savvy fund manager put together a memo outlining the subprime bubble with the clever and memorable subtitle “A Home without Equity Is Just a Rental with Debt.” The fund manager was eventually proven right, but he wrote his memo in 2001, in the earliest stages of the bubble. Betting against the housing market at that point would merely bring years of pain.
  • If a bubble excites speculators but not entrepreneurs, it will bid up assets without building anything. If a bubble only convinces founders to act, it will be starved for capital. All of these people need to participate in the bubble at the same time, and FOMO can bring them together.
  • In the 1980s and 1990s, a group of technology boosters in Congress was dubbed the “Atari Democrats”—a mildly pejorative nickname, given that Atari had been an expensive failure after its promising early years. But the politicians embraced the label, strongly signaling they believed in what it signified. They were prescient.
  • In the 1990s, some of the Atari Democrats persuaded their colleagues to give internet companies a safe harbor in the form of Section 230, which protected them from any liabilities related to content users posted on their sites. The provision made the internet something closer to an open forum than a traditional media operation that happened to be accessible via modems. Intentionally or not, this openness is now a defining feature of the web.
  • For example, consider how Paul Graham explained Yahoo’s valuation in 1999: By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were excited about the internet. One reason they were excited was Yahoo’s revenue growth. So they invested in new internet startups. The startups then used the money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth for Yahoo, and further convinced investors the internet was worth investing in. When I realized this one day, sitting in my cubicle, I jumped up like Archimedes in his bathtub, except instead of “Eureka!” I was shouting “Sell!”
  • Or how Greg Lippmann’s subprime short thesis is described in Gregory Zuckerman’s 2009 book The Greatest Trade Ever: [Deutsche Bank quant Eugene] Xu split the country into quartiles. He discovered that states with the lowest rates of default, like California, Arizona, and Nevada, also claimed the highest growth in home prices. The quartile with the highest rates of default, on the other hand, had the slimmest growth in home prices. Florida and Georgia, for example, seemed similar in many ways, but Xu’s numbers showed Florida had a much lower rate of default than its northern neighbor, which seemed to be due solely to its soaring home prices… “Holy shit,” Lippmann exclaimed to Xu on Deutsche Bank’s trading floor while reading over his work, “if home prices stop going up, these guys are done.”
  • But not every instance of investor validation results in disaster. This is often the case when the validation is the product of parallel innovation in two industries. Some of the most productive inflection bubbles seem to function as a pair of complementary bubbles, with each justifying the other’s existence.
  • For either thesis to be right, both had to be. Cars could only become a ubiquitous means of transportation if gas was readily available throughout the country. Otherwise, a car was just a fancy toy.
  • Semiconductors and software involved a similar tandem-bubble cycle, with each generation of software justifying the next generation of chips. Still later, the glory days of ISPs as growth stocks lined up with the rise of publicly traded dot-coms. VCs who invested in e-commerce were indirectly subsidizing AOL and CompuServe, which were, in turn, indirectly subsidizing e-commerce.
  • One of the most impressive aspects of the Manhattan Project was its reliance on the assumption that other parts of the project would finish successfully.
  • Any researcher interested in nuclear weapons in, say, 1935 could have looked at the available information and concluded that such weapons were possible, or at least weren’t demonstrably impossible. But building any individual part in isolation would be worthless except for demonstration purposes or to confirm theories. It took a megaproject to make nuclear weapons a reality. As a species of inflection bubble, a megaproject accomplishes a set of tasks in parallel that would never be accomplished serially.
  • Or consider the proliferation of companies that provide various kinds of white-label delivery, from food and parcel delivery to managing cargo and tracking shipments. These companies require fixed investments that will only pay off if the e-commerce market keeps growing. Meanwhile, the e-commerce market keeps growing partly on the assumption that products ordered online will be delivered quickly and cheaply.
  • Every financial mania requires some suspension of disbelief and unshakeable faith that the idea at its core will pan out. More often than not, these delusions are more rational than they appear, if only in hindsight.
  • To make the envisioned future a reality, delusional ideas, ambitious people, companies, labs, hardware, and computer code all need to collide. Innovation clusters involve many people in the same industry working in close proximity—bouncing around ideas, attracting and poaching talent, raising capital, and figuring out management best practices. But a cluster is more than just a group of companies. It also encompasses universities, capital providers, service providers like law firms and accountants, and participants whose core function is to make useful introductions.
  • In healthy bubbles, participants are outsourcing their judgment about specifics in service of a general trend in which they have confidence. Meanwhile, others are outsourcing to them in the same way. A bubble participant is sacrificing some of their own autonomy, betting that the rest of the infrastructure they need will someday get built. At the same time, they’re performing the same role for someone else.
  • Scarcity thinking kicks off a self-reinforcing doom loop, which results in more scarcity.
  • According to René Girard’s theory of mimetic desire, recently popularized among the Silicon Valley set by Peter Thiel, our wants tend to be borrowed from other people. We want not what we desire on our own but what we think other people desire.
  • Instead of violently discharging mimetic tensions, bubbles channel the destructive mimetic dynamic into something productive and socially net positive. And since they represent the pursuit of something that has been visualized but not defined in its final form, direct mimicry is harder (except by joining the bubble).
  • In his book The Decline of the West, German historian Oswald Spengler contrasted an Apollonian culture, obsessed with the present, with a Faustian culture, which looks toward the infinite and the transcendent.
  • A bubble also provides its participants with a narrative arc. It has a beginning, full of excitement and promise; a middle, comprising a series of difficult challenges; and an end, in which the original promise of the bubble has been partly refuted and partly fulfilled, at which point participants must find a new vision to pursue. As William Blake writes, “The road of excess leads to the palace of wisdom.” That’s true of bubbles, too. The end result of the voracious process of information creation and assimilation the bubble entails is that participants exhaust most of the initial uncertainty, leaving behind a stable finished product, new knowledge, and hopefully more wisdom.
  • Whether the bubbles form in hard tech or in software, the same general principle is at work in all of them: The more capital, attention, and commitment an emerging technology or project has, the more it tends to attract.
  • Private equity, like subprime lending, can create value—among other things, working at a private equity firm is high status, while being an executive at the type of company these firms buy is fairly low status. (One can view the private equity industry as a giant conspiracy to trick Harvard MBAs into proudly managing ball-bearing factories, janitorial supply companies, and nursing homes.) But private equity doesn’t want to radically transform the business it acquires, and in one sense it can’t, since lenders want to lend against something real rather than something hypothetical. The business can be turned into a better version of itself but not into something completely new. This is part of the function private equity serves: finding companies that are cheap because they could be run better, then running them better before selling them off. But it’s hard to drive long-term economic growth purely through that process.
  • Franklin Roosevelt and his brain trust were self-confident enough to think they could evaluate the state of theoretical physics through state-directed research. In this case, they turned out to be right.
  • The Manhattan Project had all the trappings of a successful bubble: risk tolerance from institutions with immense capital and resources; a clear sense of urgency; concentrated talent, fostering an internal intellectual supply chain producing new cognitive tools for solving emerging problems; massive parallelization, with multiple institutions working toward the same goals; and the perception of competition—from Nazi Germany, the Soviet Union, and indirectly from other parts of the US war machine that could put the money and manpower toward more immediate goals. The Manhattan Project was also bubble-like in that it featured high uncertainty, plenty of waste, many mistakes, and divergent concerns. Some participants were motivated by scientific curiosity, others by geopolitics, but the project focused all of them on the same endpoint. And finally, like many other bubbles, the Manhattan Project had significant spillover effects, enabling new technologies quite apart from the project’s goal and leading to innovations upon innovations.
  • Like a twitchy day trader, FDR had to ask himself, “What do they know that I’ll wish I’d known?”
  • During the Manhattan Project, it’s probable that Los Alamos contained the largest concentration of brainpower ever assembled.
  • Oppenheimer represented one end of the Manhattan Project’s collection of geniuses: a cultured Europhile who came from a moneyed Manhattan family, he saw his role as both doing physics and organizing physicists. At the opposite end of the spectrum was Richard Feynman, born 25 miles and a world away in blue-collar Far Rockaway. If Oppenheimer was a force for calm and cooperation, Feynman was an endless source of novel ideas, brilliant contraptions, and practical jokes. Feynman contributed to both the rebellious spirit of the program (learning safecracking to demonstrate his attitude toward the military’s security theater) and its outcome (modeling the payloads of different bomb configurations).
  • What’s more, because it wasn’t obvious how to obtain sufficient quantities of either, the Manhattan Project launched multiple large-scale manufacturing facilities in parallel, knowing that at least some of them wouldn’t work or would be redundant. Such investments are characteristic of bubbles. Only when participants are all in, superbly capitalized, and certain of the value to be realized are they able to countenance such wastefulness.
  • The parallel approach to uranium enrichment increased the risk that the amount of money, time, and material wasted on fruitless efforts would be significant. But it lowered the more important risk that the bomb would not be completed at all.
  • In fact, the entire Manhattan Project could be modeled as one big computation. Everyone involved—scientists, engineers, administrators, construction workers, pilots—was trying to solve the same large and complicated math problem. That is: Given a specific mix of human and natural resources, how do you end a war as quickly as possible?
  • Spillovers like these, which long outlive the bubble itself, are a defining feature of innovation-accelerating bubbles. Formal futures markets were formed during the 17th-century Dutch tulip bubble. The railway networks that materialized during the British railway mania in the 1840s continue to move passengers and freight. Fiber-optic networks and Amazon survived the dot-com collapse of the early 2000s. Similarly, the Manhattan Project “bubble” burst in the sense that after the bombs detonated the program was wound down with the conclusion of the war, yet the resulting innovations remain critical to the way we live now.
  • In a provocative essay titled “Thank God for the Atom Bomb,” Paul Fussell estimates that—depending on the assumed scale of a Kamikaze-style Japanese counter-offensive—the two nuclear weapons saved between 1 and 3 million lives.
  • The success of Apollo speaks to what Peter Thiel has called “definite optimism,” or the belief that the future will be better than the present for specific, concrete reasons—as contrasted with “indefinite optimism,” the view that GDP will grow and standards of living will improve without the help of any specific change.
  • Yet the Apollo program was even larger than the Manhattan Project, with a price tag about 12 times higher.
  • Johnson made a powerful appeal, evoking America’s “race for survival.” As he put it, “control of space means control of the world.”
  • To return to Plato’s model of the soul, mentioned in Chapter 1, what was required wasn’t just logos (reason) but also thymos (spirit).
  • This is one of the hidden benefits of projects marked by extreme ambition: They attract the most dedicated, focused people.
  • It’s no wonder Gene Kranz, an Apollo 13 flight director, called “irrational exuberance” a prerequisite to space exploration.
  • As one NASA researcher notes, the “common goal” and “sense of purpose” underlying Apollo “prevented the development of bureaucratic sclerosis.” Apollo’s management structure was decentralized, flexible, and often informal, which enabled scientific and technical risk-taking, rapid cycles of testing and feedback, and a relentless focus on problem-solving.
  • In von Braun’s system of “Monday notes,” engineers and technicians were required to identify the most salient issues and submit a single-page note. After leaving comments in the margins, von Braun would circulate the entire annotated collection of notes within the organization. Through this informal system, everyone was able to tap into the organization’s collective knowledge and contribute solutions to each other’s problems. There was one level of centralization, with von Braun serving as the hub for information, but his role was really to highlight problems in a way that facilitated decentralized solutions.
  • In April 1970, when Apollo 13 was heading for the Moon, Joseph Shea, program manager for the command module and lunar module, required his direct reports to assemble a summary of everything they had accomplished and every outstanding decision that needed his input. These handwritten notes, which ran to hundreds of pages, were delivered weekly. Shea used this system to stay continuously updated and to ensure that anything running slightly behind schedule caught up in time.
  • In contrast to NASA’s later mantra—“In God we trust. All others must bring data”—Kranz tried to systematically incorporate the gut feelings and hunches of his colleagues into the decision-making process.
  • By contrast, when the Challenger exploded in 1986, NASA was heavily reliant on data communication and visualization protocols, hierarchy, and a culture of conformity, all of which were implicated in the accident.
  • Houbolt, a mid-level engineer, wrote countless letters on the subject, which he was able to submit to the very top of the command chain. Notably, he was able to route around NASA’s hierarchies and disseminate his vision across its various facilities.
  • The protocol worked marvelously. By the end of its testing regimen, NASA’s engineers had identified no fewer than 14,000 anomalies and had explanations and fixes for all but 22 of them.
  • From the very beginning of spaceflight programs, there was an intense debate among NASA administrators, policymakers, scientists, and taxpayers about whether the massive investment was worth it. In response, NASA has emphasized an endless list of technological innovations its research helped facilitate, from CAT scans and MRIs to communications satellites and freeze-dried food. But Apollo’s most transformative technological contribution was the integrated circuit.
  • Indeed, NASA needed so many integrated circuits that, by the end of 1963, it was buying 60 percent of all integrated circuits manufactured in the US.
  • If the Singularity does arrive, it will be traceable back to the Apollo Guidance Computer aboard the Eagle, the lunar landing module.
  • Would integrated circuits have happened without Apollo? Most likely, given enough time, another use case would have prompted their discovery and broad-scale usage. But Apollo massively accelerated their development and diffusion. Moreover, Apollo set the pace for chip improvement, which kickstarted a recursive process: Because software was designed for higher-performance computers, chip manufacturers made better chips to meet that demand for computation, which encouraged the next generation of chip manufacturers to build still-faster chips, and so on.
  • Asked about the purpose of a manned Mars mission, von Braun once remarked that the question of “what we are going to do… once we get there” was a “weak point.” But such questions are missing one of the most important lessons from Apollo. Manned space flight needs to be evaluated less in terms of economic efficiency and more in terms of spiritual effectiveness.
  • In the 20th century, the experience of the technological sublime was a recurrent phenomenon in America—think of the interstate highway system, the Hoover Dam, the Manhattan skyline, the atomic bomb, the jet airplane, or the Golden Gate Bridge. After Apollo, that experience essentially vanished. Instead of a new opening upon an endless frontier, Apollo marked the advent of an era of diminishing expectations about the future and decreasing returns on technological progress. It signaled our entry into an age of deceleration, in which technological and scientific ambitions are constantly scaled down. Dreams about sparkling Moon dust died in the mud of Woodstock, which gave rise to more Earth-centric concerns.
  • Former NASA administrator Michael Griffin has compared the space program to medieval cathedrals, built to last an eternity.
  • Plato, The Republic. For an updated version and application of the Platonic idea of thymos in modern political theory, see Francis Fukuyama, The End of History and the Last Man (New York City: Simon and Schuster, 2006).
  • Notably, when Vanguard TV-3, the US response to Sputniks I and II, blew up on the launchpad in December 1957, the Soviet delegation at the UN informed the US that the USSR had a program to provide technical aid to underdeveloped countries, and that the US was welcome to take advantage. David Graeber, “Of Flying Cars and the Declining Rate of Profit,”
  • For the techno-optimist version of this prophecy, see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology
  • A few days after the Moon landing, Wernher von Braun prepared a plan for manned Mars missions, which included lunar-orbital space stations, Mars surface bases, and nuclear-powered Earth–Moon shuttles. The plan, which had the public support of the vice president and which von Braun presented to the US Senate, stated that ships for Mars would launch on November 12, 1981. While setting an ultra-ambitious deadline worked for Apollo, von Braun was skeptical that the US Senate would support it this time. A few decades later, Elon Musk became known for implementing a similar culture of almost-impossible deadlines for Mars colonization at SpaceX.
  • His observation became a reality because of the push and pull between scientific research, which enabled greater efficiency, and increasingly elaborate applications for chips, which required it. Moore’s law is perhaps the most compelling and enduring example of a two-sided bubble, wherein the expectation of progress in one domain spurs progress in another, which then propels growth in the first.
  • As Intel CEO Brian Krzanich noted in 2015, if cars had improved as fast as chips since 1971, we would be able to travel “300,000 miles per hour. You would get two million miles per gallon of gas, and all that for the mere cost of four cents.”
  • As the mysteries of physics were being studied at universities—and, later, at Los Alamos—the practical implications of novel physics advances were being tested by Bell Labs. As the R&D arm of AT&T, one of the country’s largest companies, Bell Labs had hired some of the world’s most talented physicists and put them to work on open-ended research projects. While the hope was that some of the work would help AT&T find new product lines or cheaper ways to offer what it already sold, Bell Labs was tolerant of work that had more theoretical than practical import.
  • The media reaction was subdued. The New York Times famously ran its transistor story on page 46, as part of a longer column titled “News of the Radio.” The transistor was an interesting gadget, a triumph of America’s long tradition of tinkering and its relatively new leadership in theoretical physics. But the prospect of smaller televisions, more reliable radios, lighter walkie-talkies, and perhaps a new kind of hearing aid did not strike the press as a transformative innovation. The applications for a new technology were hard to see because the use cases for existing products were defined by their limitations.
  • To convince manufacturers to use its new and improved chips, Fairchild created blueprints for transistorized consumer electronics—radios, televisions, clocks, calculators—and gave them to manufacturers for free. This approach successfully reduced risk for manufacturing companies and helped drive demand for Fairchild’s products.
  • The company was operating in an industry whose more risk-averse legacy participants weren’t as optimistic as it was, so it willed that optimism into existence by putting the R&D costs on its own profit-and-loss statement instead of expecting everyone else to.
  • This is a common feature of new technologies. The initial killer app may have little to do with the ultimate use case, but it provides enough demand to scale production (and convince the creators they’re on to something). The first use case for the steam engine, for example, was pumping water out of flooded mines, not transportation. The early model for the automobile and personal computer industries was hobbyists, who weren’t looking for a practical device but something that was fun to use. Airlines in the 1920s made their money hauling mail for the US government, not passengers or other forms of freight.
  • Fairchild and Intel pioneered some of the management techniques that would later become standard in the chip industry. For example, when there were competing options for performing a task, such as for a potential chip design or a production method, the companies would often assign independent teams to work on multiple models. Whichever one worked best would be implemented at scale. This created some waste and duplication, but it also meant that launches weren’t delayed; parallelizing the process ensured that something would be ready by the target launch date. This approach was similar to the Manhattan Project’s decision to pursue numerous sources of fissile material in case one of them didn’t work; if the goal was to deliver something that worked as quickly as possible (and if the consequences of success were extreme enough), waste was an acceptable price for making that happen.
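The parallelization logic running through this highlight (and the Manhattan Project's enrichment strategy) is probabilistic, and easy to make concrete. A minimal sketch — the failure rates below are hypothetical illustrations, not historical estimates:

```python
def prob_all_fail(failure_probs):
    """Probability that every one of several independent approaches fails."""
    p = 1.0
    for f in failure_probs:
        p *= f
    return p

# One approach with a 60% chance of failing outright,
# versus three independent approaches pursued in parallel:
single = prob_all_fail([0.6])              # 0.6   -> 40% chance of success
parallel = prob_all_fail([0.6, 0.6, 0.6])  # 0.216 -> ~78% chance of success
```

The waste is real — if every path succeeds, all but one was redundant — but the chance that the overall goal fails drops from 60 percent to about 22 percent, which is the trade both Groves and the chip companies were making.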
  • The history of linear extrapolations from recent trends includes many infamous failures. In 1798, Thomas Malthus observed that agricultural output grew linearly while populations grew exponentially, leading him to conclude that the future would be characterized by endless poverty and starvation. The Limits to Growth, a report commissioned by the Club of Rome in 1972, ran a more complicated set of extrapolations, projecting sudden declines in industrial output in the early 2000s, followed—again—by poverty and starvation. But one shining exception to the general rule against extrapolating recent historical data into the future is Gordon Moore’s 1965 paper, “Cramming More Components Onto Integrated Circuits.” Moore looked at the transistor density of four recent chips released in the prior three years and observed that it had doubled every year. He argued that this trend could continue for at least the next 10 years. A decade later, he revised his prediction, speculating that chip density would continue to double every two years.
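Moore's extrapolation is simple enough to reproduce. A sketch of the arithmetic, using a normalized starting density rather than his actual 1965 data:

```python
def projected_density(base, years, doubling_period):
    """Density after `years`, assuming one doubling per `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Moore's 1965 claim: doubling every year, for at least a decade.
original = projected_density(1, 10, 1)  # 1024x over ten years
# His 1975 revision: doubling every two years.
revised = projected_density(1, 10, 2)   # 32x over ten years
```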
  • Moore’s law represents a fundamental shift in the history of the chip. Until that point, the saga of the chip was a narrative propelled by technology. After Moore’s observation, it became a technology propelled by narrative.
  • In one sense, the 8080 was just another data point in Moore’s straightforward extrapolation. But in another sense it was a watershed moment, because the 8080 was the first chip powerful and affordable enough to power a personal computer. In response, MITS, an Albuquerque-based company, did exactly that. Its new computer, the Altair, didn’t look like much.
  • As the essayist and programmer Paul Graham has pointed out, Facebook and Microsoft were both founded not just by Harvard undergraduates but by Harvard undergraduates in January, during the school’s “reading period,” a time when students are, in theory, free to study for exams and, in practice, free to procrastinate. Harvard’s computing resources at the time included a powerful PDP-10, which Gates and Allen repurposed for their work.
  • Gates quickly grasped the power of the trend Moore had identified. As he put it in an interview with Playboy decades later: “When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.” Yet for Intel, Advanced Micro Devices (AMD), and other chip companies, firms like Microsoft made their jobs easier by ensuring there was always demand for the next generation of chips. In other words, Moore’s law was a coordination mechanism: If everyone in the industry thought it was true and acted as if it would stay true, then it was true, at least until processors reached physical limits that made further advances impossible. Moore’s law was a backward-looking observation that transformed into a self-fulfilling prophecy.
  • When capital costs were low and the primary customer was NASA, pure technological wizardry was rewarded. As chips made their way into more consumer applications and the cost of starting a fab rose, the industry began to reward strategic brilliance. It’s no coincidence that Intel’s Andy Grove wrote the canonical book on tech company strategy—nor is it a coincidence that he called it Only the Paranoid Survive.
  • At this point, the bubble started to bifurcate. Some companies were still equity financed and aimed for disproportionate profits with high risk, while a growing number were debt financed and tried to compete with manufacturers of existing products on cost and quality. This dynamic began to resemble a mean-reversion bubble model in which the expectation was that the world would demand the same products but in greater quantities and at lower costs.
  • Like the nematode worm, chips are now everywhere and invisible, just as Moore predicted. But without that prediction, there would have been no coordinating mechanism to move the chip industry forward so quickly. Instead, the talent and capital focused on the tech industry might have spread out across other fields, diluting its network effects. In this counterfactual, other industries might be a bit more productive today. But the overwhelming sums of money invested in tech, and the overwhelming number of talented people who opted to pursue careers in the industry, had a compounding effect. New tech startups had suppliers and customers who moved at a faster pace, and were pitching to investors who also understood and encouraged rapid growth.
  • In a very real sense, Moore’s prediction helped create the future it imagined.
  • The recycling of profits from one epoch-defining company into early-stage investment in the next generation of businesses has been an important force in the technology industry. Google was backed by money from Sun Microsystems, while Facebook received support from a cofounder of PayPal. Fairchild Camera also benefited from this cycle, since the founder’s father was an early executive at IBM.
  • Every technology cycle leaves behind a residue of solved problems that make the same iteration of an idea both more likely to succeed and more likely to face competition. The first e-commerce companies, for example, had to figure out how to accept payments online—eBay originally assessed fees on an honor system and asked customers to mail in checks—but now third-party services solve this problem. So solving payments is no longer something founders have to think about, but it’s also one less potential competitive advantage.
  • The relationship between what Michael Malone calls “the Intel trinity” in his 2014 book of the same name, especially between the more abrasive and focused Grove and the kind and charismatic Noyce, was sometimes fraught. Complementary skills often involve clashing personalities. One function of a company like Intel is that it forces people with varying traits and goals to collaborate on the same project. Just as an attractive force balances a repulsive force in an atom, such complementarity creates brand-new materials that couldn’t stably exist in any other format.
  • While no single factor can explain America’s mid-century growth, one key driver was the proliferation of the corporate research and development laboratory. From AT&T’s Bell Labs to Lockheed’s Skunk Works to Xerox PARC, the corporate R&D lab fostered innovation by creating bubbles dispersed across space and time.
  • Corporate research long predated this period, of course. In the late 19th century, a chemist at Carnegie Steel testing ores and other inputs discovered that steelmakers’ folk wisdom about the optimal ingredient mix and process did not match metallurgical facts—knowledge that enabled Carnegie to save money by buying cheaper, better iron ore. Edison Electric famously placed research at the core of its business; Thomas Edison held 1,093 patents, setting a record that wouldn’t be broken until the late 20th century. And Ford Motor continuously experimented with new designs in its early days, before settling on a few models it could mass-produce at unprecedented scale. But it was only in the 1930s that the corporate R&D lab was formalized and emerged as an important part of major corporations’ strategies.
  • Each of these innovative environments possessed elements of speculative and filter bubbles, enabling them to transform a vague vision of corporate destiny into a specific vision of the future.
  • Today, the huge benefits of the mid-century R&D bubble aren’t mentioned in many business books. When R&D case studies do appear in the literature, they often take the form of cautionary tales. A central reason for this is that the biggest successes were not captured by the companies themselves. Major AT&T and Xerox inventions indirectly created trillions of dollars in market value, but they didn’t generate outsize rewards for their shareholders. Another factor is that some of the influential work done by corporations during this period was shrouded in Cold War secrecy—by the time one could read about why Lockheed Martin was so successful, its glory days had long since passed. Finally, much of the research was buried inside large, blue-chip companies—the sorts of companies that merit a business book only when they make a major misstep.
  • In 1903, Du Pont opened its first R&D lab, the Experimental Station. In the decades to come, the Experimental Station and the company’s other labs created novel substances with seemingly magical traits. The whole universe of useful materials was transformed as a result. Before the advent of plastics and synthetic fibers, humanity primarily obtained materials by digging them out of the ground or manipulating animal and plant products. Those materials were more or less unchanged as end products. A wool sweater is still recognizable as the processed fleece of a sheep; polyester, by contrast, starts with crude oil, air, and water, but looks nothing like any of them.
  • In addition to a good idea, bubbles depend on knowledge and talent. It also helps if there is a market in which to sell those innovations.
  • Much as the Manhattan Project would do several years later, Du Pont was able to capture significant talent from overseas and leapfrog its competition.
  • Du Pont’s status as an R&D powerhouse lasted until the 1960s, after which it gradually declined. One significant challenge was that homegrown innovation began to undercut Du Pont’s own product lines. Back when the company was primarily competing with natural products, any novel synthetic meant that Du Pont could enter a new market. But as synthetic materials became more common, the development of a new material was increasingly likely to compete with something the company already sold. Du Pont continued to invent but picked low-hanging fruit, developing synthetics that could replace expensive materials rather than developing, say, better fibers for everyday use in cheap goods.
  • Following a bubble-like trajectory, executives set a goal as arbitrary as it was ambitious. In 1910, management decided AT&T would be able to offer cross-country phone calls before the 1915 World’s Fair in San Francisco.
    • Another advantage of a World’s Fair: it gave people an inspirational timeline to build toward.
  • From a public-relations perspective, it was far better for AT&T to report lower profits and spend a lot on R&D than to earn as much as its monopoly position would allow. Finally, the company could dream of a future date when antitrust enforcement would loosen, enabling it to either convert its science projects into profitable businesses or slash its research budget in service of greater profits—evincing definite optimism about the ability of heavy R&D spending to create novel, useful products, and indefinite optimism about future changes in government policy.
  • Meanwhile, the world at large benefited from important inventions produced by R&D centers like AT&T’s Bell Labs, which gave us the transistor, the C programming language, the Unix operating system, lasers, solar cells, and information theory.
  • Over time, Bell Labs remained a great place for dreamers and tinkerers but an increasingly frustrating one for doers. As it turned out, some theorists were interested in seeing their products get built and sold (especially if it involved stock options).
  • AT&T’s history demonstrates that a regulated monopoly, freed from some of the day-to-day concerns about profitability and strictly limited in its ability to pursue new lines of business, could still produce a series of exceptionally valuable discoveries. AT&T was essentially a government-backed theoretical research program with a slight tilt toward telecom applications, wrapped in a nominally private-sector business. During the period when the government was willing to tolerate the company as a monopoly in exchange for the benefits of good phone service and worthwhile research, AT&T generously funded its labs and allowed them to operate with wide latitude. The result was a single company that gave birth to multiple trillion-dollar industries.
  • The origins of Xerox’s Palo Alto Research Center (PARC) were different from those of AT&T’s Bell Labs, but the two institutions had similar trajectories. In both cases, the labs benefited from bubble-like dynamics, in which a relentless selection for optimism and creativity led small teams to build transformative projects. They also both developed innovations that ultimately benefited their competitors.
  • In 1945, engineer and inventor Vannevar Bush published “As We May Think,” a prescient essay that imagined a networked world where documents were stored on microfilm rather than paper. By the mid-1960s, computer stocks were growing popular.
  • PARC benefited from a glut of talent—a direct consequence of the winding down of the Apollo program, which led to job cuts at aerospace companies and a tougher job market for PhDs in hard-science fields. This provided PARC with access to a bumper crop of smart scientists. Meanwhile, Xerox’s continued profitability gave the group the budget necessary to explore ideas that might have long-term payoffs.
  • Given their focus on computing, it stands to reason that PARC’s researchers were heavily influenced by 1968’s “Mother of All Demos,” an event where Stanford engineer Douglas Engelbart showcased graphical user interfaces, a WYSIWYG text editor, and the computer mouse. Engelbart also described a proto-internet in which computers were networked by default.
  • Bubble-like dynamics are hard to predict not only because of the inherent uncertainty of future technologies but also because it is often unclear who will benefit financially. Put another way, the social benefits of corporate R&D activity exceed the private benefits—the bubble creates wealth not just for the people who build and fund new technologies but also for those who buy them or expand on them.
  • High inflation ushered in an era of higher interest rates. As a result, future profits were less important relative to immediate cost cutting. Since a company’s theoretical value is based on its future earnings discounted back to the present, an increase in the discount rate means profits from the distant future are less meaningful. Meanwhile, a premium is placed on immediate earnings. In the 1960s, a research-intensive company might have traded at 30 times its earnings or higher, meaning investors were primarily betting on future growth. By the late 1970s, the average large stock in the US traded at just seven times earnings, implying a very low value for distant, hypothetical profits and rewarding short-term earnings improvements over research breakthroughs.
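The discounting mechanism this highlight describes can be sketched in a few lines. The cash flow and rates below are hypothetical round numbers chosen to echo the 1960s-versus-late-1970s contrast:

```python
def present_value(cash_flow, years_out, rate):
    """Discount a single future cash flow back to today's dollars."""
    return cash_flow / (1 + rate) ** years_out

# $100 of profit expected 20 years from now:
pv_low_rates = present_value(100, 20, 0.03)   # ~$55 today
pv_high_rates = present_value(100, 20, 0.15)  # ~$6 today
```

At a 3 percent discount rate, a distant dollar keeps over half its value; at 15 percent it is nearly worthless — which is why high rates tilt executives toward immediate earnings and away from long-horizon research.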
  • Companies involved in these oligopolies enjoyed a good deal of stability and could afford to think about the long term. But with prices rising due to inflation, that stability disappeared. Companies were caught in a vise: either pass higher costs on to their customers and lose sales or cut costs. Some companies downsized or opened new factories in less union-friendly locations. Others were acquired in leveraged buyouts, which often involved ruthless cost-cutting, including lower funding for research. These circumstances promoted shareholder myopia: In a more liquid market for labor and corporate control, executives can only plan ahead if they’re able to continuously convince shareholders their efforts will pay off. To an extent, the very effort to invest in innovation became self-defeating.
  • As discussed in Chapter 1, the abundance of money in big tech over the last decade has coincided with a scarcity of ideas, as indicated by the billions of dollars the large tech firms have been amassing. But beyond antitrust law and regulation, recent technological advances like the theoretical breakthrough of transformer-based AI and the user-facing advancements represented by OpenAI’s ChatGPT and other large language model-based tools might be the most significant accelerants for another corporate R&D bubble. While they have been in the making for decades, the recent breakthroughs in generative AI achieved by upstarts like OpenAI, Stability AI, and Anthropic have disrupted big tech’s R&D complacency for now. Whereas R&D spending by Meta, Alphabet, Microsoft, and Amazon totaled $109 billion in 2019, these companies channeled $223 billion into R&D amid the AI boom in 2022. And they’re not just building their own tools but also partnering with or acquiring AI companies. (Antitrust is less complicated when a business is being bought for its potential to create new markets rather than its share of an existing one.)
  • Unless they’re deliberately kept secret, transformative corporate R&D projects always face an uphill climb from a skeptical public and worried competition. But that also induces the filter bubble phenomenon so necessary for transformative innovation. Some of the people working at OpenAI, Anthropic, and other AI companies feel like they’re part of a tiny minority who understand the implications of AI and therefore have a responsibility to make it happen first so they can make it happen right. It’s this kind of tension, between the fear of ending the world and the feeling of being the only one who can save it, that produces transformative technological change.
  • Tuxedo Park: A Wall Street Tycoon and the Secret Palace of Science that Changed the Course of World War II
  • Warren G. Bennis and Patricia Ward Biederman, Organizing Genius: The Secrets of Creative Collaboration (New York: Basic Books, 2007) and Richard W. Hamming, The Art of Doing Science and Engineering
  • Dan Wang, “Definite Optimism as Human Capital,” August 7, 2017, https://danwang.co/definite-optimism-as-human-capital/.
  • Michael A. Hiltzik, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age
  • On one level, it’s an almost magical substance, the product of inert materials buried deep under the Earth for millions of years, brought to the surface to power our world. But the task of unearthing it has been a story of science, engineering, and financial and physical risk-taking.
  • How have we been able to not just maintain but increase the pace of oil production? Fracking plays a central role. Like every bubble we’ve documented so far, fracking reveals how the risk-taking and ingenuity of a small group of speculators, engineers, and entrepreneurs can radically alter an industry’s entire trajectory.
  • One explorer in what is now Iran realized that the best places to look for oil were in locations where locals worshiped fire, as a spiritually significant eternal flame might indicate a geologically significant deposit of natural gas.
  • Prior to kerosene, one of the most common sources of artificial lighting was whale oil, refined from blubber. The whaling industry, composed of relatively small outfits in search of uncertain returns, was thoroughly decentralized. Voyages were funded by people who had previously worked in the industry and had retired to a less stressful life of allocating capital and counting their returns. Each project was a one-off event, although it might lead to repeat business between the same investor and crew. But the oil industry was organized somewhat differently. While there were many independent wildcat operations involved in oil exploration, the refining industry consolidated rapidly under John D. Rockefeller’s Standard Oil. Through aggressive pricing and acquisitions, Standard Oil was able to buy up or drive out most of its competitors. This gave the industry an interesting structure. Decentralized entrepreneurial exploration meant that any time drilling for oil became unusually profitable, new competitors would spring up, fueling a speculative bubble. In contrast, the refining portion of the industry was consolidated, which meant its future was being directed according to a coherent plan. Part of that plan was to end oil’s extreme price volatility.
  • For decades, fracking seemed promising but didn’t deliver much. It tended to be more expensive than other methods of extraction, and was mostly relegated to desperate producers with no other options. But the temptation still lingered. There were numerous wells that weren’t producing but which sat on top of untapped oil and gas reserves. To someone who believed that fracking would eventually work out, leases of frackable land looked like a hundred-dollar bill on the sidewalk, just waiting to be picked up. But the process of actually picking up that bill turned out to be challenging, both from an engineering perspective and a narrative one.
  • Given this history, energy investors are primed to be suspicious. As a result, energy entrepreneurship attracts good storytellers.
  • Entrepreneurial hypomania induced spending that wasn’t strictly justified at the time.
  • Cheaper energy is hard to notice in the short term, for two reasons. First, while energy is an input into just about every economic activity, it’s rarely the largest single cost. This means that cheaper energy shows up as a sort of pan-economic quality dividend, where everything is slightly more cost-effective than it otherwise would be. Second, even with the more responsive supply from frackers, energy prices remain volatile from day to day; high prices at the pump produce headlines, while a change in the compounded rate of oil price growth over decades doesn’t. And we certainly don’t have headlines about counterfactuals, like a world where oil and gas are more expensive and acquiring them requires more entanglements with volatile parts of the world. Ultimately, the fracking dividend made the world richer in energy, and made some of the frackers very wealthy indeed.
  • The finite nature of natural resources can do the work that legislation can’t in making climate-impacting energy extraction economically untenable. But the arc of progress is also a long series of unsustainable practices that eventually lead to more sustainable ones. Urbanization was a rolling health crisis until the advent of modern sanitation and medicine; artificial nitrogen fixation reached production scale just as the world was running out of its next-best fertilizer, millennia-old accumulations of guano on various small islands; coal averted deforestation in early industrial England. Every technological advancement couples finite natural resources with infinite human creativity, and pivots to a new finite input when the older one runs down.
  • The history of US involvement in the Middle East is long and depressing, and much of it can be explained by the fact that Saudi oil was the only way to keep the US economy functioning. Oil is a fairly fungible resource, but it’s fungible precisely because the US makes it so by protecting shipments around the world and protecting countries like Kuwait from invasions, as it did during the First Gulf War. A need for oil has repeatedly required the US government and American companies to deal with governments they’d prefer to avoid. Because of fracking, the best opportunity for energy switched from oil and gas that was inconveniently placed around the globe to oil that was inconveniently placed in the US’s own backyard.
  • At their core, all of the bubbles and technologies we’ve discussed so far share a fundamental narrative that influences public perception. The development of nuclear energy, for example, was shaped by perceptions of its promises and perils. In the decades following the discovery of radium in 1896, radiation was perceived as a form of “alchemy” and “transmutation” that could lead to utopian civilizational renewal. In the wake of Hiroshima and Chernobyl, nuclear energy became irreversibly linked with the dystopian imagery of “contamination,” “mutation,” and “destruction,” immortalized in the pop culture iconography of mutants and monsters like the zombies in 1968’s Night of the Living Dead and Godzilla. Growing fear of nuclear energy—which has, paradoxically, resulted in policies that have hindered the adoption of a lower-emission energy technology—illustrates how a shared narrative has the power to influence technological adoption and diffusion.
  • While memes sometimes produce irrational exuberance (Beanie Babies are perhaps the most notorious example), imitation in markets can be rational for individual investors faced with either too much or too little information. Rather than acting on private information or signals, investors often imitate the behavior of other investors, which can lead to so-called informational cascades or mimetic contagions. Memes, which compress information and evoke emotions, can encode valuable information for investors and speculators. The self-organizing dynamics of memes and narratives can therefore lead to spectacular busts and crashes.
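The informational-cascade dynamic has a standard sequential-choice toy model (in the spirit of Bikhchandani, Hirshleifer, and Welch; the parameters and majority threshold here are hypothetical simplifications):

```python
import random

def herd(n_agents: int = 100, p_correct: float = 0.6, seed: int = 0) -> list[int]:
    """Each agent privately receives a noisy signal about the true state (1),
    observes all earlier agents' actions, and follows the action majority;
    with no clear majority, the agent falls back on its private signal."""
    rng = random.Random(seed)
    actions: list[int] = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < p_correct else 0
        ups, downs = actions.count(1), actions.count(0)
        if ups > downs + 1:          # cascade: private signal is ignored
            actions.append(1)
        elif downs > ups + 1:
            actions.append(0)
        else:
            actions.append(signal)
    return actions
```

Once one action leads by two, every later agent imitates regardless of its own signal; the cascade is absorbing, so a few early (possibly wrong) moves can lock in market-wide behavior.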
  • For anyone financially or emotionally invested in clean or renewable energy, fracking is the obvious apotheosis of a bad bubble that envisions the future essentially as a continuation of the present—a world where extractive energy remains the norm. On its surface, it seems to be a pure example of a mean-reversion bubble fueled by sin instead of virtue. However, while the motivations to frack have been mostly profane—that is, purely profit-driven—the technique nevertheless realized an aspirational vision. Unintentionally or not, the fracking bubble helped the US achieve energy independence. This may not sound as impressive as sending a man to the Moon, but energy independence has similarly large-scale geopolitical effects. As recent trends toward deglobalization, industrial onshoring, and geopolitical fragmentation have made clear, oil is more than a barbarous relic of a fossil-powered past. “Black gold” remains a strategic geopolitical reserve that continues to dictate the rise and fall of the wealth of nations.
  • Scott Davis, Carter Copeland, and Rob Wertheimer, Lessons from the Titans: What Companies in the New Economy Can Learn from the Great Industrial Giants to Drive Sustainable Success
  • For a philosophical account of the moral case for fossil fuels, see Alex Epstein, Fossil Future: Why Global Human Flourishing Requires More Oil, Coal, and Natural Gas—Not Less
  • Robert Shiller, “Narrative Economics,” American Economic Review 107, no. 4 (2017), 967–1004.
  • Etymologically, fracking represents an “innovation” in its primordial sense. The philosopher Xenophon, who provides the first full-length discussion of innovation, uses the term kainotomia, meaning “opening new veins,” to refer to innovation, placing it within a concrete (physical) context. Godin, Innovation Contested, 21.
  • Answers can be found in Bitcoin’s bubble-like nature. When we say “bubble-like,” we’re not just referring to Bitcoin’s trajectory, the sort of developmental path that introduces and diffuses transformative technologies. That’s not the whole story. Bitcoin is a complex, multi-layered, and self-referential bubble. At the heart of the platform, in the code itself, the technology is a bubble machine. Its algorithm is structured to provide greater value, greater security, and greater network effects as adoption increases, so that adoption drives further adoption and commitment drives further commitment. Yes, speculation will lead to waves of price increases and crashes—this has always happened and will continue to happen. But with each correction, the technological utility of Bitcoin as a trusted keeper of value, and its effectiveness as an inflater of social bubbles, only increases. And while its USD-denominated price can drop in the short term, Bitcoin’s value, measured over its existence, has only increased, and every crash Bitcoin survives is a testament to the robustness of both the network and the cryptocurrency.
  • The typical story goes like this: They heard a bit about Bitcoin and then spent hours (or days, or weeks) reading everything they could about how the technology works. They opted into a Bitcoin-centric media diet and started making crypto-curious friends. In other words, they formed a social bubble. Then there’s the economic dimension: Bitcoin’s scarce supply, and the difficulty of coming up with a way to value it, leads to price spikes and crashes. These dimensions are intertwined, as each bubble converts more users into believers, and—as has been the case over the past decade—after each crash, the user base of committed believers is larger than before the preceding bubbles.
  • In other words: Bitcoin wasn’t entirely new. Strictly speaking, the most novel thing about the technology was that it succeeded where other projects had failed. This is often the nature of bubbles—their existence depends on a combination of serendipity and the self-reinforcing nature of the bubble itself. In 2010, Nakamoto acknowledged Bitcoin’s forebears, writing on the Bitcointalk.org forum that “Bitcoin is an implementation of Wei Dai’s b-money proposal on Cypherpunks in 1998 and Nick Szabo’s Bitgold proposal.”
  • Ordered in a linear sequence, these blocks give rise to what Nakamoto called a “time-chain,” which later became known as a blockchain. (It is a chain because each new, verified set of transactions references the previous block, going all the way back to the first block ever mined. Meanwhile, each transaction references the previous Bitcoin address that received the payment. It’s as if every dollar bill in circulation had a public history, starting at the US Mint and documenting everywhere the bill had been since then.)
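The chain structure described above can be illustrated with a minimal hash-linked sketch (a toy, not Bitcoin’s actual block format; the field names are hypothetical):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    prev_hash: str     # hash of the previous block: the link in the chain
    transactions: str  # stand-in for a verified set of transactions

    def block_hash(self) -> str:
        payload = (self.prev_hash + self.transactions).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = Block(prev_hash="0" * 64, transactions="coinbase")
block1 = Block(prev_hash=genesis.block_hash(), transactions="alice->bob")
block2 = Block(prev_hash=block1.block_hash(), transactions="bob->carol")

# Altering any earlier block changes every hash after it, so the whole
# history back to the genesis block stays auditable, like a dollar bill
# with a public provenance record.
```

Because each block commits to its predecessor’s hash, tampering with an old transaction invalidates every later block.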
  • Bitcoin is as much a social bubble as it is a financial bubble or a technological achievement.
  • Attracted by the prospect of higher returns, more investors follow suit, triggering feedback that fuels further technological development and growth. This is classic bubble behavior. When funding for a new technology is decoupled from rational expectations of economic return, collective risk aversion drops, propelling creativity. This was true for the canal bubble in the 1790s, the railway mania in the 1840s, the electrical utility boom of the 1920s, the proto-tech bubble of the 1960s, and, more recently, the dot-com bubble of the late 1990s. It’s also the case with Bitcoin.
  • Creativity can be expressed in several ways, including in the stories we tell ourselves about a new technology’s potential.
  • As Peter Thiel once remarked, “The best startups might be considered slightly less extreme kinds of cults. The biggest difference is that cults tend to be fanatically wrong about something important. People at a successful startup are fanatically right about something those outside it have missed.”
  • Like many important religious figures, Satoshi Nakamoto made a significant sacrifice before departing at a crucial early juncture—for his messianic, techno-libertarian vision of a decentralized alternative to fiat currencies and central banking, he sacrificed his estimated 1,148,800 Bitcoin, which never moved from the original wallet. Similarly, Christianity emerged around sacrificial narratives that spurred followers to take action for centuries to come. A similar pattern is witnessed in Livy’s History of Rome, in which Romulus disappears into a whirlwind; Livy speculates that he may have been murdered by the Senate or simply kidnapped by Mars. In more modern terms, John F. Kennedy’s assassination transformed support for the Apollo program into part of the national grieving process. There seems to be a selection effect for great founders who have an early impact, pass on specific ideas, and are then unavailable for further comment. An early departure cements their initial statements as canonical, causing subsequent work to focus on instantiating their ideals in the real world or accurately interpreting their views.
  • As the preceding chapters have shown, innovation-accelerating bubbles are characterized by definite optimism and the relentless lowering of the collective risk aversion of those involved. This results in extreme commitment and almost delusional belief, which coordinates an often small group of high-agency founders, researchers, and early adopters. By stimulating irrational risk-taking, which results in excessive investment and experimentation, bubbles massively parallelize innovation in economic clusters that occur in time rather than space. Until the bubble bursts, these self-fulfilling dynamics are constantly reinforced. Participants discard traditional risk-benefit analyses and valuation models because the risks of not participating in the bubble are perceived as greater than the bubble’s identifiable risks. Whether they form in technology, in markets, or in scientific megaprojects, bubbles channel our thymotic energies, ambitious visions, irrational exuberance, and economic desires toward the realization of a future that is radically different from the present.
  • What is the source of FOMO and YOLO, these fleeting feelings that move entire markets?
  • Beneath the spectacular boom and bust sequences and the emergence of category-defining and world-changing technologies, behind the utilitarian talk of “productivity growth” and “general-purpose technologies,” beyond the impersonal GDP and population statistics, bubbles reveal a far deeper spiritual and religious dimension.
  • The frenetic build-up of bubbles and their economically and psychologically violent collapse provide some of the purest examples of the virulent mimetic process, crystallizing fear, hope, hype, overconfidence, and underconfidence. Markets—and, by extension, bubbles—“channel the competitive spirit into constructive efforts instead of exacerbating it to the point of physical violence.”
  • Markets perform a similar function to religions in another, more literal sense. The translation of future cash flows into a present asset price is just another way of reconciling the demands of the eternal future with the here and now. Therefore, markets—these sublime machines that synthesize beliefs and aggregate them into prices—instantiate a secularized version of the sacred.
  • These cyclical swings in perception are so common that people talk about the “magazine cover indicator,” the observation that big trends often reverse soon after they get major media attention. Bull markets in stocks and oil, for example, were presaged by infamous magazine covers, including Business Week’s “The Death of Equities” in 1979 and The Economist’s “Drowning in Oil” in 1999.
  • A bubble typically starts with a radically divergent opinion or vision (differentiation), which, over time, becomes more mainstream (undifferentiation) as more people join in. Or, as the 19th-century writer and historian Thomas Carlyle put it, “Every new opinion, at its starting, is precisely in a minority of one.”
  • In sum, markets collectivize contagious mimetic desires and suppress their violent discharge. The market achieves a quasi-sacred status in our desacralized culture; it represents a transcendent absolute, irreducible and beyond human understanding and control. The forces of the market can only be interpreted but not truly known. We can try to extract meaning and quantitative signals, but the market is, as believers in the efficient-market hypothesis have long argued, fundamentally unknowable.
  • For example, singularitarianism—the techno-optimist view of runaway progress in AI—and technological doomerism—the belief that technological progress will inevitably result in civilizational or environmental collapse—both mistakenly assume that progress is an inevitable and automatic process and fail to recognize the role of human agency in technological innovation.
  • Conversely, a bubble can’t be understood without taking into account the agency of a few visionary individuals. Innovation-accelerating bubbles thus result from a confluence of micro-level human agency and macro-level catalysts, such as the competition between Nazi Germany and the US in the case of the Manhattan Project, the Cold War in the case of the Apollo program, or quantitative easing and bailouts in the case of Bitcoin.
  • From the outside, bubbles seem to instantiate a repeatable and predictable pattern: They start with a novel idea or core technology that attracts extreme commitment and excessive investment from early adopters, which results in a speculative mania that is often followed by a crash. From the inside, however, the bubble seems to bring about a leap forward in time. The introduction of a novel idea or the invention of a cutting-edge technology results in the conversion of new adopters and the idea or invention’s widespread diffusion—which has, according to believers, the potential to actualize the envisioned future, a sort of techno-scientific Kingdom of Heaven. It’s no coincidence that startups stereotypically describe their business as “changing the way we do X”; they’re only a coherent project if they can express some vision of how human behavior will be different if they succeed.
  • Indeed, writers like Tom Holland, in his book Dominion: How the Christian Revolution Remade the World, have pointed out just how much modern norms owe to Christianity, even if they’re not explicitly Christian.
  • Future and present collide, and the infinite potentialities of the future—“a time nexus… a boiling of possibilities,” as Frank Herbert eloquently put it in Dune—collapse into one definite actualization.
  • Like the time-traveling cyborgs in the Terminator franchise, every successful technology entrepreneur is, in a sense, a time traveler who arrives from the future and brings strange new devices (or, at least, a rough blueprint and an urgent desire to instantiate it) to the present.
  • As the case studies in Part II demonstrate, each technology has a dual nature: We can harness nuclear energy to produce an energy-abundant future or to eliminate humanity from the face of the Earth; social media can be used to share baby pictures with family members or to livestream a massacre. It’s this dual utility that gives rise to a technology’s redemptive or catastrophic potential. Ultimately, it all comes down to how we use it.
  • We opened this book by arguing that we are living in an age of techno-scientific stagnation. But, as Thiel puts it, “the actual truth is that there are many more secrets left to find, but they will yield only to relentless searchers.”
  • Similarly, Kevin Kelly, founding executive editor of Wired, refers to this autonomous process as the “technium.” He writes that the “emergent system of the technium—what we often mean by ‘technology’ with a capital T—has its own inherent agenda and urges, as does any large complex system, indeed, as does life itself… The technium is a superorganism of technology. It has its own force that it exerts. That force is part cultural (influenced by and influencing of humans), but it’s also partly non-human, partly indigenous to the physics of technology itself.”
  • Frank Herbert gives an arresting description of messianic prescience in his classic sci-fi work Dune: “The expenditure of energy that revealed what he saw, changed what he saw. And what he saw was a time nexus within this cave, a boiling of possibilities focused here, wherein the most minute action—the wink of an eye, a careless word, a misplaced grain of sand—moved a gigantic lever across the known universe. He saw violence with the outcome subject to so many variables that his slightest movement created vast shiftings in the pattern.”
  • Derek de Solla Price was one of the first to describe scientific progress in terms of super-exponential growth. Given this growth trajectory, he concluded that within a century scientific progress would reach “doomsday” and stagnation, since “all the apparently exponential laws of growth must ultimately be logistic.” De Solla Price, Little Science, Big Science
  • For Wernher von Braun, for example, the future space program needed defense “with philosophical if not religious zeal,” not Washington’s creeping obsession with cost-effectiveness. Neufeld, Von Braun, 452.
  • In Part II of this book, we offered both empirical and theoretical support for the idea that the combination of high-agency individuals and the capital, talent, and attention unlocked by bubbles accelerates innovation and enables world-changing technological advancements. If we’ve succeeded in convincing you of this point, the question naturally arises: What are the radically transformative technologies, projects, or ideas of today? Or, in more futuristic (apocalyptic, even) terms: How can we identify those technologies and bubbles through which the future breaks into the present?
  • Anarchism, the metaverse, widespread consumer adoption of Linux—some things are perennially on the cusp of greatness, at least according to their fans, without ever getting there.
  • Instead, most of us are implicitly betting our time. What we choose to study in school, what career paths we pursue, what we work on outside of work, and what companies we start are all bets about the future.
  • By extracting the defining characteristics of innovation-accelerating bubbles, we’ve identified five overarching traits shared by all of the technologies and megaprojects we’ve covered:
    • Definite optimism and constrained vision: Innovation-accelerating bubbles are driven by and organized around a sense of definite optimism or a constrained vision that motivates participants. Crucially, that vision involves a concrete and actionable plan to transition from the present to the future.
    • Strong social interactions between committed, overenthusiastic believers in a technological, scientific, or engineering project or idea are critical for a bubble in its formative stage.
    • From the atomic bomb and the Apollo program to fracking and AI, many transformative technologies generate intense backlash. The legacy of the Manhattan Project is highly debated, and more than half a century after Apollo, manned space missions are met with controversy not only about social costs and economic benefits but also around their inherent colonial impulse. The conflictual nature of bubbles can perhaps be best understood as a clash between the constrained and unconstrained visions of proponents and deniers, all of which involve different trade-offs and ideological priors. This dialectic itself proves energizing—participants are not merely proving themselves right but also proving their critics wrong.
    • Fear of missing out (FOMO) and you only live once (YOLO) dynamics: Exceptional enthusiasm leads to imitation and herding behavior. Fear of missing out, anticipation of regret or reward, and social signaling value all intensify the endorsement and commitment of those involved in the bubble, from entrepreneurs and investors to regulators and policymakers. This creates positive feedback loops wherein price appreciations and positive developments further justify future endorsement of the optimistic narrative supporting the bubble. This leads to the abandonment of best-practice models like standard risk-benefit analyses, which in turn reduces collective risk aversion. “FOMO” is usually invoked as a pejorative, but consider that in any given decade, a handful of companies make the difference between venture investing as a viable asset class and venture as an activity that produces T-bill-level returns with wild risks. By that logic, “missing out” is exactly what you should fear. Similarly, “You only live once” (YOLO)—a cliché that migrated to financial message boards like Reddit’s WallStreetBets in the early 2020s and helped coordinate life-changing speculative purchases of meme stocks—can be dismissed as the apex of irresponsibility. But beneath the veneer of financial nihilism, there’s a deeper truth to YOLO. Many innovation-accelerating bubbles ride on civilizational stakes. Many of the people involved in the Manhattan Project or SpaceX or contributing to Bitcoin or OpenAI’s codebases share the existential feeling of YOLO, only for them, it’s not a mortgage or a pension that hangs in…
  • Money, space, biology, AI, industrial manufacturing, and energy are all domains in which the dynamics of inflection bubbles appear to be unfolding and where future progress might occur. An especially striking example is the excessive risk-taking and over-investment in generative AI, and particularly how the race between upstarts like OpenAI and Anthropic and large incumbents like Microsoft, Google, and Meta has unleashed a wave of advancements, capabilities, and applications that are driving down inference costs. Or consider the renaissance in the design, prototyping, and deployment of nuclear reactors for both nuclear fission and fusion; attempts to pioneer new compute paradigms, such as quantum and thermodynamic computing, to engineer around compute bottlenecks induced by the slowing of Moore’s law; advances in AI-enabled robotics that could reindustrialize the West’s manufacturing base; agricultural and weather modification technologies that aim to reverse desertification and water scarcity; or methods of cellular reprogramming, like CRISPR, that can extend human life and health spans. All of these fields demonstrate definite optimism, hyperstitional reflexivity, and a combination of decentralized parallelization and centralized coordination. It would take many more pages to investigate these dynamics in detail, but we hope this book has succeeded in making their features self-evident.
  • This suggests that one way to identify future domains of technological progress is to identify their spiritual potential for transcendence.
  • As he stated in 1959, the proposed year for Project Adam’s first manned mission, “It is profoundly important for religious reasons that [mankind] travel to other worlds, other galaxies; for it may be Man’s destiny to assure immortality, not only of his race but even of the life spark itself.” Interestingly, more than half a century later, Elon Musk, echoing von Braun’s eschatological concern with humanity’s quest to shatter the celestial spheres, tweeted that “we must preserve the light of consciousness by becoming a spacefaring civilization.”
  • The conquest of space thus becomes humankind’s salvation. Or, as von Braun put it, “Here then is space travel’s most meaningful mission… On that future day when our satellite vessels are circling Earth; when men manning an orbital station can view our planet against the star-studded blackness of infinity as but a planet among planets; on that day, I say, fratricidal war will be banished from the star on which we live… humanity will then be prepared to enter the second phase of its long, hitherto only Tellurian history—the cosmic age.”
  • For von Braun, science and theology—faith and reason—are fundamentally compatible. “While science tries to learn more about the Creation, religion tries to better understand the Creator,” he wrote in a letter in 1971. “Speaking for myself, I can only say that the grandeur of the cosmos serves only to confirm my belief in the certainty of a Creator.”
  • George Mueller, the director of NASA’s manned spaceflight program, reflected Apollo’s apocalyptic and millenarian spirit when, following the first Moon landing, he asked, “Should we withdraw in fear from the next step, should we substitute temporary material welfare for spiritual adventure… ? Then will Man fall back from his destiny, the mighty surge of his achievement will be lost, and the confines of this planet will destroy him.”
  • Before Armstrong and Aldrin ventured out onto the lunar surface, Aldrin asked Mission Control for radio silence and proceeded to take communion, reading from John 15:5. “It was interesting to think,” he later remarked, “that the very first liquid ever poured on the Moon and the first food eaten there were communion elements.”
  • As James Fletcher, NASA administrator during Apollo missions 15, 16, and 17, put it, humans should explore space because of a “God-given desire,” characterizing space exploration as representing a “frontier of expanding knowledge and the progress of understanding about nature and, by extension, about divinity.”
  • While the vision of designing a thinking machine was initially aimed at replicating and emulating the human mind, it soon became an eschatological attempt to create a machine “superintelligence” that, by transcending humans’ biological limits, would herald the advent of a new, artificial form of life. These prophesied bodies and minds would be eternal, perfect, and immortal. The promise of AI, biotech, and robotics, in other words, mirrors the apocalyptic belief that God will resurrect the dead in purified bodies so that they can enter the Kingdom of God. But to do so, the bodies must be upgraded, as the apostle Paul suggests: “We will not all die but we will all be changed.”
    • Mormon Transhumanist Association
  • Even Alan Turing, one of the pioneers of modern computing, invoked the “transmigration of souls”—the transfer of souls to machines—which gave rise to the “spiritual machines” envisioned by futurist Ray Kurzweil half a century later. Other transhumanists attempt to achieve immortality with biohacking, cellular reprogramming, or cryonics, a technology to freeze the human body after death in hopes of a resurrection enabled by future scientific advancements.
  • Whether through cyborgs, robots, or software, these scientific fields converge on the perennial theme of the immortality of the soul. While the concept of an eternal soul seems to clash with the materialism of the natural sciences, AI historian Daniel Crevier argues that the apparent contradiction is compatible with the Judeo-Christian tradition. He notes, for example, that Isaiah 26:19 states that “your dead will come to life, their corpse will rise… the land of ghosts will give birth.” This implies that the mind or soul cannot exist completely divorced from the body.
  • For Sinsheimer, the discovery of DNA and the rise of genetic engineering enabled humans to unlock the sacred code of life itself. “When Galileo discovered that he could describe the motions of objects with simple mathematical formulas, he felt that he had discovered the language in which God created the universe,” he wrote. “Today we might say that we have discovered the language in which God created life.”
  • Similarly, the director of the Human Genome Project, Francis Collins, a devout Christian, stated that his “work of discovery” is a “form of worship.”
  • This transhumanist view is often coupled with a belief in the singularity, which—because it designates the historical moment at which machine intelligence transcends human intelligence and silicon and carbon-based life forms merge—marks, for believers, the inevitable historical end state toward which technological progress teleologically converges.
  • As one early software pioneer put it, cyberspace will open gates of the Heavenly City “of Revelations.” Therefore, as the postmodern philosopher of technology Paul Virilio concluded, “the research on cyberspace is a quest for God.” In transhumanism, postmodern rationalism collapses into ancient apocalypticism.
  • As early as 1908, a popular book titled The Interpretation of Radium suggested that harnessing nuclear power, which promised a limitless source of energy, could restore the Garden of Eden. This quasi-religious enthusiasm for radium coincided with a deep techno-optimism in the late 19th and early 20th centuries, which was premised on the assumption of infinite progress and envisioned a civilizational future powered by solar and nuclear energy. But the release of nuclear energy could also usher in damnation and annihilation, as Hiroshima and Chernobyl later exemplified. Nuclear energy became the purest form of apocalyptic technology.
  • For many involved in the Manhattan Project, especially Leo Szilard and Niels Bohr, the atomic bomb was “a weapon of death that might also redeem mankind.”
  • As was the case in the Apollo program and the Human Genome Project, many key figures of the Manhattan Project were motivated by a deep religious commitment. Not only did they envision nuclear energy as a means of salvation, but atomic weapons were also conceived of as the technological resolution of the biblical conflict between good and evil. Echoing medieval philosopher Roger Bacon, who argued that the Antichrist must be overcome by technological innovation, many involved in the development of nuclear weapons shared the millenarian view that Satan—that is, the Soviets, with their diabolical weapons—could only be confronted with atomic weapons in a final nuclear battle. Edward Teller, who co-invented the hydrogen bomb, exhibited a “religious dedication to thermonuclear weapons,” and many of his followers believed the Cold War arms race to be a holy cause. The Livermore Laboratory, with which Teller was affiliated and where many of his disciples and descendants designed the later generations of nuclear weapons, was described as “akin to a monastery” and “[isolated] from the world by high security as well as by a peculiar set of customs, shared experience, and private language.” In other words, they resembled a religious cult that envisioned itself to be participating in the apocalyptic battle against the Antichrist.
  • All of these supposedly secularized and hyper-rational technologies are deeply intertwined with notions of transcendence and spirituality. A trinity of technological breakthroughs—the beginning of the manned spaceflight program, the advent of artificial intelligence, and the discovery of the structure and function of DNA—all had transcendent significance for many of the key figures involved. The religious or spiritual impetus was, for many technologists and scientists, one of the main motivations to take the massive risks and make the sacrifices these projects often demanded. Such spiritual and religious energy is one of the pure manifestations of thymos, the ancient Greek concept of “spiritedness.”
  • Scarcity isn’t a built-in feature of our world; the universe produces more energy and offers more matter than we could ever desire to capture, convert, and consume. Against the dominant neo-Malthusian moral philosophy of scarcity, which assumes a static equilibrium state of nature that must be protected from humans’ exploitative tendencies and corrupting influences, we need to recognize that equilibrium is not, in fact, an ideal or achievable state but a segue to stasis, decline, and death.
  • Technological transcendence synthesizes the feats of the future with the myths of the past, giving rise to what could be called archeo-futuristic technology. We can see this techno-religiosity reflected in the mobile Russian Orthodox temples accompanying nuclear ICBMs, the Shinto priests blessing Lockheed’s F-35 stealth fighter jet, and the Buddhist talismans and Orthodox priests who consecrate everything from data centers to the International Space Station. Like medieval cathedrals, these technological monuments are reminders that something greater exists. Like the spires reaching toward the heavens, the rockets soaring into space provide an irreducible experience of the sublime.
  • In a 2018 talk, Peter Thiel asked, “What aspects of technology are actually charismatic? Where there is a good story—[a] story about technology making the world a better place. It needs to be real, needs to be a viable business, but at least [it needs to be] something that inspires people, motivates people in the company, and has a transcendent mission.” Another name for what Thiel here refers to as “charismatic technologies” is the sublime.
  • Art has often been defined in terms of the polarity between the sublime and mimesis. Here, we mean “mimesis” in the Aristotelian sense, meaning the simulation or representation of a “normal level” of reality. In contrast to mimesis, the sublime demarcates a moment of elevation that breaks with or ruptures normal reality.
  • Bubbles create the reality-distortion field that the transcendent mission requires. Only by embracing a radically different future with singular commitment and sacrifice, often in defiance of the many, can we release ourselves from stagnation, stasis, and decline. This is our transcendent mission.
  • The unifying sense of optimism needs to be definite and the vision constrained. If not, the future is perceived as so abstract and random that it can never be influenced. Without a plan that involves both first principles and trade-offs, we remain forever trapped in a baseless utopianism, because imagination alone is no substitute for action. In A Conflict of Visions, economist Thomas Sowell differentiates the concept of constrained vision—which emphasizes decentralized processes, tradition, empirical evidence, and trade-offs—from unconstrained vision, which rests on utopian idealism and prefers centralized top-down solutions that ignore trade-offs. In Zero to One, Peter Thiel offers a striking characterization of indefinite optimism: “Instead of working for years to build a new product, indefinite optimists rearrange already invented ones. Bankers make money by rearranging the capital structures of already existing companies. Lawyers resolve disputes over old things or help other people structure their affairs. And private equity investors and management consultants don’t start new businesses; they squeeze extra efficiency from old ones with incessant procedural optimizations.”
  • The concept of reflexivity, popularized by hedge fund manager George Soros, refers to the positive feedback loop between expectations and prices that drives market dynamics. The concept originated in the philosophy and sociology of science, most notably with the eminent philosopher of science Karl Popper, who was Soros’s professor. Relatedly, the term “self-fulfilling prophecy” was coined by Robert K. Merton, one of the most influential sociologists and philosophers of science of the last century.
  • The phrase, which originated in a 1966 Star Trek episode in which a humanoid species uses reality-distortion fields to create hyper-realistic illusions, was also used by an early Apple software engineer to describe Steve Jobs’s reality-bending charisma and management style. Andy Hertzfeld, “Reality Distortion Field,” Folklore.org, February 1981, https://www.folklore.org/StoryView.py?project=Macintosh&story=Reality_Distortion_Field.txt.
  • David Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention (New York: Knopf, 1997); Erik Davis, TechGnosis: Myth, Magic, and Mysticism in the Age of Information (Berkeley, CA: North Atlantic Books, 2015).
  • Frank J. Tipler, The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead (New York: Doubleday, 1994).
  • Even the chip industry has its spiritual dimension. Given the miraculous process involved in endowing sand with intelligence, it’s no wonder some have proclaimed that they saw the “face of God” etched into TSMC-produced semiconductors. Virginia Heffernan, “I Saw the Face of God in a Semiconductor Factory,” Wired, March 21, 2023, https://www.wired.com/story/i-saw-the-face-of-god-in-a-tsmc-factory/.
  • Alan Turing, “Computing Machinery and Intelligence,” in Computers and Thought, ed. Edward Feigenbaum (New York: McGraw-Hill, 1963), 21.
  • Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Penguin, 2000).
  • Ian G. Barbour, Religion and Science: Historical and Contemporary Issues (San Francisco: HarperCollins, 1997).
  • When asked whether he believes that God exists, the preeminent transhumanist Ray Kurzweil replied, “Not yet,” implying that the development of artificial general intelligence represents the active building toward a god. John Rennie, “The Immortal Ambitions of Ray Kurzweil: A Review of Transcendent Man,” Scientific American, February 15, 2011, https://www.scientificamerican.com/article/the-immortal-ambitions-of-ray-kurzweil/.
  • The eminent philosopher of media and self-described “apocalypticist” Marshall McLuhan condemns electronic media as the apotheosis of the Antichrist because it produces a “demonic simulacrum” of the mystical body of Christ. “Electric information environments being utterly ethereal foster the illusion of the world as spiritual substance. It is now a reasonable facsimile of the mystical body, a blatant manifestation of the Antichrist,” he writes. “After all, the Prince of this world is a very great electric engineer.” This claim is in stark contrast to McLuhan’s earlier proclamation that computer networks promise the creation of “a technologically engendered state of universal understanding and unity, a state of absorption in the Logos that could… create a perpetuity of collective harmony and peace.” Letters of Marshall McLuhan, eds. Matie Molinaro, Corinne McLuhan, and William Toye (Oxford: Oxford University Press, 1987); McLuhan, “The Playboy Interview: Marshall McLuhan,” Playboy, March 1969.
  • For an account of how Christianity influences Western culture, see Tom Holland, Dominion: The Making of the Western Mind (London: Hachette UK, 2019).
  • Whether it is artificial general intelligence, a miracle vaccine, or a cryptocurrency protocol, the worship of technology tends to culminate in what René Girard called “deviated transcendence,” or the “emergence of a false transcendence.” This idolization and mythologization of technology can, in both the accelerationist and decelerationist registers, induce a contemplative passivity toward the future, which can inspire fatalistic anticipations of catastrophe and redemption. Ratzinger, Introduction to Christianity, 203; Benoît Chantre, “The Steeple of Combray: From ‘Vertical’ to ‘Deviated’ Transcendence,” Religion & Literature 43, no. 3 (2011): 158–164.