
But What If We're Wrong?

Thinking About the Present As If It Were the Past

Paperback
$18.00 US
5.53"W x 8.24"H x 0.6"D   | 8 oz | 24 per carton
On sale Apr 25, 2017 | 288 Pages | ISBN 9780399184130
About

The tremendously well-received New York Times bestseller by cultural critic Chuck Klosterman, exploring the possibility that our currently held beliefs and assumptions about the world will eventually be proven wrong—now in paperback.

But What If We're Wrong? is a book of original, reported, interconnected pieces, which speculate on the likelihood that many universally accepted, deeply ingrained cultural and scientific beliefs will someday seem absurd. Covering a spectrum of objective and subjective topics, the book attempts to visualize present-day society the way it will be viewed in a distant future. Klosterman cites original interviews with a wide variety of thinkers and experts—including George Saunders, David Byrne, Jonathan Lethem, Alex Ross, Kathryn Schulz, Neil deGrasse Tyson, Brian Greene, Junot Díaz, Amanda Petrusich, Ryan Adams, Dan Carlin, Nick Bostrom, and Richard Linklater. Klosterman asks straightforward questions that are profound in their simplicity, and the answers he explores and integrates with his own analysis generate the most thought-provoking and propulsive book of his career.

Praise

“Full of intelligence and insights, as the author gleefully turns ideas upside down to better understand them. . . . This book will become a popular book club selection because it makes readers think. Replete with lots of nifty, whimsical footnotes, this clever, speculative book challenges our beliefs with jocularity and perspicacity.” —Kirkus (starred review)

“Klosterman conducts a series of intriguing thought experiments in this delightful new book . . . Klosterman’s trademark humor and unique curiosity propel the reader through the book. He remains one of the most insightful critics of pop culture writing today and this is his most thought-provoking and memorable book yet.” —Publishers Weekly (starred review)

“A spin class for the brain . . . Klosterman challenges readers to reexamine the stability of basic concepts, and in doing so broadens our perspectives . . . An engaging and entertaining workout for the mind led by one of today’s funniest and most thought-provoking writers.” —Library Journal (starred review)

“Klosterman is a joy to hang out with: He relishes the contradictions he examines while making complex ideas comprehensible. In this new world, though, his voids of certainty aren’t just exhilarating, but ominous.” —Ryan Vlastelica, A.V. Club (Favorite Books of 2016)

“But What If We’re Wrong? is a book about the big things we’re wrong about that don’t get discussed, just because everyone assumes they can never happen. That’s as true for culture as it is for science, and the uniquely intellectual and dexterous Klosterman dives in with verve. Bonus points for interviews with some fascinating—and stubborn—people in the process.” —Bloomberg Best Books of 2016, recommended by Ian Bremmer, President of Eurasia Group

“Klosterman is outlining the ideology of a contrarian here and reminding us of the important role that revisionism plays in cultural writing. What matters is the way he thinks about thinking—and the way he makes you think about how you think. And, in the end, this is all that criticism can really hope to do.” —Sonny Bunch, The Washington Post

“[Klosterman’s] most wide-ranging accomplishment to date . . . As inquisitive, thoughtful and dryly funny as ever, But What If We’re Wrong? . . . [is] crackling with the writer’s signature wit.” —Will Ashton, Pittsburgh Post-Gazette
 
“In But What If We’re Wrong? [Klosterman] takes on the really big picture . . . He ranges far and wide over the realm of known knowns and known unknowns.” —Brigitte Frase, Minneapolis Star Tribune
 
“I have often wondered how the times I live in will be remembered once they turn into History. It never occurred to me to figure out how to write a book about it, though, which is one of the reasons why Chuck Klosterman is smarter than I am.” —Aimee Levitt, The Chicago Reader

“Klosterman has proven himself an insightful and evolving philosopher for popular consumption . . . In his latest, But What If We’re Wrong?, Klosterman probes the very notions of existence and longevity, resulting perhaps in the most mind-expanding writing of his career.” —Max Kyburz, Gothamist
 
“Chuck Klosterman is no time traveler, but he's got a lot of ideas about how the future will shake out . . . in [But What If We’re Wrong?] he ponders the limits of humanity’s search for truth.” —Chris Weller, Tech Insider
 
“Prolific pop-culture critic Chuck Klosterman tackles his most ambitious project yet in new book But What If We’re Wrong?, which combines research, personal reflections and interviews.” —Alexandra Cavallo, The Improper Bostonian

“This book is brilliant and addictively readable. It's also mandatory reading for anyone who loves history and for anyone who claims to have a capacity for forecasting. It'll probably make them angry because it turns so many sacred assumptions upside down—but that's what the future does. Klosterman's writing style is direct, highly personal and robotically crisp—he's like a stranger on the seat next to you on a plane who gives you a billion dollar idea. A terrific book.” —Douglas Coupland

Author

Chuck Klosterman is the bestselling author of nine nonfiction books (including The Nineties; Sex, Drugs, and Cocoa Puffs; and But What If We’re Wrong?), two novels (Downtown Owl and The Visible Man), and the short story collection Raised in Captivity. He has written for The New York Times, The Washington Post, GQ (London), Esquire, Spin, The Guardian (London), The Believer, and ESPN. Klosterman served as the Ethicist for The New York Times Magazine for three years and was an original founder of the website Grantland with Bill Simmons. He was raised in rural North Dakota and now lives in Portland, Oregon.

Excerpt

***This excerpt is from an advance uncorrected proof***

Copyright © 2016 Chuck Klosterman

I’ve spent most of my life being wrong.

Not about everything. Just about most things.

I mean, sometimes I get stuff right. I married the right person. I’ve never purchased life insurance as an investment. The first time undrafted free agent Tony Romo led a touchdown drive against the Giants on Monday Night Football, I told my roommate, “I think this guy will have a decent career.” At a New Year’s Eve party in 2008, I predicted Michael Jackson would unexpectedly die within the next twelve months, an anecdote I shall casually recount at every New Year’s party I’ll ever attend for the rest of my life. But these are the exceptions. It is far, far easier for me to catalog the various things I’ve been wrong about: My insistence that I would never own a cell phone. The time I wagered $100—against $1—that Barack Obama would never become president (or even receive the Democratic nomination). My three‑week obsession over the looming Y2K crisis, prompting me to hide bundles of cash, bottled water, and Oreo cookies throughout my one‑bedroom apartment. At this point, my wrongness doesn’t even surprise me. I almost anticipate it. Whenever people tell me I’m wrong about something, I might disagree with them in conversation, but—in my mind—I assume their accusation is justified, even when I’m relatively certain they’re wrong, too.

Yet these failures are small potatoes.

These micro‑moments of wrongness are personal: I assumed the answer to something was “A,” but the true answer was “B” or “C” or “D.” Reasonable parties can disagree on the unknowable, and the passage of time slowly proves one party to be slightly more reasonable than the other. The stakes are low. If I’m wrong about something specific, it’s (usually) my own fault, and someone else is (usually, but not totally) right.

But what about the things we’re all wrong about?

What about ideas that are so accepted and internalized that we’re not even in a position to question their fallibility? These are ideas so ingrained in the collective consciousness that it seems foolhardy to even wonder if they’re potentially untrue. Sometimes these seem like questions only a child would ask, since children aren’t paralyzed by the pressures of consensus and common sense. It’s a dissonance that creates the most unavoidable of intellectual paradoxes: When you ask smart people if they believe there are major ideas currently accepted by the culture at large that will eventually be proven false, they will say, “Well, of course. There must be. That phenomenon has been experienced by every generation who’s ever lived, since the dawn of human history.” Yet offer those same people a laundry list of contemporary ideas that might fit that description, and they’ll be tempted to reject them all.

It is impossible to examine questions we refuse to ask. These are the big potatoes.

 

Like most people, I like to think of myself as a skeptical person. But I’m pretty much in the tank for gravity. It’s the natural force most recognized as perfunctorily central to everything we understand about everything else. If an otherwise well‑executed argument contradicts the principles of gravity, the argument is inevitably altered to make sure that it does not. The fact that I’m not a physicist makes my adherence to gravity especially unyielding, since I don’t know anything about gravity that wasn’t told to me by someone else. My confidence in gravity is absolute, and I believe this will be true until the day I die (and if someone subsequently throws my dead body out of a window, I believe my corpse’s rate of acceleration will be 9.8 m/s²).

And I’m probably wrong.

Maybe not completely, but partially. And maybe not today, but eventually.

“There is a very, very good chance that our understanding of gravity will not be the same in five hundred years. In fact, that’s the one arena where I would think that most of our contemporary evidence is circumstantial, and that the way we think about gravity will be very different.” These are the words of Brian Greene, a theoretical physicist at Columbia University who writes books with titles like Icarus at the Edge of Time. He’s the kind of physicist famous enough to guest star on a CBS sitcom, assuming that sitcom is The Big Bang Theory. “For two hundred years, Isaac Newton had gravity down. There was almost no change in our thinking until 1907. And then from 1907 to 1915, Einstein radically changes our understanding of gravity: No longer is gravity just a force, but a warping of space and time. And now we realize quantum mechanics must have an impact on how we describe gravity within very short distances. So there’s all this work that really starts to pick up in the 1980s, with all these new ideas about how gravity would work in the microscopic realm. And then string theory comes along, trying to understand how gravity behaves on a small scale, and that gives us a description—which we don’t know to be right or wrong—that equates to a quantum theory of gravity. Now, that requires extra dimensions of space. So the understanding of gravity starts to have radical implications for our understanding of reality. And now there are folks, inspired by these findings, who are trying to rethink gravity itself. They suspect gravity might not even be a fundamental force, but an emergent1 force. So I do think—and I think many would agree—that gravity is the least stable of our ideas, and the most ripe for a major shift.”

If that sounds confusing, don’t worry—I was confused when Greene explained it to me as I sat in his office (and he explained it to me twice). There are essential components to physics and math that I will never understand in any functional way, no matter what I read or how much time I invest. A post‑gravity world is beyond my comprehension. But the concept of a post‑gravity world helps me think about something else: It helps me understand the pre‑gravity era. And I don’t mean the days before Newton published Principia in 1687, or even that period from the late 1500s when Galileo was (allegedly) dropping balls off the Leaning Tower of Pisa and inadvertently inspiring the Indigo Girls. By the time those events occurred, the notion of gravity was already drifting through the scientific ether. Nobody had pinned it down, but the mathematical intelligentsia knew Earth was rotating around the sun in an elliptical orbit (and that something was making this happen). That was around three hundred years ago. I’m more fixated on how life was another three hundred years before that. Here was a period when the best understanding of why objects did not spontaneously float was some version of what Aristotle had argued more than a thousand years prior: He believed all objects craved their “natural place,” and that this place was the geocentric center of the universe, and that the geocentric center of the universe was Earth. In other words, Aristotle believed that a dropped rock fell to the earth because rocks belonged on earth and wanted to be there.

1 This means that gravity might just be a manifestation of other forces—not a force itself, but the peripheral result of something else. Greene’s analogy was with the idea of temperature: Our skin can sense warmth on a hot day, but “warmth” is not some independent thing that exists on its own. Warmth is just the consequence of invisible atoms moving around very fast, creating the sensation of temperature. We feel it, but it’s not really there. So if gravity were an emergent force, it would mean that gravity isn’t the central power pulling things to the Earth, but the tangential consequence of something else we can’t yet explain. We feel it, but it’s not there. It would almost make the whole idea of “gravity” a semantic construction.

So let’s consider the magnitude of this shift: Aristotle—arguably the greatest philosopher who ever lived—writes the book Physics and defines his argument. His view exists unchallenged for almost two thousand years. Newton (history’s most meaningful mathematician, even to this day) eventually watches an apocryphal apple fall from an apocryphal tree and inverts the entire human understanding of why the world works as it does. Had this been explained to those people in the fourteenth century with no understanding of science—in other words, pretty much everyone else alive in the fourteenth century—Newton’s explanation would have seemed way, way crazier than what they currently believed: Instead of claiming that Earth’s existence defined reality and that there was something essentialist about why rocks acted like rocks, Newton was advocating an invisible, imperceptible force field that somehow anchored the moon in place.

We now know (“know”) that Newton’s concept was correct. Humankind had been collectively, objectively wrong for roughly twenty centuries. Which provokes three semi‑related questions:

 


   • If mankind could believe something false was objectively true for two thousand years, why do we reflexively assume that our current understanding of gravity—which we’ve embraced for a mere three hundred fifty years—will somehow exist forever?
   • Is it possible that this type of problem has simply been solved? What if Newton’s answer really is—more or less—the final answer, and the only one we will ever need? Because if that is true, it would mean we’re at the end of a process that has defined the experience of being alive. It would mean certain intellectual quests would no longer be necessary.
   • Which statement is more reasonable to make: “I believe gravity exists” or “I’m 99.9 percent certain that gravity exists”? Certainly, the second statement is safer. But if we’re going to acknowledge even the slightest possibility of being wrong about gravity, we’re pretty much giving up on the possibility of being right about anything at all.

 

There’s a popular website that sells books (and if you purchased this particular book, consumer research suggests there’s a 41 percent chance you ordered it from this particular site). Book sales constitute only about 7 percent of this website’s total sales, but books are the principal commodity this enterprise is known for. Part of what makes the site successful is its user‑generated content; consumers are given the opportunity to write reviews of their various purchases, even if they never actually consumed the book they’re critiquing. Which is amazing, particularly if you want to read negative, one‑star reviews of Herman Melville’s Moby-Dick.

“Pompous, overbearing, self‑indulgent, and insufferable. This is the worst book I’ve ever read,” wrote one dissatisfied customer in 2014. “Weak narrative, poor structure, incomplete plot threads, ¾ of the chapters are extraneous, and the author often confuses himself with the protagonist. One chapter is devoted to the fact that whales don’t have noses. Another is on the color white.” Interestingly, the only other purchase this person elected to review was a Hewlett‑Packard printer that can also send faxes, which he awarded two stars.

I can’t dispute this person’s distaste for Moby-Dick. I’m sure he did hate reading it. But his choice to state this opinion in public—almost entirely devoid of critical context, unless you count his take on the HP printer—is more meaningful than the opinion itself. Publicly attacking Moby-Dick is shorthand for arguing that what we’re socialized to believe about art is fundamentally questionable. Taste is subjective, but some subjective opinions are casually expressed the same way we articulate principles of math or science. There isn’t an ongoing cultural debate over the merits of Moby-Dick: It’s not merely an epic novel, but a transformative literary innovation that helps define how novels are supposed to be viewed. Any discussion about the clichéd concept of “the Great American Novel” begins with this book. The work itself is not above criticism, but no individual criticism has any impact; at this point, attacking Moby-Dick only reflects the contrarianism of the critic. We all start from the supposition that Moby-Dick is accepted as self‑evidently awesome, including (and perhaps especially) those who disagree with that assertion.

So how did this happen?

Melville publishes Moby-Dick in 1851, basing his narrative on the real‑life 1839 account of a murderous sperm whale nicknamed “Mocha Dick.” The initial British edition is around nine hundred pages. Melville, a moderately successful author at the time of the novel’s release, assumes this book will immediately be seen as a masterwork. This is his premeditated intention throughout the writing process. But the reviews are mixed, and some are contemptuous (“it repels the reader” is the key takeaway from one of the very first reviews in the London Spectator). It sells poorly—at the time of Melville’s death, total sales hover below five thousand copies. The failure ruins Melville’s life: He becomes an alcoholic and a poet, and eventually a customs inspector. When he dies destitute in 1891, one has to assume his perspective on Moby-Dick is something along the lines of “Well, I guess that didn’t work. Maybe I should have spent fewer pages explaining how to tie complicated knots.” For the next thirty years, nothing about the reception of this book changes. But then World War I happens, and—somehow, and for reasons that can’t be totally explained2—modernists living in postwar America start to view literature through a different lens. There is a Melville revival. The concept of what a novel is supposed to accomplish shifts in his direction and amplifies with each passing generation, eventually prompting people (like the 2005 director of Columbia University’s American studies program) to classify Moby-Dick as “the most ambitious book ever conceived by an American writer.” Pundits and cranks can disagree with that assertion, but no one cares if they do. Melville’s place in history is secure, almost as if he were an explorer or an inventor: When the prehistoric remains of a previously unknown predatory whale were discovered in Peru in 2010, the massive creature was eventually named Livyatan melvillei. A century after his death, Melville gets his own extinct super‑whale named after him, in tribute to a book that commercially tanked. That’s an interesting kind of career.

2 The qualities that spurred this rediscovery can, arguably, be quantified: The isolation and brotherhood the sailors experience mirrors the experience of fighting in a war, and the battle against a faceless evil whale could be seen as a metaphor for the battle against the faceless abstraction of evil Germany. But the fact that these details can be quantified is still not a satisfactory explanation as to why Moby-Dick became the specific novel that was selected and elevated. It’s not like Moby-Dick is the only book that could have served this role.

Now, there’s certainly a difference between collective, objective wrongness (e.g., misunderstanding gravity for twenty centuries) and collective, subjective wrongness (e.g., not caring about Moby-Dick for seventy‑five years). The machinations of the transitions are completely different. Yet both scenarios hint at a practical reality and a modern problem. The practical reality is that any present‑tense version of the world is unstable. What we currently consider to be true—both objectively and subjectively—is habitually provisional. But the modern problem is that reevaluating what we consider “true” is becoming increasingly difficult. Superficially, it’s become easier for any one person to dispute the status quo: Everyone has a viable platform to criticize Moby-Dick (or, I suppose, a mediocre HP printer). If there’s a rogue physicist in Winnipeg who doesn’t believe in gravity, he can self‑publish a book that outlines his argument and potentially attract a larger audience than Principia found during its first hundred years of existence. But increasing the capacity for the reconsideration of ideas is not the same as actually changing those ideas (or even allowing them to change by their own momentum).

We live in an age where virtually no content is lost and virtually all content is shared. The sheer amount of information about every current idea makes those concepts difficult to contradict, particularly in a framework where public consensus has become the ultimate arbiter of validity. In other words, we’re starting to behave as if we’ve reached the end of human knowledge. And while that notion is undoubtedly false, the sensation of certitude it generates is paralyzing.

 

In her book Being Wrong, author Kathryn Schulz spends a few key pages on the concept of “naïve realism.” Schulz notes that while there are few conscious proponents of naïve realism, “that doesn’t mean there are no naïve realists.” I would go a step further than Schulz; I suspect most conventionally intelligent people are naïve realists, and I think it might be the defining intellectual quality of this era. The straightforward definition of naïve realism doesn’t seem that outlandish: It’s a theory that suggests the world is exactly as it appears. Obviously, this viewpoint creates a lot of opportunity for colossal wrongness (e.g., “The sun appears to move across the sky, so the sun must be orbiting Earth”). But my personal characterization of naïve realism is wider and more insidious. I think it operates as the manifestation of two ingrained beliefs:

 


   • “When considering any question, I must be rational and logical, to the point of dismissing any unverifiable data as preposterous,” and
   • “When considering any question, I’m going to assume that the information we currently have is all the information that will ever be available.”

 

Here’s an extreme example: the possibility of life after death. When considered rationally, there is no justification for believing that anything happens to anyone upon the moment of his or her death. There is no reasonable counter to the prospect of nothingness. Any anecdotal story about “floating toward a white light” or Shirley MacLaine’s past life on Atlantis or the details in Heaven Is for Real are automatically (and justifiably) dismissed by any secular intellectual. Yet this wholly logical position discounts the overwhelming likelihood that we currently don’t know something critical about the experience of life, much less the ultimate conclusion to that experience. There are so many things we don’t know about energy, or the way energy is transferred, or why energy (which can’t be created or destroyed) exists at all. We can’t truly conceive the conditions of a multidimensional reality, even though we’re (probably) already living inside one. We have a limited understanding of consciousness. We have a limited understanding of time, and of the perception of time, and of the possibility that all time is happening at once. So while it seems unrealistic to seriously consider the prospect of life after death, it seems equally naïve to assume that our contemporary understanding of this phenomenon is remotely complete. We have no idea what we don’t know, or what we’ll eventually learn, or what might be true despite our perpetual inability to comprehend what that truth is.

It’s impossible to understand the world of today until today has become tomorrow.

This is no brilliant insight, and only a fool would disagree. But it’s remarkable how habitually this truth is ignored. We constantly pretend our perception of the present day will not seem ludicrous in retrospect, simply because there doesn’t appear to be any other option. Yet there is another option, and the option is this: We must start from the premise that—in all likelihood—we are already wrong. And not “wrong” in the sense that we are examining questions and coming to incorrect conclusions, because most of our conclusions are reasoned and coherent. The problem is with the questions themselves.