
The Shape of Things to Come

Can science fiction help us imagine — and avoid — a dystopian future?

[Illustration: a futuristic landscape]

With a hissing sigh, the stove ejects eight pieces of perfectly browned toast, eight sunny-side-up eggs, sixteen bacon slices, two hot coffees, and two chilled glasses of milk.

Over the course of the day — August 4, 2026, as a voice intones from the kitchen ceiling — the house’s tasks continue on schedule.

Tiny robot mice, “all rubber and metal,” thud against chairs as they suck at hidden dust and drop detritus down tubes into an incinerator that squats “like evil Baal in a dark corner.”

When the house demands a password and receives none, it slams its windows shut and draws its shades in “a mechanical paranoia.”

Except for this lone structure, a nuclear catastrophe has leveled the Californian city, leaving a soft radioactive glow. The family members who once lived in the house remain only as pale silhouettes blasted onto the charred exterior: the father, pushing a mower; the mother, stooped to pick flowers; the boy in mid-throw; the girl with a hand raised to catch.

When Ray Bradbury published the short story “There Will Come Soft Rains” seven decades ago, the world’s first atomic bombs had dropped just five years earlier. The Cold War was in its infancy, and schoolchildren huddled beneath desks in duck-and-cover drills for the first time.

Absent from Bradbury’s dystopia was the bright, cheerful frenzy of automation later seen in The Jetsons, a frenzy we share today in robotic assistants, button-summoned meals, autonomous vehicles, and scurrying, mission-driven vacuum cleaners. That animated cosmos was still more than a decade in the future.

Bradbury instead offered a bleak scenario, one with a dire lesson: Technology cannot save humanity. Without the diplomacy needed to avert nuclear annihilation and other cataclysms, innovations, no matter how efficient or clever, are as useless as a voice calling from the ceiling to a family destroyed.

[Illustration: a woman sits in a capsule overlooking a futuristic skyscape]

Mind Your Monsters
Science fiction has long offered cautionary tales, a tradition that many scholars date back to Mary Shelley’s 1818 novel, Frankenstein. In that story, the horrified inventor abandons his reanimated creation, with tragic consequences.

“People learned the wrong lesson from Frankenstein, and it’s one now applied to artificial intelligence, robotics, and genetically modified organisms: Don’t ever do this,” says Damien Williams, a doctoral student in the Virginia Tech Department of Science, Technology, and Society who specializes in the ethics and philosophy of nonhuman consciousness. “Instead, the lesson we should take from the novel is this: If you’re going to tinker with nature and the future of humanity, think it through first and take full responsibility for it. Don’t just invent and hope for the best.”

The urgency of thinking through technological implications is greater than ever, Williams adds.

“The pace of our world is such that every new technology is a force multiplier,” he says. “We need to be able to explore possible consequences agilely and carefully. Fortunately, fiction affords us the ability to grasp more quickly what’s at stake. And good science fiction writers craft believable worlds in which they can spool out the logical and psychological eventualities of actions and inactions.”

Distant Early Warnings
Science fiction centers on imagining what can be, says Leigh McKagen, a doctoral student in the Alliance for Social, Political, Ethical, and Cultural Thought who studies imperialistic themes in Star Trek.

“History is civilization’s laboratory of the past, one that enables us to study what people tried before, so we can replicate successes and avoid mistakes,” she says. “By contrast, science fiction is our laboratory of the future, allowing us to test out unfolding consequences. What might the future look like? What should we be afraid of happening?”

McKagen notes that science fiction and science fact have a dynamic interplay. “Scientific findings have long fed fictional scenarios,” she says, “and fabricated worlds have predicted and even inspired technological advancements.” Star Trek’s prognostic record is particularly impressive, she adds, with early seasons previewing such innovations as smartphones, tablet computers, and universal translators.

She also cites the three laws of robotics that Isaac Asimov — another skilled prognosticator — crafted in a 1942 short story: A robot may not injure a human, or, through inaction, allow a human to come to harm. A robot must obey orders from humans, except when the orders would conflict with the first law. And finally, a robot must protect its own existence as long as that protection does not conflict with the first two laws.

“Asimov’s laws are key principles many people consider today when they explore the implications of robots and artificial intelligence,” McKagen says. “A number of developers in artificial intelligence joke about those laws, but still take them to heart.”

Strange New Worlds
Although science fiction can alert us to potential perils, McKagen says, its cautions are useless if they aren’t heeded.

“George Orwell’s novel 1984 serves as a fantastic warning of what could happen if you let the state run your media and restrict the information that citizens can access,” says McKagen. “We would all do well to listen to such messages.”

McKagen notes that some of the earliest science-fiction narratives centered on the dangers of robots gaining sentience and taking over the world. While that thread persists, today’s novelists, scriptwriters, and showrunners are also exploring critical issues through climate fiction and post-apocalyptic science fiction, such as The Handmaid’s Tale.

“Science fiction is doing a brilliant job of outlining the paths we should not take, whether they’re environmental, social, or technological,” McKagen says. “Climate fiction in particular is reacting to the urgency of protecting the planet. Those stories remind us that it’s through human interactions — smart policies, astute politics, and intelligent governance — that we can avoid a dystopian future.”

Williams agrees. “Ultimately, we need to realize that every technological problem that involves humans is a societal issue, a humanity challenge,” he says. “When things go very, very wrong with algorithms, for example, large swaths of people can be incarcerated or denied jobs.”

Fortunately, fictional narratives tend to resonate with many readers and viewers, Williams adds. They can coax people into thinking about issues more deeply and with greater urgency than they would otherwise.

“What will artificial intelligence, algorithmic biases, and big data mean for the quality of human life?” Williams asks. “What if CRISPR inspires more and more biohackers to tinker with genetics in their garages? We need to think about such scenarios now, rather than responding to them after they happen. And science fiction can help.”

Through a Glass, Darkly
Williams cites Black Mirror, a science fiction anthology series inspired in part by The Twilight Zone, as particularly adept at extrapolating the implications of technologies.

“Many of these storylines — being able to rewind your entire bank of memories, for example, or having a robotic companion go rogue — may not have come to pass yet, but they’re foundational,” he says. “If your loved ones can continue to interact with your digital self after you’ve died, is that comforting or terrifying? If the algorithmic you goes out into the world, who’s responsible?”

To ensure a future we would all want to inhabit, Williams adds, technologists and humanists must work hand in hand.

“Science fiction is not prophecy,” he says. “Yet it provides us with narratives that allow us to examine philosophical, ethical, and social questions in multiple ways. What does it mean to be human, to be alive? Those questions are the colossal, enduring ones, and their contours are now being defined.”

Written by Paula Byron and illustrated by Léonard Dupond