What Could Possibly Go Wrong?

If we don’t govern technologies, they may end up governing us.

[Illustration: A mid-century-styled drawing of a man struggling to evade a robot's grip]

The smartphone is a symbol of the transformative powers of technology. It not only allows us to communicate in ways that were once the stuff of science fiction, but it also makes possible endless other minor miracles, such as dispensing treats to your schnauzer from miles away or identifying constellations simply by angling it skyward. But when Aaron Brantly visited the frontlines of Ukraine’s conflict with Russia in 2017, he found smartphone technology being used in another way: to demoralize and even kill enemy combatants.

Brantly, a Virginia Tech assistant professor of political science, learned that the Russians were using international mobile subscriber identity catchers to intercept calls and texts transmitted over smartphones by Ukrainian soldiers to one another and to their families. These devices allowed the Russians to fire back texts to Ukrainian soldiers, imploring them to surrender with messages such as, “You’re going to die in the snow!” They also sent bogus texts to the soldiers’ wives and girlfriends, reporting that their loved ones had been killed in action.

More sinister still, says Brantly, who’s also a cyber policy fellow at the United States Army Cyber Institute, “the Russians would gain real-time location data about the soldiers on the frontline, so they could follow their movements.” Those data were used to target artillery strikes, with deadly results.

Other examples abound of smartphone technology being abused for nefarious purposes, says Brantly, such as Mexican drug cartels planting spyware on the phones of journalists, whom they then tracked down and assassinated.

These stories add to an ever-growing catalog of cautionary tales about technology, which is too often turned to ill ends or otherwise fails to live up to its promise of making our lives easier and safer.

Consider just a few recent examples: Flawed flight-control software caused two Boeing jetliners to fall from the sky, killing 346 people. An autonomous, or self-driving, vehicle was involved in a fatal traffic accident. During the past year alone, more than a hundred U.S. cities and towns had their administrative computers taken hostage and held for ransom by hackers. Each week seems to bring news of another major corporation experiencing a data breach. Social-media meddling has interfered with U.S. elections, a threat to democracy that still looms.

Brantly is one of a growing number of Virginia Tech scholars who study technology’s impact on society, and who have recently been thinking about our uneasy relationship with today’s smart machines.

“The question is no longer what could happen,” says internet historian Janet Abbate, a professor of science, technology, and society at Virginia Tech. “It’s more accurately: How bad does it have to get before we actually do something? We could be one catastrophe away from a public revolt. A hack that causes physical injury or death could rouse demand for new types of regulation in the name of security. What’s going to be the 9/11 for the internet?”

Think Twice
Not all tales of tech fails are malicious. Take, for example, the motorist who listened and obeyed as a navigation app instructed him to drive his car — a Jeep Compass, appropriately enough — down a boat ramp and into icy Lake Champlain in Vermont. (He and his two passengers escaped unharmed.) Other motorists have followed GPS guidance and driven into houses, trees, and mud pits, and even down a staircase at Riverside Park in Manhattan.

And then there’s the university whose website was taken down by candy bars and energy drinks. Well, not exactly, but in 2017 Verizon reported that hackers had shut down the website of an unnamed American university by attacking it with bots. These automated programs clogged the system, preventing students and other legitimate users from gaining access.

The source of those malicious bots? Hackers delivered them through vending machines and other devices around campus that were connected to the university’s network.

While not being able to sign up for next semester’s classes is a hassle, other tech fails that have been reported in recent years could have a profound — even deadly — impact on people’s lives. The problem in many cases is that the artificial intelligence that powers a multitude of modern machines can sometimes be, well, dumb, to say nothing of sexist and racist.

The purpose of artificial intelligence is to give machines the ability to “think” and make decisions. That’s made possible with algorithms, which are simply sets of rules used to solve problems. Data fed into an algorithm “teach” a device how to behave, a process called machine learning. But, says Brantly, “an algorithm is only as good as its data, which often have inherent social, economic, cultural, and ethnic prejudices.”
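To make that concrete, here is a stripped-down sketch in Python (the loan scenario and the data are invented for illustration, not drawn from any real system). The “model” does nothing but tally past decisions, so whatever skew those decisions contain is exactly what it learns.

```python
# A minimal, hypothetical sketch of how machine learning inherits bias from
# its training data: a toy loan-approval "model" learns only from past
# decisions and simply reproduces whatever pattern those decisions contain.

from collections import defaultdict

# Invented historical records: (neighborhood, was_approved)
history = [
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", True), ("south", False),
]

# "Training": compute the approval rate observed in the data for each group.
approvals = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    approvals[group][0] += int(approved)
    approvals[group][1] += 1

def predict(group: str) -> bool:
    """Approve if applicants from this group were usually approved before."""
    approved, total = approvals[group]
    return approved / total >= 0.5

print(predict("north"))  # True  (3 of 4 past applicants were approved)
print(predict("south"))  # False (1 of 4 past applicants were approved)
```

Nothing in the code singles anyone out; the bias arrives entirely through the history it was fed.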

For example, a study published in Science in October 2019 found that software used by many U.S. health care providers contains an algorithmic bias that leads some African Americans to receive inadequate treatment. The software uses previous health care expenditures as a surrogate for how sick a patient is — but our health care system spends more on whites, so this algorithm understates the needs of African American patients.
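The mechanism is easy to see with invented numbers (the figures below are illustrative only, not taken from the study). A toy risk score that treats past spending as a stand-in for medical need ranks two equally sick patients very differently.

```python
# A hypothetical sketch of the proxy problem: using past spending as a
# stand-in for medical need understates the need of patients on whom the
# system has historically spent less.

def risk_score(past_spending_dollars: float) -> float:
    """Toy model: predicted medical need is just a rescaling of past spending."""
    return past_spending_dollars / 1000.0

# Two patients with the same number of chronic conditions (equally sick),
# but historically unequal spending on their care.
patient_a = {"chronic_conditions": 4, "past_spending": 8000}
patient_b = {"chronic_conditions": 4, "past_spending": 4000}

print(risk_score(patient_a["past_spending"]))  # 8.0 -> flagged for extra care
print(risk_score(patient_b["past_spending"]))  # 4.0 -> ranked far less needy, despite equal illness
```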

Also in 2019, Georgia Tech researchers tested software that’s supposed to prevent autonomous vehicles from running over pedestrians and found that it frequently failed to brake or swerve when challenged with images of people with dark skin. The problem: The software used an algorithm that had been trained predominantly with images of light-skinned pedestrians.

Many employers today use recruitment software to help identify job candidates, but these systems rely on algorithms that “will drift toward bias by default,” according to a 2019 Harvard Business Review article. Amazon developed an artificial intelligence–based recruiting tool that routinely ranked women lower than men as candidates for software developer jobs. Several people familiar with the project told Reuters in 2018 that the data used to train the algorithm came from resumes submitted to Amazon over the previous decade — which overwhelmingly came from men. (Amazon has since abandoned the tool and insisted it was never used to evaluate job candidates.)

Despite these worrisome examples, Brantly says that artificial intelligence is not doomed to violate human rights and promote inequality.

“Bad algorithms can be fixed,” he says, “but it’s critical for users to comprehend how they work to prevent these kinds of problems. When we fail to understand how an algorithm uses its data, we end up baking in bias.”

Assume Nothing
People with disabilities make up another group that can be victimized by tech bias, says Ashley Shew, an assistant professor of science, technology, and society at Virginia Tech.

Shew — who has what she calls a “whole bingo card of disabilities” — argues that developers of assistive technologies too often promote ableism, or discrimination against people with disabilities. Prime examples of “technoableism,” Shew says, are robotic exoskeletons, wearable devices that use electric, hydraulic, or pneumatic actuators to control movement.

“Exoskeletons make the assumption that everyone’s dream is to walk,” says Shew. “But for some people who use wheelchairs, that’s simply not the case.”

Shew, who teaches a course on technology and disability, saw this reality play out in her classroom. One of her former students was an engineering major who was developing an exoskeleton with his brother, who has spina bifida, in mind. Yet when the engineering student told his brother about his plans, he received an unexpected response.

“I’d try one,” the young man said, “but I’ve used a wheelchair my whole life. This is how I know how to get around in the world and I’m not unhappy with my life.”

The engineering student has since turned his attention to developing powered exoskeletons for aging agricultural workers.

Even when good technologies for disabled people come along, says Shew, getting insurers to pay for the devices is difficult. In the late 1990s, inventor Dean Kamen created a wheelchair called the iBot that allowed users to stand, plow through snow and sand, and climb stairs. But it cost $25,000, and insurers rarely covered the expense, so only about 500 chairs were ever sold. (The makers of the iBot reintroduced it in 2019 and are working to get Medicare to pay for it.)

Panic Buttons
A 2019 poll by the Pew Research Center found that 70 percent of Americans feel their personal data are less secure than they were five years ago. Fears that tech is robbing us of our privacy were no doubt heightened by a recent series of New York Times articles detailing how the data-collection industry can use pings from your smartphone to track your daily movements to within a few feet of where you’re standing or sitting. The past year has also brought multiple reports of hackers hijacking internet-connected home security cameras to spy on and taunt families.

If it’s any comfort, concerns that newfangled machines are making our lives worse are nothing new.

“People have always worried that the latest technology would carry negative social implications,” says Lee Vinsel, an assistant professor of science, technology, and society at Virginia Tech. The arrival of the automobile, for instance, stoked anxiety that this new mode of transport would break up tightknit communities by allowing people to escape.

Past anxieties weren’t necessarily unfounded, notes Vinsel. “There were also real harms being caused by new technologies,” he adds. “Trains, cars, and streetcars killed thousands and thousands of people.” New laws, though, made those modes of transport safer. Do we need new rules to ensure that technology is a force for good, not evil?

“We can regulate; we’ve always regulated,” says Vinsel. But he cautions against getting too panicked about the dire possibilities that await us unless technology is reined in by strict rules. He notes, for example, that 15 years ago academic journals were filled with papers about the need for humanistic governance of nanotechnology, which some feared would forever change society.

“If we’re going to legislate against tech’s potential harm,” he says, “we need to identify actual harms.”

Safety First
Other experts believe it’s time to act to make technology safer.

“We need large-scale regulatory and policy reform to manage these challenges,” says Brantly. He supports the idea of introducing standards that would force tech developers to make their products more secure. He notes that the average computer program has between 20 and 40 bugs per thousand lines of code, which create openings for hackers.

The risk of security breaches and other intrusions could be substantially reduced, however, if companies were required to follow processes such as the secure development lifecycle, a method devised by Microsoft for testing computer code before it’s deployed. Such processes can reduce bugs and security flaws by 90 percent, says Brantly, who suggests that developers who fail to certify their products’ security could be disqualified from obtaining liability insurance.
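As a rough illustration (this is a generic Python sketch, not Microsoft’s actual SDL tooling), the kind of automated pre-deployment check such processes call for can be as simple as a unit test that confronts a piece of code with hostile input before it ever ships.

```python
# A minimal, hypothetical example of a pre-deployment check of the kind
# secure-development processes mandate: the test exercises malformed and
# hostile input so the flaw is caught before release, not by an attacker.

import unittest

def parse_age(value: str) -> int:
    """Parse a user-supplied age, rejecting anything that is not a plausible integer."""
    age = int(value)          # raises ValueError on non-numeric input
    if not 0 < age < 150:
        raise ValueError(f"age out of range: {age}")
    return age

class ParseAgeTests(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_rejects_hostile_input(self):
        # Malformed or out-of-range input must fail loudly, not slip through.
        for bad in ["-1", "9999", "42; DROP TABLE users", ""]:
            with self.assertRaises(ValueError):
                parse_age(bad)

if __name__ == "__main__":
    unittest.main()
```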

Greater transparency would help as well.

“There’s a real lack of transparency when you download an app, for example, and want to know what information it’s collecting,” says Abbate. Even if you take the time to read the 10-page privacy policy before clicking “Accept,” you may still not have a clue about how your data will be used. Requiring app developers to disclose their practices in clear, plain language would empower consumers to make informed decisions, protect their privacy, and avoid bad actors, says Abbate.

At the same time, Abbate rejects the idea of setting up a federal agency to oversee the internet, or any sort of tech czar. “I think that would just be a disaster,” she says. Instead, she suggests that individual sectors of the government regulate cybersecurity industry by industry, with the Federal Reserve, for example, ensuring that financial institutions are protected against hackers.

On Good Authority
More inclusivity in the design stages could also ward off potential problems. Shew has a simple solution for improving tech for disabled people, for example.

“I want to see design teams that are led by disabled people,” she says. She points with optimism to programs such as EnableTech at the University of California at Berkeley, in which students design assistive technology with input from “need-knowers,” that is, people with disabilities and their caregivers.

Likewise, bringing more women into the male-dominated tech world could help eliminate certain kinds of problems that have plagued tech companies, such as reports of female passengers being raped by rideshare drivers.

“If more women had been involved in developing these platforms,” says Abbate, “the potential risks to women would have been clear much sooner.”

Meet George Jetson
Self-governance could play an important role in ensuring our relationship with technology is less fraught. Cybercriminals can’t steal your information when it’s not online or invade your privacy as easily if you turn off your Alexa.

“It certainly would be possible to say, ‘Okay, let’s not put secure data in the cloud,’ or ‘Let’s not connect certain devices,’” Abbate says. The trend is running in the opposite direction, of course, but she wonders whether some people will eventually rethink whether the convenience of connectivity is worth the risk.

“The frequency of tech fails has made people blasé,” says Abbate. “These incidents happen, we get outraged, and then nothing happens. We’ve become resigned.”

Abbate believes that a key cause of that resignation is that technology is complicated: People have no idea how their smartphones and apps and internet-connected doorbells actually work, so they just shrug and accept privacy invasion and data breaches as a fact of modern life.

For his part, Brantly believes that corporate giants such as Apple, Amazon, Google, and Microsoft are taking seriously the challenge of creating a more streamlined, efficient, and secure technological infrastructure in the future. Yet patience will serve us well as the bugs get worked out.

“We might eventually move to a futuristic model more like The Jetsons, in which technology is both helpful and benign,” he says. “But it will take a while.”

Written by Timothy Gower