Many believe that artificial intelligence (AI) tools have hit a home run, especially the popular ChatGPT application. Or have they?

This week on a national stage, a Virginia Tech professor known for her research on the history and culture of computing and the internet will challenge how society defines the success and failure of AI.

Janet Abbate, an Arlington-based professor of science, technology, and society, will speak during a congressional briefing on Capitol Hill on Friday to discuss historical perspectives on the challenges posed by AI. The briefing, hosted by the American Historical Association, will include Abbate and faculty from Princeton University, Columbia University, and the University of Minnesota.  

“When we automate something done by human intelligence, we are defining what intelligence is,” said Abbate, who has written two books, “Inventing the Internet” and “Recoding Gender: Women’s Changing Participation in Computing.”

“There is this narrow idea of what’s being included in intelligence,” she said. “That's fine if you want to be a computer that plays chess, but now we have AI doing things that are very social. We have not included social intelligence in there.”

Abbate said she hopes legislators will consider some of her questions as they set policies for the nation.  

“These foundational questions that should be asked at the beginning don’t get asked at all,” she said.

Abbate will discuss the following points:

AI lacks moral reasoning and social intelligence.

“We cannot automate moral reasoning or an ethic of care,” Abbate said. 

Meanwhile, social intelligence develops when people live in a society where their actions have consequences for others.

“AI has prioritized socially disconnected forms of intelligence that may not be appropriate for current uses of AI that are meant to occur within or take the place of interpersonal social interactions,” she said.

What it means for AI to replace or become equivalent to a human being.

AI produces an expected or stereotyped version of a human being.

“Part of the success of AI in imitating or replacing a human being is that we see what we want or expect to see and respond to conversation cues with our social instincts,” Abbate said. “We fill in the blanks and in doing so, we exaggerate the computer's intelligence.”

What it means for AI to solve a problem.

It is important that AI tools have defined criteria for success, she said. For instance, if AI needs to be trained, who decides what the correct labels are?

Failure also needs criteria.

“Should we allow things that steal people’s intellectual property? Or are used to harass or for identity theft?” Abbate said. “Failure is not the absence of success. We have a lot of things that are market successes and societal failures. This is where policy comes in.”

Written by Jenny Kincaid Boone