‘People have bias’: Disinformation-hunting AI by firm close to Pentagon part of ‘very bad trend’

Sputnik news agency and radio 01:03 GMT 09.10.2020

Web developer and technologist Chris Garaffa tells Sputnik that while machine intelligence firm Primer has experienced staff with close government ties, its recent US military contract aimed at combating disinformation raises questions about the limits of artificial intelligence and the possible misuse of the technology by US government officials.

"For all the US military’s technical advantages over adversaries, it still struggles to counter disinformation. A new software tool to be developed for the US Air Force and Special Operations Command, or SOCOM, may help change that," said a new Defense One article published on October 2, just a day following the announcement of Primer's multi-million-dollar contract.

Garaffa told Radio Sputnik’s Political Misfits on Thursday that the deal, made in an effort to combat fake news, is part of a “very bad trend to make AI [determine] what is true and what is not.”

In fact, designing AI to do “anything [other] than summarizing information that should then be reviewed by a human” is problematic, Garaffa told hosts Bob Schlehuber and Michelle Witte.

Garaffa highlighted that Primer is a relatively small machine intelligence company, yet it holds major contracts with retail corporation Walmart, the US Air Force, and In-Q-Tel, the investment arm of the US Central Intelligence Agency (CIA).

Amy Heineike and Sean Gourley, former employees of the private software and services company Quid, are involved in Primer, as is Brian Raymond, a former CIA officer who served as director for Iraq on the National Security Council (NSC).

Speaking of Raymond, Garaffa stated that “this is somebody who has very, very close ties to the government and to the intelligence community, having been on the NSC.”

“I don’t trust AI to do this kind of real-world analysis, in real time, in the state that it’s in,” they said.

Garaffa also expressed skepticism about the Primer AI’s ability to determine, for example, whether the information in a particular social media post is “actually telling the truth.”

“That’s what the Air Force wants it for. [The service] wants it for situational awareness on the ground,” they said, noting that the technology could later be adopted by an array of federal government agencies, such as the US Department of Homeland Security and its subagency, US Immigration and Customs Enforcement (ICE).

“They could use it to monitor protests, which they, you know, do,” Garaffa remarked.

There’s also the general issue of trust and how something comes to be regarded as fact versus fiction.

“After any kind of situation, there’s a lot of misinformation that comes out, because people are trying to figure out what happened,” Garaffa noted. “Some of it is legitimately misdirecting people, but some of it is … people reporting what they’ve seen … and it turns out to be wrong.”

“There’s no information about how Primer addresses any of these questions, or bias that is inherent in the development of AI,” they emphasized.

“Remember, these algorithms are developed by people. People have bias. People have blind spots.”