Artificial intelligence may be able to sift through the vast pool of online information and discern which news stories are fake and which are true -- or closer to the truth.
A research presentation on the topic was made Thursday at KAIST, a public research university in Daejeon.
Cha Mi-young, a professor at the KAIST School of Computing, proposed a future where artificial intelligence trained through deep learning can tell fake news apart from real.
KAIST hosts a monthly forum titled “Dinner and 4.0,” where future technology that will lead the “fourth industrial revolution” is discussed. KAIST professors take the opportunity to present their research to public officials and researchers, as well as students. The forum in November was the seventh such event.
Fake news became a prominent issue following findings that the top 20 percent of political news stories consumed on Facebook during the recent US presidential race lacked factual grounds.
A significant number of people presumed articles to be true based on their accompanying images, headlines and leads, without reading the actual text.
The online neologism “tl;dr,” short for “too long; didn’t read,” reflected this flaw in how news is consumed.
Cha attributed the problem to current online news algorithms, which register the most-clicked stories as content worth sharing. The media’s obsession with attracting greater online traffic worsened the spread of falsified stories.
According to Cha’s research, an AI trained through deep learning detected over 80 percent of fake news stories, while the accuracy of human discernment remained at 66 percent.
Cha said there are patterns in fake news that AI can pick up. Real news tends to draw attention, or traffic, only shortly after an article is published, while fake news prompts continued clicks from unconnected people in sporadic locations.
The fake items detected often dodged responsibility for credibility by leaning on phrases such as “I’ve heard,” “Not sure, but” and “Someone said.”
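Cha’s actual model is not detailed in the presentation, but the two signals she describes -- the timing of clicks and hedging language -- are easy to picture as features. The following Python sketch is purely illustrative: the phrase list comes from the article, while the 48-hour cutoff, the weighting and the example data are assumptions, not Cha’s method.

```python
# Illustrative sketch only -- not Cha's model. It turns the two signals named in
# the article (click timing and hedging language) into crude numeric features.

HEDGING_PHRASES = ["i've heard", "not sure, but", "someone said"]  # from the article


def hedging_count(text: str) -> int:
    """Count hedging phrases that dodge responsibility for a claim."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in HEDGING_PHRASES)


def late_click_fraction(click_hours: list[float]) -> float:
    """Fraction of clicks arriving more than 48 hours after publication.
    Real news tends to peak early; fake news keeps drawing sporadic clicks.
    The 48-hour cutoff is an assumption for illustration."""
    if not click_hours:
        return 0.0
    return sum(1 for h in click_hours if h > 48) / len(click_hours)


def suspicion_score(text: str, click_hours: list[float]) -> float:
    """Combine the two features into a score in [0, 1]; weights are arbitrary."""
    hedging = min(hedging_count(text), 3) / 3  # cap the phrase count at 3
    return 0.5 * hedging + 0.5 * late_click_fraction(click_hours)


if __name__ == "__main__":
    sample = "Not sure, but someone said sleeping with a fan on can be fatal."
    clicks = [1, 2, 200, 500, 900]  # hours after publication
    print(f"suspicion: {suspicion_score(sample, clicks):.2f}")
```

In practice, a deep learning system would learn such patterns from labeled examples rather than rely on hand-set weights; the sketch only shows what the described signals look like as data.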
“There was a time when Koreans took at face value that ‘sleeping with a fan would lead to death.’ It was fake news, but this was translated and spread even to Mexico,” Cha said, highlighting the extent to which fake news can travel.
Cha also acknowledged that there are times when people crave real news and times when they want to be entertained by light reading.
She added, “Effort should still be made for fast detection of fake news, and there should be barriers to the dissemination of false information.”
Cha also suggested a future in which news is categorized according to its credibility, with the most trustworthy information being labeled as first degree and the least as fifth. This could be achieved by first labeling clean information in order to differentiate what is fake and what is true.
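As an illustration of what such a labeling scheme might look like in code -- the article names five degrees but gives no scoring rule, so the score ranges below are assumptions -- a credibility score could simply be bucketed into the five tiers:

```python
# Illustrative only: the five-degree labels come from the article,
# but the score ranges are assumed for the sake of the example.

def credibility_degree(score: float) -> str:
    """Map a credibility score in [0, 1] to one of five degrees
    (first = most trustworthy, fifth = least)."""
    tiers = ["fifth", "fourth", "third", "second", "first"]
    index = min(int(score * 5), 4)  # 0.0-0.2 -> fifth ... 0.8-1.0 -> first
    return f"{tiers[index]} degree"


print(credibility_degree(0.95))  # first degree
print(credibility_degree(0.10))  # fifth degree
```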
Cha expressed a wish to team up with a big data startup to lay the groundwork for fact-checking.
By Lim Jeong-yeo (kaylalim@heraldcorp.com)