I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups and bot funding. I am also interested in recent developments in data science, machine learning and natural language processing.


Bot armies of fake followers are the foot soldiers of fake news

By Rajendra | Aug 17, 2017

We've been hearing for some time now that fake news is real, as in, it exists, and there's a lot of it. We've also been hearing that one of the major ways it spreads is itself fake: through bots, not humans.

And now comes a team of researchers from Indiana University who say they have the data to confirm it. In a paper titled "The Spread of Fake News by Social Bots", they reported that an analysis of 14m social media messages regarding 400,000 claims on Twitter during and following the 2016 US presidential campaign and election provided "evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots".

Propaganda is nothing new, of course. It has always been a reality of human societies, especially when it comes to politics. What makes things different now is scale and distribution: it is disseminated not so much by word of mouth, speeches and traditional media outlets as by an army of bots that can amplify fraudulent stories in seconds, pushing out millions of tweets or posts on other social media platforms before the fact-checkers even get in gear.

The researchers said that several fact-checking sites list 122 websites that "routinely" publish fake news, which then gets picked up and amplified by the bots.

The bot accounts are, of course, designed to trick other users into thinking they are real people expressing opinions or promoting agendas. The scale and reach of bots have also been growing, which is no surprise, since it doesn't take much time or money to unleash them. They've been around for several election cycles.

Gawker reported back in 2011 that up to 80% of then Republican presidential hopeful Newt Gingrich's alleged 1.3m Twitter followers were fake, generated by agencies Gingrich hired to boost the number.

Trump himself claims to have 100m social media followers, including about 32.4m on Twitter alone. But most estimates have concluded that about a third of his Twitter audience, some 11.6m accounts, are bots.

Which doesn't make the president an outlier. Every celebrity has fake followers, some of them in the millions. A couple of years ago, singer Katy Perry supposedly had 64m Twitter followers, of which a TwitterAudit sample report said 65% were fake.

Twitter itself is a participant in the inflation game. Naked Security reported in March that the company's own estimate that up to 8.5% of its accounts are managed by bots was low, seriously low. It cited a UK Sunday Times report that "up to 48m, or 15%, of the social media giant's 319m users were in fact bots".

Perhaps the only thing reality has going for it is that, when it comes to fake news, bots are less and less under the mainstream radar. They are now very big news, which has to be good for public awareness, and good timing for the paper's authors: Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Alessandro Flammini and Filippo Menczer.

Last week, on the day the MIT Technology Review published a review of their work, the Washington Post also carried a story about Nicole Mincey, a "super fan" of President Donald Trump whose account is likely fake. Twitter suspended the account after other users complained.

The Post cited experts who said the account "bears a lot of signs of a Russia-backed disinformation campaign". They included Clint Watts, a former FBI agent and fellow at the Foreign Policy Research Institute who is the creator of Hamilton 68, a dashboard tracking Russian propaganda on Twitter.

This, of course, was after the president had tweeted his praise of "her" to his 32.4m (minus 11.6m) followers.

Shortly after, the syndicated National Public Radio political talk show "On Point" did an hour on the topic.

According to the researchers, some small comfort may be that the use of bots is bipartisan. "Successful sources of fake news in the US, including those on both ends of the political spectrum, are heavily supported by social bots," they wrote. They also listed "manipulation strategies" that the bots use to be more effective in influencing public opinion:

First, bots are particularly active in amplifying fake news in the very early spreading moments, before a claim goes viral.

Second, bots target influential users through replies and mentions.

Finally, bots may disguise their geographic locations. People are vulnerable to these kinds of manipulation, retweeting bots that post false news just as much as they retweet other humans.

And they said other platforms, Facebook, Instagram, Snapchat and others, can be just as easily manipulated "automatically and anonymously".
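The three manipulation strategies above map naturally onto behavioral signals a detector could check. As a rough sketch in Python (the feature names, thresholds and weights here are invented for illustration, not the researchers' actual model):

```python
# Illustrative heuristic scoring of bot-like behavior, loosely inspired by
# the three manipulation strategies described above. All feature names and
# weights are assumptions made for this sketch.

def bot_likeness_score(account):
    """Return a 0..1 score from three illustrative behavioral signals."""
    score = 0.0
    # 1. Amplifying claims in their first moments, before they go viral
    if account["median_secs_to_retweet"] < 60:
        score += 0.4
    # 2. Targeting influential users through replies and mentions
    if account["mentions_of_verified_per_tweet"] > 0.5:
        score += 0.3
    # 3. Inconsistent or disguised geographic metadata
    if account["profile_location"] != account["inferred_tweet_location"]:
        score += 0.3
    return round(score, 2)

suspicious = bot_likeness_score({
    "median_secs_to_retweet": 12,
    "mentions_of_verified_per_tweet": 0.9,
    "profile_location": "Ohio",
    "inferred_tweet_location": "unknown",
})
print(suspicious)  # 1.0 when all three signals fire
```

A real detector would combine far richer signals, but the sketch shows why the paper's strategies are detectable at all: each one leaves a measurable behavioral trace.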

What to do? The researchers offered a couple of suggestions, but acknowledged that those have limits. One is to create "partnerships between social media platforms and academic research. For example, our lab and others are developing machine-learning algorithms to detect social bots."

That, however, can be "fraught with peril," since algorithms are not perfect. "Even a single false-positive error leading to the suspension of a legitimate account may foster valid concerns about censorship," they wrote.
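To show in miniature what machine-learning bot detection means, here is a toy nearest-centroid classifier in Python. The two features (tweets per day, follower-to-friend ratio) and the tiny training set are invented for this sketch; real systems like the ones the researchers describe use hundreds of features, large labeled datasets, and face exactly the false-positive risk noted above:

```python
# Toy nearest-centroid classifier over two made-up account features.
# Training rows are invented for illustration only.

# (label, tweets_per_day, follower_friend_ratio)
training = [
    ("bot",   450.0, 0.05), ("bot",   600.0, 0.10), ("bot",   300.0, 0.02),
    ("human",  12.0, 1.20), ("human",   5.0, 0.90), ("human",  25.0, 2.00),
]

def centroid(label):
    """Mean feature vector of all training rows with the given label."""
    rows = [(t, r) for lab, t, r in training if lab == label]
    n = len(rows)
    return (sum(t for t, _ in rows) / n, sum(r for _, r in rows) / n)

def classify(tweets_per_day, ratio):
    """Assign the label of the nearest class centroid (squared Euclidean distance)."""
    return min(("bot", "human"),
               key=lambda lab: (centroid(lab)[0] - tweets_per_day) ** 2
                             + (centroid(lab)[1] - ratio) ** 2)

print(classify(500.0, 0.03))  # "bot"
print(classify(8.0, 1.50))    # "human"
```

Even this toy version illustrates the trade-off the researchers worry about: a prolific human journalist can look statistically bot-like, which is how legitimate accounts end up suspended.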

Another is to use CAPTCHAs, challenge-response tests, to determine whether a user is human. These have been effective in curbing spam and other online abuses, but they add what the researchers delicately call "undesirable friction" to legitimate uses of automation by organizations like the press or emergency response systems. As they put it:

These are hard trade-offs that must be studied carefully as we contemplate ways to address the fake news epidemics.
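A CAPTCHA is, at bottom, a challenge-response protocol. A minimal Python sketch of the server-side flow (an arithmetic question stands in for the distorted-image or behavioral challenges real CAPTCHAs use; this is illustrative only):

```python
# Minimal sketch of a challenge-response check in the spirit of a CAPTCHA:
# the server issues a challenge, remembers the expected answer, and verifies
# the response exactly once before allowing the action.
import random

pending = {}  # challenge_id -> expected answer

def issue_challenge():
    """Create a new arithmetic challenge and return (id, question text)."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    challenge_id = len(pending) + 1
    pending[challenge_id] = a + b
    return challenge_id, f"What is {a} + {b}?"

def verify(challenge_id, response):
    """One-shot verification: the challenge is consumed whether or not it matches."""
    expected = pending.pop(challenge_id, None)
    return expected is not None and response == expected

cid, question = issue_challenge()
print(question)  # e.g. "What is 4 + 7?"
```

The one-shot consumption in `verify` is what makes replaying a solved challenge useless; the "undesirable friction" comes from the fact that legitimate automated publishers cannot answer the question at all.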

Source: Naked Security