According to computer scientists, there is a strong possibility that AI-backed bots and fake accounts could wreak havoc during the 2020 elections, and that they will be much harder to detect.
A study by a USC Information Sciences Institute (USC ISI) computer scientist suggests that bots and fake accounts have evolved and are now better able to mimic human behavior in order to avoid detection. The study, published in the journal First Monday, examines bot behavior during the 2018 US elections compared with bot behavior during the 2016 US elections.
For their research, the scientists studied almost 250,000 active social media users who discussed the US elections in both 2016 and 2018, and detected over 30,000 bots. They found that bots in 2016 focused primarily on retweets and on posting the same message in high volume. However, as human social activity online has evolved, so have bots: in the 2018 election season, just as humans retweeted less than they did in 2016, bots were less likely to share the same messages in high volume.
Bots are now more likely to use a multi-bot approach to give a false impression of human engagement. In addition, during the 2018 elections, as humans became much more likely to engage through replies, bots tried to establish a voice, add to the dialogue, and engage through the use of polls, a strategy typical of reputable news agencies and pollsters, possibly aimed at lending legitimacy to these accounts.
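As an illustration only, and not the study's actual method, the 2016-style bot signature described above (a high proportion of retweets and many duplicate messages) could be approximated with a crude heuristic. The function name, thresholds, and data format below are all hypothetical:

```python
from collections import Counter

def flag_2016_style_bots(accounts, retweet_ratio_threshold=0.8,
                         dup_ratio_threshold=0.5):
    """Flag accounts whose activity matches the 2016-style bot signature:
    mostly retweets, or many copies of the same message.

    `accounts` maps an account id to a list of (is_retweet, text) tuples.
    Thresholds are illustrative guesses, not values from the study.
    """
    flagged = []
    for account, tweets in accounts.items():
        if not tweets:
            continue
        # Share of this account's posts that are retweets.
        retweet_ratio = sum(1 for is_rt, _ in tweets if is_rt) / len(tweets)
        # Share taken up by the single most-repeated message.
        top_count = Counter(text for _, text in tweets).most_common(1)[0][1]
        dup_ratio = top_count / len(tweets)
        if retweet_ratio >= retweet_ratio_threshold or dup_ratio >= dup_ratio_threshold:
            flagged.append(account)
    return flagged

# Toy data: one account that mostly retweets one message, one that varies.
accounts = {
    "bot_like": [(True, "Vote X!")] * 8 + [(False, "Vote X!")] * 2,
    "human_like": [(False, "Interesting debate"), (True, "RT: poll results"),
                   (False, "My own take")],
}
print(flag_2016_style_bots(accounts))  # ['bot_like']
```

The 2018-era bots described in the study would evade exactly this kind of volume-based check, which is why the researchers say detection is getting harder.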
This post was originally published on Software Market