Hundreds of thousands of “abusive applications” go undetected by Twitter for months, according to a recent study, despite an “ongoing arms race” between the social media company and internet trolls.
The study, conducted by Professor Zubair Shafiq and graduate student Shehroze Farooqi of the University of Iowa, observed the social media site for 16 months, examining the effectiveness of Twitter’s attempts to clamp down on third-party applications generating misinformation, spam, scams, and other abusive content. It found that Twitter is having trouble keeping up.
Third-party applications, which can control multiple Twitter accounts at once, are “widely used for benign purposes,” the study says. “[Unfortunately], they can also be exploited by attackers to compromise and orchestrate a large number of accounts for nefarious purposes.”
Malicious actors can use these applications to spread spam and malware, according to the study. They are also used to inflate the online reputation of particular accounts by interacting with them through likes, follows, or retweets.
Shafiq and Farooqi found that “attackers” create an average of 423 new abusive applications each day, and that these applications post more than 43,000 tweets daily. Over the 16-month study, the researchers reported tracking a total of 16 million tweets posted by 167,000 abusive accounts.
According to the researchers, their study observed a sample of “at most” 1 percent of all tweets posted each day.
“[The] actual number of tweets posted by abusive application[s] is likely higher by roughly two orders of magnitude,” Farooqi and Shafiq wrote. “Thus, we estimate the number of daily and total tweets posted by abusive application[s] to be in the order of millions and billions, respectively.”
Each online attacker tends to manage thousands of active abusive applications at one time, the study says. On average, Farooqi and Shafiq found, it takes Twitter more than six months to shut down these applications.
“Third-party Twitter applications present a convenient way for attackers to orchestrate fake or compromised accounts through” the site’s developer platform, Farooqi and Shafiq wrote. “…While recently announced countermeasures by Twitter detect some abusive applications, we note that a vast majority of abusive applications still go undetected for a long time period.”
Attackers use applications to control fake accounts, which they create themselves or purchase from underground marketplaces, or they can access and control real accounts through phishing or other internet scams. Some even bribe real users (through money, followers, or other means) for access to their accounts.
Large tech companies, including social media sites like Twitter and Facebook, have come under public scrutiny in recent years due to the prevalence of bot accounts on their sites and the role these accounts play in spreading misinformation and abuse.
Tech executives like Jack Dorsey of Twitter and Mark Zuckerberg of Facebook have appeared before Congress in the last year to speak with lawmakers about foreign influence on U.S. elections through social media. Social media companies have been criticized for unfairly applying their code of conduct policies, particularly by political conservatives.
Earlier this month, Facebook blocked 22 pages linked to online provocateur Alex Jones and his website InfoWars, along with dozens of others. Jones himself was banned from the site in August for violating rules related to “hate speech, bullying, and graphic violence,” according to The Verge.
Beyond demonstrating the problem, Farooqi and Shafiq’s study proposed a solution — or at least a partial one. They developed a machine-learning system that purports to detect abusive applications within their first seven tweets, whereas Twitter, 60 percent of the time, does not flag an application as abusive until it has posted more than 100 tweets.
Farooqi and Shafiq claim their system could detect abusive applications with nearly 93 percent precision, including “a large fraction” of those missed by Twitter.