Bots are automated software programs that perform repetitive tasks, such as gathering data from the internet. They can handle tedious, time-consuming processes efficiently, but they can also be deployed to mine users’ data or manipulate public opinion. Security company Imperva Incapsula’s Bot Traffic Report 2016 estimates that approximately 30% of internet traffic is produced by malicious bots.
On social media, bots collect information that might interest users by crawling the internet for specific content and sharing it on sites like Facebook and Twitter, using keywords and hashtags in their searches. Some social bots are designed to behave like humans: they use emojis in their posts, post only at reasonable hours of the day, or limit the amount of information they share. They have become sophisticated enough that distinguishing a bot-generated persona from a live human can be difficult. In 2014, Twitter revealed in a Securities and Exchange Commission filing that approximately 8.5% of its users were bots, a share that may have grown to as much as 15% by 2017.
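To see how little machinery this takes, here is a purely illustrative Python sketch of such a bot. The `search_posts` and `publish` functions are hypothetical stand-ins for a platform’s real API, and the human-mimicry details (daytime-only activity, limited volume, an emoji) mirror the behaviors described above.

```python
import random
import time
from datetime import datetime

# Hypothetical stand-ins for a platform API -- not calls from any real library.
def search_posts(hashtag):
    """Pretend to crawl the platform for recent posts matching a hashtag."""
    return [f"Story {i} about {hashtag}" for i in range(10)]

def publish(text):
    """Pretend to post to the bot's own account."""
    print(f"[{datetime.now():%H:%M}] {text}")

HASHTAGS = ["#election", "#health", "#breaking"]

def run_bot_once():
    # Mimic a human: act only at reasonable hours of the day.
    if not 8 <= datetime.now().hour <= 22:
        return
    posts = search_posts(random.choice(HASHTAGS))
    # Mimic a human: share only a couple of items, add an emoji,
    # and pause an irregular interval between posts.
    for post in random.sample(posts, k=2):
        publish(post + " 👍")
        time.sleep(random.uniform(1, 3))  # seconds here; a real bot would wait minutes

run_bot_once()
```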
People who are unaware that they are interacting with a bot can easily be fed false information. According to research published in Communications of the ACM in 2016, more than 20% of authentic Facebook users accept friend requests indiscriminately, and people with large networks of friends are especially likely to accept requests from people they don’t know. This makes it relatively easy for bots to infiltrate a network of social media users.
Can technology help?
Technology to detect bots is in its infancy, and digital security experts are working on approaches to recognize them automatically. Indiana University launched the Observatory on Social Media project (previously known as Truthy), which compares suspected Twitter accounts with the characteristics of known bots collected in its database.
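In spirit, that kind of comparison can be sketched in a few lines of Python. Everything below is illustrative: the three behavioral features and the “known bot” vectors are invented, and the actual Observatory draws on a far richer feature set.

```python
import math

# Invented feature vectors for accounts already identified as bots.
# Features: (posts per day, followers-to-following ratio, fraction of posts with links)
KNOWN_BOTS = [
    (180.0, 0.02, 0.95),
    (95.0, 0.10, 0.88),
    (250.0, 0.01, 0.99),
]

def distance(a, b):
    """Euclidean distance between two feature vectors (crude but simple)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def looks_like_known_bot(account, threshold=50.0):
    """Flag an account whose behavior sits close to any known bot profile."""
    return min(distance(account, bot) for bot in KNOWN_BOTS) < threshold

# A suspected account posting 200 times a day with almost no followers:
print(looks_like_known_bot((200.0, 0.03, 0.97)))  # True under these assumptions
```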
The Defense Advanced Research Projects Agency, an arm of the US Defense Department responsible for developing new technologies for the armed forces, sponsored a competition in 2015 to identify bots that simulate human behavior and attempt to influence opinions on social networks. The project discovered that a semiautomated process combining inconsistency detection, behavioral modeling, text analysis, network analysis, and machine learning is the most effective means of identifying malicious bots.
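As a rough illustration of that finding, the sketch below combines three invented feature families (one each for text, behavior, and network structure) in a single scikit-learn classifier. The data are synthetic and the feature choices are assumptions, not the challenge’s actual methodology.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for three feature families (all values invented):
# text analysis (vocabulary diversity), behavioral modeling (posts per hour),
# network analysis (clustering of the follower graph).
humans = np.column_stack([
    rng.normal(0.6, 0.15, n),   # diverse vocabulary
    rng.normal(0.4, 0.20, n),   # modest posting rate
    rng.normal(0.30, 0.10, n),  # organically clustered followers
])
bots = np.column_stack([
    rng.normal(0.1, 0.05, n),   # repetitive text
    rng.normal(3.0, 1.00, n),   # relentless posting
    rng.normal(0.05, 0.03, n),  # sparse, star-shaped follower graph
])

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```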
One major difficulty in identifying some bots is their short lifespan. Bots are often created for a specific task, and once that task is complete, their accounts are deleted. Detecting bots before they can do any harm is critical to shutting them down.
Bot literacy
The good news is that it is possible to watch for bots on social networks. To protect ourselves from bots that spread misinformation, we can use the following methods and teach our users to do the same (the short code sketch after this list shows how several of these signals can be combined):
- Do not accept friend requests from accounts that lack a profile picture, have garbled or misspelled handles, have few tweets or shares, or follow far more accounts than follow them back.
- Report any bots that you’ve identified. Social media sites provide links for reporting misuse.
- Rather than relying on a single hashtag, use a wide variety of hashtags and change them on a regular basis.
- Check the follower counts of new friends. If an account you follow gains a large number of followers overnight, bots are probably involved.
- Read before sharing. Many people share articles without reading anything but the headline, which may be misleading or unrelated to the story it is attached to.
- Be skeptical. Verify sources, and use such fact-checking sites as Snopes or PolitiFact.
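For illustration, several of the checklist’s red flags can be rolled into a simple scoring function. This is a minimal sketch, not a vetted detector: the `Account` fields and the thresholds (fewer than 10 posts, following more than twice as many accounts as follow back, digits tacked onto the handle) are assumptions chosen only to mirror the bullet points above.

```python
from dataclasses import dataclass

@dataclass
class Account:
    has_profile_picture: bool
    handle: str
    post_count: int
    followers: int
    following: int

def suspicion_score(acct: Account) -> int:
    """Count how many red flags from the checklist an account raises.
    Thresholds are illustrative, not established cutoffs."""
    flags = 0
    if not acct.has_profile_picture:
        flags += 1
    if any(ch.isdigit() for ch in acct.handle[-4:]):  # e.g. 'jane84731'
        flags += 1
    if acct.post_count < 10:
        flags += 1
    if acct.following > 2 * max(acct.followers, 1):
        flags += 1
    return flags

new_friend = Account(False, "realperson48293", 3, 12, 900)
print(suspicion_score(new_friend))  # 4 -- worth a closer look before accepting
```

A high score does not prove an account is a bot, but it is a reasonable cue to examine the profile before accepting the request.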
If our students and patrons are taught to be skeptical about information sources, they are more likely to discern the truth. Helping individuals become information literate is one of the most important services we can offer.