Let's Play Spot the Bot
Have you ever found yourself browsing the internet when you’ve been asked to do a series of tests to ensure you’re not a robot?
Well, the reason websites need these tests can be traced back to the exponential growth in the use of bots online.
Bots are automated software applications that perform specific repetitive tasks. These tasks are coded by a human and can replace normal human internet activities such as logging into an account, web crawling, chatting with other users, or even sending spam.
Some bots can be useful; a customer service chatbot, for example, would be classified as a 'good bot'. However, 'bad bots' are becoming more and more common.
Bad bots are often used to carry out illegal cyber activities, such as running scams, as well as legal but still malicious behaviours such as click fraud, email harvesting, spamming, or website stuffing. Web crawling remains the most prevalent use of bots on the internet today. Another kind of bad bot is the social media bot: an account created to look like a real person but used to spread misinformation and deceive others.
The first bots appeared in 1988 alongside Internet Relay Chat (IRC); they kept chat channels open by preventing connections from closing due to prolonged inactivity.
So how have bots changed since 1988, and how have they shaped the internet we know today?
Well, according to a study by cybersecurity firm Imperva, bot activity made up roughly 40% of all internet traffic in 2020.
Twitter also reported that over 15% of all Twitter users could be bots in January of 2018. A study by the Pew Research Center found that two-thirds of all tweeted links had been shared by suspected bots. These bots have directly contributed to the dangerous spread of fake news and conspiracy theories online.
This represents a huge share of the activity we see online. Twitter and social media bots have become even more intelligent over the years, producing more human-like conversations.
These bots are used to create or add to conversations for political or financial gain. For example, bots are often used to publish Facebook posts or tweets to give the impression that an issue is real and widely discussed. These posts are often political, and the intention is to deceive. Such bots manipulate the conversations taking place on social media every day and fuel misinformation campaigns with ease.
So if bots can be used to manipulate social media conversations regarding elections and political events – why aren’t we stopping them?
The answer is that the issue is vast and these bots are extremely hard to find. Twitter began mass-removing suspected bot accounts in 2018. One Twitter staff member suggested that between 8.5 and 10 million bots are removed every week based on suspicious behaviour.
In the first quarter of 2019 alone, it is estimated that Facebook removed 2.2 billion fake accounts. This process is slow, and unfortunately, more bot accounts are being created every day in an attempt to influence the unsuspecting.
If you are unsure of how to sort real from fake, the following steps will help you assess bots, trolls, or fake accounts on social media.
1. Look at the Account Name
While celebrity and official accounts across all social media platforms usually carry a blue tick to verify that they're legitimate, it can be harder to differentiate bots from personal accounts.
One example of this kind of bot is an account impersonating someone famous, such as Donald Trump. An account impersonating Donald Trump may look identical to his official account, but look closely and there are dead giveaways that it is fake, such as a misspelled username like '@donalddtrumpp'.
A common giveaway that a personal account is actually a bot spreading misinformation is the lack of a real name, or a randomly generated handle, often ending in a string of numbers.
2. Identify a Pattern of Speech
Do their posts read similarly? Bots are often run from a single script and programmed with identical language and patterns of speech. A giveaway of bot behaviour is seeing multiple accounts publish the same tweet, or use an article headline as the body of the tweet.
Bots can even tag real people in an attempt to draw them into conversations and appear more human.
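To make this check concrete, the sketch below (with entirely made-up handles and tweet text) flags any message posted word-for-word by several different accounts, the telltale pattern described above. The threshold of three accounts is an illustrative assumption, not a rule:

```python
from collections import Counter

# Hypothetical sample of (handle, tweet text) pairs from different accounts.
tweets = [
    ("@news_fan_291", "Shocking report: city council hides the truth"),
    ("@jane84629301", "Shocking report: city council hides the truth"),
    ("@mike_real", "Had a great walk in the park today"),
    ("@truth_bot_77", "Shocking report: city council hides the truth"),
]

# Count how many accounts posted each exact text.
by_text = Counter(text for _, text in tweets)

# Flag any text repeated verbatim by three or more accounts.
coordinated = [text for text, n in by_text.items() if n >= 3]
print(coordinated)  # ['Shocking report: city council hides the truth']
```

In practice, detection tools compare near-duplicates rather than exact strings, but the idea is the same: identical wording across many accounts is a strong bot signal.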
3. Check the Profile Picture
Does the account you're looking at have a photo of a real person as its profile picture? No? Then you could be looking at a bot.
A simple Google reverse image search will usually help you identify where the photo originated and determine whether or not the account is a bot.
A reverse image search also works for the content suspicious accounts share. These accounts often circulate doctored or fake images, so checking where an image came from is a great way of finding out whether the account is a trustworthy source of information.
4. Check Their Timeline
How often is this account posting? Is it posting 24 hours a day?
If an account is posting content every few minutes, or at odd hours of the night, the probability of it being a bot is high. Most real humans don't have the time to post 20-plus political tweets per day.
Another way to detect a bot is by looking at the followers-to-following ratio. If an account has little to no followers while it follows a few thousand people, you're most likely looking at a bot.
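For readers who like to see the checklist made concrete, the checks above can be combined into a tiny scoring sketch. The account fields, thresholds, and equal weights below are illustrative assumptions only, not a real detection algorithm:

```python
import re

def bot_score(account):
    """Score an account against the simple heuristics above.

    `account` is a hypothetical dict; higher scores mean more bot-like.
    """
    score = 0
    # Step 1: randomly generated handle, given away by trailing digits.
    if re.search(r"\d{4,}$", account["handle"]):
        score += 1
    # Step 3: no real profile photo.
    if not account["has_real_photo"]:
        score += 1
    # Step 4a: posting far more often than a typical human could.
    if account["posts_per_day"] > 50:
        score += 1
    # Step 4b: following thousands while having almost no followers.
    if account["following"] > 1000 and account["followers"] < 50:
        score += 1
    return score

suspicious = bot_score({"handle": "jane84629301", "has_real_photo": False,
                        "posts_per_day": 120, "followers": 3, "following": 4500})
print(suspicious)  # 4: every heuristic fired
```

No single signal is conclusive on its own; it is the combination of several of these giveaways that should raise your suspicion.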
At RiskEye, we believe seeing this digital world clearly requires new skills. RiskEye monitors your brand online 24/7, using real people to send you alerts, so you don't have to spend so much time watching for risk.