Answer to the most common question: Why am I suddenly getting a lot of spam? The good news: a spike in spam rarely lasts more than a few days. What happens:
- Your account name is on a list used by a group of spam accounts going active. As they are suspended, your spam will drop to previous levels.
- You’ve been tweeting about things that generate spam, like iPads. If you want to converse a lot about “spammish” topics, best to do it by DM.
- If your spam is by DM, one or more of the people you follow has signed up for a spammy service, or been hijacked (hijackers are good at fooling people)…or you’re following the wrong people!
Spammers on Twitter have gotten very sophisticated in the last couple of years. Tools refined by large spam website networks have come to Twitter. The most serious side of this problem is that if it grows out of control, it can make Twitter unusable for real people. And there is little Twitter can do about it. Even if they strongly lock down the creation of new accounts, many existing spam accounts are mostly undetectable, and you can’t stop spammers from creating accounts completely. All the more important to be careful about who you follow and engage with on Twitter!
What can you do?
Opt out of common sources of spammy tweets:
- Send a DM to @optmeout (they will follow you back after you follow them)
- http://lolquiz.com/optout Click “opt out”
- http://pollpigeon.com/optout/ Click “Don’t send me any Direct Messages”
Use tools to find and avoid Twitter spam:
How advanced spam tools work
Most of what we think of as spam (tweets that are obvious ads, for example) is not very sophisticated. The most sophisticated spammers create hundreds or thousands of fake accounts through networks of servers usable for this purpose (typically proxies or botnets of infected computers), each account tweeting human-like content that is fully or partially auto-generated. Rather than sending typical spam, these accounts are created and run by bots (computer software) and made to look like real people, but they are completely fake.
Automatic content creators have been around a long time for website spammers. Originally they used content from existing sites as partial sources and combined bits and pieces of text together using grammatical rules to create new articles. Nowadays they are much more sophisticated, generating machine-written articles on any topic given a set of keywords and topic guidelines. These tools have been adapted for Twitter. Using a database of tweet types, sample content and language rules, they create machine-written tweets that look approximately “human.” They can also do things such as retweet something from new followers occasionally to seem more engaged and real.
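To make the template-and-rules idea concrete, here is a toy sketch in Python. The templates and word lists are invented for illustration; real tools draw on much larger databases of tweet types, sample content and language rules, but the slot-filling principle is the same:

```python
import random

# Hypothetical templates and word lists -- invented for illustration.
# Real spam tools use large databases of tweet types and language rules.
TEMPLATES = [
    "just tried {thing} and honestly {reaction}",
    "anyone else think {thing} is {adjective}?",
    "{reaction}!! can't believe {thing} today",
]

WORDS = {
    "thing": ["the new iPad", "green tea", "that playoff game"],
    "reaction": ["loving it", "not impressed", "so surprised"],
    "adjective": ["overrated", "amazing", "weird"],
}

def fake_tweet(rng=random):
    """Fill a random template's slots to produce one 'human-ish' tweet."""
    template = rng.choice(TEMPLATES)
    # str.format ignores unused keyword arguments, so we can pass all slots.
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})

print(fake_tweet())
```

Even this trivial version hints at why generated tweets can seem a little "off": the grammar is fixed, so the variety comes entirely from recombination.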
But, as @janpz points out, they often seem a little “off,” as if the writer isn’t quite all there, or is struggling with a language they don’t know well enough to tweet in.
What large Twitter spam account networks can do
While you might not be fooled by these accounts if you read dozens of their tweets, most of us would be fooled by only seeing a few of their tweets.
But their first goal is to fool Twitter’s ever-more sophisticated automatic spam detection so they are not suspended. (You can be sure Google is investigating ways to tell real from faked Twitter account reputation as well.) And the accounts are getting harder to distinguish all the time, for example by retweeting more (a retweet gives no clue about whether a human sent it) and tweeting original content less (original tweets are easier to flag as machine-written). The accounts try to mimic things humans do as closely as possible in following, replies, retweets, tweet topics and types. Large groups of them won’t share links at first (or ever) or send @msgs to many people, since those two kinds of tweets can be auto-detected as spam.
So what is their purpose if they don’t share links, don’t contact other users, and can be identified as fake accounts by a close read of a large group of their tweets? First, realize that an account can go a long time, building a false reputation, before tweeting its first link.
But there are several ways spammers use fake accounts. Most important is to have hundreds of accounts that can be controlled as a group. Then you can have a large portion of them act to make something popular. For example:
- These accounts can follow other accounts, making them seem “popular,” and giving them a falsely positive “reputation.”
- They can retweet other accounts, another method by which a group of accounts can falsely increase the reputation of other accounts.
- They can tweet links or hashtags or use retweets to influence trends. The trends influenced are not typically Twitter’s own trending topics, but third-party sites (such as Twittersphere) that track the most popular links on any given day. These viral topic aggregators drive a lot of traffic to websites, because people visit them to find interesting links. Third-party sites also sometimes link to one another; for example, popurls.com uses Twittersphere in its column showing the most popular links on Twitter at any given time. So a group of fake accounts might all start tweeting about something at the same time to make that content “go viral” for a few hours.
- They can send @ messages to Twitter users they have identified as “probably real,” for example by tweeting only to users with a minimum Klout score who have written tweets on a particular topic before. A bot network of Twitter accounts might then promote a music event for an unsuspecting paying client, for example. Hundreds of fake accounts sending seemingly friendly messages about the event to thousands of real accounts is one way for spammers to sell advertising to social media managers or individual clients.
- There is also the disturbing potential for them to attack other accounts, reporting them as spam, say, or creating tweets designed to question their reputation for political or other purposes.
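The coordinated behavior in the list above is also what makes these networks detectable in aggregate: many accounts pushing the same link or hashtag in a short window. A minimal sketch of that idea in Python, assuming you have already reduced tweets to (account, timestamp, link-or-hashtag) tuples; the window and threshold values are invented for illustration, not tuned:

```python
from collections import defaultdict

def coordinated_bursts(tweets, window_secs=3600, min_accounts=20):
    """Flag tokens (links/hashtags) pushed by many accounts in one window.

    `tweets` is a list of (account, unix_timestamp, token) tuples.
    The one-hour window and 20-account threshold are illustrative only.
    """
    # Group the accounts that mentioned each token within each time bucket.
    buckets = defaultdict(set)  # (token, bucket index) -> set of accounts
    for account, ts, token in tweets:
        buckets[(token, ts // window_secs)].add(account)
    # A token mentioned by many distinct accounts in one bucket is suspect.
    return {token for (token, _), accounts in buckets.items()
            if len(accounts) >= min_accounts}
```

A real system would also have to discount genuinely viral news, which produces similar bursts from unrelated accounts; this sketch only shows the counting step.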
So on their own, or for paying clients, spammers can apply their network of bot Twitter accounts to a variety of purposes. Accounts that are built up in popularity can also be sold or traded, in which case existing tweets can be deleted so the new owner can have 100% of their own content showing.
Spammers provide services to fake social media “gurus”
Spammers nearly always generate accounts around different topics or interest areas. Each account has its own “personality” centered around a few keyword topics. For example, an account may be designed to appear:
- Politically conservative
- Sports oriented
- Etc., etc., etc.
That way, just like shady SEO consultants, Twitter bot network owners can provide seemingly legitimate services to “social media gurus.”
How the problem starts
Say you want to help a friend get more followers for their health food store (already the wrong goal, but a very popular one for new accounts trying to gain “reputation”).
You see an advertisement for a service offering to help you find Twitter followers on any topic. Behind it is a spam network with dozens (or many more) health-related Twitter accounts. The operator simply enters the name of your friend’s health food store Twitter account and a length of time, such as six weeks. Over those six weeks, all the health-related accounts will follow your friend’s account. They may even subcontract with a service that promotes the account name on health-oriented spam websites, pulling in a few real people with Twitter accounts as followers.
Spammers also sell tweeting services to social media gurus/managers, such as finding relevant tweets to retweet. Retweets can be fully automated without making an account seem “unhuman”: a bot that retweets real humans looks more human itself.
Another example: Say you are a new age musician with a new album. You want to use Twitter to promote your music. You hire a social media manager. They use a spam tool to find tweets that your account can retweet that seem to be the kind of things a human would retweet. Enter some settings, and the account will automatically retweet “music” and “new age” tweets from a database on a schedule, mixing in some appropriate general interest tweets. Then they also tweet some human-written things (that they write themselves), plus a few things the musician writes. The account becomes a combination of real, partial, and fully “fake” tweets under partial automatic control.
Using automation on a Twitter account isn’t always bad, but it’s a slippery slope. Since the more automation you use, the less time you spend, social media managers can develop an unhealthy addiction to these tools and do their clients a disservice. Clients who don’t know any better can be entranced by low-cost services that seem to promise magic: “Get your message out to thousands of highly-engaged Twitter users this month!” I see many accounts retweet the same tweets on an approximately six-week schedule. If you’re paying attention, it’s clear something “non-human” is going on. But when you check many of the accounts, you can also see there are real people there.
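That six-week repetition is easy to spot programmatically: the same text reappears from one account at long, regular intervals. A minimal sketch, assuming you have one account’s tweets as (timestamp, text) pairs; the 30-day minimum gap is an arbitrary illustrative threshold:

```python
from collections import defaultdict

def recycled_tweets(tweets, min_gap_days=30):
    """Find tweet texts an account has posted more than once, spaced far
    enough apart to suggest a rotation schedule rather than a quick repost.

    `tweets` is a list of (unix_timestamp, text) pairs for one account.
    The 30-day gap is an illustrative threshold, not a tuned value.
    """
    seen = defaultdict(list)
    for ts, text in tweets:
        # Normalize lightly so trivial edits don't hide a repeat.
        seen[text.strip().lower()].append(ts)
    repeats = {}
    for text, stamps in seen.items():
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        if gaps and min(gaps) >= min_gap_days * 86400:
            repeats[text] = len(stamps)
    return repeats
```

Run this over a suspicious account’s timeline and a content rotation shows up as a handful of texts, each posted every few weeks like clockwork.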
So a Twitter account or its followers can be almost any percentage of real or fake.
You can’t stop spammers, because they can always use networks of real humans to do part of their work. Take captchas (or reCAPTCHAs), for example: those hard-to-read codes you are asked to type in before a form or login is accepted. The captcha is designed to make sure a real person is completing the form. Spammers can use machines to fill out everything except the captcha, sending a screenshot of it to a human to complete. This is how the creation of new Twitter accounts can be automated. (Spammers also need a network of computers so that not too many accounts are created from any one IP address, as Twitter doesn’t allow many accounts to be created rapidly from a single IP address.)
No matter how sophisticated methods get, a machine will simply do the parts it can do, and pass off work to a network of humans for the rest. And over time, machines get smarter at doing the work humans used to have to do. Those captchas are getting harder and harder to read, because machines can read most of them reliably nowadays, with no need to involve humans. In fact, a machine can attempt to enter a captcha code, and if it’s unsuccessful, only then pass the next one on to a human to complete.
What about advertising?
Most spammers don’t tweet ads for others. They have their own websites they want to send you to. If you want to learn more about advertising on Twitter, read So How Does Twitter Make Money, Anyway? which has a section on third-party ad networks.
Facebook is an even bigger target for spammers than Twitter, though it is more difficult to crack. Generally only the more sophisticated tools are used there. In fact, spammers are on every social network in existence, and will be there on all future ones as well. This is by no means a problem specific to Twitter.
So many people are on the internet that you only need a tiny, tiny percentage of people to engage with advertisements to make money. To spammers, the internet is like the world’s biggest carnival, and every huckster in the world wants to be there to try their luck on the unsuspecting.
This is a dirty secret of all networks: some of the accounts are not run by humans. So whatever number of accounts you hear a network has, realize it will NEVER represent the number of real humans involved.