Elon Musk hit the brakes on his bid to buy Twitter, stating recently that the platform’s fake account issue is a potential showstopper.
Musk doesn’t believe Twitter’s claim that fake users and bots account for only 5% of its user base, and he wants proof to back that claim or the deal is off.
Musk isn’t alone in his suspicions: some estimates put the share of bot accounts as high as 15%, or approximately 48 million accounts.
Whether Musk’s belief that Twitter is crawling with fake users is true or fake news, bogus accounts that generate automated spam continue to plague every social platform and turn customers (and potential buyers) off.
How can Twitter get rid of fake accounts to get the deal done?
The answer may be in the (phone) numbers. Twitter and other social platforms can reduce, and even eliminate, fake accounts by requiring a phone number for verification.
Only a few years ago, fake accounts were easy to spot and dismiss. On Twitter, the telltale default ‘egg’ avatar in place of a profile picture usually signaled an account created in a hurry with nefarious intent.
Today fake accounts look legitimate–with photos of real people and videos depicting recent vacations and the family dog. These fake users may even post complaints about Twitter’s terms of service or its verification process, just like a real user.
Though today’s fake accounts can be harder to spot, it is possible.
Bad actors usually create bots to automate account creation. Though realistic on the surface, bot-created profiles often share characteristics, such as similar names (with multiple numbers appended), tweet frequency and type, and number of followers.
Bots create profiles in batches, so the way to spot them is to look for shared characteristics.
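As a minimal sketch of this idea, the heuristic below clusters accounts whose handles share a common base name with digits appended. The field names (`handle`, `followers`) and the cluster-size threshold are illustrative assumptions, not any platform’s actual detection logic:

```python
import re
from collections import defaultdict

def username_pattern(handle: str) -> str:
    """Normalize a handle by stripping trailing digits, e.g. 'jane_doe84721' -> 'jane_doe'."""
    return re.sub(r"\d+$", "", handle).lower()

def flag_suspected_batches(accounts: list[dict], min_cluster: int = 5) -> dict:
    """Group accounts whose handles share a base name with numbers appended.

    `accounts` is a list of dicts with a 'handle' key (hypothetical schema).
    Clusters of `min_cluster` or more near-identical handles are returned as
    likely batch-created bots. Real systems would also compare tweet cadence,
    follower counts, and creation timestamps before flagging anything.
    """
    clusters = defaultdict(list)
    for acct in accounts:
        clusters[username_pattern(acct["handle"])].append(acct)
    return {base: group for base, group in clusters.items() if len(group) >= min_cluster}
```

A handle-pattern check like this is cheap and catches only the laziest batches; it is one signal among many, not a verdict on its own.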
A bot is an automated program used to interact on social media, created from a combination of artificial intelligence, big data, and public databases to imitate human behavior.
Social media bots allow the creator to post and share without human interaction. As a result, a bot can automatically retweet every time a specific hashtag is used, whenever a specific account tweets, and much more.
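The trigger logic behind such a bot is simple. The sketch below shows the idea with a hypothetical `client` object and made-up tweet fields; it is not any real SDK’s API:

```python
def handle_tweet(tweet: dict, client, target_hashtag: str = "#giveaway",
                 target_account: str = "brand_account") -> None:
    """Retweet when a tweet matches a watched hashtag or comes from a watched account.

    `tweet` is a dict with 'text', 'author', and 'id' keys, and `client` is any
    object exposing a `retweet(tweet_id)` method -- both are illustrative
    stand-ins for whatever streaming API a real bot would wire up.
    """
    if target_hashtag in tweet["text"] or tweet["author"] == target_account:
        client.retweet(tweet["id"])
```

Attached to a live stream of tweets, a few lines like these can amplify content around the clock with no human in the loop, which is exactly why they scale so well for spam.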
While bots can be useful and practical–such as chatbots that let customers engage and receive real-time assistance from lifelike virtual agents–bots are predominantly used in dishonest and nefarious ways.
Twitter’s coveted blue verified badge requires additional identity scrutiny to ensure the person or organization (usually well-known) is legitimate. But this same level of security is not extended to all users, making fake accounts easier to create.
Although the issues around Twitter’s sale are likely complex, the challenge of how to rid the platform of fake accounts can be solved through a simple request to users: Provide your phone number.
Twitter offers new users a choice between an email address and a phone number during onboarding. Requiring both would improve that process.
Some people may not feel safe providing their phone number, whether from a reluctance to share personal information or a fear of opening themselves up to robocalls. But collecting both email addresses and phone numbers adds the layers of security and risk analysis needed to stop automated account creation.
There are valuable insights tied to a person’s phone number: Phone data attributes, traffic patterns, carrier data, and more. With machine learning and data science that draws on more than 15 years of data patterns, the phone number serves as a powerful trust anchor and can help Twitter discern between human and non-human behavior.
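To make this concrete, here is a toy risk score built from phone-derived signals. The signal names, weights, and threshold below are illustrative assumptions for this sketch, not any vendor’s actual model; production systems weigh carrier data, line type, and traffic history with trained machine-learning models rather than fixed weights:

```python
def phone_risk_score(signals: dict) -> float:
    """Combine hypothetical phone-intelligence signals into a 0-1 risk score.

    Each key is a boolean flag; the weights are made-up values chosen only
    to illustrate how independent signals might be blended.
    """
    weights = {
        "is_voip": 0.4,          # virtual numbers are cheap to mass-provision
        "recently_ported": 0.2,  # number changed hands recently
        "burst_signups": 0.3,    # same number seen across many new accounts
        "invalid_carrier": 0.1,  # carrier lookup failed
    }
    return sum(w for key, w in weights.items() if signals.get(key))

def allow_signup(signals: dict, threshold: float = 0.5) -> bool:
    """Gate account creation on the combined risk score (illustrative threshold)."""
    return phone_risk_score(signals) < threshold
```

The point is not the specific weights but the architecture: the phone number anchors several independent signals, and their combination is far harder for a bot farm to fake than any one signal alone.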
Even if Musk isn’t in the market for your business, it’s critical to identify and shut down fake users before they drive away your customers.
You can use AI to stop AI fraud: Artificial intelligence and digital identity intelligence can verify the legitimacy of your users in a matter of seconds, helping secure your platform and keep fake users out while minimizing friction for good customers.
The best way to keep your platform safe and free of fake users? Stop them during your onboarding process.