The WhatsApp chronicles of ice and fury
Behold the tales of how a single person can spread two million fake news messages per day, and why Facebook's messaging app will soon become an even worse misinformation machine
As the most expensive acquisition in technology — $19 billion in 2014 — WhatsApp occupies a curious position in the United States. Despite being the leading messaging platform in 169 countries, WhatsApp’s share of the domestic market still hasn’t taken off: only one in four smartphone owners uses the app.
This is probably why, every time the ‘fake news’ issue resurfaces in the American media cycle, WhatsApp is given a reprieve. It receives minimal attention from tech sites, it is rarely mentioned during Congress’ ‘fake news’ hearings, and it barely appears in trending documentaries like The Social Dilemma or The Great Hack. If you landed in the United States today to investigate fake news, hate speech, conspiracy theories, propaganda… the whole misinformation pack, WhatsApp would simply not exist.
Which is nuts, especially considering that WhatsApp reached two billion users worldwide in February, a figure that places the messaging app far ahead of the “official” Facebook Messenger (1.3 billion), for example, and on the same level as YouTube in popularity.
So where is WhatsApp’s crowd, if not in the U.S.? In absolute terms, the biggest WhatsApp base is India, with over 400 million users. But the top five countries in market penetration are the Philippines (90%), Vietnam (78%), and Colombia, Mexico, and Thailand (75%). These places, however, are just the tip of an A68-style iceberg that includes another 163 nations. Outside the U.S., immersed in the icy waters of shadowy schemes and heavy encryption, the coupling of fake news and WhatsApp is a catastrophic combo.
Unlike Twitter or Facebook, which are public platforms, WhatsApp is mainly a private network, made up of one-to-one connections and groups. One person can spread whatever they want in, say, 10 (or more) groups and reach 2,500 (or more) people. Even if some user challenges the veracity of that information in one group (which rarely happens), the counterclaim won’t reach the users of the other groups, nor the users who already received the message somewhere else. Furthermore, it is impossible to trace a message back to its source, take the original down, or label it as “misleading,” which basically turns WhatsApp into a dark wilderness of misinformation.
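The arithmetic behind that fan-out is worth spelling out. A back-of-envelope sketch, using the figures above plus an assumed forwarding rate (the 5% is my hypothetical, not a measured number):

```python
# Back-of-envelope reach of one sender, using the figures from the text.
groups = 10             # groups the sender posts to
people_per_group = 250  # 10 groups x 250 people matches the 2,500 above

direct_reach = groups * people_per_group
print(direct_reach)  # 2500 people see the message on the first hop

# Assumed (not from the text): if just 5% of recipients forward the
# message to one more 250-person group each, reach explodes on hop two.
forward_rate = 0.05
second_hop = int(direct_reach * forward_rate) * people_per_group
print(direct_reach + second_hop)  # 2500 + 125 * 250 = 33750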
And this illustration only demonstrates the reach of one person with one malicious intention in mind.
Now picture one person with several malicious intentions in mind — and one SIM card server to do the heavy lifting.
As the name suggests, a SIM card server is a piece of hardware that holds multiple SIM cards at the same time, letting one operator control several numbers on any social platform that requires phone-number verification — like WhatsApp. It is not the kind of equipment you can buy at the nearest Walmart, but it is easily found on any C2C or B2C marketplace, such as eBay.
Over the last week I spotted several of these machines on a South American website, with prices ranging from $500 to $16,000. One ad proudly proclaimed: “It allows up to 140,000 audio messages and up to 550,000 text messages a day, and it is possible to add extra channels, expanding its capacity to send up to 500,000 audio messages and 2,000,000 text messages a day. It comes with software that mimics human usage, thus allowing the lines to work longer.”
In sum, forget those images of hundreds of phones mounted on a wall that became famous when the first click farms were exposed in Asia: today’s scheme for spreading fake news and/or simulating online crowds comes down to one guy, a SIM card server, and a computer. That’s all it takes to generate 2,000,000 messages a day.
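To put the listing's claims in per-second terms (all figures taken straight from the ad copy quoted above):

```python
# Messages per second implied by the quoted ad copy.
text_per_day = 2_000_000
audio_per_day = 500_000
seconds_per_day = 24 * 60 * 60  # 86,400

print(round(text_per_day / seconds_per_day, 1))   # ~23.1 text messages/s
print(round(audio_per_day / seconds_per_day, 1))  # ~5.8 audio messages/s
```

Roughly 23 text messages every second, around the clock, from a single machine.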
If this new geometric progression weren’t enough to scare us, there’s encryption. WhatsApp’s technical paper describes it as follows: “The end-to-end encryption protocol is designed to prevent third parties and WhatsApp from having plaintext access to messages or calls. Even if encryption keys from a user’s device are ever physically compromised, they cannot be used to go back in time to decrypt previously transmitted messages.” Translation: no matter what’s in there — from child pornography to kitty videos, from neo-Nazi propaganda to graduation pics — neither WhatsApp nor law enforcement agencies can “see” it.
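A toy sketch of the property that paper describes. This is emphatically not the Signal protocol WhatsApp actually uses — just a one-time-pad-style illustration of why a relay server that never holds the key cannot read the traffic it carries:

```python
# Toy illustration (NOT the real Signal protocol): with end-to-end
# encryption, the relay server only ever sees ciphertext.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR; the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

# The key exists only on the two endpoints, never on the server.
shared_key = secrets.token_bytes(64)

plaintext = b"meet at noon"
ciphertext = xor_cipher(plaintext, shared_key)  # what the server relays

print(ciphertext)                                # unreadable without the key
print(xor_cipher(ciphertext, shared_key))        # b'meet at noon' again
```

The real protocol adds per-message keys and forward secrecy, but the core consequence is the same: whoever sits in the middle sees only noise.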
Last August, the award-winning Brazilian journalist Patrícia Campos Mello published a comprehensive book about the role of fake news in Brazil’s 2018 presidential election — not only a meticulously researched work, but an extremely factual one. Right at the beginning, A Máquina do Ódio (The Hate Machine, which has not been translated into English) details how “marketing” agencies offered “WhatsApp services” to candidates all over the country. “When the candidate also bought their database — a mobile phone number registry — each message sent would cost a little more, from R$0.01 to R$0.02.” Brazilian electoral legislation prohibits buying third-party databases, Campos Mello points out.
As you might have guessed, a lot of these services were related to a long-standing practice in politics: the nefarious art of deconstructing your opponent’s image. And since all this traffic circulated freely through the privacy of mobile lines, very little of what was spread via WhatsApp during the Brazilian elections was, in fact, debunked. “Very little” is a major understatement, since Brazilians exchange roughly seven billion messages per day using the service.
It took WhatsApp a year to publicly admit this. In October 2019, Ben Supple, WhatsApp’s Global Elections and Government Lead, acknowledged something Brazilians had known all along: “During last year’s Brazilian election, there was an action by companies that provided mass messages that violated our terms of use.” That is to say, Brazil’s presidential election was heavily influenced by the mass delivery of messages from agencies and campaigns using automated systems.
“The conditions were ideal for the dissemination of misinformation. In Brazil, many people use WhatsApp as their primary source of information and have no means to verify the veracity of the content.” That second sentence may sound like me rambling again, but no, it is still Supple, the guy from WhatsApp, on that same day.
One year after his public confession, nothing has changed. Yesterday, Brazil wrapped up another election period, and mass messaging via WhatsApp was, once again, a major issue. Last month, the same Brazilian journalist, Campos Mello, published another important story revealing how “marketing” agencies managed to circumvent electoral and WhatsApp rules to keep the mass-messaging industry in play. When confronted with the issue, WhatsApp’s director of public policy in Brazil, Dario Durigan, sounded like a broken record: “We’re committed to fighting automation and mass messaging.” In the same interview, though, he added: “But it is necessary to make candidates aware, so they don’t hire this kind of service.”
Okay, let’s hope this is not a global plan, because if relying on the honesty and self-awareness of politicians is going to be WhatsApp’s main strategy to stop misinformation on the platform, Brazil and the other 168 countries are doomed.
The fusion of mass messaging and encryption on a private network is problematic because it becomes an express highway for any type of unverified content. Even worse: unverified content that targets particular groups. It’s worth noting that this is not about sending obviously fake information to some ‘Class of 99’ WhatsApp group. This is about spreading false information or distorted narratives about sensitive issues in order to trigger a specific reaction. That’s when you face a very sinister weapon.
In addition to Campos Mello’s book, another work that addresses this theme with in-depth investigation and impressive honesty is Les ingénieurs du chaos, by Giuliano Da Empoli (The Engineers of Chaos, which also hasn’t been translated into English). In less than 200 pages, Da Empoli identifies the “engineers” behind these political schemes (political advisers, data scientists, IT experts) and shows how they have been operating to boost several techno-populist movements in Italy, Hungary, the U.K., and the U.S. “During this phase [2012], Movimento 5 Stelle experienced a qualitative leap in the production of parallel reality (…) The information is already custom-made to go viral on other social networks. And after this information — sometimes true, but very often false — is released, people are invited to engage.”
When “bad actors” (to use Zuckerberg’s pet expression) combine the strategy described in Da Empoli’s book with the huge ad-targeting machine behind all of Facebook’s platforms, there is pretty much no limit to the damage they can cause. Especially if this profusion of messages is concentrated on a private, encrypted network like WhatsApp.
Last month, Facebook’s CEO decided to approach an old internet debate in a very peculiar way. The day the Senate Commerce Committee held a hearing on Section 230 of the Communications Decency Act, Zuckerberg came out with this: “People want to know that companies are taking responsibility for combatting harmful content on their platforms. (…) And they want to make sure that platforms are held accountable.” In short, Section 230 states that, with some exceptions, internet companies are not legally responsible for content they host when it is published by someone else. Zuckerberg had never framed the matter this way before.
There is a good reason for that.
Among other interests, Zuckerberg tweaked his Section 230 tone because, deep down, he is much more worried about the EARN IT Act, the bill that deals with encryption and user privacy. In an attempt to stretch the Section 230 discussion as far as possible, and buy some time, he intends to support some version of the PACT Act, which is still a “response” to those who want to see Big Tech held accountable, but a much lighter one than truly regulatory action.
What most people haven’t yet noticed, probably because of WhatsApp’s lack of popularity in the U.S., is that Facebook intends to fully integrate the messaging services of its products (Instagram, Messenger, and WhatsApp) — in fact, this has already been announced.
Moreover, it is highly likely that the company will expand features like groups and broadcast lists — which are a big hit on WhatsApp — to the other two products.
The third step of this process will cover how to monetize these groups, which will inevitably result in some version of the ad-targeting system Facebook already uses. And here’s the catch: targeting audiences within groups will be much more efficient (and profitable) than it is today on Facebook itself (for those who think users won’t tolerate ads in their groups, I suggest you take a closer look at your Gmail inbox).
And do you know who is going to oversee and regulate all this message exchanging and ad targeting flowing through Facebook’s new private and encrypted ecosystem? Exactly: nobody.
It’s not just that Facebook has no intention of combating misinformation, fake news, and harmful content on its platforms; it’s a bit more problematic than that. Facebook is actually building a message-exchanging factory with no parallel in the digital world, one that will be impossible to control or regulate because it will live under the auspices of encryption and user privacy forever.
Then all the other countries where Facebook operates, but where WhatsApp is not yet popular — like the U.S. — will finally understand how dangerous and deep that iceberg runs. And they will also know why leaving WhatsApp out of the fake news discussion happening now is the textbook definition of trying to avoid the inevitable.