This Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message

Engineers warned Meta that nations can monitor chats; staff fear Israel is using this trick to pick assassination targets in Gaza.

The Instagram, Facebook, and WhatsApp apps seen on a smartphone, reflecting the Meta logo. Photo: Jens Büttner/picture-alliance/dpa/AP Images

In March, WhatsApp’s security team issued an internal warning to their colleagues: Despite the software’s powerful encryption, users remained vulnerable to a dangerous form of government surveillance. According to the previously unreported threat assessment obtained by The Intercept, the contents of conversations among the app’s 2 billion users remain secure. But government agencies, the engineers wrote, were “bypassing our encryption” to figure out which users communicate with each other, the membership of private groups, and perhaps even their locations.

The vulnerability is based on “traffic analysis,” a decades-old network-monitoring technique, and relies on surveying internet traffic at a massive national scale. The document makes clear that WhatsApp isn’t the only messaging platform susceptible. But it makes the case that WhatsApp’s owner, Meta, must quickly decide whether to prioritize the functionality of its chat app or the safety of a small but vulnerable segment of its users.

“WhatsApp should mitigate the ongoing exploitation of traffic analysis vulnerabilities that make it possible for nation states to determine who is talking to who,” the assessment urged. “Our at-risk users need robust and viable protections against traffic analysis.”

Against the backdrop of the ongoing war on Gaza, the threat warning raised a disturbing possibility among some employees of Meta. WhatsApp personnel have speculated Israel might be exploiting this vulnerability as part of its program to monitor Palestinians at a time when digital surveillance is helping decide who to kill across the Gaza Strip, four employees told The Intercept.

“WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works,” said Meta spokesperson Christina LoNigro.

Though the assessment describes the “vulnerabilities” as “ongoing,” and specifically mentions WhatsApp 17 times, LoNigro said the document is “not a reflection of a vulnerability in WhatsApp,” only “theoretical,” and not unique to WhatsApp. LoNigro did not answer when asked if the company had investigated whether Israel was exploiting this vulnerability.

Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. “Even assuming WhatsApp’s encryption is unbreakable,” the assessment reads, “ongoing ‘collect and correlate’ attacks would still break our intended privacy model.”

“The nature of these systems is that they’re going to kill innocent people and nobody is even going to know why.”

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissidents’ use of encrypted chat apps, including WhatsApp, using the very same techniques.

As war has grown increasingly computerized, metadata — information about the who, when, and where of conversations — has come to hold immense value to intelligence, military, and police agencies around the world. “We kill people based on metadata,” former National Security Agency chief Michael Hayden once infamously quipped.

But even baseless analyses of metadata can be lethal, according to Matthew Green, a professor of cryptography at Johns Hopkins University. “These metadata correlations are exactly that: correlations. Their accuracy can be very good or even just good. But they can also be middling,” Green said. “The nature of these systems is that they’re going to kill innocent people and nobody is even going to know why.”

It wasn’t until the April publication of an exposé about Israel’s data-centric approach to war that the WhatsApp threat assessment became a point of tension inside Meta.

A joint report by +972 Magazine and Local Call revealed last month that Israel’s army uses a software system called Lavender to automatically greenlight Palestinians in Gaza for assassination. Tapping a massive pool of data about the Strip’s 2.3 million inhabitants, Lavender algorithmically assigns “almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant,” the report states, citing six Israeli intelligence officers. “An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination.”

WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death.

The report indicated WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death, citing a book on AI targeting written by the current commander of Unit 8200, Israel’s equivalent of the NSA. “The book offers a short guide to building a ‘target machine,’ similar in description to Lavender, based on AI and machine-learning algorithms,” according to the +972 exposé. “Included in this guide are several examples of the ‘hundreds and thousands’ of features that can increase an individual’s rating, such as being in a Whatsapp group with a known militant.”

The Israeli military did not respond to a request for comment, but told The Guardian last month that it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist.” The military stated that Lavender “is simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations. This is not a list of confirmed military operatives eligible to attack.”

It was only after the publication of the Lavender exposé and subsequent writing on the topic that a wider swath of Meta staff discovered the March WhatsApp threat assessment, said the four company sources, who spoke on the condition of anonymity, fearing retaliation by their employer. Reading how governments might be able to extract personally identifying metadata from WhatsApp’s encrypted conversations triggered deep concern that this same vulnerability could feed into Lavender or other Israeli military targeting systems.

Efforts to press Meta from within to divulge what it knows about the vulnerability and any potential use by Israel have been fruitless, the sources said, in line with what they describe as a broader pattern of internal censorship against expressions of sympathy or solidarity with Palestinians since the war began.

Meta employees concerned by the possibility their product is putting innocent people in Israeli military crosshairs, among other concerns related to the war, have organized under a campaign they’re calling Metamates 4 Ceasefire. The group has published an open letter signed by more than 80 named staff members. One of its demands is “an end to censorship — stop deleting employee’s words internally.”

Meta spokesperson Andy Stone told The Intercept any workplace discussion of the war is subject to the company’s general workplace conduct rules, and denied such speech has been singled out. “Our policy is written with that in mind and outlines the types of discussions that are appropriate for the workplace. If employees want to raise concerns, there are established channels for doing so.”

Crowds gather outside of Meta headquarters in Menlo Park, Calif., to protest Mark Zuckerberg and Meta’s censoring of Palestine posts on social platforms, on Nov. 3, 2023. Photo: Tayfun Coskun/Anadolu via Getty Images

According to the internal assessment, the stakes are high: “Inspection and analysis of network traffic is completely invisible to us, yet it reveals the connections between our users: who is in a group together, who is messaging who, and (hardest to hide) who is calling who.”

The analysis notes that a government can easily tell when a person is using WhatsApp, in part because the data must pass through Meta’s readily identifiable corporate servers. A government agency can then unmask specific WhatsApp users by tracing their IP address, a unique number assigned to every connected device, to their internet or cellular service provider account.
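The first step the assessment describes can be pictured with a minimal sketch. The address block and function below are purely illustrative assumptions, not Meta’s actual network allocation or any real surveillance tool; the point is only that a passive observer needs nothing more than a flow’s destination address to flag WhatsApp use.

```python
# Hypothetical sketch: flagging WhatsApp use from network flow records
# by destination address alone. The range below is illustrative, not
# Meta's real allocation; no message content is ever inspected.
import ipaddress

EXAMPLE_META_RANGES = [ipaddress.ip_network("157.240.0.0/16")]  # assumed block

def flow_looks_like_whatsapp(flow_dst: str) -> bool:
    """Return True if an encrypted flow terminates at an address in the
    assumed Meta range. The observer sees only the endpoint, not the chat."""
    dst = ipaddress.ip_address(flow_dst)
    return any(dst in net for net in EXAMPLE_META_RANGES)

print(flow_looks_like_whatsapp("157.240.22.53"))  # inside the assumed range
print(flow_looks_like_whatsapp("93.184.216.34"))  # outside it
```

From there, as the assessment notes, the user’s own IP address can be matched to a subscriber account via the internet or cellular provider.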

WhatsApp’s internal security team has identified several examples of how clever observation of encrypted data can thwart the app’s privacy protections, a technique known as a correlation attack, according to this assessment. In one, a WhatsApp user sends a message to a group, resulting in a burst of data of the exact same size being transmitted to the device of everyone in that group. Another correlation attack involves measuring the time delay between when WhatsApp messages are sent and received between two parties — enough data, the company believes, “to infer the distance to and possibly the location of each recipient.”
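The group-message correlation the assessment describes can be sketched in a few lines. Everything below is a toy illustration under stated assumptions: an observer who logs only timestamps and payload sizes for encrypted packets, with made-up device names and numbers, not any real attack code.

```python
# Hypothetical sketch of a size-and-timing correlation attack, assuming a
# passive observer who logs only (timestamp, payload size) per device.
from collections import defaultdict

# Fabricated metadata log. Content is encrypted and unreadable; note the
# identically sized burst hitting three devices at nearly the same moment.
observed = {
    "sender":    [(100.00, 1432)],
    "member_a":  [(100.18, 1432)],
    "member_b":  [(100.21, 1432)],
    "bystander": [(417.50, 311)],
}

def correlate(traffic, window=1.0):
    """Cluster devices that saw identically sized payloads within the
    same coarse time window -- likely members of one group chat."""
    buckets = defaultdict(set)
    for device, events in traffic.items():
        for ts, size in events:
            buckets[(size, round(ts / window))].add(device)
    return [members for members in buckets.values() if len(members) > 1]

print(correlate(observed))  # sender and both members cluster together
```

The timing half of the attack works the same way in reverse: measured delays between correlated bursts, rather than their sizes, become the signal.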

The internal warning notes that these attacks require all members of a WhatsApp group or both sides of a conversation to be on the same network and within the same country or “treaty jurisdiction,” a possible reference to the Five Eyes spy alliance between the U.S., Australia, Canada, U.K., and New Zealand. While the Gaza Strip has its own Palestinian-operated telecoms, its internet access ultimately runs through Israeli fiber optic cables subject to Israeli state surveillance. Although the memo suggests that users in “well functioning democracies with due process and strong privacy laws” may be less vulnerable, it also highlights the NSA’s use of these telecom-tapping techniques on U.S. soil.

“Today’s messenger services weren’t designed to hide this metadata from an adversary who can see all sides of the connection,” Green, the cryptography professor, told The Intercept. “Protecting content is only half the battle. Who you communicate [with] and when is the other half.”

The assessment reveals WhatsApp has been aware of this threat since last year, and notes the same surveillance techniques work against competing apps. “Almost all major messenger applications and communication tools do not include traffic analysis attacks in their threat models,” Donncha Ó Cearbhaill, head of Amnesty International’s Security Lab, told The Intercept. “While researchers have known these attacks are technically possible, it was an open question if such attacks would be practical or reliable on a large scale, such as [a] whole country.”

The assessment makes clear that WhatsApp engineers grasp the severity of the problem, but also understand how difficult it might be to convince their company to fix it. The fact that these de-anonymization techniques have been so thoroughly documented and debated in academic literature, Green explained, is a function of just how “incredibly difficult” it is to neutralize them for a company like Meta. “It’s a direct tradeoff between performance and responsiveness on one hand, and privacy on the other,” he said.

Asked what steps the company has taken to shore up the app against traffic analysis, Meta’s spokesperson told The Intercept, “We have a proven track record addressing issues we identify and have worked to hold bad actors accountable. We have the best engineers in the world proactively looking to further harden our systems against any future threats and we will continue to do so.”

The WhatsApp threat assessment notes that beefing up security comes at a cost for an app that prides itself on mass appeal. It will be difficult to better protect users against correlation attacks without making the app worse in other ways, the document explains. For a publicly traded giant like Meta, protecting at-risk users will collide with the company’s profit-driven mandate of making its software as accessible and widely used as possible.

“The tension is always going to be market share, market dominance.”

“Meta has a bad habit of not responding to things until they become overwhelming problems,” one Meta source told The Intercept, citing the company’s inaction when Facebook was used to incite violence during Myanmar’s Rohingya genocide. “The tension is always going to be market share, market dominance, focusing on the largest population of people rather than a small amount of people [that] could be harmed tremendously.”

The report warns that adding an artificial delay to messages to throw off attempts to geolocate the sender and receiver of data, for instance, will make the app feel slower to all 2 billion users — most of whom will never have to worry about the snooping of intelligence agencies. Making the app transmit a regular stream of decoy data to camouflage real conversations, another idea floated in the assessment, could throw off snooping governments. But it might also have the adverse effect of hurting battery life and racking up costly mobile data bills.
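The two mitigations floated in the assessment can be sketched to show why each carries a cost. This is a minimal illustration under assumed parameters (the bucket size and delay cap are invented), not WhatsApp’s design: padding makes every message in a bucket look identical on the wire, and jitter blurs timing, but both burn bandwidth, battery, or responsiveness for all users.

```python
# Hypothetical sketch of two traffic-analysis mitigations: padding
# ciphertexts to uniform bucket sizes and adding random send delays.
# Bucket size and delay cap are assumptions for illustration only.
import os
import random

BUCKET = 1024  # assumed padding bucket, in bytes

def pad(ciphertext: bytes) -> bytes:
    """Pad up to the next bucket boundary so a 2-byte and a 900-byte
    message are indistinguishable by size to a network observer."""
    target = (len(ciphertext) // BUCKET + 1) * BUCKET
    return ciphertext + os.urandom(target - len(ciphertext))

def jitter_seconds(max_ms: int = 500) -> float:
    """Random artificial delay to blur timing correlation; the cost is
    that every message feels slower, for all 2 billion users."""
    return random.uniform(0, max_ms) / 1000.0

print(len(pad(b"hi")), len(pad(b"x" * 900)))  # both padded to 1024 bytes
```

The padding bytes are pure overhead on every message, which is exactly the battery-and-data cost the assessment weighs against protecting a small population of at-risk users.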

To WhatsApp’s security personnel, the right approach is clear. “WhatsApp Security cannot solve traffic analysis alone,” the assessment reads. “We must first all agree to take on this fight and operate as one team to build protections for these at-risk, targeted users. This is where the rubber meets the road when balancing WhatsApp’s overall product principle of privacy and individual team priorities.”

The memo suggests WhatsApp may adopt a hardened security mode for at-risk users similar to Apple’s “Lockdown Mode” for iOS. But even this extra setting could accidentally imperil users in Gaza or elsewhere, according to Green. “People who turn this feature on could also stand out like a sore thumb,” he said. “Which itself could inform a targeting decision. Really unfortunate if the person who does it is some kid.”
