The Intercept: Technology

How I Got a Truly Anonymous Signal Account
July 16, 2024

Yes, you can use Signal without sharing your personal phone number. Here’s how I did it.

The messaging app Signal is described by security professionals as utilizing the gold standard of cryptography. Unlike many competitors, its default is end-to-end encryption — and on top of that, the app minimizes the amount of information it stores about users. This makes it a powerful communication tool for those seeking a private and secure means of chatting, whether it’s journalists and their sources, activists and human rights defenders, or just ordinary people who want to evade the rampant data-mining of Big Tech platforms.

Signal continues to introduce privacy-enhancing features such as usernames that can be used in lieu of phone numbers to chat with others, preventing people from finding you by searching for your phone number. But the app still requires users to provide a working phone number to sign up in the first place.

For privacy-conscious individuals, this can be a problem.

In response to subpoena requests, Signal can reveal phone numbers. Relying on phone numbers has also led to security and account takeover incidents. Not to mention that the phone number requirement costs Signal more than $6 million annually to implement.

Signal insists on its site that phone numbers are a requirement for contact discovery and to stymie spam. (Signal did not respond to a request for comment.) Other encrypted messaging platforms, such as Session and Wire, do not require phone numbers.

There are some ways around Signal’s phone number policy that involve obtaining a secondary number, such as using temporary SIM cards, virtual eSIMs, or virtual numbers. But these approaches involve jumping through hoops to set up anonymous payment measures to procure the secondary numbers. And sometimes they don’t work at all (that was my experience when I tried using a Google Voice number to sign up for Signal).

I wanted a way to get a Signal account without leaving any sort of payment trail — a free and anonymous alternative. And thus began my long and tedious journey of registering Signal with a pay phone. 

Finding a Pay Phone

The first step was actually finding a pay phone, a task which is dismally daunting in 2024.

The Payphone Project lists around 750,000 pay phones, but after attempting to cross-check a sampling of the hundreds of alleged pay phones in my town with Google Street View and Google Earth satellite images, I quickly realized that the list was woefully outdated. Many of these phones no longer exist.

A Google Maps search for pay phones in my area brought up a half-dozen pins. Using Street View, I found that four locations seemed to have something resembling a pay phone box. Trekking out to them, however, revealed that one no longer had a pay phone, though discoloration of the store façade revealed the precise spot where the pay phone used to be; another pay phone looked like it had been the victim of a half-hearted arson attack; the third and fourth lacked dial tones.

Asking on a community subreddit resulted in suggestions that once again led me to places without any working pay phones, or posts berating me for needing a pay phone in 2024 and questioning the legality of whatever I planned to do with one.

Having failed to find a functional pay phone through a systematic approach, I resorted to brute opportunism — keeping my eyes peeled for pay phones as I went through the dull drudgery of a modern life made ever bleaker by the lack of public phone access.

A Working Pay Phone, That Is

I didn’t just need to find a working pay phone — no small feat in 2024. I also needed to find one able to receive incoming calls, so I could get Signal’s activation message.

On a recent visit to Tampa, where I travel annually to discuss security matters and set things on fire, I spotted a pay phone while leaving Busch Gardens. Picking up the receiver, I was delighted to hear the telephonic equivalent of a pulse: a dial tone. 

Now that I had a phone with a dial tone, the next step was to test whether it could receive incoming calls, because Signal’s registration process requires a phone number that can receive either a text message or a verification call.

To test whether a pay phone can receive incoming calls, you need to know one thing: the pay phone’s own phone number. Some pay phones reveal their numbers on the phones themselves, but not always. 

If the number isn’t listed on the phone — it wasn’t in this case — there’s a workaround that doesn’t involve a paper trail leading back to your cellphone. Use the pay phone to call what’s known as an ANAC (automatic number announcement circuit), which provides an ANI (automatic number identification) service. In other words, it’s a phone number you can call that then reads out the number you are calling from. Lists of ANAC numbers have been bandied about for years, though like pay phone lists, almost all are now defunct.

One stalwart ANAC number that has withstood the test of time for over 30 years, however, is 1-800-444-4444. Feel free to try it. Call the number, and it should read yours back to you.

Back at Busch Gardens, I rang up the ANAC and had a number read back to me. The next and final step was to test whether the number actually accepted incoming calls. Unfortunately, when I called the number the ANAC line had read back to me, I reached the Busch Gardens main line, which asked me to enter my party’s extension. In other words, this wasn’t actually the pay phone’s number; it was just the general theme park number.

Days later, during a layover on my trip home from Tampa, I noticed a small bay of pay phones at a small regional airport. I repeated the above rigamarole, and lo and behold, when I called the pay phone’s number from the neighboring pay phone, I was able to answer and talk to myself. Finally, success.

I took out a burner phone on which I wanted to set up Signal, which had no SIM or eSIM of any kind, and proceeded to enter the pay phone’s phone number when setting up Signal. Signal first insists on attempting to send a verification code via an SMS text message, so you have to initially go through that fruitless route. But after a few minutes, you can then select the option to receive the verification code via a voice call.

Moments later, the pay phone rang, and I was finally able to set up a Signal account. 

The next and final step was to set up a PIN and enable a registration lock so that someone else wouldn’t be able to take over the account by going to the same pay phone and registering their own version of Signal with that same number. The registration lock expires after a week of inactivity, so you also have to keep using the Signal account. It took a while, owing to Signal’s onerous registration requirements coupled with the increasing lack of public phone access, but in the end I proved there is a way to use Signal with an untraceable phone number.

A Step-by-Step Guide

  1. Obtain a phone. It doesn’t need to have an active phone number associated with it, and can be either an old phone you have around or a dedicated burner phone.
  2. Locate a pay phone. 
  3. Find the pay phone’s phone number (call 1-800-444-4444 if it’s not written on the phone).
  4. Make sure the pay phone can receive incoming calls.
  5. Enter the pay phone number into Signal, and use the ‘Call me’ option to receive a verification call (this option shows up only after the SMS timer runs out).
  6. Input the confirmation code, set up a PIN and enable Registration Lock in the Signal app. 

“Gay Furry Hackers” Claim Credit for Hacking Heritage Foundation Files Over Project 2025
July 9, 2024

The hacker collective SiegedSec says it infiltrated the conservative think tank to oppose its campaign against trans rights.

SiegedSec, a collective of self-proclaimed “gay furry hackers,” has claimed credit for breaching online databases of the Heritage Foundation, the conservative think tank that spearheaded the right-wing Project 2025 playbook. SiegedSec released a cache of Heritage Foundation material as part of a string of hacks aimed at organizations that oppose transgender rights, although Heritage disputed that its own systems were breached.

In a post to Telegram announcing the hack, SiegedSec called Project 2025 “an authoritarian Christian nationalist plan to reform the United States government.” The attack was part of the group’s #OpTransRights campaign, which recently targeted right-wing media outlet Real America’s Voice, the Hillsong megachurch, and a Minnesota pastor.

In his foreword to the Project 2025 manifesto, the Heritage Foundation’s president, Kevin Roberts, rails against “the toxic normalization of transgenderism” and “the omnipresent propagation of transgender ideology.” The playbook’s other contributors call on “the next conservative administration” to roll back certain policies, including allowing trans people to serve in the military.

“We’re strongly against Project 2025 and everything the Heritage Foundation stands for,” one of SiegedSec’s leaders, who goes by the handle “vio,” told The Intercept.

In its Telegram post, SiegedSec said it obtained passwords and other user information for “every user” of a Heritage Foundation database, including Roberts and some U.S. government employees. The Heritage Foundation said in a statement Wednesday that SiegedSec only obtained incomplete password information.

The remainder of the more than 200GB of files the hackers obtained was “mostly useless,” SiegedSec said.

The Intercept reviewed copies of files provided to the transparency collective Distributed Denial of Secrets. They included an archive of the Heritage Foundation’s blogs and a Heritage-aligned media site, The Daily Signal, as of November 2022.

This is at least the second hack against the Heritage Foundation this year. In April, Heritage shut down its network following a cyberattack tentatively attributed to nation-state hackers. SiegedSec targeted the Heritage Foundation in early June, according to vio, who denied involvement in the earlier attack.

A spokesperson for the Heritage Foundation said Wednesday that the files were not obtained by hacking its systems, but that SiegedSec discovered them on a third party’s site.

“An organized group stumbled upon a two-year-old archive of The Daily Signal website that was available on a public-facing website owned by a contractor,” said Noah Weinrich, a Heritage spokesperson. “No Heritage systems were breached at any time, and all Heritage databases and websites remain secure, including Project 2025. The data at issue has been taken down, and additional security steps have since been taken as a precaution.”

SiegedSec’s other recent operations have targeted NATO and Israeli companies to oppose the war in Gaza. 

Update: Wednesday, July 10, 6:36 p.m. ET

This article was updated to include comment from the Heritage Foundation disputing that the files released by SiegedSec were obtained by hacking its systems, saying the files were instead hosted on a third party’s website.

New York Times Experiments With a New Headline Writer: OpenAI
July 8, 2024

“You are a headline writter [sic] for The New York Times,” says a prompt for the paper, which is suing OpenAI for copyright infringement.

In the courtroom, the New York Times has taken a hard line against OpenAI. The newspaper sued the artificial intelligence startup alongside investor and partner Microsoft, alleging that OpenAI scraped articles without permission or compensation. The Times wants to hold OpenAI and Microsoft responsible for billions of dollars in damages.

At the same time, however, the Times is also embracing OpenAI’s generative AI technology.

The Times’s use of the technology came to light thanks to leaked code showing that it developed a tool that would use OpenAI to generate headlines for articles and “help apply The New York Times styleguide” — performing functions that, if applied in the newsroom, are normally undertaken by editors at the newspaper.

“The project you’re referring to was a very early experiment by our engineering team designed to understand generative A.I. and its potential use cases,” Times spokesperson Charlie Stadtlander told The Intercept. “In this case, the experiment was not taken beyond testing, and was not used by the newsroom. We continue to experiment with potential applications of A.I. for the benefit of our journalists and audience.”

Media outlets are increasingly turning to artificial intelligence — using large language models, which are trained on ingested text and then generate language — to handle various tasks. AI can be employed, for example, to sort through large data sets.

More public-facing uses of AI have sometimes resulted in embarrassment. Sports Illustrated deleted posts on its site after its readers exposed that some of its authors were AI-generated.

Some newsroom applications for AI are furtive, but in some cases, outlets publicize their AI work. Newsweek, for instance, announced an expansive, if nebulous, embrace of AI.

Like other outlets, the Times has not been shy about its use of AI. The website for the paper’s Research and Development team says, “Artificial Intelligence and journalism intersect in our reporting, editing and engagement with readers.” The Times site highlights 24 use cases of AI at the company; the style guide and headline project aren’t listed among them.  

Because AI regurgitates the material it was trained on, many view its use in newsrooms with skepticism. As the media industry sheds jobs — more than 20,000 positions were lost in 2023 — there are also worries that AI could take away even more roles for journalists. In 2017, the Times eliminated its copy edit desk, shifting some of the copy editors into other roles. The desk used to be responsible for enforcing the style guide, one of the tasks the publication tested with AI.

A screenshot of a tool built by the New York Times’s research and development team to harness OpenAI for headline writing. Screenshot: The Intercept

The Times code became public last month when an anonymous user on the 4chan bulletin board posted a link to a collection of thousands of the New York Times’s GitHub repositories, which are essentially shared storage spaces for collaborative work on code. A text file in the leak said the material constitutes a “more or less complete clone of all repositories of The New York Times on GitHub.”

The Times confirmed the authenticity of the leak in a statement to BleepingComputer, saying the code was “inadvertently made available” in January.

The leak contains more than 6,000 repositories totaling more than 3 million files. It consists of a wide-ranging collection of materials covering the engineering side of the Times, but little from the newsroom or business side of the organization appears to be included in the leak.

The Times has spent $1 million on its suit alleging copyright infringement against OpenAI and Microsoft. The Intercept is engaged in a separate lawsuit against OpenAI and Microsoft under the Digital Millennium Copyright Act.

“Do Not Improvise”

One of the New York Times’s AI projects, titled “OpenAI Styleguide,” is described in its accompanying documentation as a “prototype using OpenAI to help apply The New York Times styleguide.” The project also includes a headline generator.

The style guide checker utilizes an OpenAI large language model known as Davinci to correct all errors in an article’s headline, byline, dateline, or copy — the text of the article — that violate the Times’s style guide. The prompt tells the OpenAI bot, “Do not improvise. Do not use rules you may have learned elsewhere.”
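As a rough illustration, a prompt along these lines might be wired to OpenAI’s legacy completions API as sketched below. The model identifier, the prompt wording beyond the quoted fragments, and the overall structure are assumptions for illustration, not the Times’s leaked code.

```python
# A minimal sketch of a Davinci-based style checker, using the legacy
# (pre-1.0) openai Python client. Illustrative only: the model name,
# wording beyond the quoted fragments, and structure are assumptions,
# not the Times's actual code.
import openai

openai.api_key = "sk-..."  # placeholder credential

def check_style(style_guide: str, text: str) -> str:
    prompt = (
        "Correct any errors in the text below that violate the style "
        "guide. Do not improvise. Do not use rules you may have "
        "learned elsewhere.\n\n"
        f"Style guide:\n{style_guide}\n\n"
        f"Text:\n{text}\n\n"
        "Corrected text:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # an assumed stand-in for "Davinci"
        prompt=prompt,
        max_tokens=512,
        temperature=0,  # deterministic output suits copy editing
    )
    return response.choices[0].text.strip()
```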

A copy of the Times style guide resides in a separate repository, which is also available in the leak.

The headline generator component of the project tells OpenAI: “You are a headline writter [sic] for The New York Times. Use the following Article as context. Do not use prior knowledge.” The generator also allows the operator to impose several constraints, such as specifying which words must and must not be used.
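To make the constraint mechanism concrete, here is a hedged sketch of how must-use and must-avoid word lists could be folded into a prompt like the one quoted above; the helper function and its exact wording are hypothetical, not taken from the leak.

```python
# A hypothetical sketch of constraint handling for a headline prompt;
# the function and wording are illustrative, not the leaked code.
def build_headline_prompt(article: str, must_use=(), must_avoid=()) -> str:
    prompt = (
        "You are a headline writer for The New York Times. "
        "Use the following Article as context. Do not use prior knowledge.\n"
    )
    if must_use:
        prompt += f"The headline must use these words: {', '.join(must_use)}.\n"
    if must_avoid:
        prompt += f"The headline must not use these words: {', '.join(must_avoid)}.\n"
    return prompt + f"\nArticle:\n{article}\n\nHeadline:"

# Example: require one word, forbid another.
print(build_headline_prompt("...", must_use=("Senate",), must_avoid=("slams",)))
```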

The OpenAI Styleguide project is not the only instance in which the Times has experimented with using OpenAI for headline generation. Another repository contains a separate headline generation project using an OpenAI chatbot called ChatGPT. The project was part of a Maker Week at the Times, in which staff work on “self-directed projects.” The Maker Week headline generator tells the bot: “Generate a numbered list of three sober, sophisticated, straight news headline based on the first three paragraphs of the following story.”

Another Maker Week project that uses OpenAI tools is “counterpoint,” described as an “application that generates counterpoints to opinion articles.” The project appears to be unfinished, only instructing the bot to extract keywords from an article.

Aside from OpenAI Styleguide, the Times Github leak contains source code for various applications. Some are research projects — “an effort to build a predictive model of print subscription attrition” — and others cover things like technical job interview test questions, staff training materials, authentication credentials, prototypes for unreleased games, and various personal information.

Correction: July 10, 2024
Due to an editing error, this story has been updated to remove a reference to mistakes in a Daily Mail article that online critics said were AI generated. The Daily Mail later attributed the mistakes to “regrettable human error.”

Israel Opposes Rebuilding Gaza’s Internet Access Because Terrorists Could Go Online
June 21, 2024

Israel destroyed much of Gaza’s internet infrastructure. A Saudi proposal to rebuild it was watered down after Israeli and U.S. protests.

Israel opposed a proposal at a recent United Nations forum aimed at rebuilding the Gaza Strip’s war-ravaged telecommunications infrastructure on the grounds that Palestinian connectivity is a readymade weapon for Hamas.

The resolution, which was drafted by Saudi Arabia for last week’s U.N. International Telecommunication Union summit in Geneva, is aimed at returning internet access to Gaza’s millions of disconnected denizens.

It ultimately passed by secret ballot on June 14 — but not before it was watered down to remove some of its more strident language about Israel’s responsibility for the destruction of Gaza. The U.S. delegate at the ITU summit had specifically opposed those references.

Israel, for its part, had blasted the proposal as a whole. Israel’s ITU delegate described it as “a resolution that while seemingly benign in its intent to rebuild telecommunications infrastructure, distorts the reality of the ongoing situation in Gaza,” according to a recording of the session reviewed by The Intercept. The delegate further argued the resolution does not address that Hamas has used the internet “to prepare acts of terror against Israel’s civilians,” and that any rebuilding effort must include unspecified “safeguards” that would prevent the potential use of the internet for terrorism.

“Based on this rationale, Gaza will never have internet,” Marwa Fatafta, a policy adviser with the digital rights group Access Now, told The Intercept, adding that Israel’s position is not only incoherent but inherently disproportionate. “You can’t punish the entire civilian population just because you have fears of one Palestinian faction.”

The Israeli Ministry of Communications did not respond to a request for comment.

Getting Gaza Back Online

When delegations to the ITU, a U.N. agency that facilitates cooperation between governments on telecommunications policies, began meeting in Geneva in early June, the most pressing issue on the agenda was getting Gaza back online. Israel’s monthslong bombardment of the enclave has severed fiber cables, razed cellular towers, and generally wrecked the physical infrastructure required to communicate with loved ones and the outside world.

A disconnected Gaza Strip also threatens to add to the war’s already staggering death toll. Though Israel touts its efforts to warn civilians of impending airstrikes, such warnings are relayed using the very cellular and internet connections the country’s air force routinely levels. It is a cycle of data degradation that began at the war’s start: The more Israel bombs, the harder it is for Gazans to know they are about to be bombed.

The resolution that passed last week would ensure “the ITU’s much needed assistance and support to Palestine for rebuilding its telecommunication sector.” While the agency has debated the plight of Palestinian internet access for many years, the new proposal arrives at a crisis point for data access across Gaza, as much of the Strip has been reduced to rubble, and civilians struggle to access food and water, let alone cellular signals and Wi-Fi.

The ITU and other intergovernmental bodies have long pushed for Palestinians’ sovereignty over their own internet access. But the Saudi proposal was notable in that it explicitly called out Israel’s role in hobbling Gaza’s connection to the world, whether via bombs, bulldozers, or draconian restrictions on technology imports. That Saudi Arabia was behind the resolution is not without irony; in 2022, Yemen plunged into a four-day internet blackout following airstrikes by a Saudi-led military coalition.

Without mentioning Israel by name, the Saudi resolution also called on the ITU to monitor the war’s destructive effects on Palestinian data access and provide regular reports. The resolution also condemned both the “widespread destruction of critical infrastructure, failure of telecom services and mobile phone outages that have occurred across the Gaza Strip since the beginning of the aggression by the occupying power” and “the obstacles practiced by the occupying power in preventing the use of new communications technologies.”

In a session debating the resolution, the U.S. delegate told the council, “We have made clear to the sponsors of this resolution that we do not agree with some of the characterizations,” specifically the language blaming the destruction of Gaza and the forced use of obsolete technology on Israel. “The United States cannot support this resolution in its current form as drafted,” the delegate continued, according to a recording reviewed by The Intercept.

Whether or not the U.S. ultimately voted for the resolution — the State Department did not respond when asked — it appears to have been successful in weakening the version that was ultimately approved by the ITU. The version that did pass was stripped of any explicit mention of Israel’s role in destroying and otherwise thwarting Gazan internet access, and refers obliquely only to “​the obstacles practiced in preventing the use of new communication technologies.”

The State Department did not respond to The Intercept’s other questions about the resolution either, including whether the administration shares Israel’s terror-related objections to it.

The U.S. has taken a harsher stance on civilian internet blackouts caused by a military aggressor in the past. Following Russia’s invasion of Ukraine and the ensuing national internet disruptions it caused, the State Department declared, “the United States condemns actions that block or degrade access to the Internet in Ukraine, which sever critical channels for sharing and learning information, including about the war.”

Outdated Technology

The approved resolution also calls on ITU member states to “make every effort” to both preserve what Palestinian telecom infrastructure remains and allocate funds necessary for the “return of communications in the Gaza Strip” in the future. This proposed rebuilding includes the activation of 4G and 5G cellular service. While smartphones in the West Bank connect to the internet with 3G wireless speeds unsuitable for many data-hungry applications, Gazans must make do with debilitatingly slow 2G service — an obsolete standard that was introduced to the United States in 1992.

Fatafta, of Access Now, noted that Israel does have a real interest in preventing Gaza from entering the 21st century: surveillance and censorship. Gaza’s reliance on insecure cellular technology from the 1990s and Israeli fiber connections makes it trivial for Israeli intelligence agents to intercept texts and phone calls and institute internet blackouts at will, as has occurred throughout the war.

The resolution is “an important step, because the current status quo cannot continue,” she said. “There is no scenario where Gaza can be allowed to keep a 2G network where the rest of the world has already moved on to 5G.”

Firefox Browser Blocks Anti-Censorship Add-Ons at Russia’s Request
June 12, 2024

Mozilla, the maker of the popular web browser Firefox, said it received government demands to block add-ons that circumvent censorship.

The Mozilla Foundation, the entity behind the web browser Firefox, is blocking various censorship circumvention add-ons for its browser, including ones specifically to help those in Russia bypass state censorship. The add-ons were blocked at the request of Russia’s federal censorship agency, Roskomnadzor — the Federal Service for Supervision of Communications, Information Technology, and Mass Media — according to a statement by Mozilla to The Intercept.

“Following recent regulatory changes in Russia, we received persistent requests from Roskomnadzor demanding that five add-ons be removed from the Mozilla add-on store,” a Mozilla spokesperson told The Intercept in response to a request for comment. “After careful consideration, we’ve temporarily restricted their availability within Russia. Recognizing the implications of these actions, we are closely evaluating our next steps while keeping in mind our local community.”

Stanislav Shakirov, the chief technical officer of Roskomsvoboda, a Russian open internet group, said he hoped it was a rash decision by Mozilla that will be more carefully examined.

“It’s a kind of unpleasant surprise because we thought the values of this corporation were very clear in terms of access to information, and its policy was somewhat different,” Shakirov said. “And due to these values, it should not be so simple to comply with state censors and fulfill the requirements of laws that have little to do with common sense.”

Developers of digital tools designed to get around censorship began noticing recently that their Firefox add-ons were no longer available in Russia.

On June 8, the developer of Censor Tracker, an add-on for bypassing internet censorship restrictions in Russia and other former Soviet countries, made a post on the Mozilla Foundation’s discussion forums saying that their extension was unavailable to users in Russia.

The developer of another add-on, Runet Censorship Bypass, which is specifically designed to bypass Roskomnadzor censorship, posted in the thread that their extension was also blocked. The developer said they did not receive any notification from Mozilla regarding the block.

Two VPN add-ons, Planet VPN and FastProxy — the latter explicitly designed for Russian users to bypass Russian censorship — are also blocked. VPNs, or virtual private networks, are designed to obscure internet users’ locations by routing users’ traffic through servers in other countries.

The Intercept verified that all four add-ons are blocked in Russia. If the webpage for the add-on is accessed from a Russian IP address, the Mozilla add-on page displays a message: “The page you tried to access is not available in your region.” If the add-on is accessed with an IP address outside of Russia, the add-on page loads successfully.
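As an illustration of this kind of check (not necessarily The Intercept’s method), one could fetch an add-on’s listing page through proxies with exits in different countries and scan the response for the region-block message. In the sketch below, the add-on slug and the proxy endpoints are assumptions.

```python
# A sketch of a region-block check. The proxy endpoints are hypothetical
# placeholders, and the add-on slug is illustrative, not verified.
import requests

ADDON_URL = "https://addons.mozilla.org/en-US/firefox/addon/censor-tracker/"
BLOCK_MESSAGE = "The page you tried to access is not available in your region."

# Hypothetical proxies: one exiting from a Russian IP, one from elsewhere.
PROXIES_BY_REGION = {
    "russia": {"https": "http://ru-exit.example:8080"},
    "elsewhere": {"https": "http://de-exit.example:8080"},
}

for region, proxies in PROXIES_BY_REGION.items():
    resp = requests.get(ADDON_URL, proxies=proxies, timeout=30)
    blocked = BLOCK_MESSAGE in resp.text
    print(f"{region}: HTTP {resp.status_code}, region-blocked={blocked}")
```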

Supervision of Communications

Roskomnadzor is responsible for “control and supervision in telecommunications, information technology, and mass communications,” according to the agency’s English-language page.

In March, the New York Times reported that Roskomnadzor was increasing its operations to restrict access to censorship circumvention technologies such as VPNs. In 2018, there were multiple user reports that Roskomnadzor had blocked access to the entire Firefox Add-on Store.

According to Mozilla’s Pledge for a Healthy Internet, the Mozilla Foundation is “committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.” The second principle in Mozilla’s manifesto says, “The internet is a global public resource that must remain open and accessible.”

The Mozilla Foundation, which in tandem with its for-profit arm Mozilla Corporation releases Firefox, also operates its own VPN service, Mozilla VPN. However, it is only available in 33 countries, a list that doesn’t include Russia. 

The same four censorship circumvention add-ons also appear to be available for other web browsers without being blocked by the browsers’ web stores. Censor Tracker, for instance, remains available for the Google Chrome web browser, and the Chrome Web Store page for the add-on works from Russian IP addresses. The same holds for Runet Censorship Bypass, Planet VPN, and FastProxy.

“In general, it’s hard to recall anyone else who has done something similar lately,” said Shakirov, the Russian open internet advocate. “For the last few months, Roskomnadzor (after the adoption of the law in Russia that prohibits the promotion of tools for bypassing blockings) has been sending such complaints about content to everyone.”

Apple Matches Worker Donations to IDF and Illegal Settlements, Employees Allege
June 11, 2024

In an open letter, a group of self-described Apple workers, former employees, and shareholders are calling on the company to halt donations to nonprofits linked with Israel’s war effort.

An open letter from Apple employees and shareholders demands the tech giant stop matching employee donations to organizations with ties to the Israeli military assault on the Gaza Strip and ongoing illegal settlement development in the West Bank. The letter, building on a previous demand by Apple employees for a ceasefire in the conflict, calls on the company to “promptly investigate and cease matching donations to all organizations that further illegal settlements in occupied territories and support the IDF.” 

As with many large corporations, Apple employees can make donations to a number of nonprofit organizations and receive matching contributions from their employer through a platform called Benevity. Among the charitable organizations eligible for dollar-matching from Apple are Friends of the IDF, an organization that collects donations on behalf of soldiers in the Israeli military, as well as a number of groups that contribute to the settlement enterprise in the West Bank, including HaYovel, One Israel Fund, the Jewish National Fund, and IsraelGives.

Apple did not respond to a request for comment.

“Unfortunately, there has been very little scrutiny into 501(c)(3) organizations that openly support illegal activities in the West Bank and Gaza,” said Diala Shamas, a senior staff attorney at the Center for Constitutional Rights, who described the organizations listed in the campaign as among “the worst actors.”

A legislative effort in New York called the “Not On Our Dime Act” is seeking to challenge the ability of nonprofit organizations in the state to fundraise for illegal settlements, including by making them subject to legal liability or loss of their nonprofit status. Laws against funding activities that violate international human rights law are poorly enforced by the IRS, said Shamas, leaving it to companies and individuals themselves to ensure that their contributions are not going toward organizations potentially engaged in illegal activity.

“Companies often rely on the fact that an organization has 501(c)(3) status. But regardless of whether an organization has nonprofit status, it is illegal to aid and abet war crimes,” Shamas said. “Apple should ensure that it is not sending funds to any of these organizations — especially now when there’s no shortage of evidence or information about the unlawful activities of the settlement movement in the West Bank.”

Apple employees, who organized under the name Apples4Ceasefire, had previously objected to the disciplining and firing of Apple Store employees who “dared to express support of the Palestinian people in the form of kaffiyehs, pins, bracelets, or clothing,” according to a public statement published in April.

The letter — signed by 133 people who describe themselves as “a group of shareholders and current and former employees” — comes on the heels of broader activism at tech companies by some workers objecting to perceived complicity between their employers and the ongoing war in Gaza. Earlier this year, Google fired dozens of employees who took part in a protest over the company’s involvement in a cloud-computing project known as Project Nimbus, which provided services to the Israeli government and military. An open letter from employees of Meta — which owns Facebook, Instagram, and WhatsApp — has criticized its treatment of Palestinian solidarity within the company.

The provision of donations to NGOs helping facilitate the illegal occupation of the West Bank has come under increasing scrutiny as the situation in the region has deteriorated since the October 7 attacks by Hamas and subsequent Israeli military onslaught. Tens of thousands of Palestinians, mostly civilians, are believed to have been killed by the Israel Defense Forces in a campaign that has resulted in war crimes charges brought by the International Criminal Court and genocide charges at the International Court of Justice. 

The conduct and discipline of the IDF has come under particular scrutiny as soldiers have been accused of torture, extrajudicial killings, and other abuses against Palestinians, alongside social media footage posted by many IDF service members themselves of apparent looting and mistreatment of Palestinian detainees. Friends of the IDF, one of the charities on Apple’s matching donations list, is registered as a nonprofit organization for the purposes of fundraising for IDF service members and claims to have transferred $34.5 million in donations to the Israeli military in the first weeks after the war began.

This conflict windfall has helped other organizations on Apple’s matching contribution list. An analysis by The Guardian last December showed that the crowdfunding platform IsraelGives received over $5.3 million in donations in just two months after the war to support military, paramilitary, and settlement activity in the West Bank. The same analysis showed that this money came disproportionately from U.S. donors, and included specific funding campaigns to support illegal settlements whose residents had a history of violent attacks against Palestinian civilians. 

Other organizations on Apple’s matching contribution list appear to include support for religious extremism or back activity in the West Bank deemed illegal under international law. The One Israel Fund, for example, includes on its website a talk titled “The Arab Takeover of Judea and Samaria: Who Is Behind It; What Can Be Done?” — invoking the religious name of the territory that is deemed to be part of a future Palestinian state under international law. HaYovel, a Christian Zionist organization, states on its website that its goal is to help further the “prophetic restoration” of a region that “many incorrectly refer to as the West Bank.” The charitable status of the Jewish National Fund has come under criticism in both the U.S. and European Union due its historic involvement in the “systematic discrimination” against Palestinians since the founding of the state of Israel, as well as ongoing support for dispossession of Palestinians in the West Bank. 

Like many of its competitors, Apple professes a corporate commitment to “respecting internationally recognized human rights” frameworks, including the U.N. Universal Declaration of Human Rights, according to its website. Since the war began, the U.N. Human Rights Office has repeatedly decried atrocities committed by the IDF.

One Facebook Ad Promotes a For-Profit College; Another a State School. Which Ad Do Black Users See?
June 4, 2024

Researchers tested for bias in Facebook’s algorithm by purchasing ads promoting for-profit colleges and studying who saw them.

Facebook’s advertising algorithm disproportionately targets Black users with ads for for-profit colleges, according to a new paper by a team of university researchers.

Like all major social media platforms, Meta, which owns Facebook and Instagram, does not disclose exactly how or why its billions of users see certain posts and not others, including ads. In order to put Facebook’s black-box advertising system to the test, academics from Princeton and the University of Southern California purchased Facebook ads and tracked their performance among real Facebook users, a method they say produced “evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.”

The researchers say they focused on for-profit colleges because of their long, demonstrable history of deceiving prospective students — particularly students of color — with predatory marketing while delivering lackluster educational outcomes and diminished job prospects compared to other colleges.

In a series of test marketing campaigns, the researchers purchased sets of two ads paired together: one for a public institution, like Colorado State University, and another marketing a for-profit company, like Strayer University. (Neither the for-profit colleges nor the state schools advertised by the researchers were involved in the project.)

Advertisers on Facebook can fine-tune their campaigns with a variety of targeting options, but race is no longer one of them. So the researchers found a clever proxy. Using North Carolina voter registration data that includes individuals’ races, the researchers built a sample audience that was 50 percent white and 50 percent Black. The Black users came from one region in North Carolina and white voters from another. Using Facebook’s “custom audiences” feature, they uploaded this roster of specific individuals to target with ads. Though Facebook’s ad performance metrics wouldn’t reveal the race of users who saw each ad, the data showed where each ad was viewed. “Whenever our ad is shown in Raleigh, we can infer it was shown to a Black person and, when it is shown in Charlotte — we can infer it was shown to a White person,” the paper explains.

Theoretically, an unbiased algorithm would serve each school’s ad to an equal number of Black and white users. The experiment was designed to see whether there was a statistically significant skew in who ultimately saw which ads.
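As a sketch of what such a test could look like in practice, the snippet below runs a chi-square test on invented impression counts using the paper’s region-to-race mapping. The numbers are made up for illustration; this is not the researchers’ data or code.

```python
# A sketch of the skew test described above, on invented impression
# counts -- not the researchers' data or code. Region stands in for
# inferred race, per the custom-audience design: Raleigh -> Black,
# Charlotte -> white.
from scipy.stats import chi2_contingency

# Hypothetical impressions per (ad, region) for one paired campaign.
impressions = {
    ("for_profit", "Raleigh"): 620,
    ("for_profit", "Charlotte"): 380,
    ("public", "Raleigh"): 410,
    ("public", "Charlotte"): 590,
}

# 2x2 contingency table: rows = ad shown, columns = inferred race.
table = [
    [impressions[("for_profit", "Raleigh")], impressions[("for_profit", "Charlotte")]],
    [impressions[("public", "Raleigh")], impressions[("public", "Charlotte")]],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
# A small p-value means the paired ads reached racially different
# audiences despite targeting the same 50/50 custom audience --
# skew attributable to the delivery algorithm, not the advertiser.
```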

With each pair of ads, Facebook’s delivery algorithm showed a bias, the researchers found. The company’s algorithm disproportionately showed Black users ads for colleges like DeVry and Grand Canyon University, for-profit schools that have been fined or sued by the Department of Education for advertising trickery, while more white users were steered toward state colleges, the academics concluded.

“Addressing fairness in ads is an industry-wide challenge and we’ve been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system,” Meta spokesperson Daniel Roberts told The Intercept. “Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race and we are actively building technology designed to make additional progress in this area.”

Even in cases where these for-profit programs have reformed their actual marketing efforts and “aim for racially balanced ad targeting,” the research team concluded that “Meta’s algorithms would recreate historical racial skew in who the ad are shown to, and would do so unbeknownst to the advertisers.”

Ever since a 2016 ProPublica report found Facebook allowed advertisers to explicitly exclude users from advertising campaigns based on their race, the company’s advertising system has been subject to increased scrutiny and criticism. And while Facebook ultimately removed options that allowed marketers to target users by race, previous academic research has shown that the secret algorithm that decides who sees which ads is biased along race and gender lines, suggesting bias intrinsic to the company’s systems.

A 2019 research paper on this topic showed that ads for various job openings were algorithmically sorted along race and gender stereotypes, for instance, lopsidedly showing Black users opportunities to drive taxi cabs, while openings for artificial intelligence developers were skewed in favor of white users. A 2021 follow-up paper found that Facebook ad delivery replicated real-world workplace gender imbalances, showing women ads for companies where women were already overrepresented.

While it withholds virtually all details about how the ad delivery algorithm functions, Facebook has long contended that its ads are shown merely to people most likely to find them relevant. In response to the 2021 research showing gender bias in the algorithm, a company spokesperson told The Intercept that while they understood the researchers’ concerns, “our system takes into account many signals to try and serve people ads they will be most interested in.”

Aleksandra Korolova, a professor of computer science and public affairs at Princeton and co-author of the 2019 and 2021 research, told The Intercept that she rejects the notion that apparent algorithmic bias can be explained away as only reflecting what people actually want, because it’s impossible to disprove. “It’s impossible to tell whether Meta’s algorithms indeed reflect a true preference of an individual, or are merely reproducing biases in historical data that the algorithms are trained on, or are optimizing for preferences as reflected in clicks rather than intended real-world actions.”

The onus to prove Facebook’s ad delivery is reflecting real-world preferences and not racist biases, she said, lies with Facebook.

But Korolova also noted that even if for-profit college ads are being disproportionately directed to Black Facebook users because of actual enrollment figures, a moral and social objection to such a system remains. “Society has judged that some advertising categories are so important that one should not let historical trends or preferences propagate into future actions,” she said. While various areas in the United States may have been majority-Black or white over the years, withholding ads for properties in “white neighborhoods” from Black buyers, for example, is illegal, historical trends notwithstanding.

Aside from the ethical considerations around disproportionately encouraging its Black users to enroll in for-profit colleges, the authors suggest Facebook may be creating legal liability for itself too. “Educational opportunities have legal protections that prohibit racial discrimination and may apply to ad platforms,” the paper cautions.

Korolova said that, in recent years, “Meta has made efforts to reduce bias in their ad delivery systems in the domains of housing, employment and credit — housing as part of their 2022 settlement with the Department of Justice, and employment and credit voluntarily, perhaps to preempt lawsuits based on the work that showed discrimination in employment ad delivery.”

But she added that despite years of digging into apparently entrenched algorithmic bias in the company’s products, “Meta has not engaged with us directly and does not seem to have extended their efforts for addressing ad delivery biases to a broader set of domains that relate to life opportunities and societally important topics.”

After Pegasus Was Blacklisted, Its CEO Swore Off Spyware. Now He’s the King of Israeli AI.
May 23, 2024

Shalev Hulio is remaking his image but is still involved in a web of cybersecurity ventures with his old colleagues from NSO Group.

Shalev Hulio, once dubbed “Israel’s cyber bad boy,” has been working hard to remake himself. By all appearances, it’s been a big success. 

Things were looking dicey a few years ago when his company, the Israeli firm NSO Group, rose to infamy. Its Pegasus spyware had been exposed as enabling human rights abuses. Eventually, NSO was blacklisted by the U.S. government, and in August 2022, Hulio resigned as CEO. 

In the last two years, however, Hulio has become involved in a web of new cybersecurity ventures. He is back, it seems, and better than ever. 

In November, in a video filmed at the Gaza Strip, Hulio announced his new startup, Dream Security, an AI firm focused on defending critical infrastructure. 

In April, according to Israel’s largest newspaper, a co-founder of IntelEye — a company that monitors the “dark web” — identified his former NSO colleague Hulio as an investor. (Another IntelEye official later told The Intercept that Hulio isn’t a shareholder but refused to clarify further.)

Now, Hulio is moving his cybersecurity entrepreneurism into a new arena: the academy. This month, he announced the founding of “The Institute,” a new initiative at Israel’s Ben-Gurion University of the Negev that aims to become an Israeli hub for training and research on artificial intelligence.

Hulio has described his post-NSO career as a move away from “offensive” cybersecurity work. When he launched Dream, Hulio told the press, “We decided to leave the intelligence side, offensive side if you want, and move to the defensive side.”

Taking the helm of The Institute is the most recent step in Hulio’s makeover from being a public villain to becoming a cyberhero, leading a nation’s technological education. At The Institute’s highly publicized launch he shared a stage with Israeli President Isaac Herzog.

The companies Hulio has been involved in — founded, led, launched, or reportedly invested in — feature the same rotating cast of characters. And from NSO to Dream to IntelEye, there are different, sometimes intersecting missions, but one thing is constant: All three support the Israeli government in its war effort. 

Hulio had bragged in November that NSO’s Pegasus software was used to track down Israeli hostages, confirming an October report. Meanwhile, Hulio announced Dream’s founding one month after Hamas’s attack on the Gaza border to show Israel’s resilience and help the government.

IntelEye is involved in direct, offensive intelligence work. At the request of the Israeli government, the company reportedly uncovered information identifying a pair of Palestinian brothers and shutting down Hamas propaganda — leading to the killing of one brother and a police raid on the other.

Exactly what resulted from IntelEye’s work, however, is the subject of conflicting accounts. This much is obvious: The company is in the high-stakes cybersurveillance business.

“We are continuing to monitor and search for terrorist elements that could threaten the State of Israel,” NSO veteran and IntelEye co-founder Ziv Haba told Israel Hayom after his company found the Palestinian brothers. “The surveillance is extremely close, closer than you can imagine.”

“The Institute”

The launch of The Institute at Ben-Gurion University was itself marked by confusion. An article in the Jerusalem Post announcing the initiative described it as a partnership with the Israel Defense Forces’ elite cyberspying unit, known as 8200. NSO’s founders — including Hulio — and many of its employees are veterans of 8200.

Days after the initial article ran, however, all of its references to 8200 were scrubbed without any notice. 

An IDF spokesperson told The Intercept, “The IDF in general and Unit 8200 in particular do not take part in the aforementioned program.” (Shmuel Dovrat, a spokesperson for Ben-Gurion University, said The Institute had not been in touch with the Jerusalem Post after the initial publication, but said, “I’m glad that they changed it because of the wrong information.”)

According to a press release, The Institute will bring together AI luminaries and run training programs and research, with Hulio and other Dream employees among its leaders. In the coming year, The Institute’s research laboratories will strengthen Israel’s hand in the tech world by collaborating with actors across the industry, according to a report in a U.K. tech news site.

“Through hard work born out of love and commitment to the state of Israel, we have built a team of the best entrepreneurs, investors and leading companies in the world to help Israel become a global leader in artificial intelligence,” journalist Sivan Cohen Saban, The Institute’s CEO, said at the launch event on May 8.

On hand at the launch, according to coverage, were officials from global firms like Microsoft and General Motors, as well as top-tier Israeli politicians, like Herzog, the president. (A spokesperson for GM told The Intercept they could not confirm the company’s attendance.)

Herzog said The Institute would help fight Israel’s isolation amid the Gaza war. “History is being made here today,” he said at the launch, in remarks later posted to YouTube in a promotional video. “There are countries that want to sever a relationship with us and only because of you, they don’t do it.”

At The Institute, Hulio is joined in leadership by Dovi Frances, co-founder of the U.S.-based venture capital firm Group 11. Marking its launch, Frances, who also led funding pushes for Dream Security, wrote on LinkedIn: “A historic day.”

“DREAM is proud to be in the forefront of AI technologies and take part in ‘The Institute,’” Tal Veksler, a spokesperson for the company, told The Intercept.

The trainings and other programs offered by The Institute will be run by employees from Dream Security and other leading Israeli tech firms. Among them are Tomer Simon and Alon Haimovich, chief scientist and general manager at Microsoft in Israel, and Nati Amsterdam, Israel’s lead at Nvidia, a California-based giant of the artificial intelligence world. 

Like politicians on hand for the launch, Saban, the CEO, linked the founding of The Institute to the October 7 attack on Israel. “Along with the concern for our soldiers, our abductees, the bereaved families and the situation in the country,” she said in a post on X, “we decided to do this.” 

“The Deep, Dark Web”

At The Institute’s launch, Hulio was not the only NSO veteran present. So was Haba, co-founder of IntelEye, the firm that claims to plumb the depths of the dark web. Haba took part in a panel, according to his company’s LinkedIn profile, sitting alongside Hulio for a discussion on AI cyberattacks.

Hulio and Haba had worked together at NSO until August 2022, when Hulio stepped down. The next month, while still at NSO, Haba was already working on the nascent firm IntelEye, according to social media posts for an event he participated in. (IntelEye would officially launch in June 2023.) 

According to an article in Israel Hayom last month, Haba said that both Hulio and Frances, Hulio’s Dream business partner, are investors in IntelEye. 

In response to a request for comment about Hulio’s relationship to IntelEye, company co-founder Maor Sellek, another NSO veteran, said, “Shalev does not hold any shares in the company.” Sellek declined to explain why Haba confirmed to Israel Hayom that Hulio is an “investor.”

IntelEye’s participation in Israel’s war effort made headlines. Local media reported on the suspenseful cyber-takedown of Mustafa and Mohammed Ayyash, the two Palestinian brothers alleged to have run the Gaza Now Telegram channel. The company’s work, according to Israel Hayom, led to the “coordinated transcontinental effort by government agencies” to shut down the channel.

Sellek, in his emails to The Intercept, said IntelEye works in “assisting police forces and law enforcement agencies in Israel and around the world.” He said the company “helped Law enforcement agencies locate the operators of the Hamas organization’s Telegram channel ‘Gaza Now.’”

Described as “Hamas-aligned” by the Atlantic Council’s Digital Forensic Research Lab, the Gaza Now channel went from having 340,000 subscribers to nearly 1.9 million after October 7. The U.S. Treasury Department accused the channel and its founders of fundraising for Hamas, levying sanctions. 

In Israel Hayom, IntelEye officials claimed they revealed the identities of the channel’s leaders, the Ayyash brothers, and tracked them down in Austria and Gaza. Mustafa ended up under investigation by Austrian police, and Mohammed was reportedly killed in Gaza.

The brothers had reportedly been found by tracking their cryptocurrency use and online habits. Privacy experts pointed out that if this information was already fairly public, tracking the brothers would not have been difficult. “All this information can have digital breadcrumbs,” said Elies Campo, a digital security researcher who previously worked with Telegram and WhatsApp.

As the Israelis and Austrians caught up with the alleged Gaza Now creators, some sort of misidentification appears to have occurred: the United Nations initially said the wrong Ayyash brother had been killed, then corrected the error with a note about the brothers’ relationship that conflicts with all other accounts.

Mustafa, it turned out, had not even been in Gaza. Even as the U.N. initially reported his death, Mustafa continued to post on X from his home in Austria. In March, the U.S. and U.K. imposed sanctions on him and the Gaza Now channel. Then, in Israel Hayom, the article about IntelEye claimed that Mustafa had been arrested.

The Israel Hayom article questions how the Austrian government found Ayyash. The report notes that “IntelEye investors — Shalev Hulio and Dovi Francis — have ties to a former Austrian chancellor through another company.” Former Austrian Chancellor Sebastian Kurz is a co-founder of Dream. A controversial political figure, Kurz resigned from office amid a corruption probe and was recently convicted of making false statements to a parliamentary inquiry into separate allegations of corruption, receiving an eight-month suspended sentence. (A spokesperson for Dream Security said that the company had “NO relationship whatsoever” to other companies or technologies in this article. Kurz did not respond to a request for comment.)

On exactly how Austrian authorities got the information about the Ayyash brothers, according to Israel Hayom, the people involved remained “tight-lipped.”

Yet there was never an arrest, authorities said. The Linz public prosecutor’s office in Austria told The Intercept that Mustafa was not arrested or restricted in his movement. His home had been raided, and documents and devices were seized for analysis. The office, which said it had no contact with the foreign authorities, told The Intercept that Mustafa is under investigation for terror financing. (Mustafa has posted at length on social media denouncing the police raid and declaring his innocence.)

For its part, an Austrian Ministry of the Interior spokesperson said they are “in contact with international partners” but declined to answer questions about whether the Israelis had provided information.

Israel Hayom claimed that Gaza Now’s Telegram and WhatsApp channels were shut down and “dramatically impaired.” Both, however, remain up and running, with hundreds of thousands of followers. (Telegram did not respond to a request for comment.)

“We understand there were good people involved who helped prevent ‘Gaza Now’ from spreading poison and hatred,” Haba told Israel Hayom. Of Hulio and Frances, he added, “They are super Zionists who want what’s best for Israel.”

The post After Pegasus Was Blacklisted, Its CEO Swore Off Spyware. Now He’s the King of Israeli AI. appeared first on The Intercept.

]]>
<![CDATA[This Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message]]> https://theintercept.com/2024/05/22/whatsapp-security-vulnerability-meta-israel-palestine/ https://theintercept.com/2024/05/22/whatsapp-security-vulnerability-meta-israel-palestine/#respond Wed, 22 May 2024 17:08:47 +0000 https://theintercept.com/?p=469034 Engineers warned Meta that nations can monitor chats; staff fear Israel is using this trick to pick assassination targets in Gaza.

The post This Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message appeared first on The Intercept.

]]>
In March, WhatsApp’s security team issued an internal warning to their colleagues: Despite the software’s powerful encryption, users remained vulnerable to a dangerous form of government surveillance. According to the previously unreported threat assessment obtained by The Intercept, the contents of conversations among the app’s 2 billion users remain secure. But government agencies, the engineers wrote, were “bypassing our encryption” to figure out which users communicate with each other, the membership of private groups, and perhaps even their locations.

The vulnerability is based on “traffic analysis,” a decades-old network-monitoring technique, and relies on surveying internet traffic at a massive national scale. The document makes clear that WhatsApp isn’t the only messaging platform susceptible. But it makes the case that WhatsApp’s owner, Meta, must quickly decide whether to prioritize the functionality of its chat app or the safety of a small but vulnerable segment of its users.

“WhatsApp should mitigate the ongoing exploitation of traffic analysis vulnerabilities that make it possible for nation states to determine who is talking to who,” the assessment urged. “Our at-risk users need robust and viable protections against traffic analysis.”

Against the backdrop of the ongoing war on Gaza, the threat warning raised a disturbing possibility among some employees of Meta. WhatsApp personnel have speculated Israel might be exploiting this vulnerability as part of its program to monitor Palestinians at a time when digital surveillance is helping decide who to kill across the Gaza Strip, four employees told The Intercept.

“WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works,” said Meta spokesperson Christina LoNigro.

Though the assessment describes the “vulnerabilities” as “ongoing,” and specifically mentions WhatsApp 17 times, LoNigro said the document is “not a reflection of a vulnerability in WhatsApp,” only “theoretical,” and not unique to WhatsApp. LoNigro did not answer when asked if the company had investigated whether Israel was exploiting this vulnerability.

Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. “Even assuming WhatsApp’s encryption is unbreakable,” the assessment reads, “ongoing ‘collect and correlate’ attacks would still break our intended privacy model.”
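
The mechanics of a “collect and correlate” attack are simple enough to sketch in a few lines of code. The toy model below is purely illustrative; it is not drawn from the assessment or from any real surveillance system, and every name and threshold in it is invented. It shows how an observer who sees only timing, size, and subscriber identity can pair uploads with downloads to guess who is messaging whom.

```python
# Toy model of a "collect and correlate" traffic-analysis attack.
# Purely illustrative: field names and thresholds are invented, and this
# is not drawn from the WhatsApp assessment or any real system.
from dataclasses import dataclass
from itertools import product

@dataclass
class Flow:
    user: str          # subscriber name, resolved from ISP records
    timestamp: float   # seconds, as seen by a network tap
    size: int          # bytes of encrypted payload
    direction: str     # "up" (toward chat servers) or "down" (from them)

def correlate(flows, max_delay=0.5, size_slack=64):
    """Pair each upload with any similar-sized download that follows it
    shortly after: a likely sender -> recipient link."""
    ups = [f for f in flows if f.direction == "up"]
    downs = [f for f in flows if f.direction == "down"]
    return {
        (u.user, d.user)
        for u, d in product(ups, downs)
        if u.user != d.user
        and 0 <= d.timestamp - u.timestamp <= max_delay
        and abs(d.size - u.size) <= size_slack
    }

observed = [
    Flow("alice", 10.00, 1312, "up"),
    Flow("bob",   10.18, 1320, "down"),   # plausibly alice -> bob
    Flow("carol", 42.00,  512, "down"),   # unrelated traffic
]
print(correlate(observed))  # {('alice', 'bob')}
```

Nothing is decrypted at any point; the envelope alone gives the link away.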


The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques.

As war has grown increasingly computerized, metadata — information about the who, when, and where of conversations — has come to hold immense value to intelligence, military, and police agencies around the world. “We kill people based on metadata,” former National Security Agency chief Michael Hayden once infamously quipped.

But even baseless analyses of metadata can be lethal, according to Matthew Green, a professor of cryptography at Johns Hopkins University. “These metadata correlations are exactly that: correlations. Their accuracy can be very good or even just good. But they can also be middling,” Green said. “The nature of these systems is that they’re going to kill innocent people and nobody is even going to know why.”

It wasn’t until the April publication of an exposé about Israel’s data-centric approach to war that the WhatsApp threat assessment became a point of tension inside Meta.

A joint report by +972 Magazine and Local Call revealed last month that Israel’s army uses a software system called Lavender to automatically greenlight Palestinians in Gaza for assassination. Tapping a massive pool of data about the Strip’s 2.3 million inhabitants, Lavender algorithmically assigns “almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant,” the report states, citing six Israeli intelligence officers. “An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination.”


The report indicated WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death, citing a book on AI targeting written by the current commander of Unit 8200, Israel’s equivalent of the NSA. “The book offers a short guide to building a ‘target machine,’ similar in description to Lavender, based on AI and machine-learning algorithms,” according to the +972 exposé. “Included in this guide are several examples of the ‘hundreds and thousands’ of features that can increase an individual’s rating, such as being in a Whatsapp group with a known militant.”

The Israeli military did not respond to a request for comment, but told The Guardian last month that it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist.” The military stated that Lavender “is simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations. This is not a list of confirmed military operatives eligible to attack.”

It was only after the publication of the Lavender exposé and subsequent writing on the topic that a wider swath of Meta staff discovered the March WhatsApp threat assessment, said the four company sources, who spoke on the condition of anonymity, fearing retaliation by their employer. Reading how governments might be able to extract personally identifying metadata from WhatsApp’s encrypted conversations triggered deep concern that this same vulnerability could feed into Lavender or other Israeli military targeting systems.

Efforts to press Meta from within to divulge what it knows about the vulnerability and any potential use by Israel have been fruitless, the sources said, in line with what they describe as a broader pattern of internal censorship against expressions of sympathy or solidarity with Palestinians since the war began.

Related

Israeli Group Claims It’s Working With Big Tech Insiders to Censor “Inflammatory” Wartime Content

Meta employees concerned by the possibility their product is putting innocent people in Israeli military crosshairs, among other concerns related to the war, have organized under a campaign they’re calling Metamates 4 Ceasefire. The group has published an open letter signed by more than 80 named staff members. One of its demands is “an end to censorship — stop deleting employee’s words internally.”

Meta spokesperson Andy Stone told The Intercept any workplace discussion of the war is subject to the company’s general workplace conduct rules, and denied such speech has been singled out. “Our policy is written with that in mind and outlines the types of discussions that are appropriate for the workplace. If employees want to raise concerns, there are established channels for doing so.”

Crowds gather outside of Meta headquarters in Menlo Park, Calif., to protest Mark Zuckerberg and Meta’s censoring of Palestine posts on social platforms, on Nov. 3, 2023. Photo: Tayfun Coskun/Anadolu via Getty Images

According to the internal assessment, the stakes are high: “Inspection and analysis of network traffic is completely invisible to us, yet it reveals the connections between our users: who is in a group together, who is messaging who, and (hardest to hide) who is calling who.”

The analysis notes that a government can easily tell when a person is using WhatsApp, in part because the data must pass through Meta’s readily identifiable corporate servers. A government agency can then unmask specific WhatsApp users by tracing their IP address, a unique number assigned to every connected device, to their internet or cellular service provider account.
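
That first step is easy to model. The sketch below, a hypothetical illustration, flags flows destined for chat servers; the address range is a placeholder from a reserved documentation block, not a claim about Meta’s actual network.

```python
# Sketch: flag subscribers whose traffic touches chat-service servers.
# The range below is a placeholder (TEST-NET-1), not Meta's address space.
import ipaddress

CHAT_SERVER_RANGES = [ipaddress.ip_network("192.0.2.0/24")]  # placeholder

def is_chat_traffic(dest_ip: str) -> bool:
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in CHAT_SERVER_RANGES)

# An observer holding ISP subscriber records can then map the source IP
# of any flagged flow back to a named account holder.
print(is_chat_traffic("192.0.2.53"))  # True
```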

WhatsApp’s internal security team has identified several examples of how clever observation of encrypted data can thwart the app’s privacy protections, a technique known as a correlation attack, according to this assessment. In one, a WhatsApp user sends a message to a group, resulting in a burst of data of the exact same size being transmitted to the device of everyone in that group. Another correlation attack involves measuring the time delay between when WhatsApp messages are sent and received between two parties — enough data, the company believes, “to infer the distance to and possibly the location of each recipient.”
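
The group fan-out version of the attack lends itself to the same kind of toy model: one upload followed by near-simultaneous, identically sized downloads to many subscribers is a strong hint that those subscribers share a group. Again, this sketch is hypothetical and not taken from the assessment.

```python
# Sketch: infer group membership from synchronized, same-size downloads.
# Hypothetical illustration; the assessment itself contains no code.
from collections import defaultdict

def infer_groups(downloads, window=0.5, min_members=3):
    """downloads: (user, timestamp, size) tuples from a network tap.
    Bucket events by payload size and coarse time window; a bucket with
    several distinct users looks like a message fanning out to a group."""
    buckets = defaultdict(set)
    for user, ts, size in downloads:
        buckets[(size, round(ts / window))].add(user)
    return [users for users in buckets.values() if len(users) >= min_members]

observed = [
    ("bob",   10.1, 1320), ("carol", 10.2, 1320), ("dan", 10.2, 1320),
    ("erin",  55.0,  480),   # unrelated
]
print(infer_groups(observed))  # [{'bob', 'carol', 'dan'}] (order may vary)
```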

The internal warning notes that these attacks require all members of a WhatsApp group or both sides of a conversation to be on the same network and within the same country or “treaty jurisdiction,” a possible reference to the Five Eyes spy alliance between the U.S., Australia, Canada, U.K., and New Zealand. While the Gaza Strip has its own Palestinian-operated telecoms, its internet access ultimately runs through Israeli fiber optic cables subject to Israeli state surveillance. Although the memo suggests that users in “well functioning democracies with due process and strong privacy laws” may be less vulnerable, it also highlights the NSA’s use of these telecom-tapping techniques on U.S. soil.

“Today’s messenger services weren’t designed to hide this metadata from an adversary who can see all sides of the connection,” Green, the cryptography professor, told The Intercept. “Protecting content is only half the battle. Who you communicate [with] and when is the other half.”

The assessment reveals WhatsApp has been aware of this threat since last year, and notes the same surveillance techniques work against other competing apps. “Almost all major messenger applications and communication tools do not include traffic analysis attacks in their threat models,” Donncha Ó Cearbhaill, head of Amnesty International’s Security Lab, told The Intercept. “While researchers have known these attacks are technically possible, it was an open question if such attacks would be practical or reliable on a large scale, such as whole country.”

The assessment makes clear that WhatsApp engineers grasp the severity of the problem, but also understand how difficult it might be to convince their company to fix it. The fact that these de-anonymization techniques have been so thoroughly documented and debated in academic literature, Green explained, is a function of just how “incredibly difficult” it is to neutralize them for a company like Meta. “It’s a direct tradeoff between performance and responsiveness on one hand, and privacy on the other,” he said.

Asked what steps the company has taken to shore up the app against traffic analysis, Meta’s spokesperson told The Intercept, “We have a proven track record addressing issues we identify and have worked to hold bad actors accountable. We have the best engineers in the world proactively looking to further harden our systems against any future threats and we will continue to do so.”

The WhatsApp threat assessment notes that beefing up security comes at a cost for an app that prides itself on mass appeal. It will be difficult to better protect users against correlation attacks without making the app worse in other ways, the document explains. For a publicly traded giant like Meta, protecting at-risk users will collide with the company’s profit-driven mandate of making its software as accessible and widely used as possible.


“Meta has a bad habit of not responding to things until they become overwhelming problems,” one Meta source told The Intercept, citing the company’s inaction when Facebook was used to incite violence during Myanmar’s Rohingya genocide. “The tension is always going to be market share, market dominance, focusing on the largest population of people rather than a small amount of people [that] could be harmed tremendously.”

The report warns that adding an artificial delay to messages to throw off attempts to geolocate the sender and receiver of data, for instance, will make the app feel slower to all 2 billion users — most of whom will never have to worry about the snooping of intelligence agencies. Making the app transmit a regular stream of decoy data to camouflage real conversations, another idea floated in the assessment, could throw off snooping governments. But it might also have the adverse effect of hurting battery life and racking up costly mobile data bills.
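
Both mitigations are easy to express in code, which makes their costs easy to see. In this hedged sketch, invented for illustration and not taken from the assessment, real messages get a random delay while fixed-size decoys flow constantly.

```python
# Sketch of the two mitigations floated in the assessment: random delay
# ("jitter") and constant-rate decoy traffic. Invented for illustration.
import random
import threading
import time

def send_with_jitter(send_fn, payload, max_delay=2.0):
    """Decouple send time from user action to blunt timing correlation,
    at the cost of making every chat feel slower."""
    threading.Timer(random.uniform(0, max_delay), send_fn, args=(payload,)).start()

def decoy_loop(send_fn, interval=5.0, size=1320):
    """Emit fixed-size dummies at a steady rate so an observer cannot
    tell real conversations from silence. Burns battery and mobile data."""
    while True:
        send_fn(b"\x00" * size)
        time.sleep(interval)

def transport(data):
    # Stand-in for the app's real network send; prints instead of sending.
    print(f"sent {len(data)} bytes at {time.time():.1f}")

send_with_jitter(transport, b"hello")  # delivered up to 2 seconds late
threading.Thread(target=decoy_loop, args=(transport,), daemon=True).start()
```

The jitter degrades responsiveness for everyone, and the decoy loop consumes power and data around the clock, which is exactly the trade-off the assessment describes.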

To WhatsApp’s security personnel, the right approach is clear. “WhatsApp Security cannot solve traffic analysis alone,” the assessment reads. “We must first all agree to take on this fight and operate as one team to build protections for these at-risk, targeted users. This is where the rubber meets the road when balancing WhatsApp’s overall product principle of privacy and individual team priorities.”

The memo suggests WhatsApp may adopt a hardened security mode for at-risk users similar to Apple’s “Lockdown Mode” for iOS. But even this extra setting could accidentally imperil users in Gaza or elsewhere, according to Green. “People who turn this feature on could also stand out like a sore thumb,” he said. “Which itself could inform a targeting decision. Really unfortunate if the person who does it is some kid.”

The post This Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message appeared first on The Intercept.

]]>
<![CDATA[Scarlett Johansson Isn’t Alone. The Intercept Is Getting Ripped Off by OpenAI Too.]]> https://theintercept.com/2024/05/21/scarlett-johansson-openai-intercept-copyright/ https://theintercept.com/2024/05/21/scarlett-johansson-openai-intercept-copyright/#respond Tue, 21 May 2024 21:46:07 +0000 https://theintercept.com/?p=469056 The Intercept’s lawsuit against OpenAI and Microsoft shows how digital outlets are uniquely vulnerable.

The post Scarlett Johansson Isn’t Alone. The Intercept Is Getting Ripped Off by OpenAI Too. appeared first on The Intercept.

]]>
The Intercept is one of the many media outlets that have sued OpenAI and Microsoft over the past year for using journalists’ work to train ChatGPT without permission or credit. The case, which OpenAI and Microsoft are trying to get tossed from federal court, shows why digital news outlets are particularly exposed to AI grifters.

To be clear, it’s not just outlets like this one that are at risk. Actor Scarlett Johansson on Monday accused OpenAI of mimicking her voice in its new virtual assistant despite reportedly twice rejecting offers from CEO Sam Altman. Larger publications have also raised questions about OpenAI’s approach to human labor. But unlike Hollywood stars and print publications, digital outlets face some unique hurdles in protecting their work.


In a lawsuit filed in February, The Intercept alleged that OpenAI and Microsoft violated a federal law, the Digital Millennium Copyright Act, by using copyrighted stories to train ChatGPT without paying any licensing fees to publishers and stripping out basic authorship information. (Full disclosure: In addition to doing my own reporting for The Intercept, I am also one of its attorneys.)

“Open AI and Microsoft have the economic incentive to vacuum up the hard work of online news outlets, ignoring the training, curation, research, and resources those organizations devote to making sure the public is informed with timely, accurate news,” said David Bralow, The Intercept’s general counsel. “They would like us to get lost in the algorithm so that they can continue to free ride.”

Just as OpenAI denies casting an actor with a voice “eerily similar” to Johansson’s, OpenAI and Microsoft have attempted to shrug off The Intercept’s lawsuit. Last month, they filed motions to dismiss the case, which will be argued before a federal judge in Manhattan on June 3.

In the past year, OpenAI has inked deals with many press outlets to license their content, including the Associated Press, Le Monde, the Financial Times, and Axel Springer, the German publisher that owns Politico and Business Insider.

A slew of other outlets have sued OpenAI for various flavors of copyright misfeasance. The New York Times sued in December, followed by the Chicago Tribune, the New York Daily News, and six other daily papers owned by Alden Global Capital last month. Digital outlets Raw Story and AlterNet, represented by the same firm as The Intercept, filed a separate lawsuit in February.

All plaintiffs — traditional and digital alike — noted in court filings that their websites appear prominently in OpenAI’s own lists of which pages it had scraped to train earlier versions of ChatGPT. The Intercept’s website is on OpenAI’s list of “the top 1,000 domains present” in data used to train GPT-2; per OpenAI’s description, one of the datasets contains text scraped from more than 6,400 separate pages from The Intercept’s domain. 

But OpenAI and Microsoft have urged the district court to dismiss The Intercept’s claims on numerous grounds, including that The Intercept cannot point to every article that was ever fed into ChatGPT.

In a brief filed last week, OpenAI argued that The Intercept failed to identify “a single work from which OpenAI supposedly removed copyright management information.”

As The Intercept countered, only OpenAI and Microsoft could possibly know which specific articles are in the ChatGPT training sets, unless the court allows the case to proceed into discovery.

Because of how modern copyright protections work, the New York Times and other print publications have much more straightforward claims than The Intercept and other digital outlets. To qualify for bread-and-butter copyright infringement damages, authors must register their works with the U.S. Copyright Office. It is relatively straightforward to register print news articles in bulk; using an online portal, publications can register an entire month’s worth of print issues at once.

But there is no similar bulk process for online-only outlets, which must register each article individually with the Copyright Office. Earlier this year, the Copyright Office floated a new registration process for news websites, which is still under consideration. But the current registration requirements are costly and time intensive, and thus impractical for budget-constrained nonprofits like The Intercept.

Unable to invoke traditional copyright infringement claims, The Intercept turned to somewhat novel arguments under the DMCA, which Congress passed in 1998. As the Copyright Office summarizes it, the DMCA was meant to “move the nation’s copyright law into the digital age.”

Under the DMCA, it is illegal to intentionally remove “copyright management information” such as a work’s title and author as well as to distribute that work knowing the information was removed. The Intercept and other plaintiffs allege that OpenAI and Microsoft violated both of these provisions by training ChatGPT on journalists’ articles without this attribution information.

“The Intercept is not the first to challenge this technology through claims under the Digital Millennium Copyright Act’s provision concerning removal of copyright management information,” Microsoft’s attorneys wrote in their brief, calling The Intercept’s lawsuit “the skimpiest of the lot of these challenges.”

Next month, the district court will consider whether The Intercept’s lawsuit will proceed.

If the case is dismissed, OpenAI can continue to train ChatGPT to regurgitate words that are “eerily similar” to the work of digital outlets like The Intercept without paying for that work.

The post Scarlett Johansson Isn’t Alone. The Intercept Is Getting Ripped Off by OpenAI Too. appeared first on The Intercept.

]]>
<![CDATA[An Israeli Company Is Hawking Its Self-Launching Drone System to U.S. Police Departments]]> https://theintercept.com/2024/05/17/israel-orione-drone-us-police-louisiana/ https://theintercept.com/2024/05/17/israel-orione-drone-us-police-louisiana/#respond Fri, 17 May 2024 10:00:00 +0000 A Louisiana sheriff’s department has been testing the drone system, which is already used by the Israeli police and many settlements.

The post An Israeli Company Is Hawking Its Self-Launching Drone System to U.S. Police Departments appeared first on The Intercept.

]]>
An Israeli drone company is proselytizing to American police departments about an autonomous drone system that can automatically launch police drones to fly to the sites of suspected crimes. One sheriff’s department in Louisiana has repeatedly tested the system, called Orion, which is already in use by the Israeli national police and, since October 7, many Israeli settlements, according to the company’s founder.

Created by the Israeli company High Lander, Orion allows users to direct hundreds of drones at once by automating them to navigate and perform actions without user input. The software system turns drones into “next-generation security guards,” according to an Orion brochure.

In February, High Lander held a demo event in Baton Rouge to showcase the “drone-in-a-box solution,” which the East Baton Rouge Sheriff’s Office first tested out last June. “The system will be a game changer for the fight against crime in Baton Rouge,” High Lander wrote in a LinkedIn post about the event, which was attended by officers from around the country.

The company has used its pilot program in Louisiana to encourage other police agencies to check out Orion, and its February event in Louisiana was just one part of a tour that included stops in San Diego, Phoenix, and Miami, according to LinkedIn posts.

Orion’s capabilities are startling. A police force could have drones automatically launch from charging stations when triggered by “events like gunshots, burglaries, and car accidents.” Once they deploy, the drones can perform pre-set tasks: releasing cargo; relaying live video feeds; identifying and searching for people, objects, or vehicles using AI and thermal sensors; and making announcements over loudspeaker. If the system gets multiple calls, Orion can automatically choose which to prioritize. 
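
Functionally, that description amounts to an event-driven dispatch loop with a priority queue. The sketch below is a guess at the general pattern, not High Lander’s actual software; the event types, priorities, coordinates, and task names are all invented.

```python
# Hypothetical sketch of "drone-in-a-box" auto-dispatch as marketed:
# sensor events queue up, higher-priority calls win, idle drones launch.
# Not based on High Lander's code; priorities and tasks are invented.
import heapq

PRIORITY = {"gunshot": 0, "burglary": 1, "car_accident": 2}  # 0 = most urgent

class Dispatcher:
    def __init__(self):
        self._queue = []   # heap of (priority, arrival_order, event, location)
        self._order = 0

    def report(self, event_type, location):
        heapq.heappush(self._queue,
                       (PRIORITY[event_type], self._order, event_type, location))
        self._order += 1

    def dispatch_next(self, idle_drones):
        """Launch the next idle drone from its charging dock, if any."""
        if self._queue and idle_drones:
            _, _, event, loc = heapq.heappop(self._queue)
            drone = idle_drones.pop()
            print(f"{drone}: launching to {loc} for {event}; "
                  f"streaming video and running preset search task")

d = Dispatcher()
d.report("car_accident", (30.45, -91.15))
d.report("gunshot", (30.41, -91.18))
d.dispatch_next(["drone-1"])  # the gunshot outranks the earlier accident
```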

A High Lander blog post about the project adds that “new capabilities are being discovered all the time.”

The East Baton Rouge Sheriff’s Office held “mock scenario testing” with High Lander’s system “approximately 5 times,” Casey Hicks, the department’s public information director, told The Intercept. Hicks added that the demos were conducted at the sheriff’s range facility and that they “are not aware of any use out in the community at any time.” 

High Lander did not respond to a request for comment.

There is a documented history of U.S.-Israel security tech exchanges, which civil rights and racial justice advocates have long criticized for contributing to the militarization of the police. The Tel Aviv-based High Lander collaborated with Stephenson Technologies Corporation, a Louisiana nonprofit that works with the departments of Defense and Homeland Security, to bring Orion to East Baton Rouge, with up to $1 million in backing from the Israel–U.S. Binational Industrial Research and Development Foundation. That funding comes from an endowment provided equally by the U.S. and Israeli governments. 

“People should be really concerned that our tax dollars are often being put right in the pockets of American and Israeli tech millionaires or billionaires, and that those technologies are then used on us and our neighbors to make people more unsafe,” said Lou Blumberg, an organizer with both Jewish Voice for Peace and Eye on Surveillance, an organization that monitors the adoption of surveillance technology in New Orleans. 

“This technology that’s coming out of Israel is heavily implicated in human rights abuses,” Blumberg said, “because you can’t separate the tech from the apartheid.”

From Israel to Louisiana

In East Baton Rouge, High Lander and Stephenson Technologies integrated the drone platform “with the city of Baton Rouge’s citywide system of gunshot sensors” for testing by the sheriff’s department, according to High Lander’s website. In a December post about the project, the company quoted an employee who said “it was a great feeling to see that first autonomous dispatch.”

The city uses ShotSpotter, a gunshot detection technology recently dropped by the city of Chicago due to critiques of racist bias and inaccuracy.

The East Baton Rouge Sheriff’s Office has long been accused of mistreating and harassing people of color — including using acoustic weapons on protesters without proper training — raising concerns among local advocates about its use of Orion.

When Israeli security tech is exported to the United States, it is “used to surveil and criminalize, mostly, young Black boys,” said Blumberg. Pointing to other types of surveillance technology, including facial recognition, Blumberg added, “There’s actually no statistical evidence that it helps prevent crimes” or “that it helps make people safer.”

The head of training at the East Baton Rouge Sheriff’s Office, Carl Dabadie, participated in a police training program in Israel about a decade ago — and promised to bring his learnings back to the local community.

The Anti-Defamation League invited Dabadie, then the chief of the Baton Rouge Police Department, to attend a National Counter-Terrorism Seminar in Israel in 2014. Seeking airfare for the training, Dabadie said there are “several terrrist targets” [sic] in Baton Rouge, apparently referring to local oil refineries, according to a document obtained through a public records request. He returned from the eight-day seminar, during which participants visited an Israeli police department in Jerusalem and a border outpost, and told local media he had plans to update the department’s riot gear.

“Instead of real bullets and shooting at people, they use foam bullets, tear gas, shields, and even paintballs,” he enthused.

Dabadie left the police department in the wake of his violent handling of protests over the 2016 police killing of Alton Sterling, which earned condemnations from Amnesty International and the United Nations Special Rapporteur on the Rights to Freedom of Peaceful Assembly and of Association. Still, Dabadie defended “our militarized tactics and our militarized law enforcement.” In 2020, he was made head of training for the sheriff’s office.

Protesters and civil rights groups sued several police departments and officials, including Dabadie and East Baton Rouge Sheriff Sid Gautreaux III, for violating their civil rights during the 2016 protests. 

In court proceedings stemming from one of the lawsuits, an officer testified that police had “fooled around” with a crowd-dispersing acoustic weapon and used it against protesters without sufficient training. (Those plaintiffs were awarded a $1.2 million settlement last year.)

The lawyer who filed that lawsuit, William Most, told The Intercept that the sheriff’s department’s track record makes him concerned about its use of an autonomous drone system. 

“Given EBRSO’s past trouble in complying with the Constitution, I would be concerned about it adopting a drone program without clear safeguards to protect the rights of Baton Rouge residents,” said Most.

In Israel, meanwhile, High Lander’s business has flourished amid Israel’s retaliatory war on Gaza. After October 7, Israel passed an emergency measure saying that civilian drones can return to the skies only if they are connected to an approved unmanned traffic management, or UTM, system. High Lander became the first approved UTM system in Israel. (The country’s Air Force previously tested the company’s drone technology.)

The drones have “counter-drone” measures that can take over and land enemy drones, and detect the location of their controllers. In an April presentation, High Lander’s co-founder Alon Abelson, a former Israeli Air Force commander, describes a scenario in which Orion allows users to deploy “hundreds of PTZ [pan-tilt-zoom] cameras that hover and relay images from the air,” a surveillance capacity he described as unprecedented.

The company relies on hundreds of sensors around Israel, Abelson said in his talk, allowing them to turn drone fleets “into part of an information-sharing system that did not exist before.” The system allows settlements’ drones to automatically launch from chargers when triggered by cameras, smoke detectors, or “smart fences.” Since October 7, he said, “we have provided this system to hundreds of settlements throughout the country.”

The post An Israeli Company Is Hawking Its Self-Launching Drone System to U.S. Police Departments appeared first on The Intercept.

]]>
<![CDATA[They Exposed an Israeli Spyware Firm. Now the Company Is Badgering Them in Court.]]> https://theintercept.com/2024/05/06/pegasus-nso-group-israeli-spyware-citizen-lab/ https://theintercept.com/2024/05/06/pegasus-nso-group-israeli-spyware-citizen-lab/#respond Mon, 06 May 2024 19:03:09 +0000 NSO Group, which makes Pegasus spyware, keeps trying to extract information from Citizen Lab researchers — and a judge keeps swatting it down.

The post They Exposed an Israeli Spyware Firm. Now the Company Is Badgering Them in Court. appeared first on The Intercept.

]]>
For years, cybersecurity researchers at Citizen Lab have monitored Israeli spyware firm NSO Group and its banner product, Pegasus. In 2019, Citizen Lab reported finding dozens of cases in which Pegasus was used to target the phones of journalists and human rights defenders via a WhatsApp security vulnerability.

Now NSO, which is blacklisted by the U.S. government for selling spyware to repressive regimes, is trying to use a lawsuit over the WhatsApp exploit to learn “how Citizen Lab conducted its analysis.”

The lawsuit, filed in U.S. federal court in 2019 by WhatsApp and Meta (then Facebook), alleges that NSO sent Pegasus and other malware to approximately 1,400 devices across the globe. For more than four years, NSO has failed repeatedly to get the case thrown out.

With the lawsuit now moving forward, NSO is trying a different tactic: demanding repeatedly that Citizen Lab, which is based in Canada, hand over every single document about its Pegasus investigation. A judge denied NSO’s latest attempt to get access to Citizen Lab’s materials last week.

Providing its raw work to NSO, Citizen Lab’s lawyers argued, would “expose individuals already victimized by NSO’s activities to the risk of further harassment, including from their own governments” and chill their future work. (NSO declined to comment about the lawsuit.)

NSO has mounted an aggressive campaign to rehabilitate its image in recent years, particularly since being blacklisted in 2021. Last November, following the October 7 Hamas attacks, the firm requested a meeting with the State Department to discuss Pegasus as a “critical tool that is used to aid the ongoing fight against terrorists.”

The company has faced other lawsuits in U.S. courts over Pegasus, including ongoing suits brought by Salvadoran journalists, Apple, and Hanan Elatr Khashoggi, the widow of murdered journalist Jamal Khashoggi. These lawsuits also rely on Citizen Lab’s research, to varying degrees.

So far, the WhatsApp lawsuit — which NSO recently called “a one-sided ‘show’ trial” in the making — has not gone particularly well for the spyware firm. At first, NSO argued it was entirely immune from being sued in American courts, which a federal appeals court roundly rejected in 2021 and the U.S. Supreme Court declined to consider in early 2023.

Next, NSO argued the lawsuit should have been filed in Israel instead of the U.S. District Court for the Northern District of California, where both WhatsApp and its parent company Meta have their headquarters. Judge Phyllis Hamilton rejected that argument too.

In perhaps the biggest blow to NSO, earlier this year Hamilton ordered the company to disclose its software code, not just for Pegasus but also for “any NSO spyware targeting or directed at WhatsApp servers, or using WhatsApp in any way to access target devices.”

During discovery, NSO has already obtained thousands of documents from Meta and WhatsApp regarding Citizen Lab’s investigation into Pegasus. Regardless, NSO has tried and failed twice to use the lawsuit to get more information directly from Citizen Lab, which is based at the University of Toronto. In March, Hamilton denied NSO’s first request to send a cross-border demand — a “letter rogatory” — to her counterparts at the Ontario Superior Court of Justice.

NSO tried again last month. “The evidence Plaintiffs themselves have produced about Citizen Lab’s investigation is incomplete and inadequate,” its lawyers argued, because it did not show “how Citizen Lab conducted its analysis or came to its conclusions” that Pegasus was used to target individuals in civil society, as opposed to criminals or terrorists.


Citizen Lab opposed NSO’s demands on numerous grounds, particularly given “NSO’s animosity” toward its research.

In the latest order, Hamilton concluded that NSO’s demand was “plainly overbroad.” She left open the possibility for NSO to try again, but only if it can point to evidence that specific individuals that Citizen Lab categorized as “civil society” targets were actually involved in “criminal/terrorist activity.”

“We are pleased the court has recognized that NSO Group’s request for information was overbroad and not necessary at this time to resolve the disputed issues,” Citizen Lab’s director, Ronald Deibert, told The Intercept.

The post They Exposed an Israeli Spyware Firm. Now the Company Is Badgering Them in Court. appeared first on The Intercept.

]]>
<![CDATA[Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon]]> https://theintercept.com/2024/05/01/google-amazon-nimbus-israel-weapons-arms-gaza/ https://theintercept.com/2024/05/01/google-amazon-nimbus-israel-weapons-arms-gaza/#respond Wed, 01 May 2024 19:58:49 +0000 Google downplays its military work with Israel, but “Project Nimbus” documents tie the American tech giants to Israel’s deadly military capabilities.

The post Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon appeared first on The Intercept.

]]>
Google and Amazon are both loath to discuss security aspects of the cloud services they provide through their joint contract with the Israeli government, known as Project Nimbus. Though both the Ministry of Defense and Israel Defense Forces are Nimbus customers, Google routinely downplays the military elements while Amazon says little at all.

According to a 63-page Israeli government procurement document, however, two of Israel’s leading state-owned weapons manufacturers are required to use Amazon and Google for cloud computing needs. Details of Google and Amazon’s contractual work with the Israeli arms industry aren’t laid out in the tender document, which outlines how Israeli agencies will obtain software services through Nimbus. The weapons firms, however, are responsible for manufacturing drones, missiles, and other weapons Israel has used to bombard Gaza.

“If tech companies, including Google and Amazon, are engaged in business activities that could impact Palestinians in Gaza, or indeed Palestinians living under apartheid in general, they must abide by their responsibility to carry out heightened human rights due diligence along the entirety of the lifecycle of their products,” said Matt Mahmoudi, a researcher at Amnesty International working on tech issues. “This must include how they plan to prevent, mitigate, and provide redress for possible human rights violation, particularly in light of mandatory relationships with weapons manufacturers, which contribute to risk of genocide.”

Project Nimbus, which provides the Israeli government with cloud services ranging from mundane Google Meet video chats to a variety of sophisticated machine-learning tools, has already created a public uproar. Google and Amazon have faced backlash ranging from street protests to employee revolts.

The tender document consists largely of legal minutiae, rules, and regulations laying out how exactly the state will purchase cloud computing services from Amazon and Google, which won the $1.2 billion contract in 2021. The document was first published in 2021 and has been updated periodically, most recently in October 2023.

One of the document’s appendices includes a list of Israeli companies and government offices that are “required to purchase the services that are the subject of the tender from the winning bidder,” according to a translation of the Hebrew-language original.

The tender document doesn’t require any of the entities to purchase cloud services, but if they need these services — ubiquitous in any 21st-century enterprise — they must purchase them from the two American tech giants. A separate portion of the document notes that any office that wants to buy cloud computing services from other companies must petition two government committees that oversee procurement for an explicit exemption.

Some of the entities listed in the document have had relationships with other companies that provide cloud services. The status and future of those business ties is unclear.

Obligatory Customers

The list of obligatory cloud customers includes state entities like the Bank of Israel, the Israel Airports Authority, and the Settlement Division, a quasi-governmental body tasked with expanding Israel’s illegal colonies in the West Bank.

Also included on the list are two of Israel’s most prominent, state-owned arms manufacturers: Israel Aerospace Industries and Rafael Advanced Defense Systems. The Israeli military has widely fielded weapons and aircraft made by these companies and their subsidiaries to prosecute its war in Gaza, which since October 7 has killed over 30,000 Palestinians, including 13,000 children.

These relationships with Israeli arms manufacturers place Project Nimbus far closer to the bloodshed in Gaza than has been previously understood.

Asked how work with weapons manufacturers could be consistent with Google’s claim that Project Nimbus doesn’t involve weapons, spokesperson Anna Kowalczyk repeated the claim in a statement to The Intercept.

“We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” said Kowalczyk, who declined to answer specific questions. “Across Google, we’ve also been clear that we will not design or deploy AI applications as weapons or weapons systems, or for mass surveillance.”

(A spokesperson for Amazon Web Services declined to comment. Neither Rafael nor IAI responded to requests for comment.)

The Israeli document provides no information about exactly what cloud services these arms makers must purchase, or whether they are to purchase them from Google, Amazon, or both. Though the government’s transition to Google and Amazon’s bespoke cloud has hit lengthy delays, last June Rafael announced it had begun transitioning certain “unclassified” cloud needs to Amazon Web Services but did not elaborate.

Google has historically declined to explain whether its various human rights commitments and terms of service prohibiting its users from harming others apply to Israel. After an April 3 report by +972 Magazine found that the Israeli military was using Google Photos’ facial recognition to map, identify, and create a “hit list” of Palestinians in Gaza, Google would not say whether it allowed this use of its software.


Both Google and Amazon say their work is guided by the United Nations Guiding Principles on Business and Human Rights, which call on companies “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.” The U.N. principles, which were endorsed by the U.N. Human Rights Council in 2011, say companies must “identify and assess any actual or potential” rights abuses related to their business.

Michael Sfard, an Israeli human rights attorney, told The Intercept that these guidelines dictate that Google and Amazon should conduct human rights due diligence and vet the use of their technology by the Israeli government.

“Without such deep and serious process,” Sfard said, “they can be seen as complicit in Israeli crimes.”

“Spike” Missiles

Rafael, a state-owned arms contractor, is a titan of the Israeli defense sector. The company provides the Israeli military with a broad variety of missiles, drones, and other weapons systems.

It sells the vaunted Iron Dome rocket-defense system and the “Trophy” anti-rocket countermeasure system that’s helped protect Israeli military tanks during the ground offensive in Gaza.

Israel also routinely fields Rafael’s “Spike” line of missiles, which can be fired from shoulder-carried launchers, jets, or drones. Effective against vehicles, buildings, and especially people, Spike missiles can be outfitted with a fragmentation option that creates a lethal spray of metal. Since 2009, analysts have attributed cube-shaped tungsten shrapnel wounds in civilians to Israel’s use of Spike missiles.

Use of these missiles in Gaza continues, with military analysts saying that Spike missiles were likely used in the April 1 drone killing of seven aid workers affiliated with World Central Kitchen.

The destroyed roof of a vehicle where World Central Kitchen aid workers were killed in an Israeli airstrike, in Deir Al-Balah, Gaza Strip, on April 2, 2024. Photo: Yasser Qudihe/Middle East Images via AFP

Elta Systems, a subsidiary of IAI, is also named in the document as an obligatory Nimbus customer. The firm deals mostly in electronic surveillance hardware but co-developed the Panda, a remote-controlled bulldozer the Israeli military has used to demolish portions of Gaza.

Israel Aerospace Industries, commonly known as IAI, plays a similarly central role in the war, its weapons often deployed hand in glove with Rafael’s.

IAI’s Heron drone, for instance, is frequently armed with Spike missiles. The Heron provides the Israeli Air Force with the crucial capacity to persistently surveil the denizens of Gaza and launch airstrikes against them at will.

In November, IAI CEO Boaz Levy told the Jerusalem Post, “IAI’s HERON Unmanned Aerial Systems stand as a testament to our commitment to innovation and excellence in the ever-evolving landscape of warfare. In the Iron Swords War” — referring to Israel’s name for its military operation against Hamas — “the HERON UAS family played a pivotal role, showcasing Israel’s operational versatility and adaptability in diverse environments.”

Project Nimbus also establishes its own links between the Israeli security establishment and the American defense industry. While Nimbus is based on Google and Amazon’s provision of their own cloud services to Israel, the tender document says these companies will also establish “digital marketplaces,” essentially bespoke app stores for the Israeli government that allow them to access a library of cloud-hosted software from third parties.

According to a spreadsheet detailing these third-party cloud offerings, Google provides Nimbus users with access to Foundry, a data analysis tool made by the U.S. defense and intelligence contractor Palantir. (A spokesperson for Palantir declined to comment.)

Google began offering Foundry access to its cloud customers last year. While marketed primarily as civilian software, Foundry is used by military forces including U.S. Special Operations Command and the U.K. Royal Navy. In 2019, the Washington Post reported the U.S. Army would spend $110 million to use Foundry “to piece together thousands of complex data sets containing information on U.S. soldiers and the expansive military arsenal that supports them.”

The Israeli military extensively uses Palantir software for targeting in Gaza, veteran national security journalist James Bamford reported recently in The Nation.

Palantir has been an outspoken champion of the Israeli military’s invasion of Gaza. “Certain kinds of evil can only be fought with force,” the company posted on its social media during the first week of the conflict. “Palantir stands with Israel.”

War Abroad, Revolt at Home

That Project Nimbus includes a prominent military dimension has been known since the program’s inception.

In 2021, the Israeli Finance Ministry announced the contract as “intended to provide the government, the defense establishment and others with an all-encompassing cloud solution.” In 2022, training materials first reported by The Intercept confirmed that the Israeli Ministry of Defense would be a Google Cloud user.

Google’s public relations apparatus, however, has consistently downplayed the contracting work with the Israeli military. Google spokespeople have repeatedly told press outlets that Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” Amazon has tended to avoid discussing the contract at all.

The revelation that Google’s lucrative relationship with the Israeli state includes a mandated relationship with two weapons manufacturers undermines its claim that the contract doesn’t touch the arms trade.

“Warfighting operations narrowly defined can only proceed through the wider communications and data infrastructures on which they depend,” Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University, told The Intercept. “Providing those infrastructures to industries and organizations responsible for the production and deployment of weapon systems arguably implicates Google in the operations that its services support, however indirectly.”

Project Nimbus has proven deeply contentious within Google and Amazon, catalyzing a wave of employee dissent unseen since the controversy over Google’s now-defunct contract to bolster the U.S. military drone program.


While workers from both companies have publicly protested the Nimbus contract, Google employees have been particularly vocal. Following anti-Nimbus sit-ins organized at the company’s New York and Sunnyvale, California, offices, Google fired 50 employees it said participated in the protests.

Emaan Haseem, a cloud computing engineer at Google until she was fired after participating in the Sunnyvale protest, told The Intercept she thinks the company needs to be frank with its employees about what their labor ends up building.

“A lot of us signed up or applied to work at Google because we were trying to avoid working at terrible unethical companies,” she said in an interview. Haseem graduated college in 2022 and said she consciously avoided working for weapons manufacturers like Raytheon or large energy companies.

“Then you just naively join, and you find out it’s all the same. And then you’re just kind of angry,” she said. “Why are we acting any different? Why are we pretending that because my logo is colorful and has round letters that I’m any better than Raytheon?”

The post Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon appeared first on The Intercept.

]]>
https://theintercept.com/2024/05/01/google-amazon-nimbus-israel-weapons-arms-gaza/feed/ 0 467522
<![CDATA[Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military]]> https://theintercept.com/2024/04/10/microsoft-openai-dalle-ai-military-use/ https://theintercept.com/2024/04/10/microsoft-openai-dalle-ai-military-use/#respond Wed, 10 Apr 2024 12:00:00 +0000 https://theintercept.com/?p=465943 Any battlefield use of the software would be a dramatic turnaround for OpenAI, which describes its mission as developing AI that can benefit all of humanity.

The post Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military appeared first on The Intercept.

]]>
Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI silently ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm.”

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally-binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble germane, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
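To make the mechanism concrete, here is a minimal sketch of a synthetic-data pipeline of the general kind the presentation gestures at, written against OpenAI’s public Python client. The model name, prompt, and file names are illustrative assumptions — nothing below reflects the contents of the Microsoft deck or any actual military system.

    # pip install openai requests
    import requests
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical prompt standing in for whatever imagery a vision
    # model would be trained to recognize.
    PROMPT = "aerial photograph of a rural landing strip, overcast light"

    for i in range(5):  # small batch, purely for illustration
        resp = client.images.generate(
            model="dall-e-2", prompt=PROMPT, n=1, size="1024x1024"
        )
        image_bytes = requests.get(resp.data[0].url, timeout=30).content
        with open(f"synthetic_{i:02d}.png", "wb") as f:
            f.write(image_bytes)

    # The saved files would then be labeled and mixed into a training
    # set alongside real imagery.

The efficacy concern quoted next — that models degrade when trained on generated content — applies to exactly this kind of loop.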

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “Dall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

Lugo, mission commander of the Pentagon’s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working on the Space Force and Air Force.

The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called the Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, the company’s spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

“The point is you’re contributing to preparation for warfighting.”

Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it’s aimed or pulling the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

Unlike OpenAI, Microsoft makes little pretense of forgoing harm in its “responsible AI” document and openly promotes the military use of its machine learning tools.

Related

OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”

Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation given the day before Microsoft’s asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

Update: April 11, 2024
This article was updated to clarify Microsoft’s promotion of its work with the Department of Defense.

The post Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military appeared first on The Intercept.

]]>
https://theintercept.com/2024/04/10/microsoft-openai-dalle-ai-military-use/feed/ 0 465943
<![CDATA[Forget a Ban — Why Are Journalists Using TikTok in the First Place?]]> https://theintercept.com/2024/04/07/tiktok-ban-journalists-safety/ https://theintercept.com/2024/04/07/tiktok-ban-journalists-safety/#respond Sun, 07 Apr 2024 14:00:20 +0000 I’m a security researcher working in the journalism field, and I’m here to rain on your dangerous, dumb parade.

The post Forget a Ban — Why Are Journalists Using TikTok in the First Place? appeared first on The Intercept.

]]>
The TikTok logo displayed on a laptop screen with a glowing keyboard in Krakow, Poland, on March 3, 2024. Photo: Klaudia Radecka/NurPhoto via Getty Images

As far as I know, there are no laws against eating broken glass. You’re free to doomscroll through your cabinets, smash your favorite water cup, then scarf down the shards.

A ban on eating broken glass would be overwhelmingly irrelevant, since most people just don’t do it, and for good reason. Unfortunately, you can’t say the same about another dangerous habit: TikTok.

As a security researcher, I can’t help but hate TikTok, just like I hate all social media, for creating unnecessary personal exposure.

As a security researcher working in journalism, though, I find that one group of the video-sharing app’s many, many users strikes a particular fear into my heart. That group is — you guessed it — my beloved colleagues, the journalists.

TikTok, of course, isn’t the only app that poses risks for journalists, but it’s been bizarre to watch reporters with sources to protect express concern about a TikTok ban when they shouldn’t be using the platform in the first place. TikTok officials, after all, have explicitly targeted reporters in attempts to reveal their sources.

My colleagues nonetheless seem to be dressing up as bullseyes.

Ignoring TikTok’s Record

Impassioned pleas by reporters to not ban TikTok curiously omit TikTok’s most egregious attacks on reporters.

In his defense of TikTok, the Daily Beast’s Brad Polumbo offers a disclaimer in the first half of the headline — “TikTok Is Bad. Banning It Would Be Much Worse” — but never expands upon why. Instead, the bulk of the piece offers an apologia for TikTok’s parent company, ByteDance.

Meanwhile, Vox’s A.W. Ohlheiser expatiates on the “both/and” of TikTok, highlighting its many perceived benefits and ills. And yet, the one specific ill, which could have the most impact on Ohlheiser and other reporters, is absent from the laundry list of downsides.

The record is well established. In an attempt to identify reporters’ sources, ByteDance accessed IP addresses and other user data of several journalists, according to a Forbes investigation. The intention seems to have been to track the location of the reporters to see if they were in the same locations as TikTok employees who may have been sources for stories about TikTok’s links to China.

Not only did TikTok surveil reporters in attempts to identify their sources, but the company also proceeded to publicly deny having done so.

“TikTok does not collect precise GPS location information from US users, meaning TikTok could not monitor US users in the way the article suggested,” the TikTok communication team’s account posted on X in response to Forbes’s initial reporting. “TikTok has never been used to ‘target’ any members of the U.S. government, activists, public figures or journalists.”

Forbes kept digging, and its subsequent investigation found that an internal email “acknowledged that TikTok had been used in exactly this way,” as reporter Emily Baker-White put it.

TikTok conducted several internal probes into the company’s accessing of U.S. user data; officials were fired and at least one resigned, according to Forbes. That doesn’t change the basic facts: Not only did TikTok surveil reporters in attempts to identify their sources, but the company also proceeded to publicly deny having done so.

And Now, Service Journalism for Journalists

For my journalism colleagues, there may well be times when you need to check TikTok, for instance when researching a story. If this is the case, you should follow the operational security best practice of compartmentalization: keeping various items separated from one another.

In other words, put TikTok on a separate “burner” device, which doesn’t have anything sensitive on it, like your sources saved in its contacts. There’s no evidence TikTok can see, for example, your chat histories, but it can, according to the security research firm Proofpoint, access your device’s location data, contacts list, camera, and microphone. And, as a security researcher, I like to be as safe as possible.

And keep the burner device in a separate location from your regular phone. Don’t walk around with both phones turned on and connected to a cellular or Wi-Fi network and, for the love of everything holy, don’t take the burner to sensitive source meetings.

You can also limit the permissions that your device gives to TikTok — so that you’re not handing the app your aforementioned location data, contacts list, and camera access — and you should. Grant the app only the permissions it needs to run, and run it only as much as your research requires. On Android, this can even be scripted from a computer, as in the sketch below.
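What follows is a minimal sketch of bulk permission revocation over a USB debugging connection. It assumes the adb tool is installed and that TikTok’s Android package name is com.zhiliaoapp.musically — verify both on your own setup, since package names and granted permissions vary.

    import subprocess

    # TikTok's Android package name (an assumption -- confirm with
    # `adb shell pm list packages | grep -i tiktok`).
    PACKAGE = "com.zhiliaoapp.musically"

    # Runtime permissions a research-only install has no need to hold.
    PERMISSIONS = [
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.READ_CONTACTS",
        "android.permission.CAMERA",
        "android.permission.RECORD_AUDIO",
    ]

    for perm in PERMISSIONS:
        # `pm revoke` withdraws a previously granted runtime permission.
        subprocess.run(
            ["adb", "shell", "pm", "revoke", PACKAGE, perm], check=False
        )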

And don’t forget, this is all for your research. When you’re done looking up whatever in our hellscape tech dystopia has brought you to this tremendous time suck, the burner device should be wiped and restored to factory defaults.

The security and disinformation risks posed to journalists are, of course, not unique to TikTok. They permeate, one way or another, every single social media platform.

That doesn’t explain journalists’ inscrutable defense of a medium that is actively working against them. It’s as clear as your favorite water cup.

Editor’s note: You can follow The Intercept on TikTok here.

The post Forget a Ban — Why Are Journalists Using TikTok in the First Place? appeared first on The Intercept.

]]>
https://theintercept.com/2024/04/07/tiktok-ban-journalists-safety/feed/ 0 465818
<![CDATA[Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List”]]> https://theintercept.com/2024/04/05/google-photos-israel-gaza-facial-recognition/ https://theintercept.com/2024/04/05/google-photos-israel-gaza-facial-recognition/#respond Fri, 05 Apr 2024 11:00:00 +0000 Google prohibits using its tech for “immediate harm,” but Israel is harnessing its facial recognition to set up a dragnet of Palestinians.

The post Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List” appeared first on The Intercept.

]]>
The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.

The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into the popular consumer image organization platform offered through Google Photos. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.

The mass surveillance of Palestinian faces resulting from Israel’s efforts to identify Hamas members has caught up thousands of Gaza residents since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. In its facial recognition story, the Times pointed to Palestinian poet Mosab Abu Toha, whose arrest and beating at the hands of the Israeli military began with its use of facial recognition. Abu Toha, later released without being charged with any crime, told the paper that Israeli soldiers told him his facial recognition-enabled arrest had been a “mistake.”

Putting aside questions of accuracy — facial recognition systems are notoriously less accurate on nonwhite faces — the use of Google Photos’s machine learning-powered analysis features to place civilians under military scrutiny, or worse, is at odds with the company’s clearly stated rules. Under the header “Dangerous and Illegal Activities,” Google warns that Google Photos cannot be used “to promote activities, goods, services, or information that cause serious and immediate harm to people.”

“Facial recognition surveillance of this type undermines rights enshrined in international human rights law.”

Asked how a prohibition against using Google Photos to harm people was compatible with the Israel military’s use of Google Photos to create a “hit list,” company spokesperson Joshua Cruz declined to answer, stating only that “Google Photos is a free product which is widely available to the public that helps you organize photos by grouping similar faces, so you can label people to easily find old photos. It does not provide identities for unknown people in photographs.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)

It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military.

“It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity,’ but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action.”

Doing Good or Doing Google?

In addition to its terms of service ban against using Google Photos to cause harm to people, the company has for many years claimed to embrace various global human rights standards.

“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post. “That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”

This deep commitment includes, according to the company, upholding the Universal Declaration of Human Rights — which forbids torture — and the U.N. Guiding Principles on Business and Human Rights, which notes that conflicts over territory produce some of the worst rights abuses.

The Israeli military’s use of a free, publicly available Google product like Photos raises questions about these corporate human rights commitments, and the extent to which the company is willing to actually act upon them. Google says that it endorses and subscribes to the U.N. Guiding Principles on Business and Human Rights, a framework that calls on corporations “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.”

Walden also said Google supports the Conflict-Sensitive Human Rights Due Diligence for ICT Companies, a voluntary framework that helps tech companies avoid the misuse of their products and services in war zones. Among the document’s many recommendations is that companies like Google consider: “Use of products and services for government surveillance in violation of international human rights law norms causing immediate privacy and bodily security impacts (i.e., to locate, arrest, and imprison someone).” (Neither JustPeace Labs nor Business for Social Responsibility, which co-authored the due-diligence framework, replied to a request for comment.)

“Google and Corsight both have a responsibility to ensure that their products and services do not cause or contribute to human rights abuses,” said Bacciarelli. “I’d expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”

Google employees taking part in the No Tech for Apartheid campaign, a worker-led protest movement against Project Nimbus, called on their employer to prevent the Israeli military from using Photos’s facial recognition to prosecute the war in Gaza.

“That the Israeli military is even weaponizing consumer technology like Google Photos, using the included facial recognition to identify Palestinians as part of their surveillance apparatus, indicates that the Israeli military will use any technology made available to them — unless Google takes steps to ensure their products don’t contribute to ethnic cleansing, occupation, and genocide,” the group said in a statement shared with The Intercept. “As Google workers, we demand that the company drop Project Nimbus immediately, and cease all activity that supports the Israeli government and military’s genocidal agenda to decimate Gaza.”

Project Nimbus

This would not be the first time Google’s purported human rights principles contradict its business practices — even just in Israel. Since 2021, Google has sold the Israeli military advanced cloud computing and machine-learning tools through its controversial “Project Nimbus” contract.

Unlike Google Photos, a free consumer product available to anyone, Project Nimbus is a bespoke software project tailored to the needs of the Israeli state. Both Nimbus and Google Photos’s face-matching prowess, however, are products of the company’s immense machine-learning resources.

The sale of these sophisticated tools to a government so regularly accused of committing human rights abuses and war crimes stands in opposition to Google’s AI Principles. The guidelines forbid AI uses that are likely to cause “harm,” including any application “whose purpose contravenes widely accepted principles of international law and human rights.”

Google has previously suggested its “principles” are in fact far narrower than they appear, applying only to “custom AI work” and not the general use of its products by third parties. “It means that our technology can be used fairly broadly by the military,” a company spokesperson told Defense One in 2022.

How, or if, Google ever turns its executive-blogged assurances into real-world consequences remains unclear. Ariel Koren, a former Google employee who said she was forced out of her job in 2022 after protesting Project Nimbus, placed Google’s silence on the Photos issue in a broader pattern of avoiding responsibility for how its technology is used.

“It is an understatement to say that aiding and abetting a genocide constitutes a violation of Google’s AI principles and terms of service,” Koren, now an organizer with No Tech for Apartheid, told The Intercept. “Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles hold no bearing or weight in Google Cloud’s business decisions, and that even complicity in genocide is not a barrier to the company’s ruthless pursuit of profit at any cost.”

The post Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List” appeared first on The Intercept.

]]>
https://theintercept.com/2024/04/05/google-photos-israel-gaza-facial-recognition/feed/ 0 465717
<![CDATA[The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack]]> https://theintercept.com/2024/04/03/linux-hack-xz-utils-backdoor/ https://theintercept.com/2024/04/03/linux-hack-xz-utils-backdoor/#respond Wed, 03 Apr 2024 23:05:38 +0000 A shadowy figure spent years ingratiating themself to a developer, then injected a backdoor that could have taken over millions of computers.

The post The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack appeared first on The Intercept.

]]>
On March 29, Microsoft software developer Andres Freund was trying to optimize the performance of his computer when he noticed that one program was using an unexpected amount of processing power. Freund dove in to troubleshoot and “got suspicious.”

Eventually, Freund found the source of the problem, which he subsequently posted to a security mailing list: He had discovered a backdoor in XZ Utils, a data compression utility used by a wide array of Linux-based applications — a constellation of open-source software that, while often not consumer-facing, undergirds key computing and internet functions like secure communications between machines.

By inadvertently spotting the backdoor, which was buried deep in the code’s binary test files, Freund averted a large-scale security catastrophe. Any machine running an operating system that included the backdoored utility and met the specifications laid out in the malicious code would have been vulnerable to compromise, allowing an attacker to potentially take control of the system.

The XZ backdoor was introduced by way of what is known as a software supply chain attack, which the National Counterintelligence and Security Center defines as “deliberate acts directed against the supply chains of software products themselves.” The attacks often employ complex ways of changing the source code of the programs, such as gaining unauthorized access to a developer’s system or through a malicious insider with legitimate access.

The malicious code in XZ Utils was introduced by a user calling themself Jia Tan, employing the handle JiaT75, according to Ars Technica and Wired. Tan had been a contributor to the XZ project since at least late 2021 and built trust with the community of developers working on it. Eventually, though the exact timeline is unclear, Tan ascended to being co-maintainer of the project, alongside the founder, Lasse Collin, allowing Tan to add code without needing the contributions to be approved. (Neither Tan nor Collin responded to requests for comment.)

The XZ backdoor betrays a sophisticated, meticulous operation. First, whoever led the attack identified a piece of software that would be embedded in a vast array of Linux operating systems. The development of this widely used technical utility was understaffed, with a single core maintainer, Collin, who later conceded he was unable to maintain XZ, providing the opportunity for another developer to step in. Then, after cultivating Collin’s trust over a period of years, Tan injected a backdoor into the utility. All these moves were underlaid by a technical proficiency evident in the creation and embedding of the actual backdoor code — code sophisticated enough that analysis of its precise functionality and capability is still ongoing.

“The care taken to hide the exploits in binary test files as well as the sheer time taken to gain a reputation in the open-source project to later exploit it are abnormally sophisticated,” said Molly, a system administrator at Electronic Frontier Foundation who goes by a mononym. “However, there isn’t any indication yet whether this was state sponsored, a hacking group, a rogue developer, or any combination of the above.”

Tan’s elevation to being a co-maintainer mostly played out on an email group where code developers — in the open-source, collaborative spirit of the Linux family of operating systems — exchange ideas and strategize to build applications.

On one email list, Collin faced a raft of complaints. A group of users, relatively new to the project, had protested that Collin was falling behind and not making updates to the software quickly enough. He should, some of these users said, hand over control of the project; some explicitly called for the addition of another maintainer. Conceding that he could no longer devote enough attention to the project, Collin made Tan a co-maintainer.

The users involved in the complaints seemed to materialize from nowhere — posting their messages from what appear to be recently created Proton Mail accounts, then disappearing. Their entire online presence is related to these brief interactions on the mailing list dedicated to XZ; their only recorded interest is in quickly ushering along updates to the software.

Various U.S. intelligence agencies have recently expressed interest in addressing software supply chain attacks. The Cybersecurity and Infrastructure Security Agency jumped into action after Freund’s discovery, publishing an alert about the XZ backdoor on March 29, the same day Freund publicly posted about it.
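For readers who want to check their own systems: public advisories, including CISA’s, identified XZ Utils releases 5.6.0 and 5.6.1 as the backdoored versions. The sketch below is a minimal, best-effort check against that public list — it simply reads the installed version from xz itself and should not be treated as a full compromise audit.

    import re
    import subprocess

    # Releases publicly identified as carrying the backdoor.
    COMPROMISED = {"5.6.0", "5.6.1"}

    try:
        # `xz --version` prints a line like "xz (XZ Utils) 5.4.6".
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True
        ).stdout
    except FileNotFoundError:
        out = ""  # xz is not installed at all

    match = re.search(r"(\d+\.\d+\.\d+)", out)
    version = match.group(1) if match else "unknown"

    if version in COMPROMISED:
        print(f"xz {version} is a known-backdoored release -- downgrade.")
    else:
        print(f"xz {version}: not on the publicly compromised list.")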

Open-Source Players

In the open-source world of Linux programming — and in the development of XZ Utils — collaboration is carried out through email groups and code repositories. Tan posted on the listserv, chatted to Collin, and contributed code changes on the code repository GitHub, which is owned by Microsoft. GitHub has since disabled access to the XZ repository and disabled Tan’s account. (In February, The Intercept and other digital news firms sued Microsoft and its partner OpenAI for using their journalism without permission or credit.)

Several other figures on the email list participated in efforts — appearing to be diffuse but coinciding in their aims and timing — to install the new co-maintainer, sometimes particularly pushing for Tan.

Later, on a listserv dedicated to Debian, one of the more popular of the Linux family of operating systems, another group of users advocated for the backdoored version of XZ Utils to be included in the operating system’s distribution.

These dedicated groups played discrete roles: In one case, complaining about the lack of progress on XZ Utils and pushing for speedier updates by installing a new co-maintainer; and, in the other case, pushing for updated versions to be quickly and widely distributed.

“I think the multiple green accounts seeming to coordinate on specific goals at key times fits the pattern of using networks of sock accounts for social engineering that we’ve seen all over social media,” said Molly, the EFF system administrator. “It’s very possible that the rogue dev, hacking group, or state sponsor employed this tactic as part of their plan to introduce the back door. Of course, it’s also possible these are just coincidences.”

The pattern seems to fit what’s known in intelligence parlance as “persona management,” the practice of creating and subsequently maintaining multiple fictitious identities. A leaked document from the defense contractor HBGary Federal outlines the meticulousness that may go into maintaining these fictive personas, including creating an elaborate online footprint — something which was decidedly missing from the accounts involved in the XZ timeline.

While these other users employed different email addresses, in some cases they used providers that give clues as to when their accounts were created. When they used Proton Mail accounts, for instance, the encryption keys associated with these accounts were created on the same day as, or mere days before, the users’ first posts to the email group. (Users, however, can also generate new keys, meaning the email addresses may have been older than their current keys.)
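As a worked example of that technique: Proton publishes users’ public PGP keys, and a key’s embedded creation timestamp can be read with standard tooling. The sketch below assumes Proton’s historically documented HKP-style lookup endpoint, which may have moved; the address in the usage comment is purely hypothetical.

    # pip install requests pgpy
    import requests
    import pgpy

    LOOKUP = "https://api.protonmail.ch/pks/lookup"  # assumed endpoint

    def key_created(email: str):
        """Fetch an account's public PGP key and return its creation time."""
        resp = requests.get(
            LOOKUP, params={"op": "get", "search": email}, timeout=10
        )
        resp.raise_for_status()
        key, _ = pgpy.PGPKey.from_blob(resp.text)  # parse the armored key
        # Creation time of the key -- not necessarily of the account,
        # since users can rotate keys.
        return key.created

    # Hypothetical address, for illustration only:
    # print(key_created("someone@protonmail.com"))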

One of the earliest of these users on the list used the name Jigar Kumar. Kumar appears on the XZ development mailing list in April 2022, complaining that some features of the tool are confusing. Tan promptly responded to the comment. (Kumar did not respond to a request for comment.)

Kumar repeatedly popped up with subsequent complaints, sometimes building off others’ discontent. After Dennis Ens appeared on the same mailing list, Ens also complained about the lack of response to one of his messages. Collin acknowledged things were piling up and mentioned Tan had been helping him off list; he might soon have “a bigger role with XZ Utils.” (Ens did not respond to a request for comment.)

After another complaint from Kumar calling for a new maintainer, Collin responded: “I haven’t lost interest but my ability to care has been fairly limited mostly due to longterm mental health issues but also due to some other things. Recently I’ve worked off-list a bit with Jia Tan on XZ Utils and perhaps he will have a bigger role in the future, we’ll see.”

The pressure kept coming. “As I have hinted in earlier emails, Jia Tan may have a bigger role in the project in the future,” Collin responded after Ens suggested he hand off some responsibilities. “He has been helping a lot off-list and is practically a co-maintainer already. :-)”

Ens then went quiet for two years — reemerging around the time the bulk of the malicious backdoor code was installed in the XZ software, at which point he kept urging ever quicker updates.

After Collin eventually made Tan a co-maintainer, there was a subsequent push to get XZ Utils — which by now had the backdoor — distributed widely. After first showing up on the XZ GitHub repository in June 2023, another figure calling themselves Hans Jansen pushed this March for the new version of XZ to be included in Debian Linux. (Jansen did not respond to a request for comment.)

An employee at Red Hat, a software firm owned by IBM, which sponsors and helps maintain Fedora, another popular Linux operating system, described Tan trying to convince him to help add the compromised XZ Utils to Fedora.

These popular Linux operating systems account for millions of computer users — meaning that huge numbers of users would have been open to compromise if Freund, the developer, had not discovered the backdoor.

“While the possibility of socially engineering backdoors in critical software seems like an indictment of open-source projects, it’s not exclusive to open source and could happen anywhere,” said Molly. “In fact, the ability for the engineer to discover this backdoor before it was shipped was only possible due to the open nature of the project.”

The post The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack appeared first on The Intercept.

]]>
https://theintercept.com/2024/04/03/linux-hack-xz-utils-backdoor/feed/ 0 465606
<![CDATA[Congress Has a Chance to Rein In Police Use of Surveillance Tech]]> https://theintercept.com/2024/04/02/surveillance-tech-new-york-state-police/ https://theintercept.com/2024/04/02/surveillance-tech-new-york-state-police/#respond Tue, 02 Apr 2024 14:00:00 +0000 As state police amass more spying tools, privacy advocates say Congress’s debate over a mass surveillance bill offers hope for reform.

The post Congress Has a Chance to Rein In Police Use of Surveillance Tech appeared first on The Intercept.

]]>
Hardware that breaks into your phone; software that monitors you on the internet; systems that can recognize your face and track your car: The New York State Police are drowning in surveillance tech.

Last year alone, the Troopers signed at least $15 million in contracts for powerful new surveillance tools, according to a New York Focus and Intercept review of state data. While expansive, the State Police’s acquisitions aren’t unique among state and local law enforcement. Departments across the country are buying tools to gobble up civilians’ personal data, plus increasingly accessible technology to synthesize it.

“It’s a wild west,” said Sean Vitka, a privacy advocate and policy counsel for Demand Progress. “We’re seeing an industry increasingly tailor itself toward enabling mass warrantless surveillance.”

So far, local officials haven’t done much about it. Surveillance technology has far outpaced traditional privacy laws, and legislators have largely failed to catch up. In New York, lawmakers launched a years-in-the-making legislative campaign last year to rein in police intrusion — but with Gov. Kathy Hochul pushing for tough-on-crime policies instead, none of their bills have made it out of committee.

So New York privacy proponents are turning to Congress. A heated congressional debate over the future of a spying law offers an opportunity to severely curtail state and local police surveillance through federal regulation.

At issue is Section 702 of the Foreign Intelligence Surveillance Act, or FISA, which expires on April 19. The law is notorious for a provision that allows the feds to access Americans’ communications swept up in intelligence agencies’ international spying. As some members of Congress work to close that “backdoor,” they’re also pushing to ban a so-called data broker loophole that allows law enforcement to buy civilians’ personal data from private vendors without a warrant. Closing that loophole would likely make much of the New York State Police’s recently purchased surveillance tech illegal.

Members of the House and Senate judiciary committees, who have introduced bills to close the loopholes, are leading the latest bipartisan charge for reform. Members of the House and Senate intelligence committees, meanwhile, are pushing to keep the warrant workarounds in place. The Democratic leaders of both chambers — House Minority Leader Hakeem Jeffries and Senate Majority Leader Chuck Schumer, both from New York — have so far kept quiet on the spying debate. As Section 702’s expiration date nears, local advocates are trying to get them on board.

On Tuesday, a group of 33 organizations, many from New York, sent a letter to Jeffries and Schumer urging them to close the loopholes. More than 100 grassroots and civil rights groups from across the country sent the lawmakers a similar petition this week.

“These products are deeply invasive, discriminatory, and ripe for abuse.”

“These products are deeply invasive, discriminatory, and ripe for abuse,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, which signed both letters. They reach “into nearly every aspect of our digital and physical lives.”

Jeffries’s office declined to comment. Schumer’s office did not respond to a request for comment before publication.

Both letters cited a Wired report from last month, which revealed that Republican Rep. Mike Turner of Ohio, the chair of the House Intelligence Committee, pointed to New York City protests against Israel’s war on Gaza to argue against the spying law’s reform. Sources told Wired that in a presentation to fellow House Republicans, Turner implied that protesters in New York had ties to Hamas — and therefore should remain subject to Section 702’s warrantless surveillance backdoor. An intelligence committee spokesperson disputed the characterization of Turner’s remarks, but said that the protests had “responded to what appears to be a Hamas solicitation.”

“The real-world impact of such surveillance on protest and dissent is profound and undeniable,” read the New York letter, spearheaded by Empire State Indivisible and NYU Law School’s Brennan Center for Justice. “With Rep. Turner having placed your own constituents in the crosshairs, your leadership is urgently needed.”

Police surveillance today looks much different than it did 10, five, or even three years ago. A report from the U.S. Office of the Director of National Intelligence, declassified last year, put it succinctly: “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits.”

That report called specific attention to the “data broker loophole”: law enforcement’s practice of obtaining data for which they’d otherwise have to obtain a warrant by buying it from brokers. The New York State Police have taken greater and greater advantage of the loophole in recent years, buying up seemingly as much tech and data as they can get their hands on.

In 2021, the State Police purchased a subscription to ShadowDragon, which is designed to scan websites for clues about targeted individuals, then synthesize them into in-depth profiles.

Related

ShadowDragon: Inside the Social Media Surveillance Software That Can Watch Your Every Move

“I want to know everything about the suspect: Where do they get their coffee? Where do they get their gas? Where’s their electric bill? Who’s their mom? Who’s their dad?” ShadowDragon’s founder said in an interview unearthed by The Intercept in 2021. The company claims that its software can anticipate crime and violence — a practice, trendy among law enforcement tech companies, known as “predictive policing,” which ethicists and watchdogs warn can be inaccurate and biased.

The State Police renewed their ShadowDragon subscription in January of last year, shelling out $308,000 for a three-year contract. That was one of at least nine web surveillance tools State Police signed contracts for last year, worth at least $2.1 million in total.

Among the other firms the Troopers contracted with are Cognyte ($310,000 for a three-year contract); Whooster ($110,000 over three years); Skopenow ($280,000); Griffeye ($209,000); the credit reporting agency TransUnion ($159,000); and Echosec ($262,000 over two years), which specializes in using “global social media, discussions, and defense forums” to geolocate people. They also bought Cobwebs software, a mass web surveillance tool created by former Israeli military and intelligence officials — part of that country’s multibillion-dollar surveillance tech industry, which often tests its products on Palestinians.

That’s likely not the full extent of the State Police’s third-party-brokered surveillance arsenal. As New York Focus revealed last year, the State Police have for years been shopping around for programs that take in mass quantities of data from social media, sift through them, and then feed insights — including users’ real-time location information — to law enforcement. Those contracts don’t show up in the state contract data, suggesting that the public disclosures are incomplete. Depending on how the programs obtain their data, closing the data broker loophole could bar their sale to law enforcement.

The State Police refused to answer questions about how its officers use surveillance tools.

“We do not discuss specific strategies or technologies as it provides a blueprint to criminals which puts our members and the public at risk,” State Police spokesperson Deanna Cohen said in an email.

Closing the data broker loophole wouldn’t entirely curtail the police surveillance tech boom. The New York State Police have also been deepening their investments in tech the FISA reforms wouldn’t touch, like aerial drones and automatic license plate readers, which store data from billions of scans to create searchable vehicle location databases.

They’ve also spent millions on mobile device forensic tools, or MDFTs, powerful hacking hardware and software that allow users to download full, searchable copies of a cellphone’s data, including social media messages, emails, web and search histories, and minute-by-minute location information.

Watchdogs warn of potential abuses accompanying the proliferation of MDFTs. The Israeli MDFT company Cellebrite has serviced repressive authorities around the globe, including police in Botswana, who used it to access a journalist’s list of sources, and Hong Kong, where the cops deployed it against leaders of the pro-democracy protest movement there.

In the United States, law enforcement officials argue that more expansive civil liberties protections prevent them from misusing the tech. But according to the technology advocacy organization Upturn, around half of police departments that have used MDFTs have done so with no internal policies in place. Meanwhile, cops have manipulated people into consenting to having their phones cracked without a warrant — for instance, by having them sign generic consent forms that don’t explain that the police will be able to access the entirety of their phone’s data.

In October 2020, New York police departments known to use MDFTs had spent less than $2.2 million on them, and no known MDFT-using department in the country had hit the million-dollar mark, according to a report by Upturn.

Between September 2022 and November 2023, however, the State Police signed more than $12.1 million in contracts for MDFT products and training, New York Focus and The Intercept found. They signed a five-year, $4 million agreement with Cellebrite, while other contracts went to MDFT firms Magnet Forensics and Teel Technologies. The various products attack phones in different ways, and thus have different strengths and weaknesses depending on the type of phone, according to Emma Weil, senior policy analyst at Upturn.

Cellebrite’s tech initially costs around $10,000–$30,000 for an official license, then tens or low hundreds of thousands of dollars for the ability to hack into a set number of phones. According to Weil, the State Police’s inflated bill could mean either that Cellebrite has dramatically increased its pricing, or that the Troopers are “getting more intensive support to unlock more difficult phones.”

If Congress passes the Section 702 renewal without addressing its warrant workarounds, state and local legislation will become the main battleground in the fight against the data broker loophole. In New York, state lawmakers have introduced at least 14 bills as part of their campaign to rein in police surveillance, but none have gotten off the ground.

If the legislature passes some of the surveillance bills, they may well face opposition when they hit the governor’s desk. Hochul has extolled the virtues of police surveillance technology, and committed to expanding law enforcement’s ability to disseminate the information gathered by it. Every year since entering the governor’s mansion, she has proposed roughly doubling funding to New York’s Crime Analysis Center Network, a series of police intelligence hubs that distribute information to local and federal law enforcement, and she’s repeatedly boosted funding to the State Police’s social media surveillance teams.

The State Police has “ramped up its monitoring,” she said in November. “All this is in response to our desire, our strong commitment, to ensure that not only do New Yorkers be safe — but they also feel safe.”

This story was published in partnership with New York Focus, a nonprofit news site investigating how power works in New York state. Sign up for their newsletter here.

The post Congress Has a Chance to Rein In Police Use of Surveillance Tech appeared first on The Intercept.

]]>
https://theintercept.com/2024/04/02/surveillance-tech-new-york-state-police/feed/ 0 465386
<![CDATA[Meta Refuses to Answer Questions on Gaza Censorship, Say Sens. Warren and Sanders]]> https://theintercept.com/2024/03/26/meta-gaza-censorship-warren-sanders/ https://theintercept.com/2024/03/26/meta-gaza-censorship-warren-sanders/#respond Tue, 26 Mar 2024 12:00:00 +0000 In the days after October 7, Meta said it removed more than 2 million pieces of Hebrew and Arabic content, but didn’t break down the data.

The post Meta Refuses to Answer Questions on Gaza Censorship, Say Sens. Warren and Sanders appeared first on The Intercept.

]]>
Citing the company’s “failure to provide answers to important questions,” Sens. Elizabeth Warren, D-Mass., and Bernie Sanders, I-Vt., are pressing Meta, which owns Facebook and Instagram, to respond to reports of disproportionate censorship around the Israeli war on Gaza.

“Meta insists that there’s been no discrimination against Palestinian-related content on their platforms, but at the same time, is refusing to provide us with any evidence or data to support that claim,” Warren told The Intercept. “If its ad-hoc changes and removal of millions of posts didn’t discriminate against Palestinian-related content, then what’s Meta hiding?”

In a letter to Meta CEO Mark Zuckerberg sent last December, first reported by The Intercept, Warren presented the company with dozens of specific questions about its Gaza-related content moderation efforts. Warren asked about the exact numbers of posts about the war, broken down by Hebrew or Arabic, that have been deleted or otherwise suppressed.

The letter was written following widespread reporting in The Intercept and other outlets that detailed how posts on Meta platforms that are sympathetic to Palestinians, or merely depicting the destruction in Gaza, are routinely removed or hidden without explanation.

A month later, Meta replied to Warren’s office with a six-page letter, obtained by The Intercept, that provided an overview of its moderation response to the war but little in the way of specifics or new information.

“Meta’s lack of investment to safeguard its users significantly exacerbates the political situation in Palestine and perpetuates tech harms on fundamental rights in Palestine and other global majority countries, all while evading meaningful legal accountability,” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, told The Intercept. “The time has come for Meta, among other tech giants, to publicly disclose detailed measures and investments aimed at safeguarding individuals amidst the ongoing genocide, and to be more responsive to experts and civil society.”

Meta’s reply disclosed some censorship: “In the nine days following October 7, we removed or marked as disturbing more than 2,200,000 pieces of content in Hebrew and Arabic for violating our policies.” The company declined, however, to provide a breakdown of deletions by language or market, making it impossible to tell whether that figure reflects discriminatory moderation practices.

Much of Meta’s letter is a rehash of an update it provided through its public relations portal at the war’s onset, some of it verbatim.

Now, a second letter from Warren to Meta, joined this time by Sanders, says this isn’t enough. “Meta’s response, dated January 29, 2024, did not provide any of the requested information necessary to understand Meta’s treatment of Arabic language or Palestine-related content versus other forms of content,” the senators wrote.

Both senators are asking Meta to again answer Warren’s specific questions about the extent to which Arabic and Hebrew posts about the war have been treated differently, how often censored posts are reinstated, Meta’s use of automated machine learning-based censorship tools, and more.

Accusations of systemic moderation bias against Palestinians have been borne out by research from rights groups.

“Since October 7, Human Rights Watch has documented over 1,000 cases of unjustified takedowns and other suppression of content on Instagram and Facebook related to Palestine and Palestinians, including about human rights abuses,” Human Rights Watch said in a late December report. “The censorship of content related to Palestine on Instagram and Facebook is systemic, global, and a product of the company’s failure to meet its human rights due diligence responsibilities.”

A February report by Access Now said Meta “suspended or restricted the accounts of Palestinian journalists and activists both in and outside of Gaza, and arbitrarily deleted a considerable amount of content, including documentation of atrocities and human rights abuses.”

A third-party audit commissioned by Meta itself previously concluded that the company had given short shrift to Palestinian rights during a May 2021 flare-up of violence between Israel and Hamas, the militant group that controls the Gaza Strip. “Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” said the auditor’s report.

In response to this audit, Meta pledged an array of reforms, which free expression and digital rights advocates say have yet to produce a material improvement.

In its December report, Human Rights Watch noted, “More than two years after committing to publishing data around government requests for taking down content that is not necessarily illegal, Meta has failed to increase transparency in this area.”

Update: March 26, 2024, 1:11 p.m. ET
This story has been updated to include a statement received after publication from Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy.

<![CDATA[Elon Musk Fought Government Surveillance — While Profiting Off Government Surveillance]]> https://theintercept.com/2024/03/25/elon-musk-x-dataminr-surveillance-privacy/ https://theintercept.com/2024/03/25/elon-musk-x-dataminr-surveillance-privacy/#respond Mon, 25 Mar 2024 16:16:42 +0000 Musk made hay of his legal battle against secret surveillance but continued selling X user data to a company that facilitates government monitoring.

The post Elon Musk Fought Government Surveillance — While Profiting Off Government Surveillance appeared first on The Intercept.

]]>
Ten years ago, the internet platform X, then known as Twitter, filed a lawsuit against the government, hoping to force transparency around abuse-prone surveillance of social media users. X’s court battle, though, clashes with an uncomfortable fact: The company is itself in the business of government surveillance of social media.

Under the new ownership of Elon Musk, X continued the litigation until its defeat in January. The suit was aimed at overturning a governmental ban on disclosing the receipt of requests, known as national security letters, that compel companies to turn over everything from user metadata to private direct messages. Companies that receive these requests are typically legally bound to keep them secret and can usually disclose only the number they’ve received in a given year, in vague numerical ranges.

In its petition to the Supreme Court last September, X’s attorneys took up the banner of communications privacy: “History demonstrates that the surveillance of electronic communications is both a fertile ground for government abuse and a lightning-rod political topic of intense concern to the public.” After the court declined to take up the case in January, Musk responded by tweeting, “Disappointing that the Supreme Court declined to hear this matter.”

The court’s refusal to take up the case ended X’s legal bid, but the company and Musk had positioned themselves at the forefront of a battle on behalf of internet users for greater transparency about government surveillance.

However, emails between the U.S. Secret Service and the surveillance firm Dataminr, obtained by The Intercept through a Freedom of Information Act request, show X in an awkward position: profiting from the sale of user data for government surveillance at the same time it was fighting secrecy around another flavor of state surveillance in court.

While national security letters allow the government to make targeted demands for non-public data on an individual basis, companies like Dataminr continuously monitor public activity on social media and other internet platforms. Dataminr provides its customers with customized real-time “alerts” on desired topics, giving clients like police departments a form of social media omniscience. The alerts allow police to, for instance, automatically track a protest as it moves from its planning stages into the streets, without requiring police officials to do any time-intensive searches.

Although Dataminr defends First Alert, its governmental surveillance platform, as a public safety tool that helps first responders react quickly to sudden crises, the tool has been repeatedly shown to be used by police to monitor First Amendment-protected online political speech and real-world protests.

“The Whole Point”

Dataminr has long touted its special relationship with X as integral to First Alert. (Twitter previously owned a stake in Dataminr, though it divested before Musk’s purchase.) Unlike with other platforms, which it surveils by scraping user content, Dataminr pays for privileged access to X through the company’s “firehose”: a direct, unfiltered feed of every single piece of user content ever shared publicly to the platform.

Watching everything that happens on X in real time is key to Dataminr’s pitch to the government. X essentially leases out indirect access to this massive spray of information, with Dataminr acting as an intermediary between X’s servers and a multitude of police, intelligence, and military agencies.
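To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a firehose-style pipeline works in general terms: a client holds open a streaming HTTP connection, receives each public post as a line of JSON, and matches it against customer-defined watch topics before passing it along as an “alert.” The endpoint URL, access token, field names, and keywords below are invented for illustration; nothing here reflects X’s or Dataminr’s actual APIs.

    import json
    import requests  # third-party HTTP library: pip install requests

    # Hypothetical firehose endpoint and credentials -- placeholders, not a real X API.
    FIREHOSE_URL = "https://firehose.example.com/stream"
    ACCESS_TOKEN = "REPLACE_ME"

    # Stand-ins for a government client's alert keywords.
    WATCH_TOPICS = {"protest", "rally", "march"}

    def stream_alerts():
        """Hold open a streaming connection and yield posts matching any watch topic."""
        with requests.get(
            FIREHOSE_URL,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            stream=True,  # keep the connection open and read posts as they arrive
            timeout=30,
        ) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():  # assume one JSON object per line
                if not line:
                    continue  # skip keep-alive blank lines
                post = json.loads(line)
                text = post.get("text", "").lower()
                if any(topic in text for topic in WATCH_TOPICS):
                    yield post  # a real intermediary would fan this out to clients

    if __name__ == "__main__":
        for match in stream_alerts():
            print("ALERT:", match.get("text", ""))

The point of the sketch is the architecture, not the code: because the platform delivers every public post to the intermediary as it is published, no searching or scraping is required, which is what distinguishes firehose access from the targeted demands made through national security letters.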

While it was unclear whether, under Musk, X would continue leasing access to its users to Dataminr — and by extension, the government — the emails from the Secret Service confirm that, as of last summer, the social media platform was still very much in the government surveillance business.

“Dataminr has a unique contractual relationship with Twitter, whereby we have real-time access to the full stream of all publicly available Tweets,” a representative of the surveillance company wrote to the Secret Service in a July 2023 message about the terms of the law enforcement agency’s surveillance subscription. “In addition all of Dataminr’s public sector customers today have agreed to these terms including dozens who are responsible for law enforcement whether at the local, state or federal level.” (The terms are not mentioned in the emails.)

According to an email from the Secret Service in the same thread, the agency’s interest in Dataminr was unambiguous: “The whole point of this contract is to use the information for law enforcement purposes.”

Privacy advocates told The Intercept that X’s Musk-era warnings of government surveillance abuses contradict the company’s continued sale of user data for the purpose of government surveillance. (Neither X nor Dataminr responded to a request for comment.)

“X’s legal briefs acknowledge that communications surveillance is ripe for government abuse, and that we can’t depend on the police to police themselves,” said Jennifer Granick, the surveillance and cybersecurity counsel at the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “But then X turns around and sells Dataminr fire-hose access to users’ posts, which Dataminr then passes through to the government in the form of unregulated disclosures and speculative predictions that can falsely ensnare the innocent.”

“Social media platforms should protect the privacy of their users,” said Adam Schwartz, the privacy litigation director at the Electronic Frontier Foundation, which filed an amicus brief in support of X’s Supreme Court petition. “For example, platforms must not provide special services, like real-time access to the full stream of public-facing posts, to surveillance vendors who share this information with police departments. If X is providing such access to Dataminr, that would be disappointing.”

“Glaringly at Odds”

Following a 2016 ACLU investigation into the use of Twitter data for police surveillance, the company went so far as to expressly ban third parties from “conducting or providing surveillance or gathering intelligence” and “monitoring sensitive events (including but not limited to protests, rallies, or community organizing meetings)” using firehose data. The policy even banned the use of firehose data for purposes pertaining to “any alleged or actual commission of a crime” — ostensibly a problem for Dataminr’s crime-fighting clientele.

These assurances have done nothing to stop Dataminr from using the data it buys from X to do exactly these things. Prior reporting from The Intercept has shown that the company has, in recent years, helped federal and local police surveil entirely peaceful Black Lives Matter protests and abortion rights rallies.

Neither X nor Dataminr has responded to repeated requests to explain how a tool that allows for the real-time monitoring of protests is permitted under a policy that expressly bans the monitoring of protests. In the past, both Dataminr and X have denied that monitoring the real-time communications of people on the internet and relaying that information to the police is a form of surveillance, because the posts in question are public.

Twitter later softened this prohibition by noting surveillance applications were banned “Unless explicitly approved by X in writing.” Dataminr, for its part, remains listed as an “official partner” of X.

Though the means differ, national security scholars told The Intercept that the ends of national security letters and firehose monitoring are the same: widespread government surveillance with little to no meaningful oversight. Neither national security letters nor dragnet social media surveillance requires a sign-off from a judge, and in both cases, those affected are left unaware that they’ve fallen under governmental scrutiny.

“While I appreciate that there may be some symbolic difference between giving the government granular data directly and making them sift through what they buy from data brokers, the end result is still that user data ends up in the hands of law enforcement, and this time without any legal process,” said David Greene, civil liberties director at EFF.

It’s the kind of ideological contradiction typical of X’s owner. Musk has managed to sell himself as a heterodox critic of U.S. foreign policy and big government while simultaneously enriching himself by selling the state expensive military hardware through his rocket company SpaceX.

“While X’s efforts to bring more transparency to the National Security Letter process are commendable, its objection to government surveillance of communications in that context is glaringly at odds with its decision to support similar surveillance measures through its partnership with Dataminr,” said Mary Pat Dwyer, director of Georgetown Law’s Institute for Technology Law and Policy. “Scholars and advocates have long argued the Dataminr partnership is squarely inconsistent with the platform’s policy forbidding use of its data for surveillance, and X’s continued failure to end the relationship prevents the company from credibly portraying itself as an advocate for its users’ privacy.”
