How the FBI can conduct mass surveillance – even without AI


The FBI says it can conduct mass surveillance without AI, despite Anthropic’s objections.

A central part of the standoff between Anthropic and the Department of Defense has revolved around the artificial intelligence firm’s refusal to allow its technology to be used for mass domestic surveillance. Yet even without the cooperation of AI firms, remarks this week from Kash Patel, the FBI director, show that authorities are, by any reasonable measure, already operating a system that can surveil citizens at scale.

On Wednesday, Patel confirmed at a Senate intelligence committee hearing that the FBI is actively buying commercially available data on Americans. Patel’s answer, given under oath, came in response to a question from Senator Ron Wyden about whether the agency was purchasing location data on citizens, as it had previously admitted to doing in 2023.

As the debate around how the US federal government uses AI has come to the forefront in recent months, it has also brought renewed attention to the vast capabilities authorities already possess for tracking and surveilling the public. Patel’s admission underscores how the government is able to conduct mass surveillance despite its assurances that it abides by laws governing the use of AI and by fourth amendment protections against unreasonable searches, which prohibit the warrantless collection of individuals’ location histories.

Federal law enforcement agencies generally must obtain a warrant to gather historical or real-time cellphone location data, which requires establishing probable cause in the eyes of a judge. While the supreme court ruled in 2018 that law enforcement could not compel companies to disclose information such as cellphone location records without a warrant, the court did not explicitly prohibit authorities from purchasing data that included that information and more. By contracting with a network of data brokers that amass information from apps, web browsers and other online sources, federal authorities have been able to access information they would otherwise need a warrant to obtain. Buying such information, usually en masse, circumvents that requirement, leading many privacy advocates to label the practice unconstitutional.

The data broker industry, worth hundreds of billions of dollars globally, is the lifeblood of modern marketing and targeted advertising. Information on consumers’ demographics, browsing habits, locations and other identifying details is a valuable commodity that has long carried the potential for misuse.

Privacy advocates, researchers and journalists have long documented how information from data brokers can be used to determine private details of citizens without their knowledge, including sensitive personal data such as health conditions and precise locations. In 2019, the New York Times used a large set of smartphone location data to demonstrate how easy it was to track and determine the identity of almost anyone using this ostensibly anonymized data – in one case identifying a senior defense department official and his wife based on their daily movements.

Fears that data brokers could be used to engineer mass surveillance have intensified in recent years as AI technology has made it easier to parse and cross-reference vast datasets. The expanded capabilities that AI provides are also being combined with efforts by government agencies, including the Department of Homeland Security and Elon Musk’s so-called “department of government efficiency”, to build a master dataset for uses that include targeting immigrants, Wired reported in April.

The use of this data has real-world consequences going back years. During ICE’s mass deportation efforts, 404 Media reported last year that the agency turned to surveillance systems that used commercially available data to monitor neighborhoods and track people to their homes or places of work based on their phone locations. In 2024, a company allegedly tracked nearly 600 visits to Planned Parenthood locations to provide the data for a massive anti-abortion ad campaign.

During Anthropic’s standoff with the Pentagon, the company’s chief executive, Dario Amodei, discussed in a blog post how data brokers contribute to the risk that AI could be used for mass surveillance, one of the focal points of the fight.

“Under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” Amodei wrote, adding: “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale.”

Amodei’s post also highlights how the Pentagon’s demand that AI companies allow “any lawful use” of their products is vague enough that it could include the mass surveillance of citizens. Through the data broker loophole, analyzing the detailed personal information of Americans would not violate any privacy or surveillance laws – a dynamic that Wyden described as “an outrageous end run around the fourth amendment”.

OpenAI, which signed a contract with the Department of Defense following Anthropic’s refusal to comply with Pentagon demands, initially left a grey area in the deal around its AI’s use of commercial data. Following backlash, the company added a caveat to the agreement that its AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals”.

“The Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information,” OpenAI stated in a post following the deal.

Yet some digital privacy experts have expressed skepticism that this addendum is strong enough to prevent AI from being used in mass surveillance operations, pointing to the words “intentionally” and “deliberate” in the language of the deal. In the past, the government has argued that its possession of personal information is an incidental byproduct of using such large datasets – a grey area that privacy advocates argue allows it to continue a years-long pattern of domestic surveillance operations.
