Right in the middle of the ongoing feud between the Silicon Valley AI company Anthropic and the U.S. Department of Defense, over whether the military will or will not use Anthropic’s large language models, is yet another company: Palantir.
Palantir, the Denver-based data analytics and artificial intelligence company, is a key software provider for the Department of Defense, and the main channel through which the department has been using Anthropic’s large language model, Claude.
“We are legitimately still in the middle of all this,” CEO Alex Karp said in an interview with Fortune on the sidelines of the company’s twice-a-year AIP conference on Thursday. “It’s our stack that runs the LLMs.”
Karp says he has been in numerous discussions with all parties involved, though he declined to give specifics, saying he doesn’t want to “out conversations” or “bash people.”
But Karp does want to make one thing clear: The Defense Department is not using AI for domestic mass surveillance of U.S. citizens, and, to his knowledge, it has no plans to.
“Without commenting on internal dialogues, there was never a sense that these products would be used domestically,” Karp said. “The Department of War is not planning to use these products domestically. That’s a completely different kettle of fish… The terms the Department of War wants are completely focused on non-American citizens in a war context.”
Palantir has a vast business with the U.S. government, including the DoD. Anthropic partnered with Palantir in 2024 to offer its AI technology to the DoD through Palantir’s platform. Anthropic also began working directly with the DoD last year to create a version of its technology tailored to the department.
The contentious back-and-forth between Anthropic and the Defense Department has been ongoing since around January, and the two sides don’t agree on what set it off. In statements last week, Undersecretary of Defense for Research and Engineering Emil Michael alleged that Palantir had notified the Pentagon that Anthropic was inquiring about whether its models had been used in the U.S. military mission to capture Venezuelan President Nicolás Maduro. (Anthropic has disputed this characterization, asserting it hasn’t discussed the use of Claude for specific operations “with any industry partners, including Palantir, outside of routine discussions on strictly technical matters.”) Ever since, the two sides have been locked in a fight over whether Anthropic can write contractual limits on how its models are used.
Anthropic CEO Dario Amodei has published multiple blog posts on the matter, including an initial statement at the end of February asserting that the Defense Department had refused to accept safeguards ensuring its LLMs would not be used for domestic mass surveillance or the deployment of fully autonomous weapons. Secretary of Defense Pete Hegseth later designated Anthropic a “supply-chain risk,” threatening many of the company’s commercial relationships and prompting Anthropic to sue the Pentagon over the designation.
‘Totally in favor’ of domestic terms of engagement
Palantir, which was funded by the CIA’s venture capital arm early on and whose software has been used in counter-terrorism efforts abroad, has long been accused of helping government and intelligence agencies spy on civilians and potential domestic suspects. Karp has repeatedly rebutted such claims for over a decade and has spoken about the importance of setting technical guardrails around technology that could be used in the U.S. for domestic surveillance. Palantir early on created a “Privacy and Civil Liberties” team—an interdisciplinary group of engineers, lawyers, philosophers, and social scientists—tasked with building privacy‑protective features into its products and fostering a culture of responsible use. The team helped set up internal channels, including an ethics hotline, for employees to flag work they viewed as crossing ethical lines.
Civil liberties groups, however, continue to accuse the company of doing just the opposite: helping the government surveil civilians. The company’s relationship with U.S. Immigration and Customs Enforcement, in particular, which began under the Obama Administration, has drawn intense scrutiny and criticism from both external critics and the company’s own employees, criticism that has only escalated over the past year as the Trump Administration has pushed ICE into an aggressive crackdown in cities like Minneapolis.
Karp told Fortune he is “very sympathetic with arguments against using these products inside the U.S.” and said that he is “totally in favor” of setting terms of engagement and limits to how domestic agencies can use artificial intelligence.
“Quite frankly, I think we should self-impose them,” Karp said of these terms of engagement. “The Valley should have a consortium: This is what we’re going to do, and this is what we’re not going to do,” he said.
But Karp drew a sharp distinction between whether tech companies should set terms with domestic agencies and whether they should set them with the Department of Defense, which is primarily focused on managing the United States’ relationships with other countries and its adversaries.
“What we’re talking about now is using products vis-a-vis someone who’s trying to kill our service members,” Karp said, noting that he personally supports “wide license” of usage for the Department of Defense specifically.
“If we knew China and Russia and Iran wouldn’t build them, I would be in favor of very heavy—very heavy—legal constraints,” Karp said. But he points out that American adversaries will build them and use them against the U.S. anyway. “I don’t think this is an opinion. I think this is a fact, and that fact means I think the Department of War should have wide license to use these products.”