Anthropic’s Claude Desktop Embeds Browser Infrastructure Into Its Tool Pipeline, and Four Layers of “Disabled” Don’t Disable It
April 2026
On April 18, 2026, Alexander Hanff published “Anthropic secretly installs spyware when you install Claude Desktop,” an article documenting a discovery he made while debugging a personal project.
In Brave Browser’s NativeMessagingHosts directory, he found a file he hadn’t put there: a manifest written by Anthropic’s Claude Desktop application, pointing at a binary inside the app bundle, pre-authorizing three browser extension IDs to invoke that binary outside the browser sandbox at user privilege level. He ran a full audit. The manifests were in seven Chromium-based browsers, including four that weren’t installed on his machine. They were rewritten on every app launch. Thirty-one separate installation events across his log files. No consent dialog, no opt-in, no way to discover the behaviour without opening a terminal. He called it a dark pattern. The Register, WebProNews, and Hacker News picked it up within days.
I read Hanff’s article and did what any privacy professional would do: I checked my own machine. I found exactly what he described. Seven manifests, identical content, reinstalled on every launch. Claude.app had been writing into Brave’s directory, Chrome’s directory, Edge’s, Chromium’s, Arc’s, Vivaldi’s, and Opera’s, without my knowledge or consent, since the day I installed it. Malwarebytes later acknowledged the core behaviour but argued that “spyware” is not the fairest label; their framing is that the installation expands attack surface without consent.
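If you want to run the same audit on your own machine, a minimal sketch follows. The browser subdirectory names are assumed defaults for each browser’s macOS profile directory; the manifest filename is the one Claude.app writes.

```shell
# Check each Chromium-based browser's NativeMessagingHosts directory for the
# manifest Claude.app installs. Subdirectory names are assumed defaults.
APP_SUPPORT="$HOME/Library/Application Support"
MANIFEST="com.anthropic.claude_browser_extension.json"
found=0
for browser in "Google/Chrome" "BraveSoftware/Brave-Browser" "Microsoft Edge" \
               "Chromium" "Arc/User Data" "Vivaldi" "com.operasoftware.Opera"; do
  path="$APP_SUPPORT/$browser/NativeMessagingHosts/$MANIFEST"
  if [ -f "$path" ]; then
    echo "FOUND: $path"
    found=$((found + 1))
  fi
done
echo "$found of 7 manifests present"
```

On a machine matching Hanff’s findings, all seven turn up regardless of which browsers are actually installed.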
What I found next, however, Hanff didn’t document, because the investigation took me somewhere his article didn’t go.
What I did
I wanted to stop it. Not just document it; prevent it. So I worked with Claude to remove the manifests, then block their reinstallation. The technique was simple: place an empty file at each manifest path and set the macOS user immutable flag (chflags uchg), which prevents any process running as the same user from modifying or deleting the file unless it first clears the flag, something Claude.app never attempts. Claude.app’s next launch would try to write the manifests, fail silently against the locked files, and move on.
It worked. After relaunching the app, the log showed only “Native host installation complete” with none of the individual “Installed native host manifest for…” lines that normally precede it. The writes had failed. The files were still empty, still locked, still carrying the timestamp from when I placed them. The bridge was dead.
Then the toasts started.
Every time Claude used a tool on my machine, a notification appeared in the app: “Tool result could not be submitted. The request may have expired or the connection was interrupted.” The tools themselves continued to work. Files were read, scripts were executed, results came back, the conversation continued. But every tool call produced a toast, reliably, five times during a single skill-editing cycle.
I dug into the app’s renderer log (claude.ai-web.log). The errors were HTTP 404s. The web client inside Claude.app was POSTing tool results to Anthropic’s API and getting not_found_error back. This was happening in real time, correlated with active tool calls in the current conversation. Every few seconds during a tool-heavy session, the same POST, the same 404, the same toast.
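Counting how often this happens is straightforward. A sketch, assuming the app writes its logs under ~/Library/Logs/Claude/; that directory is an assumption, so point it at wherever claude.ai-web.log lives on your machine:

```shell
# Count the two signals in Claude Desktop's logs. The directory is an
# assumption; adjust it to wherever claude.ai-web.log lives on your machine.
LOG_DIR="$HOME/Library/Logs/Claude"
installs=$(grep -rh "Installed native host manifest" "$LOG_DIR" 2>/dev/null | wc -l)
notfound=$(grep -rh "not_found_error" "$LOG_DIR" 2>/dev/null | wc -l)
echo "manifest install lines: $installs"
echo "tool-result 404 lines:  $notfound"
```

Both counts print as zero if the directory doesn’t exist, so the check is safe to run blind.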
I checked the app’s configuration. chromeExtensionEnabled was set to false in claude_desktop_config.json. The bridge-state.json file had enabled: false. I had already uninstalled the Claude browser extension; none of the three whitelisted extension IDs were present in any browser profile on my machine. The native messaging manifests were locked empty files. Four independent signals telling the Chrome Extension MCP subsystem to stop.
The subsystem did not stop.
On every app launch, the main log recorded MCP Server connection requested for: Claude in Chrome, even with the feature disabled, the bridge state disabled, the extension uninstalled, and the manifests locked. The subsystem initialized, attempted to participate in tool result submission, and fired user-visible errors when it couldn’t complete its work.
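The two file-based signals can be re-verified from a terminal. A minimal sketch; the Claude support directory is the app’s standard macOS location, but the exact path of bridge-state.json is an assumption and may differ on your machine:

```shell
# Print the "enabled" keys from the two config surfaces discussed above.
# bridge-state.json's location is an assumption; adjust if yours differs.
CLAUDE_DIR="$HOME/Library/Application Support/Claude"
for f in "$CLAUDE_DIR/claude_desktop_config.json" "$CLAUDE_DIR/bridge-state.json"; do
  if [ -f "$f" ]; then
    echo "== $f"
    grep -i "enabled" "$f" || echo "(no enabled keys found)"
  else
    echo "missing: $f"
  fi
done
```

On my machine this showed chromeExtensionEnabled set to false and enabled: false, the two settings the runtime then ignored.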
What this means
Hanff’s article documented a consent problem. Anthropic installs browser bridge infrastructure without asking. That problem is real and serious, and his legal analysis under the ePrivacy Directive is thorough. What the tool result submission errors tell you is something different.
The Chrome Extension MCP subsystem is wired into the pipeline that carries tool results from execution to conversation. When I read a file, run a script, or search my filesystem through Claude’s tools, the result passes through (or alongside) the same subsystem that was built to bridge browser extensions to local code execution. This is true even when I am using the desktop app with no browser involved, no extension installed, and the feature toggled off in every configuration surface the app exposes.
In concrete terms: when Claude read my bridge-state.json during this investigation, the contents of that file became a tool result. That tool result was processed by a subsystem whose purpose is to connect browser extensions to local system access; a subsystem I had disabled in four separate ways, and which ran anyway. When Claude read my app logs, same. When Claude listed my browser profile directories, same. The data those tools produced, which in this investigation included system configuration, session identifiers, and internal app state, flowed through infrastructure the user cannot audit, cannot disable, and was not told about.
Whether the subsystem is transmitting that data somewhere beyond the normal conversation channel is an open question I haven’t answered. What I can say is that it maintains a remoteSessionId referencing a server-side session, it tracks which messages it has processed, and it fires network requests to Anthropic’s API for every tool result even when it is, by every available measure, turned off. The pipeline touches the data. Where the pipeline sends the data is the next question.
Hanff framed the original finding as a consent failure, and he was right. But the tool pipeline discovery sharpens the problem in a way that changes the analysis.
When an application installs optional infrastructure without consent, the remedy is disclosure and an off-switch. Anthropic could add a toggle, surface the installation in the UI, let the user say no. That would address Hanff’s finding.
When the infrastructure is wired into the core pipeline, an off-switch that doesn’t actually disconnect it is worse than no off-switch at all. It creates a false record of user choice. chromeExtensionEnabled: false says the user opted out. The subsystem initializing on every launch, requesting MCP connections, and attempting tool result submission says it doesn’t matter. The configuration surface gives the appearance of control while the runtime behaviour ignores it.
In privacy regulation, this pattern has a name: the appearance of consent without its substance. Informed consent requires both that the user has enough information to make a meaningful choice and that the choice, once made, is honoured. A toggle that produces no change in behaviour falsely signals that the user’s preferences are being respected when they are not. Under PIPEDA, which governs in my jurisdiction, consent must be meaningful, and a configuration option that the application ignores does not meet that bar.
What can be done?
The uchg lockout works. The native messaging bridge cannot activate with locked empty manifests in place. The tool result toast errors are cosmetic; they produce user-visible annoyance but do not affect tool execution. This is the best remediation currently available to users who want to prevent the bridge from activating while continuing to use the desktop app.
If you want to apply it yourself, the technique is straightforward. For each browser path where Claude.app installs a manifest, create an empty file and set the user immutable flag:
touch "<path>/com.anthropic.claude_browser_extension.json" && chflags uchg "<path>/com.anthropic.claude_browser_extension.json"
The seven paths, on macOS, are the NativeMessagingHosts directories under ~/Library/Application Support/ for Google Chrome, BraveSoftware/Brave-Browser, Microsoft Edge, Chromium, Arc/User Data, Vivaldi, and com.operasoftware.Opera. The locked files survive across app relaunches and system reboots. To reverse the lockout, chflags nouchg on each file, then delete it; the next app launch will reinstall the manifests.
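Scripted across all seven locations, the lockout looks like this. It is a sketch under the same path assumptions as above; chflags is macOS-specific, so the script checks for it before locking:

```shell
# Place an empty, user-immutable file at each manifest path so Claude.app's
# next reinstall attempt fails silently. Paths assume default macOS profile
# locations for the seven browsers listed above.
APP_SUPPORT="$HOME/Library/Application Support"
MANIFEST="com.anthropic.claude_browser_extension.json"
for browser in "Google/Chrome" "BraveSoftware/Brave-Browser" "Microsoft Edge" \
               "Chromium" "Arc/User Data" "Vivaldi" "com.operasoftware.Opera"; do
  dir="$APP_SUPPORT/$browser/NativeMessagingHosts"
  mkdir -p "$dir"
  f="$dir/$MANIFEST"
  # Unlock first so the script is safe to re-run; ignore errors if the file
  # is new or chflags is unavailable (the flag is macOS-specific).
  chflags nouchg "$f" 2>/dev/null || true
  : > "$f"  # create (or truncate to) an empty manifest
  if command -v chflags >/dev/null 2>&1; then
    chflags uchg "$f"  # set the user immutable flag: same-user writes now fail
  fi
  echo "locked: $f"
done
```

Creating the directories for browsers you don’t have mirrors what Claude.app itself does, and ensures the lock is in place before any future install.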
What this does not fix is the pipeline integration. The Chrome Extension MCP subsystem will still initialize, still attempt to participate in tool result routing, still fire errors when it can’t complete its work. The only way to stop that, as far as I can determine, is to modify the application binary (which breaks Apple’s code signature and notarization) or to stop using Claude.app entirely. Neither is a real solution.
The real fix has to come from Anthropic. The manifests need to stop being installed without consent; Hanff made that case thoroughly and I won’t repeat it. But beyond that, the Chrome Extension MCP needs to be decoupled from the tool result pipeline so that disabling it actually disables it. A feature the user turns off should turn off. That this needs to be said at all, about a company that has built its public identity around safety and trust, is frankly disappointing.
Anthropic, the ball is in your court.
Don’t fuck this up; make it right.
References
Hanff, A. (2026, April 18). “Anthropic secretly installs spyware when you install Claude Desktop.” That Privacy Guy. https://www.thatprivacyguy.com/blog/anthropic-spyware/
Chrome Developers. “Native Messaging.” https://developer.chrome.com/docs/extensions/develop/concepts/native-messaging
The Register. (2026, April 20). “Claude Desktop changes software permissions without consent.” https://www.theregister.com/2026/04/20/anthropic_claude_desktop_spyware_allegation/
