<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Jonas Kamsker</title>
    <description>Personal blog and portfolio of Jonas Kamsker - .NET developer and open-source enthusiast based in Linz, Austria.</description>
    <link>https://blog.kamsker.at/</link>
    <atom:link href="https://blog.kamsker.at/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Fri, 20 Feb 2026 16:24:40 +0000</pubDate>
    <lastBuildDate>Fri, 20 Feb 2026 16:24:40 +0000</lastBuildDate>
    <generator>Jekyll v4.3.4</generator>

    
      <item>
        <title>Broken by Default: Claude Cowork on Windows</title>
        <description>&lt;h2 id=&quot;the-promise&quot;&gt;The Promise&lt;/h2&gt;

&lt;p&gt;I’ll be honest: the idea of Cowork excited me. An AI agent that lives on my desktop, manages files, automates tasks - the kind of thing that makes you feel like you’re living in the future. Point it at a workspace, give it a job, go make coffee.&lt;/p&gt;

&lt;p&gt;So I installed it on Windows. Clicked the button. Waited.&lt;/p&gt;

&lt;p&gt;And then Cowork looked me dead in the eyes and said: “The Claude API cannot be reached from Claude’s workspace.”&lt;/p&gt;

&lt;p&gt;Which is a weird thing to say when I’m &lt;em&gt;literally on the internet right now.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;two-error-messages-zero-clarity&quot;&gt;Two Error Messages, Zero Clarity&lt;/h2&gt;

&lt;p&gt;Cowork doesn’t fail with &lt;em&gt;one&lt;/em&gt; error. It fails with &lt;em&gt;two&lt;/em&gt;, and they look like completely different problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 1 (the misleading one):&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;“The Claude API cannot be reached from Claude’s workspace…”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Your first instinct: is Anthropic down? You check. There isn’t an outage. You can resolve &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api.anthropic.com&lt;/code&gt; from your terminal. Port 443 is open. Your host machine has internet. Everything is fine.&lt;/p&gt;

&lt;p&gt;Except nothing is fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 2 (the real one, hidden behind “longer loading”):&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;CLI output was not valid JSON … Output: sandbox-helper: host share not mounted at /mnt/.virtiofs-root/shared: not a mount point&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;There&lt;/em&gt; it is. Buried in the second error message like a confession at the bottom of a Terms of Service. The VM’s filesystem is broken, the CLI process outputs an error string instead of JSON, and Cowork’s parser falls over because it expected structured data and got a cry for help.&lt;/p&gt;

&lt;p&gt;Two symptoms. Three root causes. One very confused developer.&lt;/p&gt;

&lt;p&gt;And here’s the part that really bothers me: Anthropic is positioning Cowork squarely at non-technical users. Knowledge workers. The marketing says “Claude Code power for the rest of your work.” If &lt;em&gt;I&lt;/em&gt; - someone who debugs Hyper-V networking as a semi-regular hobby - spent hours in PowerShell diagnosing this, a non-dev user has exactly zero chance. They see “API unreachable,” they Google it, they find nothing useful, and they uninstall. That’s not a speed bump - that’s a product cliff with no guardrail.&lt;/p&gt;

&lt;h2 id=&quot;whats-actually-happening-under-the-hood&quot;&gt;What’s Actually Happening Under the Hood&lt;/h2&gt;

&lt;p&gt;Here’s the thing the “API unreachable” error doesn’t tell you: Cowork doesn’t run on your machine. Not really.&lt;/p&gt;

&lt;p&gt;Cowork runs inside a &lt;strong&gt;dedicated Linux VM&lt;/strong&gt; - a full virtual machine running on Hyper-V (Windows’ built-in hypervisor, the same technology that powers WSL2 and Docker Desktop). Under the hood, Cowork talks to Microsoft’s Host Compute Service (HCS) - a low-level API for creating and managing VMs that sits beneath friendlier tools like Hyper-V Manager. This means Cowork’s VM may &lt;em&gt;not&lt;/em&gt; show up as a normal “VM” in Hyper-V Manager - it’s registered at the platform level rather than as a classic Hyper-V VM.&lt;/p&gt;

&lt;p&gt;But here’s what tripped me up: &lt;strong&gt;Cowork does &lt;em&gt;not&lt;/em&gt; share WSL2’s VM.&lt;/strong&gt; It’s not a distro running inside the WSL2 lightweight utility VM. It’s not piggybacking on Docker’s backend either. It boots its own completely independent virtual machine, with its own kernel, its own root filesystem, and its own networking stack.&lt;/p&gt;

&lt;p&gt;Think of Hyper-V as an apartment building. WSL2 is one tenant. Docker Desktop is another. Cowork moves in as a third - separate apartment, separate lease, separate plumbing. They share the building’s foundation (the hypervisor), but nothing else. When Cowork’s plumbing breaks, WSL2 keeps running fine. The reverse is also true - except for &lt;a href=&quot;https://github.com/anthropics/claude-code/issues/26216&quot;&gt;one fun bug&lt;/a&gt; where Cowork’s virtual network &lt;em&gt;permanently breaks WSL2’s internet&lt;/em&gt; until you manually find and delete the offending network configuration using Windows’ HNS diagnostic tools. Good neighbors.&lt;/p&gt;

&lt;p&gt;The whole thing is managed by a dedicated Windows service called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CoworkVMService&lt;/code&gt; (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork-svc.exe&lt;/code&gt;). The VM bundle lives inside Claude Desktop’s app data - on my machine (Microsoft Store install) it was at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;%LOCALAPPDATA%\Packages\Claude_*\LocalCache\Roaming\Claude\vm_bundles\claudevm.bundle\&lt;/code&gt; (some non-Store installs use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;%APPDATA%\Claude\vm_bundles\claudevm.bundle\&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The bundle contains a ~9.4 GB Linux root filesystem (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rootfs.vhdx&lt;/code&gt; - VHDX is Hyper-V’s virtual hard disk format), a Linux kernel (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vmlinuz&lt;/code&gt;), an initial RAM disk for bootstrapping (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;initrd&lt;/code&gt;), and a persistent state disk (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt;) that stores the VM’s session data between restarts. That last file becomes very relevant in about three paragraphs.&lt;/p&gt;

&lt;p&gt;On macOS, Cowork uses Apple’s Virtualization Framework instead of Hyper-V - same concept, different hypervisor, and roughly a month more maturity since it launched there first.&lt;/p&gt;

&lt;p&gt;The VM connects to the outside world through a virtual network adapter (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vEthernet (cowork-vm-nat)&lt;/code&gt;) on its own private IP range (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;172.16.0.0/24&lt;/code&gt;). Two Windows services make this work: &lt;strong&gt;HNS&lt;/strong&gt; (Host Networking Service) orchestrates the virtual network - think of it as the VM’s network card and cabling. &lt;strong&gt;WinNAT&lt;/strong&gt; (Windows Network Address Translation) then provides the actual internet routing - it translates the VM’s private IP addresses into your host’s real ones so traffic can flow in and out. Without HNS, the VM has no network. Without WinNAT, the VM has a network that goes nowhere.&lt;/p&gt;
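
&lt;p&gt;You can sanity-check both layers from an admin PowerShell before touching anything. This is my own quick sketch, not an official diagnostic, and it assumes the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork-vm-nat&lt;/code&gt; name from above:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Two-layer sanity check (admin PowerShell) - my own sketch, not an official diagnostic.
# Layer 1: does the HNS virtual network exist?
Get-HnsNetwork | Where-Object { $_.Name -eq &apos;cowork-vm-nat&apos; }

# Layer 2: does a WinNAT rule actually route it anywhere?
Get-NetNat | Where-Object { $_.Name -eq &apos;cowork-vm-nat&apos; }

# If the first returns a network and the second returns nothing,
# the VM has a network that goes nowhere.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;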

&lt;p&gt;Host folders get shared into the VM via &lt;strong&gt;VirtioFS&lt;/strong&gt;, a high-performance file sharing protocol designed for virtual machines (similar to how Docker mounts host directories into containers). Your workspace folder appears inside the VM at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/mnt/.virtiofs-root/shared/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When either the networking or the filesystem sharing breaks, Cowork doesn’t degrade gracefully - it faceplants into error messages that point everywhere except the actual problem.&lt;/p&gt;

&lt;h2 id=&quot;the-diagnosis-three-layers-of-broken&quot;&gt;The Diagnosis: Three Layers of Broken&lt;/h2&gt;

&lt;p&gt;I pulled up an admin PowerShell and started poking. What I found was a layer cake of failure - each layer independently capable of killing Cowork, and all three broken simultaneously.&lt;/p&gt;

&lt;h3 id=&quot;layer-1-the-vms-network-adapter-had-no-dns&quot;&gt;Layer 1: The VM’s Network Adapter Had No DNS&lt;/h3&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;Get-DnsClientServerAddress&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-InterfaceAlias&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;vEthernet (cowork-vm-nat)&apos;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ServerAddresses : {}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Empty. The virtual network adapter that Cowork’s VM uses to resolve domain names had no DNS servers configured. The VM was living in a world where domain names were a theoretical concept.&lt;/p&gt;

&lt;p&gt;Meanwhile, the host was merrily resolving DNS on its own adapters, completely unaware that its VM tenant was sitting in the dark.&lt;/p&gt;

&lt;h3 id=&quot;layer-2-winnat-was-just-gone&quot;&gt;Layer 2: WinNAT Was Just… Gone&lt;/h3&gt;

&lt;p&gt;This is the one that really got me. The virtual network &lt;em&gt;existed&lt;/em&gt; - HNS (the service that manages Cowork’s virtual network, remember) showed it, subnet &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;172.16.0.0/24&lt;/code&gt;, gateway &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;172.16.0.1&lt;/code&gt;, all looking perfectly normal:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;Get-NetNat&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Output: nothing. Empty. The WinNAT object that’s supposed to provide outbound internet access for the VM’s subnet simply wasn’t there.&lt;/p&gt;

&lt;p&gt;The VM had an IP address. It had a gateway. It had a virtual switch. What it &lt;em&gt;didn’t&lt;/em&gt; have was a NAT rule to actually translate its traffic to the outside world. A house with a front door and a brick wall where the street should be.&lt;/p&gt;

&lt;p&gt;The Cowork VM logs confirmed it:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;API reachability: PROBABLY_UNREACHABLE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And later, having given up on optimism:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;API reachability: UNREACHABLE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This is a &lt;a href=&quot;https://github.com/anthropics/claude-code/issues/24945&quot;&gt;known issue&lt;/a&gt;, by the way. Multiple users have reported it. The installer creates the virtual network via HNS but doesn’t reliably create the corresponding WinNAT rule that gives that network internet access. And if you’re running VPN software, it gets worse - &lt;a href=&quot;https://github.com/anthropics/claude-code/issues/25513&quot;&gt;VPNs are fundamentally incompatible&lt;/a&gt; with Cowork’s NAT setup because VPN split-tunnel rules don’t apply to NAT’d VM traffic. The VPN doesn’t know Cowork’s VM exists. It can’t route for a tenant it’s never met.&lt;/p&gt;

&lt;h3 id=&quot;layer-3-the-vms-virtual-disk-was-corrupted&quot;&gt;Layer 3: The VM’s Virtual Disk Was Corrupted&lt;/h3&gt;

&lt;p&gt;Even if networking were perfect, Cowork still wouldn’t have started. Remember &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt;? The VM’s persistent state disk? It was in an inconsistent state, which meant the VirtioFS host share - the file sharing bridge between your Windows folders and the VM - failed to mount.&lt;/p&gt;

&lt;p&gt;The sandbox helper process tried to set up the environment, discovered the mount point was broken, and printed an error to stdout. The Cowork CLI, expecting a stream of JSON, got plaintext instead. Parser meets unexpected input. Parser loses.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;sandbox-helper: host share not mounted at /mnt/.virtiofs-root/shared: not a mount point
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That’s the line that produces the “CLI output was not valid JSON” error. Not a JSON problem. Not a CLI problem. A filesystem problem wearing a JSON mask.&lt;/p&gt;
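
&lt;p&gt;You can reproduce the parser’s half of this in miniature. The sketch below is emphatically &lt;em&gt;not&lt;/em&gt; Cowork’s actual parser - just the same collision between plaintext and a JSON reader:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Toy reproduction - NOT Cowork&apos;s real parser, just the same failure shape.
$cliOutput = &apos;sandbox-helper: host share not mounted at /mnt/.virtiofs-root/shared: not a mount point&apos;

try {
    $cliOutput | ConvertFrom-Json   # expects structured JSON, gets a cry for help
} catch {
    &quot;CLI output was not valid JSON: $($_.Exception.Message)&quot;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;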

&lt;h2 id=&quot;the-fix-three-commands-and-a-file-rename&quot;&gt;The Fix: Three Commands and a File Rename&lt;/h2&gt;

&lt;p&gt;Once you know what’s actually broken, the fix is almost anticlimactic.&lt;/p&gt;

&lt;h3 id=&quot;fix-1-give-the-vm-dns&quot;&gt;Fix 1: Give the VM DNS&lt;/h3&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$alias&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;vEthernet (cowork-vm-nat)&apos;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# In my case: my LAN resolver + Cloudflare as fallback&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Set-DnsClientServerAddress&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-InterfaceAlias&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$alias&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-ServerAddresses&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;10.0.0.45&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;1.1.1.1&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Clear-DnsClientCache&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Restart-Service&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;CoworkVMService&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Force&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can verify it took:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;Get-DnsClientServerAddress&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-InterfaceAlias&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;vEthernet (cowork-vm-nat)&apos;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# Before: {}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# After:  {10.0.0.45, 1.1.1.1}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you have specific DNS requirements (corporate resolvers, etc.), swap in whatever makes sense for your environment.&lt;/p&gt;
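
&lt;p&gt;To double-check that the resolvers you picked actually work and that the API endpoint is reachable at all, two stock cmdlets cover it - keeping in mind these run on the &lt;em&gt;host&lt;/em&gt;, so their success proves nothing about the VM’s view of the network:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Host-side checks only - these passing does NOT prove the VM can reach out.
Resolve-DnsName api.anthropic.com -Server 1.1.1.1
Test-NetConnection api.anthropic.com -Port 443
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;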

&lt;h3 id=&quot;fix-2-recreate-winnat&quot;&gt;Fix 2: Recreate WinNAT&lt;/h3&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Windows 11 Home note:&lt;/strong&gt; Some users report &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-NetNat&lt;/code&gt; / &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;New-NetNat&lt;/code&gt; aren’t available or fail on Home editions (missing NetNat/WMI components). If you’re on Home, you may be dealing with a different class of problem than “missing NAT rule” and may need a different workaround or a supported Windows edition.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the big one. Without this, the VM has no internet - period.&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;New-NetNat&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Name&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;cowork-vm-nat&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-InternalIPInterfaceAddressPrefix&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;172.16.0.0/24&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Name          : cowork-vm-nat
Active        : True
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Two lines. That’s it. The entire difference between “API unreachable” and a working Cowork instance is a single &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;New-NetNat&lt;/code&gt; call that Windows silently decided not to persist.&lt;/p&gt;

&lt;h3 id=&quot;fix-3-reset-the-vm-state&quot;&gt;Fix 3: Reset the VM State&lt;/h3&gt;

&lt;p&gt;The corrupted &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt; needs to go. But we’re cautious, so we rename instead of delete:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$svc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;CoworkVMService&apos;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;# Auto-detect bundle path (Store install uses a versioned package folder)&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bundle&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Get-Item&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;nn&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LOCALAPPDATA&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;\Packages\Claude_*\LocalCache\Roaming\Claude\vm_bundles\claudevm.bundle&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-ErrorAction&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;SilentlyContinue&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;FullName&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;if&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-not&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bundle&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bundle&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;nn&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;APPDATA&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;\Claude\vm_bundles\claudevm.bundle&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$session&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Join-Path&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bundle&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;sessiondata.vhdx&apos;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bak&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$session&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;.bak.&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Get-Date&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Format&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;yyyyMMdd-HHmmss&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Stop-Service&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$svc&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Force&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rename-Item&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Path&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$session&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-NewName&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Split-Path&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bak&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;-Leaf&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Start-Service&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$svc&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In my case, Cowork created a fresh &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt; on next start. The old one sits there timestamped, waiting for the forensic investigation you’ll never do. If your install never creates &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt; at all - or it’s still broken after this - you’re likely hitting a different setup bug and may need a full reinstall.&lt;/p&gt;

&lt;h3 id=&quot;after-the-fixes&quot;&gt;After the Fixes&lt;/h3&gt;

&lt;p&gt;Restart everything properly - and I do mean &lt;em&gt;properly&lt;/em&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Quit Claude Desktop from the system tray (not just closing the window - actually &lt;em&gt;Exit&lt;/em&gt;)&lt;/li&gt;
  &lt;li&gt;Restart &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CoworkVMService&lt;/code&gt; if it isn’t already running&lt;/li&gt;
  &lt;li&gt;Relaunch and point Cowork at a simple, local workspace path (not OneDrive, not a library - a plain &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C:\SomeFolder&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;If it still flakes, temporarily disable VPN/tunnel adapters and try again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That last point deserves emphasis. VPN software, network tunnels, and virtual adapters are the most common reason Windows “loses” NAT rules or DNS configurations for Hyper-V virtual switches. Remember: Cowork’s VM has its own networking stack, completely separate from WSL2 and Docker. When your VPN reconfigures routing tables, it doesn’t know or care that there’s a third VM that also needs internet.&lt;/p&gt;
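
&lt;p&gt;If you suspect a VPN is the culprit, this rough enumeration of the usual suspects helps - the match pattern is a heuristic, so add your vendor’s name if it isn’t caught:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# List virtual/tunnel adapters that commonly interfere with Hyper-V NAT and DNS.
# The pattern is a heuristic - extend it for your VPN vendor.
Get-NetAdapter |
    Where-Object { $_.InterfaceDescription -match &apos;VPN|TAP|Tunnel|WireGuard&apos; } |
    Select-Object Name, InterfaceDescription, Status
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;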

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Debugging Depth Meter: [████████░░] 8/10

  Layer 1: DNS empty on VM adapter - annoying but diagnosable
  Layer 2: WinNAT missing entirely - invisible unless you know to check
  Layer 3: VM disk corrupted - produces an error that looks like a JSON bug
  Bonus:   VPN adapters silently antagonizing all of the above
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;why-this-was-hard-to-find&quot;&gt;Why This Was Hard to Find&lt;/h2&gt;

&lt;p&gt;The error messages are the real villain. “API unreachable” sends you down the wrong path entirely - you start checking Anthropic’s status page, your firewall, your proxy settings. The second error about invalid JSON sounds like a Cowork bug. Neither says “your Windows NAT layer is missing and your virtual disk is corrupted.”&lt;/p&gt;

&lt;p&gt;The diagnostic path requires you to already know that Cowork runs its own dedicated Hyper-V VM (not inside WSL2, not Docker’s VM - its own independent instance via HCS), that this VM relies on WinNAT for internet access, and that VirtioFS mounts can go stale when &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessiondata.vhdx&lt;/code&gt; gets corrupted. That’s not something a normal user would ever figure out. It’s barely something a developer would figure out without falling down the right rabbit hole.&lt;/p&gt;

&lt;p&gt;And that’s the core tension. Cowork is marketed at the people &lt;em&gt;least&lt;/em&gt; equipped to debug it when it breaks. The fix is three PowerShell commands, but the path to discovering those three commands requires knowledge that Anthropic’s target audience definitionally does not have.&lt;/p&gt;

&lt;p&gt;The relevant logs live in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;%LOCALAPPDATA%\Packages\Claude_*\LocalCache\Roaming\Claude\logs\&lt;/code&gt; (Microsoft Store install) or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;%APPDATA%\Claude\logs\&lt;/code&gt; (non-Store install) - specifically &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork_vm_node.log&lt;/code&gt; for VM networking status and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;main.log&lt;/code&gt; for the CLI/sandbox errors. If you’re hitting anything like what I described, start there.&lt;/p&gt;

&lt;h2 id=&quot;faq-partially-helpful-fully-honest&quot;&gt;FAQ (Partially Helpful, Fully Honest)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why did WinNAT disappear?&lt;/strong&gt;
Best guess: a Windows Update, a VPN install, or a Hyper-V reconfiguration silently cleared it. Windows doesn’t warn you. It just lets NAT rules evaporate like a goldfish releasing a memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Will this happen again?&lt;/strong&gt;
Probably. WinNAT has the persistence of a New Year’s resolution. Check &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-NetNat&lt;/code&gt; after major updates.&lt;/p&gt;
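
&lt;p&gt;Since I fully expect to be back here, I keep a small recheck snippet around - my own convenience script, assuming the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;172.16.0.0/24&lt;/code&gt; subnet from this post hasn’t changed on your machine:&lt;/p&gt;

&lt;div class=&quot;language-powershell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Recreate the NAT rule if Windows dropped it again.
# Assumes the subnet from this post - verify yours via Get-HnsNetwork first.
if (-not (Get-NetNat -Name cowork-vm-nat -ErrorAction SilentlyContinue)) {
    New-NetNat -Name cowork-vm-nat -InternalIPInterfaceAddressPrefix 172.16.0.0/24
    Restart-Service CoworkVMService -Force
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;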

&lt;p&gt;&lt;strong&gt;Wait, so Cowork, WSL2, and Docker are all separate VMs?&lt;/strong&gt;
Not exactly. WSL2 runs a lightweight utility VM, and multiple WSL2 distros share that same underlying VM/kernel. Docker Desktop can run either inside WSL2 (as the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-desktop&lt;/code&gt; distro) or as a separate Hyper-V VM depending on configuration. Cowork appears to run its workloads in its own isolated VM and creates its own networking artifacts (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork-vm-nat&lt;/code&gt;). Three tenants, one building, zero coordination on plumbing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does Cowork break WSL2?&lt;/strong&gt;
It can. &lt;a href=&quot;https://github.com/anthropics/claude-code/issues/26216&quot;&gt;Issue #26216&lt;/a&gt; documents Cowork’s virtual network (managed by HNS) permanently breaking WSL2’s internet. The fix is deleting the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork-vm-nat&lt;/code&gt; network entry via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-HnsNetwork | Where-Object { $_.Name -eq &quot;cowork-vm-nat&quot; } | Remove-HnsNetwork&lt;/code&gt;, which you’ll need to redo every time Claude Desktop recreates it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is this a Cowork bug or a Windows bug?&lt;/strong&gt;
Both. The error messages are Cowork’s fault - surfacing “WinNAT missing” instead of “API unreachable” would save hours. But Windows silently dropping NAT configurations isn’t Cowork’s doing. It’s Cowork trusting Windows to hold its drink. Windows dropped it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can a non-developer fix this?&lt;/strong&gt;
No. And that’s the problem worth talking about.&lt;/p&gt;

&lt;h2 id=&quot;the-takeaway&quot;&gt;The Takeaway&lt;/h2&gt;

&lt;p&gt;Three layers of broken. Three commands to fix. Two hours to figure out which three.&lt;/p&gt;

&lt;p&gt;The root cause, stripped of narrative: Cowork’s Linux VM - a dedicated Hyper-V instance, separate from WSL2 and Docker - lost outbound internet because Windows dropped the NAT rule, lost DNS because the virtual adapter was misconfigured, and couldn’t mount host folders because the virtual disk was corrupted. Each failure produced a different symptom, none of which pointed at the actual problem.&lt;/p&gt;

&lt;p&gt;If you’re on Windows and Cowork won’t start, run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-NetNat&lt;/code&gt; and check whether &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-HnsNetwork | Where-Object { $_.Name -eq &quot;cowork-vm-nat&quot; }&lt;/code&gt; returns anything. An empty &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get-NetNat&lt;/code&gt; combined with an existing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cowork-vm-nat&lt;/code&gt; HNS network is a strong signal you’re in the “missing NAT rule” failure mode. Everything else is cleanup.&lt;/p&gt;

&lt;p&gt;And yes, I debugged a Linux VM networking issue by writing PowerShell. The year is 2026 and nothing makes sense.&lt;/p&gt;
</description>
        <pubDate>Thu, 19 Feb 2026 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/cowork-windows-broken/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/cowork-windows-broken/</guid>
      </item>
    
      <item>
        <title>Forgejo&apos;s CLI Can&apos;t Show Build Details? Fine. I&apos;ll Do It Myself.</title>
        <description>&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Forgejo Actions has no usable API or CLI surface for runs, logs, or artifacts. I built &lt;a href=&quot;https://github.com/JKamsker/forgejo-cli-ex&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt;&lt;/a&gt;, a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj&lt;/code&gt;-style companion that scrapes the web UI’s embedded JSON so humans &lt;em&gt;and&lt;/em&gt; AI agents can manage CI from the terminal. Yes, it’s scraping. No, I don’t feel bad about it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;a-confession-and-a-problem&quot;&gt;A Confession and a Problem&lt;/h2&gt;

&lt;p&gt;I’ll be honest: I’m not great at CI/CD. I don’t enjoy writing pipeline configs, I don’t enjoy debugging them in production, emotionally, and I &lt;em&gt;especially&lt;/em&gt; don’t enjoy playing “spot the difference” between yesterday’s green run and today’s red run - where the only change is that the build system woke up and chose violence.&lt;/p&gt;

&lt;p&gt;If a build breaks at 09:03, I want it fixed by 09:04. Not a 40-minute archaeological dig through logs that read like a toaster having a panic attack.&lt;/p&gt;

&lt;p&gt;So I do what any (un)reasonable developer in 2026 does - I let AI agents handle it. They write the workflows, they iterate on failures, they fix the weird YAML indentation issues. I review the results. It’s a great arrangement.&lt;/p&gt;

&lt;p&gt;This works beautifully on GitHub, because &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; - GitHub’s CLI - covers &lt;em&gt;everything&lt;/em&gt;. Runs, logs, artifacts, cancellations, reruns. An AI agent with access to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; can see what’s happening, read the logs, download artifacts, retry failures, all without ever touching a browser. It has hands and feet. It can walk around and get things done.&lt;/p&gt;

&lt;p&gt;Then my company started migrating to self-hosted platforms. Forgejo, specifically - open source, lightweight, GitHub-compatible Actions. Great choice for a lot of reasons.&lt;/p&gt;

&lt;p&gt;One problem: the moment we moved, my AI agents lost their legs.&lt;/p&gt;

&lt;h2 id=&quot;the-gap&quot;&gt;The Gap&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://codeberg.org/forgejo-contrib/forgejo-cli/&quot;&gt;Forgejo CLI (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj&lt;/code&gt;)&lt;/a&gt; is solid. Repos, issues, PRs, releases - all there, all from the terminal. It’s the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; equivalent, and for the things it covers, it covers them well.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;Forgejo Actions&lt;/strong&gt;? Nothing. No run listing. No log downloads. No artifacts. No cancel. No rerun. These features exist exclusively behind the web UI.&lt;/p&gt;

&lt;p&gt;To put it another way:&lt;/p&gt;

&lt;p&gt;What &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; lets an agent do: list runs, read logs, download artifacts, cancel jobs, rerun failures - never open a browser.&lt;/p&gt;

&lt;p&gt;What &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj&lt;/code&gt; lets an agent do: …have you tried clicking?&lt;/p&gt;

&lt;p&gt;For a human, that’s annoying. You alt-tab, you click around, you lie to yourself that this is fine.&lt;/p&gt;

&lt;p&gt;For an AI agent? It’s a brick wall. Agents don’t have browsers. They have terminals and CLI tools. If there’s no command for it, it doesn’t exist. My agents went from autonomously managing the full CI/CD lifecycle on GitHub to being completely helpless the moment a build failed on Forgejo. They were a brilliant brain in a jar - with no network adapters.&lt;/p&gt;

&lt;p&gt;I was back to manually debugging pipelines. The one thing I was specifically trying to &lt;em&gt;not do&lt;/em&gt;.&lt;/p&gt;

&lt;h2 id=&quot;what-i-needed&quot;&gt;What I Needed&lt;/h2&gt;

&lt;p&gt;The dream was simple - give my agents (and myself, when I’m feeling brave) the same experience &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; provides:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# What&apos;s happening?&lt;/span&gt;
fj-ex actions runs &lt;span class=&quot;nt&quot;&gt;--limit&lt;/span&gt; 20

&lt;span class=&quot;c&quot;&gt;# Let me read the tea leaves (whole run)&lt;/span&gt;
fj-ex actions logs run &lt;span class=&quot;nt&quot;&gt;--run-index&lt;/span&gt; 42

&lt;span class=&quot;c&quot;&gt;# ...or a specific job inside that run&lt;/span&gt;
fj-ex actions logs job &lt;span class=&quot;nt&quot;&gt;--run-index&lt;/span&gt; 42 &lt;span class=&quot;nt&quot;&gt;--job-index&lt;/span&gt; 0

&lt;span class=&quot;c&quot;&gt;# Grab the goods&lt;/span&gt;
fj-ex actions artifacts get &lt;span class=&quot;nt&quot;&gt;--run-index&lt;/span&gt; 42 &lt;span class=&quot;nt&quot;&gt;--artifact&lt;/span&gt; build-output &lt;span class=&quot;nt&quot;&gt;--out-file&lt;/span&gt; build-output.zip

&lt;span class=&quot;c&quot;&gt;# Nope, kill it&lt;/span&gt;
fj-ex actions cancel &lt;span class=&quot;nt&quot;&gt;--run-index&lt;/span&gt; 42

&lt;span class=&quot;c&quot;&gt;# Hope springs eternal&lt;/span&gt;
fj-ex actions rerun &lt;span class=&quot;nt&quot;&gt;--run-index&lt;/span&gt; 42
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Terminal. One line. Done. Something an agent can call, parse the output of, and reason about.&lt;/p&gt;

&lt;p&gt;And it had to feel like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj&lt;/code&gt;. Same &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--host/-H&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--repo/-r&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--remote/-R&lt;/code&gt; flags. Same git-remote inference. Same subcommand style. Not a fork, not a replacement - a companion. Hence the name: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt;, Forgejo CLI &lt;em&gt;Extension&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I didn’t need to build a new brain. I needed to build knees.&lt;/p&gt;

&lt;h2 id=&quot;how-i-got-there&quot;&gt;How I Got There&lt;/h2&gt;

&lt;p&gt;It started as a few PowerShell scripts. Just enough to stop the bleeding while we migrated repos. And while hacking those together, I found the detail that made this whole project viable.&lt;/p&gt;

&lt;p&gt;Forgejo’s frontend developers - bless their hearts - left the keys in the ignition.&lt;/p&gt;

&lt;p&gt;Their web UI embeds structured JSON directly in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;data-*&lt;/code&gt; HTML attributes. On a run page, the initial UI state (and sometimes artifact metadata) is right there in the markup (HTML-escaped, but still JSON once you decode entities):&lt;/p&gt;

&lt;div class=&quot;language-html highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;div&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;data-initial-post-response=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;{&amp;amp;quot;state&amp;amp;quot;:{&amp;amp;quot;run&amp;amp;quot;:{&amp;amp;quot;jobs&amp;amp;quot;:[{&amp;amp;quot;id&amp;amp;quot;:123,&amp;amp;quot;name&amp;amp;quot;:&amp;amp;quot;build&amp;amp;quot;,&amp;amp;quot;status&amp;amp;quot;:&amp;amp;quot;failure&amp;amp;quot;}]}}}&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;data-initial-artifacts-response=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;[{&amp;amp;quot;id&amp;amp;quot;:15,&amp;amp;quot;name&amp;amp;quot;:&amp;amp;quot;build-output&amp;amp;quot;,&amp;amp;quot;size&amp;amp;quot;:204800}]&quot;&lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;&amp;gt;&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;No headless browser needed. No DOM spelunking. Just: fetch the page, yank the attribute(s), decode HTML entities, parse the JSON, pretend this was an API all along.&lt;/p&gt;

&lt;p&gt;That’s not an API… but it &lt;em&gt;is&lt;/em&gt; data.&lt;/p&gt;
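&lt;p&gt;The whole trick fits in a few lines. Here’s a rough Python sketch of the fetch-and-decode idea - not &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt;’s actual code, and the attribute name is taken from the markup above, so it may differ between Forgejo versions:&lt;/p&gt;

```python
import html
import json
import re

def extract_embedded_json(page_html, attr="data-initial-post-response"):
    """Pull entity-escaped JSON out of a data-* attribute on a run page."""
    # Grab the raw attribute value (still HTML-entity-escaped at this point).
    match = re.search(attr + '="([^"]*)"', page_html)
    if match is None:
        raise ValueError("attribute not found: " + attr)
    # Decode entities (&quot; and friends), then parse the JSON payload.
    return json.loads(html.unescape(match.group(1)))
```

&lt;p&gt;Feed it the HTML of a run page and you get back the same structure the web UI renders from.&lt;/p&gt;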

&lt;p&gt;The PowerShell scripts worked until I wanted one more feature and realized I was fighting PowerShell harder than the actual problem. My developer ego demanded type safety for what is essentially a glorified &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; script. So I rewrote it in Rust - not because it was the right choice, but because the only two modes I have are “quick hack” and “mass rewrite in a systems language.”&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt; essentially pretends to be a browser. Same HTTP requests, same cookie-based auth, same CSRF token dance for mutations like cancel and rerun. Since there’s no API token for any of this, it logs in the human way - username, password, stash the session cookies. Is storing credentials ideal? No. Is there an alternative when you’re authenticating against a login form that doesn’t support tokens? Also no. The README doesn’t hide this.&lt;/p&gt;
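&lt;p&gt;For the curious, the login dance looks roughly like this in Python - a sketch of the approach, not &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt;’s Rust internals. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/user/login&lt;/code&gt; path and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_csrf&lt;/code&gt;/&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;user_name&lt;/code&gt; field names are assumptions based on Gitea-family login forms:&lt;/p&gt;

```python
import http.cookiejar
import re
import urllib.parse
import urllib.request

def extract_csrf_token(login_page_html):
    """Pull the CSRF token out of the login form's hidden input.

    Assumption: a hidden input named _csrf, as in Gitea-family UIs.
    """
    match = re.search(r'name="_csrf"\s+value="([^"]*)"', login_page_html)
    if match is None:
        raise ValueError("no CSRF token on login page")
    return match.group(1)

def login(base_url, username, password):
    """Log in the way a browser does and keep the session cookies."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    # GET the login page first: it sets cookies and carries the CSRF token.
    page = opener.open(base_url + "/user/login").read().decode("utf-8")
    form = urllib.parse.urlencode({
        "_csrf": extract_csrf_token(page),       # assumed field name
        "user_name": username,                   # assumed field name
        "password": password,
    }).encode("utf-8")
    opener.open(base_url + "/user/login", data=form)
    return opener  # reuse for every subsequent scraping request
```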

&lt;h2 id=&quot;what-it-actually-feels-like&quot;&gt;What It Actually Feels Like&lt;/h2&gt;

&lt;p&gt;Here’s why this matters beyond the technical trick: an AI agent with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt; installed can now do the full loop.&lt;/p&gt;

&lt;p&gt;Build fails → agent runs &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex actions runs&lt;/code&gt; to see what happened → reads the logs with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex actions logs run --run-index &amp;lt;n&amp;gt;&lt;/code&gt; (or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;... logs job --run-index &amp;lt;n&amp;gt; --job-index &amp;lt;n&amp;gt;&lt;/code&gt;) → figures out the issue → pushes a fix → monitors the rerun. All autonomously. All in the terminal. No human required to go click around in a web UI on the agent’s behalf.&lt;/p&gt;

&lt;p&gt;Forgejo handed my agents a beautiful map of the world. I just gave them back their shoes.&lt;/p&gt;

&lt;p&gt;And for the times I &lt;em&gt;do&lt;/em&gt; look at CI/CD myself, it’s just… nicer. Logs are text in my terminal. I can pipe them into &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;grep&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rg&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;less&lt;/code&gt;, whatever. Artifacts download to my current directory. Switching repos is a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-r&lt;/code&gt; flag away. The whole thing gets out of the way and lets me get back to the part of my job I actually enjoy.&lt;/p&gt;

&lt;h2 id=&quot;is-this-cursed-a-little&quot;&gt;Is This Cursed? A Little.&lt;/h2&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Cursedness Meter: [███████░░░] 7/10

  ✅ Works today
  ✅ No headless browser required
  ✅ Structured JSON, not regex-over-HTML
  ❌ Scraping behind auth
  ❌ Stored session cookies
  ❌ Will break if Forgejo redesigns
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On a cursedness scale from “regex to parse HTML” to “running production on SQLite,” this lands at “screen-scraping behind auth with stored credentials.” So, you know, Tuesday.&lt;/p&gt;

&lt;p&gt;But the alternative was either contributing the missing API endpoints upstream - a much larger undertaking, and one I’d still welcome - or telling my agents “sorry, you’re on your own” every time a Forgejo build failed. I chose the pragmatic option.&lt;/p&gt;

&lt;p&gt;And if Forgejo &lt;em&gt;does&lt;/em&gt; add proper API support for Actions someday? Great. The commands stay the same, only the plumbing changes. Migrating away from scraping would be the happiest refactor I’ve ever done.&lt;/p&gt;

&lt;h2 id=&quot;faq-half-useful-half-honest&quot;&gt;FAQ (Half Useful, Half Honest)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Does this break if Forgejo changes the UI?&lt;/strong&gt;
Yes. That’s the pact. I scrape, they ship, I pray.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is storing session cookies ideal?&lt;/strong&gt;
No. But neither is opening a browser in 2026 to check if a build passed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Rust?&lt;/strong&gt;
Because the alternative was maintaining PowerShell, and I’ve suffered enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Will you replace scraping with a real API later?&lt;/strong&gt;
The day Forgejo exposes Actions in the API, I will refactor with mass joy and mass &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cargo rm&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;go-get-it&quot;&gt;Go Get It&lt;/h2&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fj-ex&lt;/code&gt; is &lt;a href=&quot;https://crates.io/crates/forgejo-cli-ex&quot;&gt;on crates.io&lt;/a&gt;, &lt;a href=&quot;https://github.com/JKamsker/forgejo-cli-ex&quot;&gt;on GitHub&lt;/a&gt;, and (of course) &lt;a href=&quot;https://codeberg.org/JKamsker/forgejo-cli-ex&quot;&gt;on Codeberg&lt;/a&gt; with pre-built binaries for Linux, Windows, and macOS.&lt;/p&gt;

&lt;p&gt;My agents have their legs back. My Forgejo tabs are closed. And I’m back to doing what I do best - reviewing the work someone else did and mass-approving it while pretending I understood every change.&lt;/p&gt;

&lt;p&gt;And yes, I wrote Rust to avoid clicking a website.&lt;/p&gt;
</description>
        <pubDate>Wed, 18 Feb 2026 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/how-fj-ex-was-built/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/how-fj-ex-was-built/</guid>
      </item>
    
      <item>
        <title>The Site Is on Fire. Here&apos;s Your FTP Password. Good Luck.</title>
        <description>&lt;h2 id=&quot;a-confession&quot;&gt;A Confession&lt;/h2&gt;

&lt;p&gt;I’ll be honest: I have a thing for dirty code.&lt;/p&gt;

&lt;p&gt;Not &lt;em&gt;my&lt;/em&gt; dirty code - I write pristine, well-architected systems that definitely don’t have &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;// TODO: fix this later&lt;/code&gt; comments from 2019. No, I mean &lt;em&gt;other people’s&lt;/em&gt; dirty code. Legacy systems. The kind of codebase where the README says “just works” and the reality says “just barely.” Hand me FTP credentials to a decade-old PHP site that’s held together by session variables and sheer institutional denial, and something in my brain lights up like a Christmas tree.&lt;/p&gt;

&lt;p&gt;Most developers see a legacy dumpster fire and feel dread. I feel the same thing a cave diver feels staring into a dark hole in the ground: this is going to be awful, I might not come back the same person, and I absolutely cannot wait to go in.&lt;/p&gt;

&lt;p&gt;So when a client’s legacy SilverStripe site landed on my desk a while back - “not working,” they said, which undersells it the way “the Titanic had a minor hull issue” undersells maritime history - I didn’t groan. I cracked my knuckles.&lt;/p&gt;

&lt;p&gt;The admin filters were decorative - you could select them, the UI would update, and nothing would change. The statistics page took minutes to render. And the CSV export button? That one didn’t produce a slow page. It produced a dead server. The people who actually needed this data couldn’t get it; the site was a locked filing cabinet that electrocuted you when you touched the handle.&lt;/p&gt;

&lt;p&gt;The existing workaround was… creative. Since the export was broken, someone had set up an AI agent to read every outgoing invoice PDF - each one was mirrored to a dedicated email address - and scrape the numbers out of it to reconstruct the data the admin panel couldn’t provide. An AI reading every invoice to reverse-engineer the database. In production.&lt;/p&gt;

&lt;p&gt;I’ll spare you my exact reaction. What I &lt;em&gt;said&lt;/em&gt; was “I’ll take a look at the site.”&lt;/p&gt;

&lt;p&gt;The ask was straightforward: build a bot that extracts the data the admin panel can’t export, and send the reports to the client. That’s it. No fixing the site. No debugging. Just get the data out.&lt;/p&gt;

&lt;p&gt;All I had to work with was admin login credentials and the live web UI. No server access. No FTP. No SSH. No deployment pipeline. Just a username, a password, and a website that killed itself the moment you asked it to do anything useful.&lt;/p&gt;

&lt;p&gt;The plan was simple: build the bot, get the data flowing, hand it over, walk away.&lt;/p&gt;

&lt;p&gt;You can probably guess how that went.&lt;/p&gt;

&lt;h2 id=&quot;build-the-bot-the-part-they-paid-me-for&quot;&gt;Build the Bot (The Part They Paid Me For)&lt;/h2&gt;

&lt;p&gt;First order of business: the client needed order exports &lt;em&gt;today&lt;/em&gt;, and the export button was one of the things producing 500s. So I needed a workaround that bypassed the broken UI entirely.&lt;/p&gt;

&lt;p&gt;I wrote a Python exporter that logs into the admin panel and triggers the same SilverStripe GridField export actions a human would click, except it does it over HTTP, with structured logging, and without crashing. Added redaction to the HTTP logs so cookies and CSRF tokens wouldn’t end up in version control, because I’ve read enough post-mortems to know that’s how you get a &lt;em&gt;second&lt;/em&gt; incident.&lt;/p&gt;

&lt;p&gt;Even while the building was on fire, I refactored the exporter into a clean &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;src/&lt;/code&gt; layout with shared modules and ran &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;python -m compileall&lt;/code&gt; as a sanity check. Professionalism is a disease - you can’t turn it off even when you should.&lt;/p&gt;

&lt;p&gt;By mid-afternoon, non-technical team members could trigger exports through a chat slash command and get the CSV back in their messaging client. No admin panel required. No 500s. Just data. The PDF-scraping workaround? Quietly retired.&lt;/p&gt;

&lt;p&gt;I also wrapped the exporter in a serverless function and added CI that automatically exports the past month’s data on every push and uploads it as an artifact.&lt;/p&gt;

&lt;p&gt;Job done. Bot built. Data flowing. Walk away.&lt;/p&gt;

&lt;p&gt;I did not walk away.&lt;/p&gt;

&lt;h2 id=&quot;the-brain-goblin&quot;&gt;The Brain Goblin&lt;/h2&gt;

&lt;p&gt;The bot worked. The reports were landing. The client was happy. The rational thing was to close the laptop and move on.&lt;/p&gt;

&lt;p&gt;But the site was still broken. The admin panel was still 500ing. The filters were still lying. And somewhere in the back of my skull, a small, irresponsible voice was saying: &lt;em&gt;you could fix this. You know you could fix this. It would be so satisfying to fix this.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I call this voice the brain goblin. It’s the part of my brain that sees a dumpster fire and doesn’t think “someone should deal with that” - it thinks “I should deal with that, right now, tonight.”&lt;/p&gt;

&lt;p&gt;There’s a catch, though: I’m not a PHP developer. I &lt;em&gt;was&lt;/em&gt;, once, in the dark ages - back when &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mysql_real_escape_string&lt;/code&gt; was considered security and deploying meant dragging files into FileZilla. But that was a long time ago. I couldn’t fix this myself.&lt;/p&gt;

&lt;p&gt;But I could build the tools to let an AI agent fix it &lt;em&gt;for&lt;/em&gt; me.&lt;/p&gt;

&lt;p&gt;The agent is a better PHP developer than I am. It can read SilverStripe framework code, trace ORM call chains, spot N+1 patterns, and suggest fixes in a language I haven’t seriously written in over a decade. What it &lt;em&gt;can’t&lt;/em&gt; do is see what’s happening on a remote server. It can’t read logs that live behind FTP. It can’t query a database it doesn’t have credentials for. It can’t upload a patched file.&lt;/p&gt;

&lt;p&gt;The agent was a brain in a jar. My job was to build the jar some legs.&lt;/p&gt;

&lt;p&gt;I got FTP access. And from there, the plan took shape: build CLI tools that let the agent fetch evidence over FTP, tail logs, query the database, and upload surgical fixes one file at a time. Most of the tooling was done by midnight on day one. I have the commit history to prove it, and the commit messages to prove I was losing it.&lt;/p&gt;

&lt;h2 id=&quot;build-the-operating-room&quot;&gt;Build the Operating Room&lt;/h2&gt;

&lt;p&gt;The constraints were, as they say, &lt;em&gt;chef’s kiss&lt;/em&gt;. FTP/FTPS access. No SSH. No deployment pipeline. Changes had to be uploaded file-by-file, like it was 2003 and we were all pretending this was fine. Some requests died before SilverStripe could even log them - Apache would just emit “End of script output before headers,” the server equivalent of a shrug, and move on.&lt;/p&gt;

&lt;p&gt;But the FTP folders had secrets. MySQL credentials in a config file - that got me database access. FTP alone was painfully slow for exploratory debugging, though. I needed something closer to shell access. So I did what any principled engineer would do: I uploaded a temporary, purpose-built web shell, used it to gather intel on the server environment, and removed it when I was done. Resume material? No. But the only door they gave me was FTP.&lt;/p&gt;

&lt;p&gt;So I built a portable CLI toolkit, piece by piece:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FTP log tailing.&lt;/strong&gt; A script that connects over FTPS, downloads the latest log entries, and formats them locally. Because &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tail -f&lt;/code&gt; is great when you have SSH. When you have FTP, you improvise.&lt;/p&gt;
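&lt;p&gt;The improvisation looks roughly like this - a Python sketch using the standard &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ftplib&lt;/code&gt;, not the actual script. FTP’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;REST&lt;/code&gt; command lets a transfer start mid-file, so you only ever download the tail:&lt;/p&gt;

```python
from ftplib import FTP_TLS

def tail_offset(size, tail_bytes):
    # Start this many bytes before the end, clamped at the file start.
    return max(0, size - tail_bytes)

def tail_remote_log(host, user, password, remote_path, tail_bytes=65536):
    """Download only the last chunk of a remote log over FTPS."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()            # encrypt the data channel too
    ftps.voidcmd("TYPE I")   # SIZE needs binary mode on many servers
    offset = tail_offset(ftps.size(remote_path), tail_bytes)
    chunks = []
    # rest=offset makes the server start the transfer at that byte.
    ftps.retrbinary("RETR " + remote_path, chunks.append, rest=offset)
    ftps.quit()
    return b"".join(chunks).decode("utf-8", errors="replace")
```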

&lt;p&gt;&lt;strong&gt;Read-only database CLI.&lt;/strong&gt; A command-line tool that talks to the remote database, with guardrails - only &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SELECT&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SHOW&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;DESCRIBE&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXPLAIN&lt;/code&gt; are allowed, and it rejects multi-statement queries. I needed the agent to inspect data, not accidentally &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;DROP TABLE&lt;/code&gt; a client’s order history because of a fat-fingered semicolon. Yes, I wrote unit tests for the query guardrails. During the production firefight. Like I said: disease.&lt;/p&gt;
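&lt;p&gt;The guardrail is simple enough to sketch. A rough Python version of the fail-closed shape - the real tool’s code differs, and the parsing here is deliberately naive:&lt;/p&gt;

```python
ALLOWED = ("select", "show", "describe", "explain")

def guard_query(sql):
    """Reject anything that is not a single read-only statement."""
    stripped = sql.strip().rstrip(";").strip()
    if not stripped:
        raise ValueError("empty query")
    # Any semicolon left after trimming the trailing one means
    # multiple statements. Fail closed.
    if ";" in stripped:
        raise ValueError("multi-statement queries are not allowed")
    first_word = stripped.split(None, 1)[0].lower()
    if first_word not in ALLOWED:
        raise ValueError("only read-only statements are allowed")
    return stripped
```

&lt;p&gt;Naive on purpose: a semicolon inside a string literal also gets rejected, and that’s the right failure direction for a tool pointed at production.&lt;/p&gt;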

&lt;p&gt;&lt;strong&gt;FTP put for file-by-file deploys.&lt;/strong&gt; Upload exactly one changed file and nothing else. Verify by re-downloading the remote file and comparing SHA256 hashes - or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git diff --no-index&lt;/code&gt; for the paranoid (me). This became the deployment “pipeline.” It’s the saddest CI/CD you’ve ever seen, and it worked perfectly.&lt;/p&gt;
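&lt;p&gt;In Python terms, the entire “pipeline” is roughly this - a sketch with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ftplib&lt;/code&gt;, not the script itself:&lt;/p&gt;

```python
import hashlib
import io
from ftplib import FTP_TLS

def same_sha256(a_bytes, b_bytes):
    # The verification step: both copies must hash identically.
    return hashlib.sha256(a_bytes).hexdigest() == hashlib.sha256(b_bytes).hexdigest()

def put_and_verify(host, user, password, local_path, remote_path):
    """Upload exactly one file over FTPS, then prove the upload worked."""
    with open(local_path, "rb") as f:
        local_bytes = f.read()
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()
    ftps.storbinary("STOR " + remote_path, io.BytesIO(local_bytes))
    # Trust nothing: pull the file straight back down and compare.
    buf = io.BytesIO()
    ftps.retrbinary("RETR " + remote_path, buf.write)
    ftps.quit()
    if not same_sha256(local_bytes, buf.getvalue()):
        raise RuntimeError("uploaded file does not match local copy")
```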

&lt;p&gt;The repo stopped being “just an exporter” and became a portable operations toolbox - the agent’s nervous system for a site we could only reach by FTP.&lt;/p&gt;

&lt;h2 id=&quot;bring-the-site-into-version-control-so-you-can-think-about-it&quot;&gt;Bring the Site Into Version Control (So You Can Think About It)&lt;/h2&gt;

&lt;p&gt;You can’t reason about code that exists only on a remote server you access via FTP. So I mirrored the entire webroot locally and committed it. A download script using eight parallel FTP connections pulled over a thousand source files - PHP, JS, CSS, templates, configs - with zero failures. For the first time in this site’s life, it was in version control.&lt;/p&gt;
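&lt;p&gt;The shape of that download script, sketched in Python - the extension list here is hypothetical, and since &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ftplib&lt;/code&gt; connections can’t be shared across threads, each worker opens its own:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP_TLS

# Hypothetical filter - source code and templates only, no media.
SOURCE_EXTENSIONS = (".php", ".js", ".css", ".ss", ".yml", ".json", ".htaccess")

def is_source_file(path):
    return path.lower().endswith(SOURCE_EXTENSIONS)

def download_one(host, user, password, remote_path, local_path):
    # ftplib is not thread-safe: one connection per worker.
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()
    with open(local_path, "wb") as f:
        ftps.retrbinary("RETR " + remote_path, f.write)
    ftps.quit()

def mirror(host, user, password, file_pairs, workers=8):
    """Pull a filtered list of (remote, local) paths in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for remote_path, local_path in file_pairs:
            if is_source_file(remote_path):
                pool.submit(download_one, host, user, password,
                            remote_path, local_path)
```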

&lt;p&gt;I told the agent explicitly: source files only. No media. Do not download images.&lt;/p&gt;

&lt;p&gt;It downloaded 300 GB of media before my SSD ran out of space.&lt;/p&gt;

&lt;p&gt;The agent had decided, with the quiet confidence of a golden retriever carrying a tree branch through a doorway, that “mirror the webroot” meant &lt;em&gt;mirror the webroot&lt;/em&gt;. Every customer photo. Every generated thumbnail. Every asset uploaded since the mid-2010s. My SSD just tapped out first. We had a conversation about boundaries.&lt;/p&gt;

&lt;p&gt;The webroot itself was a geological record of deployment strategies. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;website_v1&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;website_v2&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;website_old&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;website_really_old&lt;/code&gt; - all sitting right next to the actual production folder, like roommates who’d stopped acknowledging each other. Thousands of orphaned images in the root directory. A zip export from literally a decade ago that “should have been deleted a few days after generation.”&lt;/p&gt;

&lt;p&gt;I also started writing structured bug reports as separate markdown files, because by this point I had enough threads going that my brain alone was not a reliable storage medium.&lt;/p&gt;

&lt;p&gt;Speaking of error logs: I pulled roughly a year’s worth. About two thousand logged error events. Promising. Then I noticed ninety-five percent of them were the same bug on a single endpoint, screaming on repeat like a car alarm nobody disconnects. The actual admin 500s I was hunting? Barely a whisper underneath.&lt;/p&gt;

&lt;p&gt;From here, fixes became normal engineering: the agent traces the issue through the codebase, suggests a patch, I review it, commit, upload the changed file, verify by re-downloading and diffing.&lt;/p&gt;

&lt;p&gt;Same site. Suddenly legible.&lt;/p&gt;

&lt;h2 id=&quot;what-was-actually-broken&quot;&gt;What Was Actually Broken&lt;/h2&gt;

&lt;p&gt;Here’s where it gets fun. “Fun.”&lt;/p&gt;

&lt;h3 id=&quot;the-type-filter-that-referenced-a-ghost-column&quot;&gt;The Type Filter That Referenced a Ghost Column&lt;/h3&gt;

&lt;p&gt;The admin orders page had a “Type” dropdown filter. When you selected an option, the UI sent a query parameter like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;q[Type]=CouponItemID&lt;/code&gt;. The server-side code obediently tried to filter on a column called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CouponItemID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That column didn’t exist.&lt;/p&gt;

&lt;p&gt;The dropdown values were raw internal identifiers that some past developer had wired directly into the UI - and at some point the schema had changed underneath them. The filter was sending SQL WHERE clauses into the void. The database responded by dying. A reasonable reaction, honestly.&lt;/p&gt;

&lt;p&gt;The fix: change the dropdown values to semantic keys and map them to the columns that actually exist. Fifteen minutes of work, once you know where to look. Finding where to look took considerably longer.&lt;/p&gt;

&lt;h3 id=&quot;the-filters-that-didnt-filter-and-the-export-that-paid-the-price&quot;&gt;The Filters That Didn’t Filter (And the Export That Paid the Price)&lt;/h3&gt;

&lt;p&gt;This was the big one. Not one bug, but a constellation of failures that all pointed in the same direction: &lt;em&gt;the database was always loading everything&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Most of the admin filters were decorative. They looked functional - dropdowns selected, date ranges set, UI updated - but under the hood, the filter state wasn’t being applied to the actual query. Every admin page load was hitting the unfiltered dataset. Tens of thousands of orders, every time.&lt;/p&gt;

&lt;p&gt;The only thing keeping the paginated view alive was pagination itself: slice results to 50 rows, hand them to PHP, done. Slow, lying, and demoralizing - but survivable. The server stayed up. The page rendered. Admins could at least &lt;em&gt;see&lt;/em&gt; their orders, even if the filters were theater.&lt;/p&gt;

&lt;p&gt;But even the paginated view was lying. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;result.count()&lt;/code&gt; was still counting the entire unfiltered dataset on every load. The UI would show “1–50 of 19,550” regardless of what you’d filtered to, because the count didn’t know about the filter either.&lt;/p&gt;

&lt;p&gt;Then someone clicked “Export as CSV.”&lt;/p&gt;

&lt;p&gt;That button was the kill switch. SilverStripe’s GridField export does one thing the paginated view never did: it removes pagination. All of it. When the filters work, that’s fine - you’re exporting a bounded dataset. When the filters are broken and “the entire result set” means &lt;em&gt;every order ever taken&lt;/em&gt;, combined with summary fields that traverse ORM relationships per row - classic N+1 - you get a request that churns through tens of thousands of rows, firing relationship queries for each one, until PHP runs out of either time or memory. The paginated dashboard could limp along; the export button just killed the server outright. Apache would return a raw error before SilverStripe’s error handler even woke up.&lt;/p&gt;

&lt;p&gt;The fix came in three layers. &lt;strong&gt;Mitigation:&lt;/strong&gt; default to a bounded date range when no explicit filters are set, because exporting the entire history of a company in one HTTP request is not a feature, it’s a dare. &lt;strong&gt;Correctness:&lt;/strong&gt; make the export action carry the current filter state, which required changes in both the GridField JavaScript and the server-side code. &lt;strong&gt;Performance:&lt;/strong&gt; override the export columns to skip expensive per-row relationship traversals.&lt;/p&gt;

&lt;h3 id=&quot;date-filters-that-filtered-on-the-wrong-date&quot;&gt;Date Filters That Filtered on the Wrong Date&lt;/h3&gt;

&lt;p&gt;The admin date presets - “Last 7 days,” “Last 3 months,” “Today” - were filtering on the wrong timestamp column. The paginator count didn’t reflect the filtered list. Filter state wasn’t persisted across page loads.&lt;/p&gt;

&lt;p&gt;That last one had a framework-specific root cause: in SilverStripe’s GridField, data manipulators apply in component order. The date filter was running &lt;em&gt;after&lt;/em&gt; the paginator, so the paginator computed totals against the unfiltered list. The UI would show all results regardless of the date range. Later pages were empty. The admin panel was gaslighting its own users.&lt;/p&gt;

&lt;p&gt;Multiple fixes: correct the timestamp column, fix the preset logic, insert the date filter component &lt;em&gt;before&lt;/em&gt; the paginator, and add session-backed persistence. A sane default (“Past 3 months” instead of “everything ever”) and a visual highlight of the active preset followed the next day.&lt;/p&gt;

&lt;h3 id=&quot;we-cant-see-the-error&quot;&gt;“We Can’t See the Error”&lt;/h3&gt;

&lt;p&gt;The meta-bug. The reason everything else took so long.&lt;/p&gt;

&lt;p&gt;Some failures happened entirely in PHP’s shutdown phase - fatal errors, uncaught exceptions, memory exhaustion. SilverStripe’s error handler never ran. Apache logged a one-liner. The admin saw a blank page or a generic 500. Nobody knew &lt;em&gt;what&lt;/em&gt; broke.&lt;/p&gt;

&lt;p&gt;I added temporary, gated observability: log fatal shutdown errors and uncaught exceptions to a writable temp file (readable over FTP), and optionally stream detailed error output to the response - but only for authenticated admins, and gated behind a flag that defaults to off. Just enough visibility to diagnose, without turning production into a confessional booth for every visitor.&lt;/p&gt;
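&lt;p&gt;The shape of that fix translates to any runtime. A Python sketch of the same pattern (the real fix hooked PHP’s shutdown handling; the names and flags here are invented):&lt;/p&gt;

```python
import sys, tempfile, traceback
from pathlib import Path

DEBUG_OUTPUT = False  # the flag: defaults to off in production
LOG_FILE = Path(tempfile.gettempdir()) / "fatal-errors.log"

def install_crash_logger(is_admin=False):
    """Log uncaught exceptions to a temp file; echo details only for admins."""
    def hook(exc_type, exc, tb):
        detail = "".join(traceback.format_exception(exc_type, exc, tb))
        with LOG_FILE.open("a") as f:  # readable later, e.g. over FTP
            f.write(detail)
        if DEBUG_OUTPUT and is_admin:  # double-gated: flag AND auth
            sys.stderr.write(detail)   # everyone else keeps the generic error
    sys.excepthook = hook
```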

&lt;p&gt;Once the agent could &lt;em&gt;see&lt;/em&gt; the errors, every other fix fell into place within hours.&lt;/p&gt;

&lt;h3 id=&quot;the-statistics-page-that-rendered-the-entire-universe&quot;&gt;The Statistics Page That Rendered the Entire Universe&lt;/h3&gt;

&lt;p&gt;This one wasn’t a 500 - it was a 200 that took &lt;em&gt;minutes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The statistics admin page rendered all tabs server-side in a single initial request. Orders, coupons, registrations - everything, all at once, like a waiter who brings every item on the menu to your table and asks you to pick. Under the hood, a filter stats code path was iterating tens of thousands of records, firing a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;COUNT(*)&lt;/code&gt; per record against a table with millions of rows and no useful indexes. A single text-matching filter query took about 13 seconds. Multiply by 20 filters.&lt;/p&gt;

&lt;p&gt;The fixes: rewrite aggregate queries to avoid N+1 loops, cache repeated lookups per request, apply a default date range on entry so the page doesn’t try to summarize all of recorded history, and optimize relationship queries to stop building enormous &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;IN (...)&lt;/code&gt; lists. Performance went from “go make coffee” to “tolerable.” Still not fast. But the building was no longer on fire.&lt;/p&gt;
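&lt;p&gt;The core rewrite - one grouped aggregate instead of a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;COUNT(*)&lt;/code&gt; per record - looks like this in miniature (SQLite and a made-up two-table schema, not the production database):&lt;/p&gt;

```python
import sqlite3

# Made-up two-table schema standing in for the real orders data.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders(id INTEGER PRIMARY KEY);
    CREATE TABLE order_items(id INTEGER PRIMARY KEY, order_id INTEGER);
""")
db.executemany("INSERT INTO orders(id) VALUES (?)",
               [(i,) for i in range(1, 101)])
db.executemany("INSERT INTO order_items(order_id) VALUES (?)",
               [(i % 100 + 1,) for i in range(500)])

# N+1: one COUNT(*) query per order - 100 round trips for 100 rows.
slow = {}
for (oid,) in db.execute("SELECT id FROM orders").fetchall():
    slow[oid] = db.execute(
        "SELECT COUNT(*) FROM order_items WHERE order_id = ?", (oid,)
    ).fetchone()[0]

# Rewrite: one aggregate query answers the same question for every order.
fast = dict(db.execute(
    "SELECT order_id, COUNT(*) FROM order_items GROUP BY order_id"))

assert slow == fast  # same result, two orders of magnitude fewer queries
```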

&lt;h3 id=&quot;the-server-punches-back&quot;&gt;The Server Punches Back&lt;/h3&gt;

&lt;p&gt;One more thing. After deploying a fix, I got an immediate &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Parse error: unexpected &apos;?&apos;&lt;/code&gt; in production.&lt;/p&gt;

&lt;p&gt;The fix used PHP’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;??&lt;/code&gt; null coalescing operator. The server’s PHP version was too old to support it.&lt;/p&gt;

&lt;p&gt;I rewrote it as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;isset($x) ? $x : $default&lt;/code&gt;, re-uploaded over FTP, and added “the server is running ancient PHP” to my mental model.&lt;/p&gt;

&lt;p&gt;Oh, and sometimes I’d upload the correct fix and the site would still show the old behavior. OPcache had decided the previous code was fine, actually, and combined JS caches weren’t helping either. The mitigation was a combination of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;?flush=all&lt;/code&gt;, clearing the combined-files cache directory, and occasionally toggling settings in the hosting panel to nuke the opcode cache. You haven’t lived until you’ve debugged a fix that’s correct but invisible because the runtime is nostalgic.&lt;/p&gt;

&lt;h2 id=&quot;aftermath&quot;&gt;Aftermath&lt;/h2&gt;

&lt;p&gt;Once the fires were out, I turned the one-off tooling into something reusable: a user export CLI with registration filters, an email-marketing integration deployed as a serverless function, and - critically - documentation. Split and structured so that the next person who gets handed FTP creds and a vague description of “it’s not working” has a slightly less terrible day than I did.&lt;/p&gt;

&lt;p&gt;A couple weeks later, another production issue surfaced: path handling bugs where &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$_SERVER[&quot;DOCUMENT_ROOT&quot;]&lt;/code&gt; resolved to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/framework&lt;/code&gt; under SilverStripe routing instead of the actual webroot. Replaced brittle document root references with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Director::baseFolder()&lt;/code&gt; and hardened the CLI image generation scripts to normalize paths properly. Same workflow - patch, commit, FTP upload, verify. The toolbox held up.&lt;/p&gt;

&lt;p&gt;Oh, and they gave me SSH access. After I’d already fixed everything. Classic.&lt;/p&gt;

&lt;h2 id=&quot;the-vibe-check&quot;&gt;The Vibe Check&lt;/h2&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Deployment Sophistication Meter: [██░░░░░░░░] 2/10

  ✅ Changes are in version control
  ✅ Diffs are verified post-deploy
  ✅ Database access is read-only by default
  ✅ Exports work without the admin panel
  ✅ Unit tests exist for the FTP and DB tooling
  ✅ AI agent can autonomously trace bugs through the codebase
  ❌ &quot;Deployment&quot; means FTP put
  ❌ &quot;Rollback&quot; means FTP put again, but the old file
  ❌ &quot;Monitoring&quot; means running a script that tails a log over FTPS
  ❌ The server&apos;s PHP version doesn&apos;t support ??
  ❌ SSH arrived after the war was over
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;faq-mostly-honest&quot;&gt;FAQ (Mostly Honest)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why didn’t you just set up SSH?&lt;/strong&gt;
They gave me SSH. After I’d already fixed everything. You work with the door they give you, even if they install a better door the day after you’re done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You uploaded a web shell to production?&lt;/strong&gt;
A temporary one. Removed afterwards. When your only access is FTP and you need to understand the server environment &lt;em&gt;now&lt;/em&gt;, you do what the situation demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isn’t this just a glorified set of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; scripts?&lt;/strong&gt;
Yes. But glorified &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; scripts &lt;em&gt;with guardrails, unit tests, structured logging, and documentation&lt;/em&gt;. That’s the difference between a hack and a tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Python for the exporter?&lt;/strong&gt;
Because it needed to exist in an hour, not be beautiful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You wrote unit tests during a production firefight?&lt;/strong&gt;
For the FTP helpers and the DB read-only guardrails. In the same week I was uploading PHP files one at a time over FTPS. Professionalism doesn’t have an off switch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How much did the AI agent actually do?&lt;/strong&gt;
More than me. I built the toolbox - the FTP scripts, the DB CLI, the deploy workflow. The agent did the actual PHP archaeology: tracing SilverStripe framework code, identifying the broken filters, spotting the N+1 patterns, writing the patches. I’m not a PHP developer anymore; the agent is. I just gave it legs and pointed it at the fire. Though I did have to stop it from downloading 300 GB of customer photos onto my SSD, so the supervision wasn’t optional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why didn’t you just stop at the bot?&lt;/strong&gt;
Because the brain goblin doesn’t negotiate. The bot was the job. Everything after was compulsion dressed up as initiative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you actually enjoy this kind of work?&lt;/strong&gt;
I know what I said. Don’t judge me.&lt;/p&gt;

&lt;h2 id=&quot;the-end&quot;&gt;The End&lt;/h2&gt;

&lt;p&gt;The site works now. The exports run. The filters filter. The admins can do their jobs without a 500 greeting them at every turn. I documented it, handed it back, and walked away. A clean break. Just like I promised.&lt;/p&gt;

&lt;p&gt;The bottleneck was never intelligence - it was access. The AI agent knew more PHP than I’ve forgotten. It just couldn’t &lt;em&gt;see&lt;/em&gt; the server. Every hour I spent building tooling - the log tailer, the DB CLI, the FTP deploy script - paid for itself ten times over. The fixes found themselves once the agent had eyes.&lt;/p&gt;

&lt;p&gt;And yes, I deployed production fixes over FTP. The server’s PHP was so old my syntax was too modern for it. SSH arrived after the war was over.&lt;/p&gt;

&lt;p&gt;I regret nothing.&lt;/p&gt;
</description>
        <pubDate>Wed, 18 Feb 2026 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/fixing-the-site/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/fixing-the-site/</guid>
      </item>
    
      <item>
        <title>Drafting Effective Tasks for AI Pair Programming</title>
        <description>&lt;p&gt;When I use ChatGPT or GitHub Copilot X (the Codex web UI) as my remote programming partner, the quality of the help I receive depends on the clarity of the brief I give it. The raw notes below evolved into a structured ritual that keeps the conversation focused on outcomes instead of diving into premature implementation details. This article distills that routine into a reusable, human-readable checklist.&lt;/p&gt;

&lt;h2 id=&quot;start-with-the-feature-not-the-fix&quot;&gt;Start with the feature, not the fix&lt;/h2&gt;

&lt;p&gt;Begin every engagement by writing down the feature you are trying to ship. Resist the temptation to anchor the discussion around a particular class, method, or algorithm; those are solutions. Instead, explain what the user should be able to do when you are finished. I keep the following guard rails in mind:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Frame the feature&lt;/strong&gt; as a capability or improvement that someone can experience.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Describe current pain points&lt;/strong&gt; or gaps in capability using plain language.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Clarify success&lt;/strong&gt; with an explicit before/after statement: “Today it works like this… after the change it should work like that.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the assistant centred on value, not code snippets.&lt;/p&gt;

&lt;h2 id=&quot;gather-context-before-requesting-changes&quot;&gt;Gather context before requesting changes&lt;/h2&gt;

&lt;p&gt;Once the feature outcome is clear, copy the entire brief into the Codex plan mode and ask it to examine the repository. Useful prompts include:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Here is a feature that I want to implement. Please analyze the codebase and collect information about the current implementation and what changes would be necessary.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Codex will typically respond with an outline of the affected areas, relevant files, and unanswered questions. Treat that response as a reconnaissance report. If anything feels off, refine your brief and rerun the request until the plan feels credible.&lt;/p&gt;

&lt;h2 id=&quot;close-the-loop-inside-chatgpt&quot;&gt;Close the loop inside ChatGPT&lt;/h2&gt;

&lt;p&gt;After Codex has explored the repo, paste every version of its response back into the ChatGPT conversation. Ask ChatGPT to consolidate those findings into a detailed plan. Automating the copy/paste with a browser userscript saves time here. The compiled plan should include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The architectural approach and any new structures that need to be introduced.&lt;/li&gt;
  &lt;li&gt;How responsibilities will shift between existing components.&lt;/li&gt;
  &lt;li&gt;Open questions or risks that deserve further investigation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only when the plan satisfies you should you move on to execution.&lt;/p&gt;

&lt;h2 id=&quot;turn-plans-into-actionable-task-lists&quot;&gt;Turn plans into actionable task lists&lt;/h2&gt;

&lt;p&gt;With the high-level plan in hand, return to Codex and request a task list with checkboxes that you can track as you implement. A simple follow-up prompt works:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Please create a .md file with the epic and checkboxes per task. Then go on and implement the first few tasks:
[PASTE TASKS HERE]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Saving this checklist in the repository keeps the scope visible and lets you revisit the remaining tasks after the initial iteration is complete.&lt;/p&gt;
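&lt;p&gt;The file itself can stay tiny. A sample of the shape (epic and task names invented for illustration):&lt;/p&gt;

```text
# Epic: Import/export improvements

- [x] Add streaming CSV writer
- [ ] Wire progress reporting into the UI
- [ ] Document the new flags in README.md
```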

&lt;h2 id=&quot;embrace-parallel-exploration&quot;&gt;Embrace parallel exploration&lt;/h2&gt;

&lt;p&gt;Finally, keep multiple tasks moving in parallel. Four active threads strike a balance between exploration and depth: enough variety to collect strong ideas, but not so many that you lose track of progress. As you iterate, capture learnings in the original ChatGPT conversation so your future self - and any collaborators - can reason about the trade-offs that shaped the work.&lt;/p&gt;

&lt;p&gt;By ritualising the way you brief AI collaborators, you ensure that each session starts with clarity, produces tangible artefacts, and ends with a confident plan of attack.&lt;/p&gt;
</description>
        <pubDate>Thu, 09 Oct 2025 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/drafting-effective-ai-task-prompts/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/drafting-effective-ai-task-prompts/</guid>
      </item>
    
      <item>
        <title>Distilling Multiple AI Iterations into a Single Winning Pull Request</title>
        <description>&lt;p&gt;Generating several candidate implementations with Codex is easy; choosing the right one (and capturing the learnings) is the real art. These notes outline how I shepherd multiple AI-produced branches through a structured review so that the best ideas survive and the rest inform future work.&lt;/p&gt;

&lt;h2 id=&quot;expect-variation-and-iterate-deliberately&quot;&gt;Expect variation and iterate deliberately&lt;/h2&gt;

&lt;p&gt;Codex output can swing wildly in quality. Instead of betting on a single response, request multiple iterations of the same feature. I typically generate four parallel branches and treat them as competing design ideas. Each run should:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Start from the same source branch to keep diffs comparable.&lt;/li&gt;
  &lt;li&gt;Follow the same high-level plan that ChatGPT produced earlier.&lt;/li&gt;
  &lt;li&gt;Produce a short changelog or markdown summary describing the approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;capture-evidence-for-every-branch&quot;&gt;Capture evidence for every branch&lt;/h2&gt;

&lt;p&gt;Before deciding which branch survives, gather objective evidence:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Clone or check out each pull request&lt;/strong&gt; into its own worktree (for example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;../LiteDB.worktrees&lt;/code&gt;). Keeping them side-by-side makes comparison painless.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Record the repository status&lt;/strong&gt; using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh pr list&lt;/code&gt; or similar tooling so you always know which branch maps to which Codex run.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sync with the master plan&lt;/strong&gt; stored in documentation (such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docs/Spatial-Revamp&lt;/code&gt;) to ensure every iteration addresses the same checklist of requirements.&lt;/li&gt;
&lt;/ol&gt;
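&lt;p&gt;The checkout step is mechanical enough to script. A Python sketch that prints the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git&lt;/code&gt; commands for a side-by-side comparison, using GitHub’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pull/ID/head&lt;/code&gt; refspec (PR numbers and paths here are examples):&lt;/p&gt;

```python
# Example PR numbers from parallel Codex runs; adjust the root to taste.
PRS = [57, 58, 60, 61]
WORKTREE_ROOT = "../LiteDB.worktrees"

def checkout_plan(prs, root):
    """Build shell commands that park each PR branch in its own worktree."""
    cmds = []
    for pr in prs:
        # GitHub exposes every PR head under the pull/ID/head refspec.
        cmds.append(f"git fetch origin pull/{pr}/head:pr-{pr}")
        cmds.append(f"git worktree add {root}/pr-{pr} pr-{pr}")
    return cmds

for cmd in checkout_plan(PRS, WORKTREE_ROOT):
    print(cmd)
```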

&lt;p&gt;This creates a paper trail that you can paste back into ChatGPT when it is time to synthesize.&lt;/p&gt;

&lt;h2 id=&quot;let-chatgpt-be-the-reviewer&quot;&gt;Let ChatGPT be the reviewer&lt;/h2&gt;

&lt;p&gt;Once the branches exist, ask ChatGPT to evaluate them. Provide the command output, diffs, and any markdown summaries produced by Codex. Helpful prompts sound like:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Please check out all PRs in different worktrees and evaluate which PR is the best or what combination would be ideal. Mixing approaches is allowed - take the best of all worlds.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Encourage the model to grade each branch against the plan, call out missing pieces, and suggest how to blend strengths if necessary.&lt;/p&gt;

&lt;h2 id=&quot;aggregate-the-findings&quot;&gt;Aggregate the findings&lt;/h2&gt;

&lt;p&gt;When multiple evaluation rounds occur, collate the feedback before asking for a final verdict. A reliable pattern is:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Here is another set of findings. Please check whether they add anything new:
[PASTE SUMMARIES HERE]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;ChatGPT can then produce a decision matrix such as:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Category&lt;/th&gt;
      &lt;th&gt;Source&lt;/th&gt;
      &lt;th&gt;Action&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Correct normalization&lt;/td&gt;
      &lt;td&gt;#58&lt;/td&gt;
      &lt;td&gt;Keep as baseline&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Descriptor metadata&lt;/td&gt;
      &lt;td&gt;#60 / #57&lt;/td&gt;
      &lt;td&gt;Add to persistence&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Automatic backfill/index creation&lt;/td&gt;
      &lt;td&gt;#61&lt;/td&gt;
      &lt;td&gt;Integrate&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This makes trade-offs explicit and tells you exactly what to keep, merge, or discard.&lt;/p&gt;

&lt;h2 id=&quot;finish-with-a-curated-merge&quot;&gt;Finish with a curated merge&lt;/h2&gt;

&lt;p&gt;Armed with the matrix, return to Codex (or your local environment) and perform the final merge manually. The goal is not to auto-merge everything but to craft a single high-quality pull request that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Preserves the strongest implementation choices.&lt;/li&gt;
  &lt;li&gt;Documents any deliberate omissions or deferred tasks.&lt;/li&gt;
  &lt;li&gt;Links back to the evaluation artefacts for future reference.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By treating each AI-generated branch as a hypothesis, you transform a noisy stream of drafts into an orderly, evidence-backed workflow that consistently ships better code.&lt;/p&gt;
</description>
        <pubDate>Thu, 09 Oct 2025 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/distilling-ai-generated-iterations/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/distilling-ai-generated-iterations/</guid>
      </item>
    
      <item>
        <title>Polyfill C#: Two Ways to Ship One Library Across Two Frameworks</title>
        <description>&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; You’re multi-targeting a library across &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netstandard2.0&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net8.0&lt;/code&gt;. Different APIs are available on each. You need the same public surface but different guts. Strategy 1: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#if&lt;/code&gt; directives. Works, but scales like a dumpster fire. Strategy 2: partial classes with file exclusion in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.csproj&lt;/code&gt;. Cleaner, saner, and your future self won’t file a grievance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;the-situation&quot;&gt;The Situation&lt;/h2&gt;

&lt;p&gt;You have a library. It targets two frameworks - say, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netstandard2.0&lt;/code&gt; (because the world is full of legacy projects that aren’t going anywhere) and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net8.0&lt;/code&gt; (because you’d like to use APIs invented after 2017). Both targets need to expose the same public interface, but the &lt;em&gt;implementation&lt;/em&gt; has to differ because the available APIs are completely different.&lt;/p&gt;

&lt;p&gt;This is a solved problem. It’s solved in two ways. One of them is good.&lt;/p&gt;

&lt;h2 id=&quot;strategy-1-if-directives-the-its-fine-approach&quot;&gt;Strategy 1: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#if&lt;/code&gt; Directives (The “It’s Fine” Approach)&lt;/h2&gt;

&lt;p&gt;The classic. The familiar. The thing you reach for first and regret third.&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;cp&quot;&gt;#if NET8_0
&lt;/span&gt;    &lt;span class=&quot;c1&quot;&gt;// net8 specific code - spans, modern goodness, joy&lt;/span&gt;
&lt;span class=&quot;cp&quot;&gt;#else
&lt;/span&gt;    &lt;span class=&quot;c1&quot;&gt;// netstandard specific code - string allocations, suffering&lt;/span&gt;
&lt;span class=&quot;cp&quot;&gt;#endif
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For a single method with a two-line difference? Perfectly fine. For an entire class where half the methods have different implementations, different helper types, and different using statements? You end up with a file that looks like a ransom note assembled from two different codebases. The syntax highlighting turns into abstract art. Code review becomes a puzzle game where the goal is figuring out which lines actually compile on which target.&lt;/p&gt;

&lt;p&gt;It works. It always works. It just stops being &lt;em&gt;pleasant&lt;/em&gt; somewhere around the fourth nested &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#if&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;strategy-2-partial-classes--file-exclusion-the-grown-up-approach&quot;&gt;Strategy 2: Partial Classes + File Exclusion (The Grown-Up Approach)&lt;/h2&gt;

&lt;p&gt;This is the one. Split each class into three files: the shared surface, the .NET Core implementation, and the .NET Standard implementation. Then tell MSBuild which files belong to which target.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.csproj&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-xml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;PropertyGroup&amp;gt;&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;&amp;lt;TargetFrameworks&amp;gt;&lt;/span&gt;netstandard2.0;net8.0&lt;span class=&quot;nt&quot;&gt;&amp;lt;/TargetFrameworks&amp;gt;&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;&amp;lt;/PropertyGroup&amp;gt;&lt;/span&gt;

&lt;span class=&quot;nt&quot;&gt;&amp;lt;ItemGroup&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Condition=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;&apos;$(TargetFramework)&apos; == &apos;netstandard2.0&apos;&quot;&lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;&amp;gt;&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;&amp;lt;Compile&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Remove=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;**\*.NetCore.cs&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;&amp;lt;None&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Remove=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;**\*.NetCore.cs&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;/&amp;gt;&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;&amp;lt;/ItemGroup&amp;gt;&lt;/span&gt;

&lt;span class=&quot;nt&quot;&gt;&amp;lt;ItemGroup&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Condition=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;&apos;$(TargetFramework)&apos; == &apos;net8.0&apos;&quot;&lt;/span&gt;&lt;span class=&quot;nt&quot;&gt;&amp;gt;&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;&amp;lt;Compile&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Remove=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;**\*.NetStd.cs&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;&amp;lt;None&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;Remove=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;**\*.NetStd.cs&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;/&amp;gt;&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;&amp;lt;/ItemGroup&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The shared interface - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MyClass.cs&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyClass&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;CommonMethod&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;();&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The .NET 8 implementation - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MyClass.NetCore.cs&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyClass&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;CommonMethod&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;// The good implementation. Spans. Performance. Happiness.&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The .NET Standard fallback - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MyClass.NetStd.cs&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyClass&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;partial&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;CommonMethod&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;// The &quot;it works and that&apos;s enough&quot; implementation.&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When building for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net8.0&lt;/code&gt;, MSBuild excludes the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;*.NetStd.cs&lt;/code&gt; files entirely. When building for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netstandard2.0&lt;/code&gt;, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;*.NetCore.cs&lt;/code&gt; files disappear. Each target only sees the files it’s supposed to. No &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#if&lt;/code&gt; spaghetti. No guessing which code path is active. Just clean, separate files that each do one thing for one target.&lt;/p&gt;

&lt;h2 id=&quot;which-one-should-you-use&quot;&gt;Which One Should You Use?&lt;/h2&gt;

&lt;p&gt;Honestly? Start with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#if&lt;/code&gt;. If the conditional block is small and isolated, it’s the right call - introducing three files for a two-line difference is overkill.&lt;/p&gt;

&lt;p&gt;The moment you catch yourself scrolling past a wall of preprocessor directives trying to find where the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netstandard&lt;/code&gt; path ends and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net8&lt;/code&gt; path begins, switch to Strategy 2. Your code reviewers will thank you. Your IDE will thank you. The next person to touch this file - who is statistically likely to be you, three months from now, with no memory of why any of this exists - will thank you.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Approach Comparison:

  #if directives:    Quick to write. Painful to maintain. O(n²) regret scaling.
  Partial + exclude: More files. More setup. Zero ambiguity about what runs where.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Pick your pain. I pick the one with fewer &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#endif&lt;/code&gt;s.&lt;/p&gt;
</description>
        <pubDate>Sun, 05 Oct 2025 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/polyfill-csharp/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/polyfill-csharp/</guid>
      </item>
    
      <item>
        <title>A Thunderbird&apos;s Tale: Taming Google Calendar</title>
        <description>&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Thunderbird won’t sync shared Google Calendars through the normal flow. The official guide is useless. iCal links work for &lt;em&gt;your&lt;/em&gt; calendars but not shared ones. The fix: manually build a CalDAV URL from the calendar’s ID and paste it in. Google’s OAuth prompt appears, you sign in, and suddenly the calendar exists. The whole thing took 90 minutes of debugging and 30 seconds of actual solution.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;i-just-want-my-calendar&quot;&gt;I Just Want My Calendar&lt;/h2&gt;

&lt;p&gt;I like Thunderbird. This is a controversial opinion in some circles, but I stand by it. It handles email, it handles calendars, and it doesn’t try to upsell me on a premium tier every time I open it.&lt;/p&gt;

&lt;p&gt;What it does &lt;em&gt;not&lt;/em&gt; handle gracefully is syncing shared Google Calendars. And by “not gracefully” I mean “not at all, through any documented method, without manual intervention that Google and Mozilla apparently agreed to never tell anyone about.”&lt;/p&gt;

&lt;h2 id=&quot;attempt-1-the-official-guide&quot;&gt;Attempt 1: The Official Guide&lt;/h2&gt;

&lt;p&gt;I started where any reasonable person would - Mozilla’s official support docs. The process seemed straightforward: go to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;≡ &amp;gt; New Account &amp;gt; Calendar &amp;gt; On the Network &amp;gt; Next&lt;/code&gt;, enter your Google email, and let Thunderbird’s auto-discovery find your calendars.&lt;/p&gt;

&lt;p&gt;I entered my email. Thunderbird thought about it for a moment. Then it found… nothing. No Google sign-in prompt. No calendar list. Just a blank screen and the quiet sound of my afternoon evaporating.&lt;/p&gt;

&lt;p&gt;Dead end. Next.&lt;/p&gt;

&lt;h2 id=&quot;attempt-2-ical-links-partial-credit&quot;&gt;Attempt 2: iCal Links (Partial Credit)&lt;/h2&gt;

&lt;p&gt;Google Calendar lets you grab a “Secret address in iCal format” for each calendar. It looks something like:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://calendar.google.com/calendar/ical/your.email%40gmail.com/private-a1b2c3d4e5f6/basic.ics&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I pasted this into Thunderbird and - it worked! For my &lt;em&gt;own&lt;/em&gt; calendars. Perfect sync, no issues.&lt;/p&gt;

&lt;p&gt;For the &lt;em&gt;shared&lt;/em&gt; calendar I actually needed? Thunderbird rejected the iCal link like a bouncer checking IDs. Same format, same source, different result. Helpful.&lt;/p&gt;

&lt;p&gt;So: personal calendars via iCal? Fine. Shared calendars via iCal? Absolutely not. This is the kind of inconsistency that makes you question whether software is a mature engineering discipline or an elaborate prank.&lt;/p&gt;

&lt;h2 id=&quot;attempt-3-the-caldav-discovery&quot;&gt;Attempt 3: The CalDAV Discovery&lt;/h2&gt;

&lt;p&gt;While poking around Thunderbird’s calendar settings for the calendars that &lt;em&gt;did&lt;/em&gt; sync successfully, I noticed something interesting. Thunderbird wasn’t actually using the iCal URL I’d given it. Behind the scenes, it had quietly swapped in a CalDAV URL:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://apidata.googleusercontent.com/caldav/v2/[calendar-id]%40group.calendar.google.com/events/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s… not documented anywhere obvious. Google doesn’t hand you this URL for shared calendars. Thunderbird doesn’t tell you it’s using it. It’s like finding out your car has a turbo button that nobody mentioned because it’s behind the glove compartment.&lt;/p&gt;

&lt;h2 id=&quot;the-fix-its-embarrassingly-simple&quot;&gt;The Fix (It’s Embarrassingly Simple)&lt;/h2&gt;

&lt;p&gt;Once I knew the URL pattern, the rest was assembly:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open Google Calendar settings for the shared calendar.&lt;/li&gt;
  &lt;li&gt;Copy the &lt;strong&gt;public address in iCal format&lt;/strong&gt; - it contains a long calendar ID like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;a1b2c3d4e5@group.calendar.google.com&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Extract that calendar ID.&lt;/li&gt;
  &lt;li&gt;Slot it into the CalDAV URL template: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://apidata.googleusercontent.com/caldav/v2/[CALENDAR-ID]/events/&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Paste the constructed URL into Thunderbird’s calendar location field.&lt;/li&gt;
&lt;/ol&gt;
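&lt;p&gt;The extraction-and-assembly steps above can be scripted. A minimal sketch - the iCal address below is a placeholder with a made-up calendar ID, not a real calendar:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical iCal address as copied from Google Calendar settings.
ICAL_URL="https://calendar.google.com/calendar/ical/a1b2c3d4e5%40group.calendar.google.com/public/basic.ics"

# Step 3: extract the calendar ID (the path segment right after /ical/).
CALENDAR_ID=$(printf '%s' "$ICAL_URL" | sed -E 's|.*/ical/([^/]+)/.*|\1|')

# Step 4: slot it into the CalDAV template Thunderbird uses internally.
CALDAV_URL="https://apidata.googleusercontent.com/caldav/v2/${CALENDAR_ID}/events/"

# Step 5: paste this into Thunderbird's calendar location field.
echo "$CALDAV_URL"
```

&lt;p&gt;Note the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;%40&lt;/code&gt; stays URL-encoded - the CalDAV URL expects it that way, as in the URL Thunderbird generates itself.&lt;/p&gt;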

&lt;p&gt;The Google OAuth screen appeared. I signed in. The shared calendar materialized in Thunderbird like it had been there all along, casually pretending the last 90 minutes hadn’t happened.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Frustration-to-Fix Ratio: [██████████] 10/1

  90 minutes of debugging
  30 seconds of actual solution
  0 lines of documentation that would have prevented this
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;why-this-is-annoying&quot;&gt;Why This Is Annoying&lt;/h2&gt;

&lt;p&gt;The information to make this work exists in the system. Thunderbird &lt;em&gt;knows&lt;/em&gt; the CalDAV pattern - it uses it internally. Google &lt;em&gt;exposes&lt;/em&gt; the calendar ID - it’s right there in the iCal URL. Neither party connects the dots for the user. It’s like two people each holding half a map and refusing to stand next to each other.&lt;/p&gt;

&lt;p&gt;If you’re hitting this same wall - shared Google Calendar, Thunderbird, auto-discovery failing, iCal links rejected - this is the fix. Build the CalDAV URL yourself, paste it in, and move on with your life.&lt;/p&gt;

&lt;p&gt;You’re welcome. I’m going to go close 14 browser tabs.&lt;/p&gt;
</description>
        <pubDate>Mon, 08 Sep 2025 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/taming-google-calendar-thunderbird/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/taming-google-calendar-thunderbird/</guid>
      </item>
    
      <item>
        <title>ZeroTier Stops Working</title>
        <description>&lt;p&gt;ZeroTier has been a reliable way to establish VPN connections between devices, even when they sit behind NAT. However, after installing ZeroTier on my brother’s Raspberry Pi home server, I ran into a baffling issue: it would work fine for the first hour or so, then become unreachable from outside. The only way to re-establish the connection was to reboot the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;This problem was further compounded by the fact that if I had an ongoing file copy operation, I could still connect to the Raspberry Pi from the specific machine involved in the transfer. But if I switched to another device, like my laptop, I couldn’t connect at all. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo zerotier-cli info&lt;/code&gt; command showed the status as “OFFLINE.”&lt;/p&gt;

&lt;h2 id=&quot;the-investigation-searching-for-the-root-cause&quot;&gt;The Investigation: Searching for the Root Cause&lt;/h2&gt;
&lt;p&gt;After scouring the internet for answers and receiving advice from fellow Redditors, I opted for a workaround instead of pinpointing the root cause. This involved creating a script that checks ZeroTier’s status periodically and restarts the service if it’s offline.&lt;/p&gt;

&lt;p&gt;Here’s the script I used (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;check_zerotier.sh&lt;/code&gt;):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Check if the response from &quot;zerotier-cli status&quot; contains &quot;OFFLINE&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;status&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;/var/lib/zerotier-one/zerotier-cli status&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$status&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;OFFLINE&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;OFFLINE! Restarting zerotier-one&quot;&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# If it does contain &quot;OFFLINE&quot;, restart zerotier with the command &quot;service zerotier-one restart&quot;&lt;/span&gt;
    /usr/sbin/service zerotier-one restart
&lt;span class=&quot;k&quot;&gt;fi

if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$status&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;ONLINE&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;ONLINE! Doing nothing&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
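&lt;p&gt;The substring match is the load-bearing part of the script. Here is a self-contained sanity check of that logic, using a canned status string (the node ID and version are made up) and the POSIX &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;case&lt;/code&gt; equivalent of the script’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[[ ]]&lt;/code&gt; test:&lt;/p&gt;

```shell
#!/bin/sh
# Canned string standing in for "zerotier-cli status" output when the
# node has dropped off; the node ID and version are placeholders.
status="200 info 1234567890 1.10.6 OFFLINE"

# Same OFFLINE/ONLINE substring logic as check_zerotier.sh.
case $status in
  *OFFLINE*) action="restart zerotier-one" ;;
  *ONLINE*)  action="do nothing" ;;
esac

echo "$action"
```

&lt;p&gt;Swap the canned string for a live &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;zerotier-cli status&lt;/code&gt; call and the two branches map straight onto the restart/do-nothing behavior above.&lt;/p&gt;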

&lt;p&gt;To automate the process, I added an entry in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crontab&lt;/code&gt; file to run the script every 10 minutes:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;/10 &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; /usr/scripts/check_zerotier.sh 2&amp;gt;&amp;amp;1 | /usr/scripts/formatLog.sh 2&amp;gt;&amp;amp;1 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /var/log/zt_check.log 2&amp;gt;&amp;amp;1

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I also created a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;formatLog.sh&lt;/code&gt; script to format the log output:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;while &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;read &lt;/span&gt;line&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;current_time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; +&lt;span class=&quot;s2&quot;&gt;&quot;[%Y-%m-%d %H:%M:%S]&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$current_time&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$line&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;the-workaround-a-reliable-solution&quot;&gt;The Workaround: A Reliable Solution&lt;/h2&gt;
&lt;p&gt;By implementing this workaround, I managed to bypass the issue and ensure that ZeroTier remains functional on my brother’s Raspberry Pi home server. While I didn’t identify the root cause of the problem, this solution has been effective in keeping the connection stable and preventing any disruptions.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Sometimes, finding the root cause of an issue is like searching for a needle in a haystack. In cases like these, a reliable workaround that keeps things running smoothly is the pragmatic choice. Here, a simple script and a cron job were enough to keep ZeroTier from going offline for good - and at the time I cared more about keeping the box reachable than doing deep archaeology.&lt;/p&gt;
</description>
        <pubDate>Tue, 11 Apr 2023 12:00:00 +0000</pubDate>
        <link>https://blog.kamsker.at/blog/zerotier-stops-working/</link>
        <guid isPermaLink="true">https://blog.kamsker.at/blog/zerotier-stops-working/</guid>
      </item>
    

  </channel>
</rss>
