Bucklog’s Machine: Inside a Kubernetes Scanning Fleet

Deep dive into AS211590 (Bucklog SARL), a Kubernetes-orchestrated scanning fleet in Paris that generated 13M sessions in 90 days — targeting n8n CVEs, .env credentials, and critical infrastructure with possible Iran-conflict pre-positioning.
cybersecurity
threat-intelligence
kubernetes
vulnerability-exploitation
credential-harvesting
network-fingerprinting
apt
greynoise
Author: hrbrmstr

Published: March 23, 2026

Most scanning infrastructure is boring. A VPS, a cron job, maybe a cheap proxy rotation service if the operator has ambitions. What we’re looking at with AS211590 (Bucklog SARL / FBW Networks SAS) is something else entirely – a purpose-built, Kubernetes-orchestrated scanning cluster running from a single /24 in Paris that generated 13 million sessions over 90 days and barely registered the load.

This is the walkthrough. We’ll cover how the fleet is built, what it’s doing, and what you can do about it.


The Infrastructure

The BGP prefix 185.177.72.0/24 is registered in RIPE to FBW Networks SAS, 16 rue Grange Dame Rose, Vélizy-Villacoublay, France — allocated 2025-05-27. Our 90-day analysis window for this post opens in late December 2025.

To understand why this fleet stands out, it helps to look at what Censys found across all 74 observable hosts in the /24: every single one runs the same Debian 12 base image, the same OpenSSH 9.2p1 configuration (confirmed via HASSH 425d29fe50d8e4f5e37efb6e24bcf660, uniform fleet-wide), and – on the 22+ confirmed Kubernetes worker nodes – an identical JARM fingerprint on port 10250 (the kubelet API). That's not coincidence. That's a provisioning pipeline.

The certificates tell the story even more cleanly. Every worker node presents a self-signed TLS cert on its kubelet port with a systematic naming pattern: pkNN@<epoch>, where NN is a sequential node ID and the epoch timestamp matches the cert’s not_before field exactly. Node .49 holds cert pk01, provisioned 2025-12-30. Node .22 holds pk11, provisioned 2026-01-07. Twenty-five nodes numbered and timestamped in sequence – automated cluster lifecycle management, not a human typing openssl req in a terminal.
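The pkNN@&lt;epoch&gt; naming pattern is simple enough to parse mechanically, which is presumably what the operator's own tooling does. A minimal sketch, using an epoch chosen to match the pk01 / 2025-12-30 provisioning date described above (the fleet's actual timestamp values were not published in full):

```python
import re
from datetime import datetime, timezone

# Parse the pkNN@<epoch> certificate common names described above.
CN_RE = re.compile(r"^pk(?P<node>\d{2})@(?P<epoch>\d+)$")

def parse_cert_cn(cn: str):
    """Return (node_id, provisioning datetime) for a pkNN@<epoch> CN."""
    m = CN_RE.match(cn)
    if not m:
        return None
    node_id = int(m.group("node"))
    provisioned = datetime.fromtimestamp(int(m.group("epoch")), tz=timezone.utc)
    return node_id, provisioned

# 1767052800 is 2025-12-30 00:00 UTC, consistent with pk01's cert.
print(parse_cert_cn("pk01@1767052800"))
```

Because the epoch matches the cert's not_before field exactly, sorting parsed CNs by epoch reconstructs the cluster's provisioning timeline directly from TLS scan data.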

The CNI (Container Network Interface) of choice is Cilium. Two nodes expose Hubble observability ports (4244) with a shared *.kubernetes.hubble-grpc.cilium.io cert from the cluster’s internal Cilium CA. Cilium uses eBPF-based networking. The operator gets fine-grained traffic policy enforcement and — through Hubble — real-time visibility into every flow inside the cluster at the kernel level. You don’t stand up a full observability stack for a throwaway campaign. You stand it up because you want to know exactly which pods are producing which traffic, catch failures before they matter, and maintain operational discipline across a fleet that can’t afford to misbehave.

How do the pods talk to the internet? The JA4T fingerprint 65495_2-4-8-1-3_65495_7 answers that question. An MSS of 65495 only appears on loopback interfaces: the loopback MTU is 65535, and subtracting 40 bytes of TCP/IP header overhead lands you at 65495. That fingerprint showed up on 304,807 sessions. What it means: kube-proxy or Cilium is routing outbound connections through localhost before NAT'ing them to the external interface. The pods aren't talking to the internet directly; they go through the cluster's networking layer first.

There’s also a 1380-byte MSS fingerprint (42780_2-4-8-1-3_1380_12) on 32,815 sessions. Standard Linux MTU is 1500 bytes; 80 bytes of overhead points squarely at VXLAN or WireGuard encapsulation. Some of this traffic traverses a tunnel inside the cluster before it exits.
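The arithmetic behind both MSS values can be sanity-checked directly. A minimal sketch, assuming the standard 20-byte IPv4 plus 20-byte TCP headers and roughly 80 bytes of tunnel overhead for the encapsulated path:

```python
# Sanity-check the MSS values implied by the two JA4T fingerprints.
# MSS = MTU - encapsulation overhead - IP/TCP header overhead.

IP_TCP_OVERHEAD = 40  # 20-byte IPv4 header + 20-byte TCP header

def expected_mss(mtu: int, encap_overhead: int = 0) -> int:
    """MSS a host will advertise for a given MTU and encapsulation cost."""
    return mtu - encap_overhead - IP_TCP_OVERHEAD

# Loopback interface: MTU 65535, no encapsulation -> the 65495 fingerprint
print(expected_mss(65535))      # 65495

# Standard 1500-byte MTU minus ~80 bytes of VXLAN/WireGuard overhead
# -> the 1380 fingerprint
print(expected_mss(1500, 80))   # 1380
```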

The node breakdown:

| Node tier | IPs | Role |
|---|---|---|
| Core workers (pk01–pk19) | 9 IPs | 96% of all sessions |
| Secondary tier | .61, .60, .12 | ~398K sessions combined |
| Ramp-up tier | ~25 IPs (.130–.158 range) | ~15K sessions, entering production |
| Ingress/control plane | .3 | 10 services including nginx Ingress, Envoy, kubelet |
| Management | .1, .2 | SNMP, Elasticsearch (log aggregation) |
| Anomalous | .4, .46, .89 | SMB+NFS+RPC on .4; Redis + custom ports on .46/.89 |

Node .2 runs Elasticsearch on 9200 and 9300. Thirteen million sessions, indexed and queryable, so the operators can analyze what they've captured and diagnose campaign issues.


The Tooling Stack

Nine IPs account for 96% of 13 million sessions, with load distributed between 8.7% and 12.9% per node – a spread consistent with Kubernetes DaemonSet or Deployment scheduling. The load distribution is too even to be anything else.
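The evenness claim is easy to quantify: compute each node's share of the total and look at the spread between the busiest and quietest worker. The per-node counts below are hypothetical, chosen only to produce a spread in the same ballpark as the observed 8.7%–12.9% range:

```python
# Illustrative check of scheduler-like load spread across worker nodes.
# These per-node counts are hypothetical, not the fleet's observed values.
counts = {
    "pk01": 1_640_000, "pk03": 1_210_000, "pk05": 1_390_000,
    "pk07": 1_500_000, "pk09": 1_280_000, "pk11": 1_450_000,
    "pk13": 1_330_000, "pk15": 1_560_000, "pk19": 1_140_000,
}
total = sum(counts.values())
shares = {node: 100 * n / total for node, n in counts.items()}
spread = max(shares.values()) - min(shares.values())

# A spread of only a few percentage points across nine nodes is what
# round-robin-ish Kubernetes scheduling looks like; manually driven
# per-host scanning rarely balances this evenly.
print(f"spread: {spread:.1f} percentage points")
```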

The tools, by observed user agent:

| Agent | Sessions | Role |
|---|---|---|
| curl/8.7.1 | 11,964,108 (91.5%) | Bulk HTTP reconnaissance |
| socketburst/0.1 | 271,344 | Port/service discovery |
| l9explore/1.2.2 | 242,913 | Vulnerability scanning (LeakIX) |
| Chrome/120 (spoofed) | 142,842 | Browser impersonation |
| l9tcpid/v1.1.0 | 4,256 | TCP fingerprinting |
| python-httpx/0.28.1 | 867 | Python HTTP client |

Curl handles the volume. l9explore and socketburst handle discovery. l9tcpid fingerprints services for the target list. The Chrome spoof (~143K sessions) gets used selectively where a browser UA draws different responses. This is a deliberate pipeline, moving from inventory collection to full-scale exploit operations.

The JA4H fingerprints confirm the split:

| JA4H | Sessions | Interpretation |
|---|---|---|
| ge11nn14enus_16e29da98f67 | 8,937,713 | GET, 14 headers, en-US – primary curl scanner |
| po11nn16enus_6291b5733205 | 2,087,283 | POST, 16 headers, en-US – n8n exploitation |
| ge11nn050000_3658ef221638 | 351,749 | GET, 5 headers, no locale – l9explore |
| ge11nn040000_8391bea91fb6 | 245,432 | GET, 4 headers, no locale – socketburst |

The POST fingerprint (2.09M sessions) maps directly to the n8n exploitation campaign. 16 headers on POST vs. 14 on GET tracks with the addition of Content-Type and payload headers.


The Lifecycle

The fleet’s activity across our 90-day observation window (chosen only for data convenience) follows a pattern consistent with a professional deployment.

Phase 1 – Commissioning (Dec 24 – Jan 2): Under 4,000 sessions/day. Infrastructure testing. The first week: 1,167 sessions total.

Phase 2 – Initial operations (Jan 3 – Jan 11): 12K–80K/day. First sustained scanning run, 275,885 sessions in the week of Jan 5. Core patterns established.

Phase 3 – Operational pause (Jan 12 – Jan 18): Volume drops to under 6,000/day. Either infrastructure reconfiguration or deliberate tempo management.

Phase 4 – Sustained scanning (Jan 19 – Feb 10): 10K–150K/day, sustained across roughly three weeks. Building coverage, not sprinting.

Phase 5 – Full-scale operations (Feb 12 – Mar 23): The step-change. Daily volume: 50K–987K. Peak on February 23: 987,094 sessions – a 170x increase from the Phase 3 lull.

Weekly sessions in Phase 5:

| Week of | Sessions |
|---|---|
| Feb 9 | 1,508,612 |
| Feb 16 | 1,791,082 |
| Feb 23 | 1,983,061 |
| Mar 2 | 1,403,226 |
| Mar 9 | 2,608,825 |
| Mar 16 | 1,906,230 |

Phase 5 began February 12. The US/Israel-Iran conflict started February 27. That’s a two-week gap we’ll come back to in a bit.


What the Fleet Is Actually Doing

Credential harvesting

This is the dominant mission. The fleet sweeps for configuration files that contain secrets – and it does so with the systematic thoroughness of something that has all the time in the world and a lot of CPU to spend.

| Activity | Sessions | Target |
|---|---|---|
| .env file harvesting | 3,543,359 | API keys, database credentials, secrets |
| Generic sensitive file access | 3,161,498 | Broad configuration file patterns |
| /proc enumeration | 2,128,282 | Container escape paths, system info |
| Git config crawling | 594,049 | Repository credentials, internal URLs |
| PHP info | 286,643 | Server configuration disclosure |
| AWS credential files | 173,167 | IAM keys, access credentials |
| WordPress config | 10,319 | Database credentials |

The .env crawling hits 30+ path variants (/backend/.env, /api/.env, and 28+ additional locations), totaling roughly 200K sessions across the variants. This is directory fuzzing rather than targeted exploitation: the fleet probes every plausible location for secrets, then moves on.
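Requests with this shape are easy to flag in web server logs. A minimal sketch of a path matcher; the pattern list is an illustrative subset of the targets named above, not the fleet's actual wordlist:

```python
import re

# Path patterns matching the secret-file sweeps described above.
# The alternation covers .env variants (.env, .env.bak, /backend/.env,
# ...), AWS credential files, git configs, and WordPress configs.
SECRET_PATH_RE = re.compile(
    r"(^|/)(\.env([./]|$)|\.aws/credentials|\.git/config|wp-config\.php)"
)

def is_secret_probe(path: str) -> bool:
    """Flag request paths that look like credential-file harvesting."""
    return bool(SECRET_PATH_RE.search(path))

for p in ["/backend/.env", "/api/.env", "/.git/config", "/index.html"]:
    print(p, is_secret_probe(p))
```

Run against access logs, this kind of matcher surfaces harvesting sweeps from any source, not just this /24.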

n8n exploitation (CVE-2026-21858)

The single largest specific campaign: 1,028,562 sessions targeting n8n workflow automation endpoints.

CVE-2026-21858 is a CVSS 10.0 unauthenticated arbitrary file access vulnerability. The fleet fuzzes approximately 100 unique /form/* and /webhook/* paths at roughly 10K requests each, probing for active n8n workflow endpoints that accept unauthenticated form submissions. The 2.09M POST sessions (JA4H po11nn16enus_6291b5733205) are this campaign.
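Traffic from this campaign has a distinctive shape: POSTs fanned out across /form/* and /webhook/* paths. A minimal sketch of a matcher for that shape; the (method, url) input format is a hypothetical log representation, and only the two path prefixes come from the observed activity:

```python
from urllib.parse import urlparse

# The campaign described above fuzzes unauthenticated n8n entry points
# under /form/ and /webhook/.
N8N_PREFIXES = ("/form/", "/webhook/")

def looks_like_n8n_probe(method: str, url: str) -> bool:
    """Flag POSTs aimed at n8n form/webhook endpoints."""
    path = urlparse(url).path
    return method.upper() == "POST" and path.startswith(N8N_PREFIXES)

print(looks_like_n8n_probe("POST", "http://example.test/webhook/abc123"))
print(looks_like_n8n_probe("GET", "http://example.test/form/login"))
```

A burst of POSTs across many distinct /form/* paths from one source is the fuzzing signature; a legitimate n8n client hits a small, stable set of endpoints.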

One layer deeper: CVE-2025-68613, a CVSS 9.9 n8n RCE with a Metasploit module, is linked by Akamai to ZeroBot malware and MuddyWater – an Iranian APT. Bucklog is not the same as MuddyWater. We cannot make that attribution from scanning data. What we can say: the n8n CVE ecosystem is under active exploitation, the fleet has n8n as its single largest specific target, and there’s a documented Iranian APT connection to the same CVE family.

As an aside, n8n has accumulated 22 CVEs in the past three months, 10 of them rated Critical. If you still run n8n, it may be time to consider another automation platform.

Evasion and active exploitation

Beyond credential harvesting and n8n, the fleet runs a broader exploitation portfolio:

| CVE / Technique | Sessions | Target |
|---|---|---|
| CVE-2026-21858 (n8n file access) | 1,028,562 | Workflow automation |
| Double URL encoding | 75,901 | WAF bypass |
| Generic path traversal | 141,289 | LFI exploitation |
| CVE-2024-29291 (Laravel) | 13,109 | Credential leak |
| CVE-2024-44000 (WP LiteSpeed) | 12,549 | WordPress plugin |
| CVE-2020-5284 (Next.js) | 4,696 | Directory traversal |
| CVE-2025-2264 (Sante PACS) | 4,124 | Healthcare PACS |
| CVE-2017-9841 (PHPUnit) | 3,287 | Classic RCE |
| CVE-2025-48927 (TeleMessage) | 2,432 | Spring Boot heap dump |

The double URL encoding (75K sessions) specifically targets WAF pattern matching that only decodes once. If your WAF sees %252e%252e and doesn’t second-decode it, the traversal gets through.
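The single-decode blind spot is easy to demonstrate. A minimal sketch using the standard library's percent-decoding: decode once and the traversal is still hidden, decode twice and it appears, which is exactly the check a detection rule should make:

```python
from urllib.parse import unquote

def double_decode(path: str) -> str:
    """Decode percent-encoding twice, as a double-decoding WAF would."""
    return unquote(unquote(path))

probe = "/static/%252e%252e/%252e%252e/etc/passwd"
once = unquote(probe)       # '/static/%2e%2e/%2e%2e/etc/passwd' -- looks benign
twice = double_decode(probe)  # '/static/../../etc/passwd' -- traversal revealed

print(once)
print(twice)

# Detection rule: alert when a second decode changes the path AND the
# fully decoded form contains a traversal sequence.
print("../" in twice and once != twice)  # True
```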

CVE-2025-2264 targeting Sante PACS is only 4,124 sessions – small by this fleet’s standards – but medical imaging systems are critical infrastructure. Healthcare organizations should audit Sante PACS exposure regardless of session count.


Target Selection: The Conflict Angle

The fleet probes 47+ distinct sensor profile types with a pretty deliberate composition.

Perimeter and VPN devices:

Persona Sessions
Palo Alto NGFW + PAN-OS 245,995
SonicWall SonicOS + Gen7 242,628
Cisco ASA + ASA Software 158,628
pfSense 126,706
Zyxel USG40 91,060
Checkpoint Firewall-1 90,396
Juniper SRX210 89,293

Surveillance systems:

Persona Sessions Note
TrendNet IP Camera 153,971
Dahua Camera 100,773 CyberAv3ngers documented target
Intelbras Camera 100,301
Hikvision 78,572 CyberAv3ngers documented target
Geovision 71,287
Bosch Alarm Panel 74,682

CyberAv3ngers is an IRGC-affiliated group with a documented pattern of targeting Dahua and Hikvision surveillance systems. Both are in this fleet’s top surveillance targets. That could be coincidence. The temporal alignment makes coincidence a less comfortable explanation.

Phase 5 escalation began February 12. The US/Israel-Iran conflict onset: February 27. Fifteen days prior, this fleet went from 150K sessions/day to a trajectory ending at 987K. A reasonable interpretation: pre-positioning. Establishing access breadth before a conflict window opens, so that selective exploitation can begin once it does. This comes from preliminary analysis – direct attribution to any state actor is not supported by the available data – but the combination of target selection, timing, and n8n/MuddyWater overlap is hard to set aside.


What You Can Do About It

Block it

The entire /24 is unified infrastructure. No shared SSH host keys between nodes (each is unique, consistent with proper Kubernetes provisioning), but every single observable host shares the same base image, the same HASSH, and the same operational purpose. There is no evidence of legitimate third-party tenancy in this prefix.

Block 185.177.72.0/24 (AS211590 BUCKLOG) at your perimeter. If you don’t expect traffic from a French scanning cluster, this is a clean block.
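For log triage before and after deploying the block, membership in the prefix is a one-liner with the standard library's ipaddress module:

```python
import ipaddress

# The Bucklog prefix from the RIPE registration discussed above.
BUCKLOG = ipaddress.ip_network("185.177.72.0/24")

def is_bucklog(ip: str) -> bool:
    """True if the source IP falls inside the Bucklog /24."""
    return ipaddress.ip_address(ip) in BUCKLOG

print(is_bucklog("185.177.72.49"))  # True  (a core worker node)
print(is_bucklog("185.177.73.1"))   # False (adjacent prefix)
```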

Detect it

These fingerprints identify the fleet with a low false-positive rate:

| Signal | Value | Notes |
|---|---|---|
| JA4H (GET scanner) | ge11nn14enus_16e29da98f67 | 8.9M sessions, primary curl scanner |
| JA4H (POST/n8n) | po11nn16enus_6291b5733205 | 2.1M sessions, n8n exploitation |
| JA4T (K8s loopback) | 65495_2-4-8-1-3_65495_7 | MSS 65495 = Kubernetes pod routing |
| HASSH (SSH server) | 425d29fe50d8e4f5e37efb6e24bcf660 | Uniform across all 74 nodes |
| JARM (kubelet) | 3fd3fd20d00000000043d3fd3fd43d684d61a135bd962c8dd9c541ddbaefa8 | All K8s worker nodes |
| User agent | curl/8.7.1 | Combined with 14+ headers and en-US locale |

The JA4H fingerprint ge11nn14enus_16e29da98f67 is the reliable signal here — 14 headers plus en-US locale is a specific combination that doesn’t require trusting the UA string, which is trivially changed. Alert on double-encoded path traversal (%252e%252e) regardless of source.
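Because the indicators are independent (HTTP client, TCP stack, SSH daemon), matching a session on more than one of them is a stronger signal than any one alone. A minimal scoring sketch; the field names in the session dict are hypothetical, while the indicator values are the ones tabulated above:

```python
# Score a session's metadata against the fleet indicators from the table.
FLEET_INDICATORS = {
    "ja4h": {"ge11nn14enus_16e29da98f67", "po11nn16enus_6291b5733205"},
    "ja4t": {"65495_2-4-8-1-3_65495_7"},
    "hassh": {"425d29fe50d8e4f5e37efb6e24bcf660"},
}

def fleet_matches(session: dict) -> int:
    """Count how many independent fleet indicators a session matches."""
    return sum(
        session.get(field) in values
        for field, values in FLEET_INDICATORS.items()
    )

s = {"ja4h": "ge11nn14enus_16e29da98f67", "ja4t": "65495_2-4-8-1-3_65495_7"}
print(fleet_matches(s))  # 2 -- two independent indicators matched
```

A score of 2+ is a much safer alerting threshold than matching the UA string alone.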

Patch and protect

  1. Patch n8n now. CVE-2026-21858 (CVSS 10.0) and CVE-2025-68613 (CVSS 9.9, Metasploit module available) are both active targets. If patching isn’t immediate, restrict /form/* and /webhook/* to authenticated access at the network layer.
  2. Audit your .env exposure. Check that your web server blocks dotfile access. Verify .env, .aws/credentials, and .git/config aren’t reachable from your document root. A fleet running 3.5M sessions against these paths will find any that you’ve missed.
  3. Healthcare organizations: Audit Sante PACS installations for CVE-2025-2264.
  4. Monitor the ramp-up tier. The .130–.158 range (~25 IPs) is entering production now. Extend your blocks and monitoring to the full /24, not just the 9 core workers.

GNQL Queries

Fleet overview (last 7 days):

metadata.asn:AS211590 last_seen:7d

n8n exploitation activity:

metadata.asn:AS211590 tags:"n8n CVE-2026-21858 Attempt" last_seen:7d

ENV crawling:

metadata.asn:AS211590 tags:"ENV Crawler" last_seen:7d

Core worker IPs:

ip:185.177.72.13 OR ip:185.177.72.49 OR ip:185.177.72.38 OR ip:185.177.72.23 OR ip:185.177.72.52 last_seen:7d

Surveillance persona targeting (Session Explorer):

sourceMetadata.asn:AS211590 AND gnMetadata.persona.name:("Dahua Camera" OR "Hikvision" OR "TrendNet IP Camera")

What to Watch

Watch the ramp-up tier. When the ~25 nodes in the .130–.158 range reach full production, the fleet’s daily ceiling moves substantially higher than the current 987K peak.

The n8n CVE chaining risk is real. CVE-2026-21858 provides unauthenticated file access; CVE-2025-68613 provides RCE via expression evaluation. A fleet already running 1M sessions against n8n endpoints that has access to a Metasploit module for the RCE follow-on is a meaningful threat to any exposed n8n instance.

We’ll continue tracking AS211590 activity. If the conflict-tempo hypothesis holds, the next escalation point should be visible in the session data before it shows up anywhere else.