Tunnel Security Analysis

Cybersecurity · Friday, March 27, 2026 · 8 min read

Security Analysis: Tunneling + Docker Nginx Proxy
from Home/Office Network
Prepared for: Dr. Mohd Hanif Mohd Ramli — Eagle Attech / TXIO Fusion Solutions
Date: March 2026
Scope: Apps hosted via tunneling (e.g., Cloudflare Tunnel / SSH tunnel / ngrok) through
Docker nginx reverse proxy on local network
1. Architecture Overview
[Internet Users]
        │
        ▼
[Tunnel Endpoint]          ← Cloudflare / ngrok / SSH tunnel
        │
        ▼ (encrypted tunnel)
[Home/Office Router]
        │
        ▼
[Docker Host Machine]
 ├─ [nginx-proxy container]   ← SSL termination, reverse proxy
 ├─ [certbot container]       ← Let's Encrypt renewal
 ├─ [fusion.txio.live]        ← App container
 ├─ [other apps...]           ← ARBITER, GRADSENSE, SCANIX, etc.
 └─ [database containers]     ← PostgreSQL / MariaDB
2. Security Strengths (What’s Working Well)
2.1 Tunnel as Shield
The tunnel service acts as a shield — your home/office IP is never exposed to the public
internet. Attackers cannot directly port-scan your router or Docker host. This is significantly
safer than traditional port forwarding.
2.2 Let’s Encrypt SSL
Your SSL setup with certbot is solid. TLSv1.2/1.3 with modern cipher suites means traffic
between users and the tunnel endpoint is encrypted. HSTS enforcement prevents
downgrade attacks.
2.3 Docker Isolation
Each app runs in its own container with its own network namespace. A compromised app
container doesn’t automatically grant access to the host filesystem or other containers
(unless misconfigured).
2.4 No Direct Port Exposure
Unlike traditional hosting where ports 80/443 are forwarded through the router, tunneling
means zero inbound ports need to be open on your firewall.
3. Security Weak Points
3.1 CRITICAL — The Host Machine is the Single Point of Trust
Risk Level: HIGH
Your Docker host machine sits on the same network as your personal/office devices. If any
container is compromised and escapes Docker isolation (container escape vulnerability),
the attacker lands directly on your LAN with access to:
Your workstations (twone.local, seaicTHREE)
NAS / file shares
Other IoT devices on the network
Internal services not meant to be public
Mitigation:
Run Docker containers with --read-only filesystem where possible
Never run containers as --privileged
Use Docker user namespaces (userns-remap) so container root ≠ host root
Place Docker host on a separate VLAN from personal/office devices
Enable AppArmor or SELinux profiles for containers
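A minimal sketch of these hardening flags in a docker-compose.yml service definition (service and image names are placeholders, not your actual stack):

```yaml
# Illustrative docker-compose.yml fragment — names are placeholders.
services:
  app:
    image: example/app:1.0          # placeholder image
    read_only: true                 # immutable root filesystem
    tmpfs:
      - /tmp                        # writable scratch space only
    security_opt:
      - no-new-privileges:true      # block setuid privilege escalation
      - apparmor:docker-default     # default AppArmor profile
    cap_drop:
      - ALL                         # drop all Linux capabilities
```

Note that userns-remap is a daemon-level setting, not per-service: add "userns-remap": "default" to /etc/docker/daemon.json and restart the Docker daemon.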
3.2 CRITICAL — Database Containers on Same Host
Risk Level: HIGH

If PostgreSQL/MariaDB containers are on the same Docker network as web-facing apps, a
SQL injection or app compromise gives direct database access without needing to traverse
any network boundary.
Mitigation:
Use separate Docker networks: one for “frontend” (nginx + apps) and one for “backend”
(databases)
Never publish database ports to the host (avoid the ports: directive on database
services) — use internal Docker networking only
Set strong, unique passwords for all databases (not defaults)
Enable PostgreSQL pg_hba.conf restrictions even inside Docker
Regular automated backups to an off-host location (DO Spaces, external drive)
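The frontend/backend split can be sketched in docker-compose like this (service names are illustrative; adapt to your actual stack):

```yaml
# Illustrative fragment: two Docker networks, only the app bridges both.
services:
  nginx-proxy:
    image: nginx:alpine
    networks: [frontend]
  app:
    image: example/app:1.0          # placeholder image
    networks: [frontend, backend]   # reachable from nginx AND the database
  db:
    image: postgres:16-alpine
    networks: [backend]             # no ports: mapping — invisible to the LAN
networks:
  frontend: {}
  backend:
    internal: true                  # no outbound internet from this network
```

The internal: true flag also blocks the database's own outbound internet access, which is usually desirable but worth knowing before you rely on it for things like replication to an external host.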
3.3 HIGH — Tunnel Credential / Token Security
Risk Level: HIGH
The tunnel daemon (cloudflared, ngrok agent, etc.) authenticates using a token or
credential file. If this token is:
Stored in plaintext in docker-compose.yml
Committed to a git repository
Accessible to other containers
...an attacker who gains any foothold can hijack your tunnel, redirect traffic, or create new
tunnels into your network.
Mitigation:
Store tunnel credentials using Docker secrets, not environment variables in compose
files
Never commit tokens to Git (add to .gitignore, use .env files with restricted
permissions)
Rotate tunnel tokens periodically
Monitor tunnel dashboard for unauthorized tunnel connections
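One possible wiring with Docker Compose secrets, assuming a Cloudflare Tunnel whose config.yml points its credentials-file at the mounted secret (paths and names here are illustrative):

```yaml
# Illustrative fragment: keep the tunnel credential out of compose/env vars.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    # config.yml contains: credentials-file: /run/secrets/tunnel_creds
    command: tunnel --config /etc/cloudflared/config.yml run
    secrets:
      - tunnel_creds
secrets:
  tunnel_creds:
    file: ./secrets/tunnel-creds.json   # chmod 600, listed in .gitignore
```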
3.4 HIGH — No Web Application Firewall (WAF)
Risk Level: HIGH

Your nginx proxy handles SSL and routing, but raw application traffic (SQL injection, XSS,
path traversal, API abuse) passes through unfiltered to your apps.
Mitigation:
If using Cloudflare Tunnel, enable Cloudflare WAF rules (free tier has basic protection)
Consider adding ModSecurity with OWASP Core Rule Set to nginx
Implement rate limiting in nginx (you already have limit_req_zone — verify it’s active
on all apps)
Add request body size limits (client_max_body_size)
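A sketch of the rate-limiting and body-size directives in nginx (zone name, upstream, and limits are placeholders to tune per app; SSL certificate directives omitted for brevity):

```nginx
# Illustrative nginx fragment — names and limits are placeholders.
http {
    # 10 requests/second per client IP, 10 MB of shared state
    limit_req_zone $binary_remote_addr zone=app_limit:10m rate=10r/s;

    server {
        listen 443 ssl;
        server_name fusion.txio.live;
        client_max_body_size 10m;           # reject oversized request bodies

        location / {
            limit_req zone=app_limit burst=20 nodelay;  # allow short bursts
            proxy_pass http://app_upstream;             # placeholder upstream
        }
    }
}
```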
3.5 MEDIUM — Container Image Supply Chain
Risk Level: MEDIUM
Pulling nginx:alpine, certbot/certbot, node:18, etc. from Docker Hub means trusting
upstream maintainers. Compromised base images have occurred in the wild.
Mitigation:
Pin specific image digests, not just tags (e.g., nginx:alpine@sha256:abc123...)
Use Trivy or Docker Scout (the successor to the retired docker scan) to scan images for known vulnerabilities
Rebuild and update images regularly (monthly minimum)
Consider using official images only
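Digest pinning might look like this; the digest below is a placeholder to be resolved against the image you actually run:

```yaml
# Illustrative: pin by digest so the tag can't be silently re-pointed upstream.
# Resolve the real digest first, e.g.:
#   docker buildx imagetools inspect nginx:alpine
services:
  nginx-proxy:
    image: nginx:alpine@sha256:<digest-from-inspect>   # placeholder digest
```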
3.6 MEDIUM — No Centralized Logging / Intrusion Detection
Risk Level: MEDIUM
Without centralized logging, you won’t know if someone is actively probing your apps,
attempting brute force, or has already gained access.
Mitigation:
Aggregate all container logs to a central location (Loki + Grafana, or Papertrail)
Monitor nginx access logs for anomalous patterns (unusual user agents, repeated
4xx/5xx, geographic anomalies)
Set up fail2ban on the Docker host to block repeated failed SSH attempts
Enable Docker daemon audit logging
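A minimal fail2ban jail for SSH on the host could look like this (thresholds are starting points, not tuned recommendations):

```ini
; Illustrative /etc/fail2ban/jail.local fragment for the Docker host.
[sshd]
enabled  = true
port     = ssh
maxretry = 5      ; failed attempts before a ban
findtime = 10m    ; window in which attempts are counted
bantime  = 1h     ; how long the offending IP stays blocked
```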

3.7 MEDIUM — SSL Termination Location
Risk Level: MEDIUM
Depending on your tunnel type, SSL may terminate at the tunnel provider (Cloudflare) and
re-encrypt to your origin, OR traffic may be unencrypted inside the tunnel. If using
Cloudflare Tunnel with “Full (Strict)” mode, this is fine. If using “Flexible” or bare SSH
tunnels, traffic between the tunnel endpoint and your nginx may be plaintext.
Mitigation:
Use Cloudflare “Full (Strict)” SSL mode
Ensure your nginx still serves valid SSL even behind the tunnel (defense in depth)
For SSH tunnels, the SSH encryption covers the channel, but verify no local network
sniffing is possible
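With Cloudflare Tunnel, keeping the leg from cloudflared to nginx encrypted is an ingress setting. A sketch of config.yml (tunnel ID and paths are placeholders):

```yaml
# Illustrative cloudflared config.yml: re-encrypt to the origin nginx.
tunnel: <tunnel-id>                      # placeholder
credentials-file: /run/secrets/tunnel_creds
ingress:
  - hostname: fusion.txio.live
    service: https://nginx-proxy:443    # https, not http, to the origin
    originRequest:
      noTLSVerify: false                # require a valid cert on nginx
  - service: http_status:404            # required catch-all rule
```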
3.8 LOW-MEDIUM — Docker Compose / Host Hardening
Risk Level: LOW-MEDIUM
Common oversights on the Docker host:
Docker daemon exposed on TCP without TLS (default is socket-only, but sometimes
changed)
Unused ports still listening
Host OS not regularly updated
No automatic security updates
Mitigation:
Keep Docker daemon on Unix socket only (never expose on 0.0.0.0:2375)
Enable unattended-upgrades on Ubuntu/Debian host
Run docker system prune periodically to remove dangling images/containers
Disable SSH password auth; use key-only
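The SSH hardening above is a few lines in /etc/ssh/sshd_config (restart sshd afterwards):

```
PasswordAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
```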
4. Break Points — Where the System Fails
A “break point” is the threshold where your setup stops functioning or becomes untenable.

4.1 Traffic / Performance Break Point
Scenario: Your home/office internet upload bandwidth is the bottleneck.
Upload Speed   Concurrent Users     Feasibility
10 Mbps        ~20-30 light users   Workable for internal tools
30 Mbps        ~50-80 light users   Adequate for small SaaS
50 Mbps        ~100+ light users    Starts struggling with media
100 Mbps       ~200+ light users    Still limited vs cloud
Break point triggers:
SCANIX POS system with high daily transactions (hundreds of concurrent API calls)
Multiple apps sharing the same upload pipe
Database-heavy operations (GRADSENSE, ARBITER) competing with user traffic
Video/large file uploads consuming all bandwidth
When to migrate: When you consistently see >70% upload utilization or users report
latency >500ms.
4.2 Availability Break Point
Scenario: Home/office infrastructure lacks redundancy.
Event                   Downtime           Impact
ISP outage              Hours to days      All services down
Power outage (no UPS)   Immediate          All services down + potential data corruption
Router reboot/failure   Minutes to hours   All services down
Docker host crash       Minutes (manual)   All services down
Tunnel daemon crash     Until restart      All services down
Break point: Any paying client (SCANIX POS users, Kilim Geopark) who needs >99%
uptime cannot be reliably served from a home/office setup. One TNB power cut or TM fiber
outage and your client’s business stops.

When to migrate: When you have SLA commitments or revenue-critical apps. Target: move
production workloads to cloud (DigitalOcean/Hetzner), keep development/staging on home
tunnel.
4.3 Security Break Point
Scenario: Attack surface grows with each new app.
Each new container you expose through the tunnel is another potential entry point. With
ARBITER, GRADSENSE, SCANIX, Eagle Attech website, EkotaniHub, ParkWise, and
OrderFlow — that’s 7+ attack surfaces on one machine, on one network, behind one tunnel.
Break point: A single vulnerability in any one app compromises all apps and your local
network. The blast radius is your entire digital life — work files, research data, student
records, business financials.
4.4 Compliance Break Point
Scenario: University (UiTM) or client data regulations.
GRADSENSE handles student assessment data — PDPA (Malaysia) applies
SCANIX handles transaction/payment data
If any client is government-linked, MyDigital / MAMPU compliance may apply
Break point: Hosting regulated data on a home network with no formal security audit trail,
no backup verification, and shared infrastructure is a compliance risk. JKSAU approval for
GRADSENSE on UiTM infrastructure is the right move — keep that data off the home tunnel.
5. Recommended Action Plan
Immediate (This Week)
1. Verify Docker network segmentation — separate frontend and backend Docker networks
2. Check container privileges — ensure no --privileged containers, enable read-only where possible
3. Secure tunnel credentials — move from env vars / compose file to Docker secrets
4. Verify Cloudflare SSL mode is “Full (Strict)”
Short-term (This Month)
5. Set up VLAN isolation — separate Docker host from personal/office devices on the network
6. Enable basic WAF — Cloudflare WAF rules or ModSecurity on nginx
7. Implement log aggregation — at minimum, centralize nginx access/error logs
8. Automate backups — databases backed up to off-site (DO Spaces) daily
Medium-term (Next Quarter)
9. Migrate production/revenue apps to cloud — SCANIX, client-facing systems move to DigitalOcean
10. Keep development/staging on home tunnel — this is where tunneling shines
11. Implement container image scanning — Trivy in CI/CD pipeline
12. Document disaster recovery procedure — how to restore everything from backup
Long-term Strategy
Workload                  Where to Host                 Reason
SCANIX (production POS)   Cloud (DO/Hetzner)            Revenue-critical, needs uptime
GRADSENSE                 UiTM infrastructure (JKSAU)   Student data compliance
ARBITER (development)     Home tunnel                   Internal tool, low risk
Eagle Attech website      Cloud or Cloudflare Pages     Public-facing, needs reliability
Client demos / staging    Home tunnel                   Perfect use case
Personal projects / R&D   Home tunnel                   Perfect use case
6. Summary Verdict
Your tunneling + Docker nginx proxy setup is well-suited for development, demos, and
internal tools but carries significant risk for production workloads with paying clients or
regulated data.
The single biggest risk is blast radius — everything sits on one machine, on one network. A
breach in any app means a breach in everything. The single biggest operational risk is
availability — your ISP and power supply are your SLA.
The strategic move: split your workloads. Keep the tunnel for what it’s great at (dev,
staging, internal tools, personal projects). Move revenue and compliance workloads to
proper cloud infrastructure with the RM 910/month budget we discussed previously.
Analysis by Zeya — March 2026