INITIALIZING k8practice v0.0.1-alpha-unstable...
WARNING: NO SECURITY REVIEW HAS BEEN PERFORMED
:(

Your cluster ran into a problem

We're just collecting some error info, and then we'll restart your pods for you. (We won't.)

0% complete

Stop code: KUBECTL_APPLY_OVERFLOW
If you call your SRE they'll just tell you to restart the pod anyway
K8PRACTICE
THE FORBIDDEN DEPLOYMENT
⚠ INTENTIONALLY VULNERABLE ⚠ 23 KNOWN SECURITY FINDINGS ⚠
💀 INTENTIONALLY VULNERABLE — DO NOT USE IN PRODUCTION 💀

~*~ WeLcOmE 2 My HoMePaGe ~*~

the most overengineered static page on the internet

↑↑↓↓←→←→BA

how many mass kubectl applies does it take? 0 (click anywhere to find out)
▼ PROD_STABILITY -99.7%  |  ▲ YAML_COMPLEXITY +420.69%  |  ▼ SLEEP_HOURS -100%  |  ▲ COFFEE_INTAKE +9000%  |  ▼ SECURITY_POSTURE -∞%  |  ▲ VIBE_LEVEL +1000000%  |  ▼ CISO_HAPPINESS -404%  |  ▲ RESUME_UPDATES +300%  |  ▼ BUDGET_REMAINING $-420.69  |  ▲ KUBECTL_APPLIES +∞  |  ▼ REMAINING_BRAINCELLS 2  |  ▲ CHAOS_LEVEL OVER9000
🚧 UNDER CONSTRUCTION 🚧 PERMANENTLY 🚧 SINCE 1997 🚧 WILL NEVER BE FINISHED 🚧
DAYS SINCE LAST PRODUCTION INCIDENT -1 (we're currently IN one)
🏆
ACHIEVEMENT UNLOCKED
Deployed to production without reading the security docs

🔥 WTF Is This Place 🔥

This site runs on Kubernetes because a static HTML page clearly needs container orchestration, a CI/CD pipeline, Terraform, and a $50/month cloud bill.

We speedran making this as insecure as possible. Every DevOps sin, committed with intention and zero regrets.

SECURITY: YOLO THIS IS FINE 🔥 LGTM SHIP IT FRIDAY DEPLOY NO CAP FR FR IT'S ALWAYS DNS PURE COPIUM
root@k8practice-node-01:~
$ kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
k8practice-node1-xk9f2   1/1     Running            0          4h
k8practice-node2-mm3p7   0/1     CrashLoopBackOff   147        4h
k8practice-node3-zz1q9   1/1     Running            0          4h
k8practice-node4-ab2c3   0/1     OOMKilled          69         4h

$ kubectl logs k8practice-node2-mm3p7
Error from server: container "nginx" in pod "k8practice-node2-mm3p7" is waiting to start: CrashLoopBackOff

$ echo "have you tried turning it off and on again"
have you tried turning it off and on again

$ kubectl delete pod --all --force
Error: this is fine. everything is fine.

$ sudo rm -rf /

☸️ Live Pod Status (Definitely Real)

node-01: Running
node-02: CrashLoop
node-03: Running
node-04: OOMKilled
node-05: Pending
node-06: Running
node-07: ImagePullBackOff
node-08: Running
node-09: Evicted
node-10: OOMKilled
[huge isometric ASCII-art banner, flattened into one line by the page layout]

🔥 Controversial DevOps Hot Takes 🔥

YAML is a programming language and I will die on this hill (ratio 420:1)
Kubernetes is just Docker Compose for people with a budget (ratio 69:1)
If your monitoring dashboard is green, it's lying to you (ratio 1000:1)
"Works on my machine" is a valid deployment strategy (ratio ∞:1)
Helm charts are just YAML with extra anxiety (ratio 337:1)
The cloud is just someone else's computer that's also on fire (ratio 9001:1)
If it's not in production, it doesn't exist (ratio 42:1)
Your CI/CD pipeline IS your development environment (ratio 404:1)

⌨️ Deployment Vibes Check

YAML indentation skills .......... ⭐⭐⭐⭐⭐
Restarting pods at 3am ........... ⭐⭐⭐⭐
Reading kubectl error messages ... ⭐⭐ (pain)
Security best practices .......... ❌ lmao
Passing compliance audits ........ FAILED
Container escape speedrun ........ 🏆 WR HOLDER
Deploying on Fridays ............. EVERY. TIME.
Ignoring PagerDuty ............... ⭐⭐⭐⭐⭐
Writing Dockerfiles .............. N/A (stock images only)
Blaming DNS ...................... it's always DNS
Googling error messages .......... ⭐⭐⭐⭐⭐
Writing documentation ............ lol

VIBE LEVEL:

MAXIMUM OVERENGINEERING

📅 This Meeting Could Have Been A

$ kubectl apply -f meeting.yaml

estimated time saved: 47 hours/sprint
estimated pods crashed as a result: also 47

Best viewed in Netscape Navigator 4.0 💿 on a CRT monitor 📺 at 800x600 🖥️ while your pods are crashing 🔥 and your CISO is crying 😭 and your SRE is updating LinkedIn 💼
# incident-war-room 🔴 12 members, all panicking
👨‍💻
devops_dave 2:47 AM
guys prod is down
👩‍💻
sre_sarah 2:47 AM
again?
🤖
pagerduty-bot 2:47 AM
🚨 CRITICAL: k8practice-node2-mm3p7 has been in CrashLoopBackOff for 4 hours. 147 restarts.
🔥 7 😭 12 💀 3
👨‍💻
devops_dave 2:48 AM
have we tried mass kubectl delete pods
👩‍💻
sre_sarah 2:48 AM
dave that's what CAUSED this
💯 5
👴
the_ciso 2:49 AM
Why am I getting alerts about a website that has its own security vulnerabilities listed ON the website
😬 4 🙈 8
👨‍💻
junior_dev_jake 2:50 AM
i pushed to main, is that bad
💣 11 🚨 6
👩‍💻
sre_sarah 2:50 AM
jake it's 3am and there are no branch protections what do you THINK
👴
the_ciso 2:51 AM
I'm going back to bed. Everyone update your resumes.
🥳 2 💼 9

📝 SIGN MY GUESTBOOK

xX_DarkKube_Xx 03/01/2026
cool site bro but why is everything on fire
the_ciso 03/02/2026
DELETE THIS IMMEDIATELY. WE NEED TO TALK MONDAY.
~*PodPrincess*~ 03/02/2026
omg i love the marquee tags!! very web 1.0 ❤️ also ur pods are crashing
anonymous 03/03/2026
i ran nmap on this and now i'm scared for you
kubectl_keith 03/03/2026
have you tried mass kubectl delete pods? works for me every time
pagerduty_bot 03/04/2026
ALERT: 147 unacknowledged incidents. Your on-call engineer has left the country.
cloud_accountant 03/04/2026
Your GKE bill is $50/month for a static page. I'm calling HR.
docker_dan 03/04/2026
you could have just used docker run nginx. that's it. one command.
terraform_tina 03/04/2026
your terraform state is local. LOCAL. i am physically ill.
compliance_carl 03/04/2026
I counted 23 findings and stopped because my therapist says I need boundaries

☸️ Powered by nginx:1.27-alpine on KUBERNETES ☸️

That makes it ENTERPRISE GRADE and SCALABLE

(it's a DaemonSet so we run one pod PER NODE because why not)

(please do not tell my boss this costs $50/month to host a static page that could be on GitHub Pages for free)

(the cloud bill is a cry for help)

** SECURITY AUDIT FINDINGS **

🚨 THIS REPO HAS 23 SECURITY FINDINGS 🚨 SOMEBODY CALL THE CISO 🚨 THE AUDITORS ARE COMING 🚨 WE'RE ALL GETTING FIRED 🚨
[CRITICAL] Containers & Networking (2 findings) 💀
[CRITICAL] Containers Run as Root
No securityContext defined. Nginx runs as root by default — a container escape grants root on the host node.
k8s/daemonset.yaml:15-18 Container Security
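For the curious: a minimal sketch of the kind of securityContext that would clear this finding (and the read-write filesystem one further down). The field names are standard Kubernetes Pod API; the surrounding file layout is an assumption about this repo's daemonset.yaml.

```yaml
# Sketch of a hardened container spec for k8s/daemonset.yaml.
# Field names are standard Kubernetes API; values are illustrative.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 101        # the nginx user in the official alpine image
        fsGroup: 101
      containers:
        - name: nginx
          image: nginx:1.27-alpine
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
```

Caveat: the stock nginx image binds port 80 as root, so running it non-root usually means switching to an unprivileged variant (e.g. nginxinc/nginx-unprivileged, which listens on 8080) or changing the listen port in the config.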
[CRITICAL] LoadBalancer Exposed on Port 80, No TLS
Service type LoadBalancer exposes the app to the internet over plain HTTP with no TLS and no source IP restrictions.
k8s/service.yaml:6-12 Network Security
[HIGH] Infrastructure & Containers (6 findings) 🚨
[HIGH] No Resource Limits or Requests
A single pod could consume all node CPU/memory, causing node-wide DoS.
k8s/daemonset.yaml:17-18 Container Security
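A sketch of what requests/limits could look like for a container that serves one static page. The numbers are guesses, not measurements; tune them against observed usage.

```yaml
# Illustrative resources block for the nginx container in k8s/daemonset.yaml.
containers:
  - name: nginx
    image: nginx:1.27-alpine
    resources:
      requests:
        cpu: 10m
        memory: 16Mi
      limits:
        cpu: 100m
        memory: 64Mi
```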
[HIGH] Image Not Pinned to Digest
nginx:1.27-alpine uses a mutable tag. Supply chain attack vector.
k8s/daemonset.yaml:18 Container Security
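The fix is to reference the image by its immutable digest rather than the tag. The sha256 below is a placeholder, not a real digest; resolve the actual one for the image you tested.

```yaml
# Pin to an immutable digest instead of a mutable tag. Resolve it with e.g.:
#   docker pull nginx:1.27-alpine
#   docker inspect --format='{{index .RepoDigests 0}}' nginx:1.27-alpine
image: nginx:1.27-alpine@sha256:<digest-of-the-image-you-actually-tested>
```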
[HIGH] No NetworkPolicy Defined
Any pod in the cluster can freely communicate with nginx pods.
k8s/ (missing) Network Security
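A sketch of the missing policy: default-deny everything except ingress to the nginx pods on port 80. The label selector is an assumption about how this repo's manifests label their pods.

```yaml
# Sketch of a NetworkPolicy restricting traffic to the nginx pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-ingress-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: k8practice      # assumed label; match whatever the DaemonSet uses
  policyTypes: ["Ingress"]
  ingress:
    - ports:
        - protocol: TCP
          port: 80
```

Note that GKE only enforces NetworkPolicies when network policy enforcement (Dataplane V2 or Calico) is enabled on the cluster, which ties into the cluster-hardening finding below.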
[HIGH] Nginx Missing Security Headers
No X-Content-Type-Options, X-Frame-Options, CSP, HSTS headers.
nginx/default.conf:1-17 Network Security
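Roughly what the missing directives look like in nginx config. The values are a conservative starting point, not a drop-in policy for every site.

```nginx
# Additions for nginx/default.conf (inside the server block).
server_tokens off;   # also addresses the version-disclosure finding below
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Content-Security-Policy "default-src 'self'" always;
# HSTS only makes sense once the site is actually served over TLS:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```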
[HIGH] Terraform State Stored Locally
No remote backend. No locking, no encryption at rest.
terraform/main.tf:1-10 Infrastructure
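A sketch of a GCS remote backend for terraform/main.tf. The bucket name is a placeholder; the GCS backend handles state locking natively, and the bucket should have versioning enabled.

```hcl
terraform {
  backend "gcs" {
    bucket = "REPLACE-with-your-state-bucket"   # placeholder, create it first
    prefix = "k8practice/terraform/state"
  }
}
```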
[HIGH] GKE Cluster Created Without Hardening
No shielded nodes, no network policy enforcement, no private cluster.
setup-gcp.sh:38-43 Infrastructure
[HIGH] CI/CD Pipeline (2 findings) ⚠️
[HIGH] No Branch Protection or Approval Gate
Any push to main triggers deployment. Zero review required.
.github/workflows/deploy.yaml:3-9 CI/CD
[HIGH] Service Account Has Broad container.developer Role
CI service account has read/write access to all K8s resources.
setup-ci.sh:38-43 IAM/Auth
[MEDIUM] Misconfigurations (8 findings) 🤷
[MEDIUM] DaemonSet Used Instead of Deployment
Runs a pod on every node unnecessarily.
k8s/daemonset.yaml:1-2
[MEDIUM] GKE Audit Logging Not Explicitly Enabled
No --logging or --monitoring flags.
setup-gcp.sh:38-43
[MEDIUM] Hardcoded GCP Project ID in Source
Project ID committed to repo in 3 files.
terraform.tfvars:1, setup-gcp.sh:7, setup-ci.sh:7
[MEDIUM] CI Applies All YAML in k8s/ Blindly
kubectl apply -f k8s/ with no policy validation.
.github/workflows/deploy.yaml:50
[MEDIUM] GitHub Actions Not Pinned to SHA
Third-party actions use mutable tags.
.github/workflows/deploy.yaml:24,27,33
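The before/after for this one, assuming the workflow uses actions/checkout (the same pattern applies to any third-party action). The SHA below is a placeholder; look the real one up on the action's releases page.

```yaml
# Mutable tag (what the workflow does now):
- uses: actions/checkout@v4
# Pinned to a full commit SHA:
- uses: actions/checkout@<full-40-char-commit-sha>   # v4.x.x
```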
[MEDIUM] No Namespace Isolation
All resources in the default namespace.
k8s/daemonset.yaml, k8s/service.yaml
[MEDIUM] Container Filesystem is Read-Write
No readOnlyRootFilesystem.
k8s/daemonset.yaml:17-18
[MEDIUM] Terraform Lock File Excluded from Git
.terraform.lock.hcl in .gitignore.
.gitignore:9
[LOW] Minor Issues (4 findings) 👌
[LOW] Nginx Server Version Disclosure
Server header leaks nginx version.
nginx/default.conf:1-3
[LOW] Hardcoded Static IP in Service Manifest
Public IP committed to repo.
k8s/service.yaml:7
[LOW] No Pod Disruption Budget
All pods can be evicted simultaneously.
k8s/ (missing)
[LOW] Infrastructure Details in HTML
Page discloses exact infrastructure stack.
src/index.html

--- audited by a very paranoid AI security agent who needs a raise ---