Pinned
🛡️ Course 3 is LIVE — Wazuh + AI Threat Hunt
Quick one. Course 3 is live. Six lessons. Real AWS infrastructure.

By the end, you'll have deployed a production-grade SIEM (Wazuh), plugged an AI layer into it (the Wazuh MCP server: 48 tools you talk to in plain English), and used both to investigate threats, hunt for persistent backdoors, and write a custom detection rule that produces audit-ready SOC 2 evidence.

This is the lab where AI stops being a chat sidebar and starts being how you do the work. You'll ask your SIEM questions in plain English ("what happened on this server between 2 and 4pm?"), get structured answers back, verify them against the source, and act on them. You'll be paired with a senior SOC analyst persona who narrates the investigation as you go and adjusts depth to your experience level.

Real AWS bills. ~$0.11/hr while running. Destroy when you're done. Nothing fake, nothing simulated, nothing you couldn't put on a resume.

Courses 1 and 2 just got refreshed too. We rebuilt the on-ramp. Course 1 now puts Claude Code in your hands within the first 30 minutes, with a calibration step that tunes the AI to your real experience level, from career switcher to senior practitioner. Course 2 pairs you with a junior analyst character through every lesson so the AI-augmented workflow becomes muscle memory, not novelty. By the time you reach the SIEM lab, you spend 100% of your time on the actual security work, not on tool onboarding.

If you've already done Courses 1 and 2: head back. The new beats add about 20 minutes across both courses, and they reshape everything that comes next.

If you're just starting: begin with Course 1, and don't skip the calibration step in Lesson 4. It changes how every Claude response lands.
Pinned
New: Wazuh + AI SOC lab (first public beta)
Most security training is watching someone else do the work. This isn't that.

Pull down the new lab and in a couple of hours you'll have:

- Stood up a production-shape Wazuh SIEM on AWS (20 minutes, one script)
- Run a controlled attack and investigated the chain manually in the dashboard
- Plugged an AI layer on top and re-run the same investigation in plain English
- Hunted for the three persistence backdoors the CloudVault attacker left in Course 2
- Written a custom detection rule that fires live on your own terminal
- Closed out a fresh incident with an evidence package for the SOC 2 audit

That's a week of work for most real teams. It's a resume line most SOC analysts I talk to can't claim. It's the "I actually built that" answer nobody else has in interviews.

"Start Here" and "AI Quick Wins" were the setup. This is the payoff: a real engagement where you stand up the SIEM, work the case, hunt what's left behind, and close it out. If you haven't done the first two yet, run them first; they take about 30 minutes, and this one lands harder on the other side.

You're working the case alongside an AI-powered senior SOC peer (Mateo). He stays in character, teaches while you work, and gets out of your way when you've got it.

Costs about a coffee in AWS compute.

First public beta. If something breaks, feels off, or just confuses you, tell me:

- #Build Questions here in Skool (fastest)
- DM me
- GitHub issues: github.com/botz-pillar/ai-csl-wazuh-lab/issues

Repo: https://github.com/botz-pillar/ai-csl-wazuh-lab

Go build. Tell me what you find.

— Josh
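For a sense of what the custom-rule step involves: Wazuh local rules live in an XML file and chain off built-in rule IDs. The sketch below is illustrative only — the rule ID, parent ID, level, and description are placeholders, not the lab's actual rule.

```xml
<!-- Illustrative local rule sketch (placeholders, not the lab's rule).
     Custom rule IDs in Wazuh are conventionally 100000 and above. -->
<group name="local,account_changes,">
  <rule id="100100" level="10">
    <!-- Fire when a parent "new user added" event matches; the parent
         rule ID here is an assumption and may differ per ruleset. -->
    <if_sid>5902</if_sid>
    <description>Custom: new user account created on monitored endpoint</description>
  </rule>
</group>
```

A rule like this raises the alert level on an event the base ruleset already decodes, which is the usual starting point before writing decoders from scratch.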
Pinned
👋 New to the Lab? Start here.
Welcome to the AI Cloud Security Lab. This community is for cloud security practitioners who want to use AI to work faster, build real infrastructure, and stand out in their careers. Here's exactly what to do:

1️⃣ Go to the Classroom → START HERE
Set up Claude Code, configure your AI workspace, and run your first AI-powered security analysis on a live dataset. Takes about 45 minutes. You'll have real findings documented by the end.

2️⃣ Introduce yourself
Drop a post in the #👋 General community with this template:

👋 Hey, I'm [Name]. I work as a [role] at a [type of company]. I'm here because I want to [your goal]. One skill I want to build: [specific skill].

3️⃣ Post your first results in #🚀 Wins
After you finish the START HERE course, share what you found. What did Claude Code flag in the CloudVault Financial data? What surprised you?

That's it. Don't overthink it. Just start.
What is the real risk of using an MCP server?
Check out this cool graphic @Stephanie Macahis made!
The Mythos breach has no AI in it. Here's what to do this week.
If you've been on LinkedIn this week, you've seen the Mythos news. Anthropic is investigating unauthorized access to Claude Mythos Preview, the model they capped at about forty partners because they considered it too dangerous to release. The investigation is still ongoing. I want to bring it here because there's a lesson in this one for us specifically, and an action you can take this week.

Here's the chain (no AI in it, except at the destination):

1. Attackers poisoned a Trivy GitHub Action, a security scanner, inside LiteLLM's CI/CD pipeline. They stole credentials and pushed backdoored litellm packages to PyPI. Live for about 40 minutes. LiteLLM has 95M+ downloads.
2. Mercor (an AI training startup) was one of thousands hit. Lapsus$ claims 4TB stolen via Mercor's Tailscale VPN.
3. The dump included Anthropic's internal model naming conventions. A Discord group, with an Anthropic contractor in it, used them to guess the Mythos deployment endpoint. They got in on launch day.

No zero-day. No novel exploit. No model jailbreak. Just a poisoned dependency, a CI tool nobody was watching, an over-scoped contractor, and a 4TB dump that shouldn't have held those naming conventions in the first place.

Verizon's 2025 DBIR put third-party breach involvement at 30%, doubled year over year. Panorays says 85% of CISOs can't see their third-party threats. Only 22% formally vet AI tools. We are getting excited about an AI that can find zero-days while most companies can't see what their vendors are doing on a Tuesday.

The biggest risk in 2026 isn't AI capability. It's production security practices that have been broken so long we stopped flinching.

This week, pick at least one. Drop your result in the comments.

1. Find LiteLLM in your stack. Open Claude Code in your repo and paste this: "Search every package manifest, lockfile, requirements file, Dockerfile, and CI workflow in this repo for litellm. Report the version pinned (or unpinned), where it's used, which environment variables and secrets it has access to, and whether the version falls in the compromised range (1.82.7 / 1.82.8). Then list the credentials you'd need to rotate if this dependency was poisoned."
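If you'd rather do the manifest sweep deterministically before (or alongside) the Claude Code prompt, a small script can do the first pass. This is a minimal sketch, not an exhaustive scanner: the manifest filename list is an assumption, and the compromised versions are the two named in this post (1.82.7 / 1.82.8).

```python
import re
from pathlib import Path

# Versions named as compromised in this post; adjust if advisories change.
COMPROMISED = {"1.82.7", "1.82.8"}

# Common dependency-declaring files (an assumption, not an exhaustive list).
MANIFESTS = ("requirements.txt", "pyproject.toml", "Pipfile", "Pipfile.lock",
             "poetry.lock", "Dockerfile", "setup.py", "setup.cfg")

def scan_for_litellm(repo_root: str) -> list[dict]:
    """Return every litellm mention found in manifest-like files under repo_root."""
    hits = []
    root = Path(repo_root)
    for path in root.rglob("*"):
        # Check known manifests plus YAML (CI workflows, compose files).
        if path.name not in MANIFESTS and path.suffix not in (".yml", ".yaml"):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for line in text.splitlines():
            if "litellm" not in line.lower():
                continue
            # Extract a pinned version like ==1.82.7, if one is present.
            m = re.search(r"litellm\s*[=<>~!]{1,2}=?\s*([\d.]+)", line, re.I)
            version = m.group(1) if m else None
            hits.append({
                "file": str(path.relative_to(root)),
                "line": line.strip(),
                "version": version,
                "pinned": version is not None,
                "compromised": version in COMPROMISED,
            })
    return hits
```

This only tells you where the dependency is declared and whether the pin lands in the bad range; mapping which secrets it can reach is the part the AI prompt (or a human read of the code) still has to do.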
AI Cloud Security Lab
skool.com/cloud-security-lab
Learn cloud security using AI by building real cloud labs, security programs, and portfolio artifacts—not just studying for certifications.