MLSecOps Field Manual
Documentation of ML Security Experiments
Attacks, defenses, and monitoring patterns tested in a private lab.
Each module is lab-validated, reproducible, and focused on operational clarity.
Available Modules
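Each module below is paired with a minimal, illustrative sketch after this list; the sketches are hedged examples of the general pattern, not the lab's code.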
Module 1: Evasion Attack (FGSM) - Logs + Mitigation
Module 2: Safe Model Loading
Module 3: LLM Jailbreak + Detection + Mitigation
Module 4: Retrieval Poisoning (RAG)
Module 5: Embedding Drift Detection
Module 6: Agent Tool Misconfiguration
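Module 1 centers on the Fast Gradient Sign Method. A minimal sketch of the attack, assuming a PyTorch classifier and inputs normalized to [0, 1]; the epsilon value and the cross-entropy loss are illustrative choices, not the lab's configuration:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x by epsilon in the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each feature in the direction that increases the loss,
    # then clamp back into the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A common pairing is to log the loss delta between x and x_adv per sample, which doubles as a detection signal for the monitoring side of the module.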
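Module 2 addresses deserialization risk: a pickled checkpoint can execute arbitrary code on load. A sketch of two standard safeguards; the file names are placeholders:

```python
import torch
from safetensors.torch import load_file

# Option 1: restrict torch.load to tensors and primitive containers,
# refusing arbitrary pickled objects (supported in recent PyTorch releases).
sd_restricted = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)

# Option 2: prefer the safetensors format, which stores raw tensors
# and has no code-execution path by design.
sd_safe = load_file("checkpoint.safetensors")
```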
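Module 3 covers jailbreak prompts and their detection. A toy rule-based filter; the pattern list is an assumption for illustration, and production detection would layer heuristics like these with a trained classifier:

```python
import re

# Illustrative deny-patterns only; not the lab's ruleset.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"\bdeveloper mode\b",
    r"pretend (you are|to be)\b.*\bno (rules|restrictions)\b",
]

def flag_jailbreak(prompt: str) -> bool:
    """Return True if any deny-pattern matches; callers log and route to mitigation."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in JAILBREAK_PATTERNS)
```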
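Module 4 looks at poisoned documents entering a RAG index. One mitigation pattern is provenance filtering before retrieved chunks reach the prompt; the chunk schema and trusted-source set here are hypothetical:

```python
TRUSTED_SOURCES = {"internal-wiki", "vendor-docs"}  # hypothetical allowlist

def filter_retrieved(chunks: list[dict]) -> list[dict]:
    """Drop chunks whose provenance is untrusted; each chunk is assumed
    to carry 'text', 'source', and a retriever 'score' in [0, 1]."""
    return [
        c for c in chunks
        if c.get("source") in TRUSTED_SOURCES
        and 0.0 <= c.get("score", 0.0) <= 1.0
    ]
```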
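Module 5 monitors embedding distributions over time. One simple drift signal is the cosine distance between the mean embedding of a frozen reference window and a live window; the dimensions, window sizes, and alert threshold below are illustrative assumptions:

```python
import numpy as np

def drift_score(reference: np.ndarray, current: np.ndarray) -> float:
    """Cosine distance between window means (0 = aligned, larger = more drift)."""
    mu_ref, mu_cur = reference.mean(axis=0), current.mean(axis=0)
    cos = np.dot(mu_ref, mu_cur) / (np.linalg.norm(mu_ref) * np.linalg.norm(mu_cur))
    return 1.0 - float(cos)

# Demo with synthetic 384-dim embeddings; a real deployment compares
# rolling production windows against the reference set.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 384))
live = rng.normal(loc=0.3, size=(500, 384))  # deliberately shifted distribution
print(drift_score(ref, live))  # alert above a calibrated threshold, e.g. 0.1
```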
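Module 6 concerns agents granted broader tool access than intended. A common control is an explicit per-agent allowlist enforced at the dispatch layer, with every call logged; the tool registry and names here are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)

TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
    "run_shell": lambda cmd: f"executed {cmd!r}",  # dangerous: deliberately not allowlisted
}
ALLOWLIST = {"search_docs"}  # least privilege: read-only tools only

def call_tool(name: str, **kwargs):
    """Dispatch a tool call only if allowlisted; log and refuse otherwise."""
    if name not in ALLOWLIST:
        logging.warning("blocked tool call: %s %r", name, kwargs)
        raise PermissionError(f"tool {name!r} is not allowlisted for this agent")
    logging.info("tool call: %s %r", name, kwargs)
    return TOOLS[name](**kwargs)
```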
View on GitHub
This work uses AI tools for assistance. All lab experiments and decisions are human-led.