

AI Hacking Workshop
Understanding Adversarial Prompting & Exploitation Techniques in AI Systems
📍 In-Person · Mumbai | 💻 Bring Your Laptop - Live Exercises Included
AI systems are only as secure as the prompts they trust. In this hands-on workshop, we pull back the curtain on the techniques attackers use to manipulate, deceive, and break large language models - and what you can actually do about it.
What You'll Learn & Practice:
01 - Anatomy of Prompt Injection Attacks:Â Understand how crafted inputs hijack AI instructions and override system behaviour in real-world deployments.
02 - Jailbreaking Techniques in the Wild:Â Explore the most effective methods used to bypass constraints in modern AI systems.
03 - Real-World Risk Scenarios:Â Walk through concrete attack cases that show how adversarial prompting translates to tangible business and security risk.
04 - Building Stronger AI Defences: Leave with actionable strategies for hardening AI systems — from input validation to architectural safeguards.
05 - Live Exercises - Bring Your Laptop:Â This isn't just theory. You'll actively attempt prompt injection and jailbreaking attacks in a controlled environment, guided step by step.
Come ready to hack - your laptop is your tool for the day.
Ideal for security professionals, AI developers, and anyone building with or around LLMs.