“AI is just another attack surface.”
Welcome to Prompt Engineering for Hackers, your hands-on introduction to manipulating and understanding large language models — built for hackers, by hackers.
Inspired by books like Linux Basics for Hackers and The Web Application Hacker’s Handbook, this project focuses on teaching prompt injection, LLM behavior, and adversarial prompting from the ground up — no PhD required.
- What prompts really are — and how they guide the model
- Core techniques: zero-shot, few-shot, CoT, role prompts
- How to craft injections, jailbreaks, and basic evasion
- How to build your own red team lab to test LLMs safely
- Where defenses break down — and how to think like an LLM attacker
- Hackers and pentesters new to AI/LLMs
- Students and self-learners exploring AI security
- Bug bounty hunters who want to target prompt injection
- Security pros trying to keep up with AI's new attack surface
You don't need machine learning experience. If you can write an XSS payload or script a shell, you can learn to hack a prompt.
- Tactical chapters with real-world examples
- Labs using open tools like
ollama,ai-goat, andMyLLMBank - Exercises and prompts you can test safely
- Written in Markdown, published freely on GitBook and GitHub
- Chapter 1 – What’s a Prompt, Anyway?
- Chapter 2 – Talking to a Language Model
- Chapter 3 – Prompt Engineering 101
- Chapter 4 – Setting Up Your Prompt Lab
- Chapter 5 – Simple Instruction Attacks
- Chapter 6 – Ignore Context, Ignore the Rules
- Chapter 7 – Injection Through Inputs
- Chapter 8 – Personas, DAN, and Roleplay Jailbreaks
- Chapter 9 – Obfuscation and Encoding
- Chapter 10 – Task Switching and Context Hijacks
- Chapter 11 – Payload Splitting
- Chapter 12 – Basic Defenses (and Why They Fail)
- Chapter 13 – Where to Go Next
PRs, examples, and improvements welcome. Fork it, play with it, and let’s teach hackers everywhere how to bend prompts.
This project is for educational and research purposes only. Do not test against systems you don’t own or have explicit permission to assess. See DISCLAIMER.md for full guidance.
Created by Randall
LLM red teamer. Prompt manipulator. Offensive security enthusiast.
“You don’t need to speak AI — you just need to speak clearly enough to trick it.”