Skip to content

vanohj/prompt-engineering-for-hackers

 
 

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

4 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Prompt Engineering for Hackers

“AI is just another attack surface.”

Welcome to Prompt Engineering for Hackers, your hands-on introduction to manipulating and understanding large language models — built for hackers, by hackers.

Inspired by books like Linux Basics for Hackers and The Web Application Hacker’s Handbook, this project focuses on teaching prompt injection, LLM behavior, and adversarial prompting from the ground up — no PhD required.


📚 What You'll Learn

  • What prompts really are — and how they guide the model
  • Core techniques: zero-shot, few-shot, CoT, role prompts
  • How to craft injections, jailbreaks, and basic evasion
  • How to build your own red team lab to test LLMs safely
  • Where defenses break down — and how to think like an LLM attacker

🧑‍💻 Who This Book is For

  • Hackers and pentesters new to AI/LLMs
  • Students and self-learners exploring AI security
  • Bug bounty hunters who want to target prompt injection
  • Security pros trying to keep up with AI's new attack surface

You don't need machine learning experience. If you can write an XSS payload or script a shell, you can learn to hack a prompt.


⚔️ What's Inside

  • Tactical chapters with real-world examples
  • Labs using open tools like ollama, ai-goat, and MyLLMBank
  • Exercises and prompts you can test safely
  • Written in Markdown, published freely on GitBook and GitHub

Summary

🧠 Part I – The Basics

💀 Part II – Prompt Hacking Begins

🧪 Part III – Going Deeper


📖 Read Online (Coming Soon)

GitBook: promptengineeringforhackers.gitbook.io


✍️ Contribute

PRs, examples, and improvements welcome. Fork it, play with it, and let’s teach hackers everywhere how to bend prompts.


⚠️ Disclaimer

This project is for educational and research purposes only. Do not test against systems you don’t own or have explicit permission to assess. See DISCLAIMER.md for full guidance.


🧠 Author

Created by Randall
LLM red teamer. Prompt manipulator. Offensive security enthusiast.


“You don’t need to speak AI — you just need to speak clearly enough to trick it.”

About

Prompt Engineering for Hackers: A Hands-On Intro to LLMs, Jailbreaks, and Adversarial Prompting

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors