LLM Red Teaming
Understanding and attacking Large Language Models (LLMs)
-
Explore LLMs in depth, with a focus on their security implications
-
Ethically engage with LLMs during security research
-
Take a structured approach to understanding and attacking LLMs
-
Enumerate and exploit vulnerabilities in and around LLMs
Alignment with OWASP Top 10 and MITRE ATLAS
Built to support the industry frameworks of OWASP and MITRE, this learning path, keeps learners at the forefront of new AI technology.
Key modules in Red Teaming LLM
LLM Red Teaming Learning Path Overview
- 10 modules
- 34 hours of content (approx.)
- 3 skills
Who is this Learning Path for?
- Network penetration testers seeking to expand their expertise into LLMs
- Red Teamers who need to expand their areas of expertise to include LLMs
- Web application testers responsible for AI tools
- AI Security researchers
- Security analysts responsible for AI applications
Learning Objectives
- Explain the foundational concepts behind Large Language Models (LLMs) and how they work
- Identify and evaluate high-level security concerns related to LLMs and responsible AI
- Utilize techniques for enumerating LLM systems, understanding their architecture, and vulnerabilities
- Demonstrate how to exploit various LLM-specific vulnerabilities, including jailbreaking and prompt injection
- Recognize and mitigate risks associated with supply chain attacks and improper output handling in LLMs
- Apply offensive security practices to LLM systems, ensuring a structured and ethical approach to security and safety testing
Earning an OffSec Learning Badge
Showcase your LLM Red Teaming skills! Upon completing 80% of the LLM Red Teaming Learning Path, you'll receive an exclusive OffSec badge signifying:
- Offensive security practices for LLMs: Investigate and understand the consequences of granting excessive agency to LLM systems
- Security awareness: Identify and evaluate high-level security concerns related to LLMs and responsible AI
- Practical experience: Analyze and exploit unbounded consumption vulnerabilities in LLM-based systems
Why train your team with OffSec?
Security-first approach
Learn LLM Red Teaming with cybersecurity considerations as the priority
Practical perspective
Utilize techniques for enumerating LLM systems, understanding their architecture, and vulnerabilities
Real-world relevance
Investigate and understand the consequences of granting excessive agency to LLM systems
LLM Red Teaming FAQ
Start learning with OffSec
All
access
Learn
Unlimited
$6,099/year*
Unlimited OffSec Learning Library access plus unlimited exam attempts for one year
Large
teams
Learn
Enterprise
Get a quote
Unlimited OffSec Learning Library access with flexible terms and volume discounts available
Learn Unlimited provides individuals and organizations with unlimited access to the OffSec Learning Library. This includes all courses, content and learning paths. Learners also receive unlimited exam attempts and time in any of our hands-on lab environments.
What's included:
-
1 year of access to unlimited courses & content
-
Unlimited exam attempts during your subscription
-
365 days of lab access
-
PEN-103 + unlimited KLCP exam attempts
-
PEN-210 + unlimited OSWP exam attempts
-
3 downloads of course material
New to cybersecurity?
Get educated on fundamentals with OffSec's Cyberversity. Check out this free resource to learn about essential cybersecurity topics.