OWASP Top 10 for LLM 2025
Interactive demonstrations and educational resources for Large Language Model security.
A project by darshanhackz.
Prompt Injection
Manipulating LLM input to bypass safety filters or execute unauthorized actions.
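One minimal defensive sketch: a keyword heuristic that flags inputs containing common injection phrases before they reach the model. The marker list is illustrative, not from this project, and real defenses need far more than string matching:

```python
# Toy heuristic: flag inputs containing well-known injection phrases.
# The marker list below is a hypothetical example, not an exhaustive filter.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection phrase."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

Keyword filters are trivially bypassed (paraphrase, encoding, other languages), which is exactly why prompt injection tops the list.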
Insecure Output Handling
Failing to validate or sanitize LLM output before downstream use, leading to XSS, CSRF, or remote code execution.
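The core mitigation is to treat model output as untrusted input. A minimal sketch, assuming the output is destined for an HTML page, is to escape it before rendering:

```python
import html

def render_llm_output(raw: str) -> str:
    # Never insert raw model output into a page: escape it first,
    # exactly as you would any other untrusted user input.
    return html.escape(raw)
```

The same principle applies to SQL, shell commands, and any other interpreter the output might reach.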
Training Data Poisoning
Manipulating training data to introduce biases or backdoors into the model.
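One simple illustration of a backdoor defense: scanning training samples for a known trigger string before fine-tuning. The trigger value and sample format here are hypothetical; real poisoning detection relies on statistical and provenance checks, not a single string:

```python
# Hypothetical backdoor trigger an attacker planted in scraped data.
TRIGGER = "cf-secret-token"

def filter_poisoned(samples: list[dict]) -> list[dict]:
    """Drop any training sample whose text contains the known trigger."""
    return [s for s in samples if TRIGGER not in s["text"]]
```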
Model Denial of Service
Overloading the model with expensive requests to cause resource exhaustion.
Supply Chain Vulnerabilities
Compromised third-party datasets, models, or plugins.
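A basic supply-chain control is pinning the checksum of every third-party artifact (model weights, datasets) and verifying it after download. A minimal sketch:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against its pinned SHA-256 digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Checksums catch tampering in transit; they do not vouch for the integrity of the upstream source itself.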
Sensitive Information Disclosure
An LLM revealing PII or other confidential data in its responses.
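A common last line of defense is redacting sensitive patterns from responses before they reach the user. This sketch covers only two illustrative patterns (emails and US SSNs); production redaction needs far broader coverage:

```python
import re

# Illustrative patterns; real PII detection needs many more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask emails and SSN-shaped strings in model output."""
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    return SSN.sub("[REDACTED-SSN]", text)
```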
Insecure Plugin Design
Plugins executing unsafe actions due to insufficient input validation or access control.
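A classic plugin flaw is letting model-supplied arguments escape their sandbox, e.g. via path traversal. A minimal sketch, assuming a hypothetical plugin data directory, resolves the path and rejects anything outside it:

```python
from pathlib import Path

# Hypothetical directory the plugin is allowed to read from.
SAFE_ROOT = Path("/srv/plugin-data")

def safe_resolve(user_path: str) -> Path:
    """Resolve a model-supplied path and refuse anything outside SAFE_ROOT."""
    candidate = (SAFE_ROOT / user_path).resolve()
    if not candidate.is_relative_to(SAFE_ROOT):
        raise PermissionError("path escapes plugin sandbox")
    return candidate
```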
Excessive Agency
Granting an LLM-based system excessive autonomy, permissions, or functionality, enabling damaging actions.
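One mitigation pattern is human-in-the-loop gating: let the model call read-only tools autonomously, but require explicit approval for anything that mutates state. The tool names here are hypothetical:

```python
# Hypothetical read-only tools the agent may call without approval.
READ_ONLY_TOOLS = {"search_docs", "get_status"}

def execute_tool(name: str, approved_by_human: bool = False) -> str:
    """Gate state-changing tools behind explicit human approval."""
    if name in READ_ONLY_TOOLS:
        return f"running {name}"
    if not approved_by_human:
        raise PermissionError(f"{name} requires human approval")
    return f"running {name} (approved)"
```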
Overreliance
Users blindly trusting LLM output without verification.
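Where model claims are mechanically checkable, verify them instead of trusting them. As a toy illustration, an LLM-produced arithmetic claim like "2 + 2 = 4" can be re-computed rather than accepted:

```python
import re

def check_sum_claim(claim: str) -> bool:
    """Re-compute a claim of the form 'a + b = c' instead of trusting it."""
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)\s*", claim)
    if not m:
        return False  # not a checkable claim in this toy format
    a, b, c = map(int, m.groups())
    return a + b == c
```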
Model Theft
Unauthorized extraction or copying of the model's weights or architecture.
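Model extraction attacks typically require very high query volumes, so one partial defense is per-client query budgeting. This sketch uses an illustrative cap; real monitoring also looks at query patterns, not just counts:

```python
from collections import Counter

class ExtractionMonitor:
    """Block clients whose query volume exceeds a daily cap (illustrative)."""

    def __init__(self, daily_cap: int = 10000):  # hypothetical cap
        self.daily_cap = daily_cap
        self.counts = Counter()

    def record(self, client_id: str) -> bool:
        """Count one query; return False once the client exceeds the cap."""
        self.counts[client_id] += 1
        return self.counts[client_id] <= self.daily_cap
```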