
Prompt Injection - OWASP Foundation
Prompt injection occurs when an attacker provides specially crafted inputs that modify the original intent of a prompt or instruction set. It’s a way to “jailbreak” the model into ignoring prior instructions, …
What Is a Prompt Injection Attack? [Examples & Prevention]
A prompt injection attack is a GenAI security threat where an attacker deliberately crafts and inputs deceptive text into a large language model (LLM) to manipulate its outputs.
Prompt injection - Wikipedia
Prompt injection is a cybersecurity exploit and an attack vector in which innocuous-looking inputs (i.e. prompts) are designed to cause unintended behavior in machine learning models, particularly large …
What is a prompt injection attack? - IBM
Feb 23, 2023 · What is a prompt injection attack? A prompt injection is a type of cyberattack against large language models (LLMs). Hackers disguise malicious inputs as legitimate prompts, …
What is a prompt injection attack (examples included) - Norton™
Dec 11, 2025 · Prompt injection, also known as prompt hacking, occurs when attackers insert malicious instructions into text that the AI processes through chats, links, files, or other data sources.
What Is Prompt Injection in AI? Examples & Prevention | EC-Council
Dec 31, 2025 · Learn what prompt injection in AI is, how it works, real-world attack examples, and proven prevention techniques to secure AI systems effectively.
Prompt Injection Attacks: 4 Types & How to Defend - mend.io
Jul 5, 2025 · Learn what prompt injection attacks are, how they exploit LLMs like GPT, and how to defend against 4 key types—from direct to stored injection and more.
Prompt Injection 2.0: Hybrid AI Threats - arXiv.org
Jul 17, 2025 · This paper presents a comprehensive analysis of Prompt Injection 2.0, the evolution of prompt injection attacks in the era of agentic AI and hybrid cyber threats. We examine how modern …
What Is a Prompt Injection Attack? Definition, Examples - Proofpoint
A prompt injection attack is a cybersecurity attack where malicious actors create seemingly innocent inputs to manipulate machine learning models, especially large language models (LLMs).
Prompt Injection Attacks Explained: How They Work & How to Stop …
Jan 5, 2026 · Prompt injection is a systemic risk where LLMs follow malicious instructions hidden in inputs because they lack native trust boundaries. As models gain tools, memory, and autonomy, …