CWE-1039 - Inadequate Detection or Handling of Adversarial Input Perturbations in Automated Recognition Mechanism
- Abstraction:Class
- Structure:Simple
- Status:Incomplete
- Release Date:2018-03-29
- Latest Modification Date:2025-04-03
Weakness Name
Inadequate Detection or Handling of Adversarial Input Perturbations in Automated Recognition Mechanism
Description
The product uses an automated mechanism such as machine learning to recognize complex data inputs (e.g., image or audio) as a particular concept or category, but it does not properly detect or handle inputs that have been modified or constructed in a way that causes the mechanism to detect a different, incorrect concept.
When techniques such as machine learning are used to automatically classify input streams, and those classifications are used for security-critical decisions, any mistake in classification can introduce a vulnerability that allows attackers to cause the product to make the wrong security decision or to disrupt the service of the automated mechanism. If the mechanism is not developed or "trained" with enough input data, or has not adequately undergone test and evaluation, then attackers may be able to craft malicious inputs that intentionally trigger an incorrect classification.
Targeted technologies include, but are not necessarily limited to, automated recognition systems such as those used in autonomous vehicles and conversational AI. For example, an attacker might modify road signs or road surface markings to trick an autonomous vehicle into misreading the sign or marking and performing a dangerous action. As another example, an attacker might craft highly specific and complex prompts to "jailbreak" a chatbot and bypass its safety or privacy mechanisms, a technique better known as a prompt injection attack.
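The core failure mode can be illustrated with a minimal sketch: for a classifier near its decision boundary, a small, deliberately chosen perturbation flips the predicted class. The toy logistic-regression "recognizer" below (all weights, inputs, and the perturbation budget are illustrative assumptions, not from any real system) applies a gradient-sign perturbation in the style of the fast gradient sign method.

```python
import numpy as np

# Hypothetical linear "recognizer": logistic regression with fixed,
# illustrative weights (not from any real model).
w = np.array([1.0, -1.5, 0.7, -0.3, 2.0, -0.8, 1.2, -1.1])

def predict(x):
    """Probability that input x is recognized as class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A benign input the model classifies as class 1, but not with a large
# margin -- i.e., it sits close to the decision boundary.
x = 0.1 * np.sign(w)

# Gradient-sign perturbation: for a linear model the gradient of the
# logit with respect to x is simply w, so stepping each feature by a
# small budget epsilon against sign(w) maximally lowers the score.
epsilon = 0.2  # illustrative per-feature perturbation budget
x_adv = x - epsilon * np.sign(w)

print(predict(x) > 0.5)      # original input: recognized as class 1
print(predict(x_adv) > 0.5)  # perturbed input: misclassified as class 0
```

The perturbation changes each feature by only 0.2, yet the recognized class flips; against a high-dimensional model such as an image classifier, the analogous change can be imperceptible to a human observer.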
Common Consequences
Scope: Integrity
Impact: Bypass Protection Mechanism
Notes: When the automated recognition is used in a protection mechanism, an attacker may be able to craft inputs that are misinterpreted in a way that grants excess privileges.
Scope: Availability
Impact: DoS: Resource Consumption (Other), DoS: Instability
Notes: Disruption of the automated recognition system's service could cause further downstream failures in the software.
Scope: Confidentiality
Impact: Read Application Data
Notes: This weakness could lead to breaches of data privacy through exposing features of the training data, e.g., by using membership inference attacks or prompt injection attacks.
Scope: Other
Impact: Varies by Context
Notes: The consequences depend on how the application applies or integrates the affected algorithm.
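The confidentiality consequence above mentions membership inference attacks, in which an attacker uses a model's behavior to decide whether a specific record was in its training data. A minimal sketch of the idea follows, using a deliberately overfit toy "model" (a 1-nearest-neighbour memorizer) and an illustrative confidence threshold; both the model and the threshold are assumptions for demonstration, not part of any real attack toolkit.

```python
import numpy as np

# Hypothetical overfit recognizer: a 1-nearest-neighbour "model" that
# memorizes its training set, so it is maximally confident on members.
train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])

def confidence(x):
    """Model confidence, derived from distance to the nearest training point."""
    d = np.min(np.linalg.norm(train - x, axis=1))
    return 1.0 / (1.0 + d)

# Membership inference by confidence thresholding: because the model is
# far more confident on memorized points, a simple threshold separates
# training members from non-members (threshold is illustrative).
def is_member(x, threshold=0.9):
    return confidence(x) >= threshold

print(is_member(np.array([1.0, 1.0])))  # training point: inferred as member
print(is_member(np.array([5.0, 5.0])))  # distant point: inferred as non-member
```

The sketch shows why overfitting is a privacy risk and not only an accuracy problem: the gap in model confidence between members and non-members is itself an information leak about the training data.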