ThinkTech

AI Risk Library

A structured taxonomy of documented AI risks. Each entry covers what the risk is, how it manifests, how to detect it, and what mitigation strategies are available.

Risk Entry12 min read

Prompt Injection: Definition, Attack Patterns, Detection, and Mitigation

Prompt injection allows attackers to override AI system instructions through crafted inputs. Patterns, detection methods, and mitigations.

Updated Apr 2026securityprompt-injectionrisk
Risk Entry10 min read

Overreliance on AI: Automation Bias, Skill Atrophy, and Organizational Controls

Overreliance on AI systems leads to degraded human judgment and missed errors. Patterns, warning signs, and organizational controls.

Updated Apr 2026human-factorsriskautomation-bias
Risk Entry13 min read

AI Bias: Types, Impact Assessment, Detection, and Mitigation

AI systems can encode and amplify existing biases in training data, model design, and deployment context. Assessment frameworks and mitigation approaches.

Updated Apr 2026fairnessbiasrisk
Risk Entry11 min read

AI Data Leakage: Patterns, Detection, and Mitigation

Data leakage through AI systems occurs when sensitive information is exposed through model outputs, training data memorization, or API logging. This entry covers five documented patterns, a risk matrix, detection methods, and regulatory implications.

Updated Apr 2026data-privacyriskcompliance
Risk Entry11 min read

AI Hallucination: Definition, Patterns, Detection, and Mitigation

AI hallucination occurs when a model generates confident, plausible-sounding information that is factually incorrect. This entry covers how it happens, how to detect it, and what mitigation strategies are available.

Updated Apr 2026hallucinationriskllm