
AI Governance in the Age of LLMs: What GRC Professionals Need to Know

LLMs have outpaced governance controls in most enterprises. Learn the threats, applicable frameworks, and concrete actions GRC teams must take to close the AI accountability gap.

Truvara Team
March 28, 2026
10 min read

GRC professionals face a governance crisis. Large language models have entered enterprise environments faster than the controls to govern them. By the end of 2025, AI topped the security priority list for chief information security officers — yet 60% of those same leaders admitted they could not see or control the prompts employees were sending to generative AI tools, according to Cisco's 2025 Cybersecurity Readiness Index. That gap between deployment speed and governance maturity is where GRC teams must act, and act now.

GRC professionals need a working framework for LLM risk: what the threats are, which frameworks apply, how to assess exposure, and what concrete controls close the gap between AI ambition and AI accountability.


The Threat Landscape Has Changed

Traditional cybersecurity protects code, networks, and access controls. LLMs introduce a fundamentally different attack surface — one that exploits the model itself as a pathway to data, decisions, and damage.

Prompt Injection

Prompt injection is the most documented LLM attack vector and the most dangerous in practice. The technique, named by developer Simon Willison in September 2022 after data scientist Riley Goodside demonstrated it, manipulates LLM inputs to return unintended responses. Rather than exploiting a code flaw, prompt injection exploits the model's core function: following instructions in natural language.

OWASP ranked prompt injection as the number‑one security risk for LLMs in May 2023. It has held that position every year since. The reason is elementary: it requires virtually no technical knowledge to execute. After testing over 100 generative AI products, Microsoft’s AI Red Team found that simple manual jailbreaks spread like wildfire on public forums. Security teams discovered — often the hard way — that their AI assistants could be redirected to ignore safety guardrails, extract data, or execute unauthorized operations, all through carefully worded user inputs.

Two primary variants matter for GRC professionals:

  • Direct prompt injection – malicious instructions embedded directly in a user’s query to the LLM.
  • Indirect prompt injection – instructions hidden in documents, emails, or web content that the LLM retrieves as part of its context.

Prompt leaking — a sub‑variant — attempts to extract the system’s internal instructions, including safeguards and operational boundaries. Once those boundaries are exposed, attackers can systematically craft inputs designed to circumvent them.
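A first line of defense is a heuristic pre-filter that screens inputs for known injection phrasing before they reach the model. The sketch below is illustrative only: the pattern list and threshold are assumptions, and production deployments layer heuristics like this under model-based classifiers, since pattern matching alone is easy to evade.

```python
import re

# Illustrative patterns only; real injection phrasing mutates constantly.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"disregard (your|the) (system prompt|guidelines|rules)",
    r"reveal (your|the) (system prompt|instructions)",
    r"you are now (in )?developer mode",
]

def injection_score(text: str) -> float:
    """Fraction of suspicious patterns matched, from 0.0 to 1.0."""
    hits = sum(1 for p in SUSPICIOUS_PATTERNS
               if re.search(p, text, re.IGNORECASE))
    return hits / len(SUSPICIOUS_PATTERNS)

def screen_input(text: str, threshold: float = 0.25) -> bool:
    """True if the input should be flagged for review before reaching the LLM."""
    return injection_score(text) >= threshold
```

The same filter applied to retrieved documents, not just user queries, gives partial coverage of the indirect variant as well.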

Jailbreaking

Jailbreaking targets the model’s safety mechanisms to unlock capabilities explicitly restricted by the provider. Where prompt injection exploits an application’s habit of mixing trusted instructions with untrusted input, jailbreaking attacks the model’s own safety training directly. Both are attack vectors; they operate at different levels of the stack.

Data Poisoning

Attackers corrupt training data or retrieval‑augmented generation (RAG) pipelines to skew model outputs toward incorrect, biased, or malicious conclusions. This threat is insidious because it operates upstream — by the time a poisoned output surfaces, the contamination has already propagated through the system.

Model Theft

Intellectual property embedded in proprietary models can be extracted through repeated querying. For organizations that have invested significantly in fine‑tuning models on sensitive data, model theft represents both a competitive and a legal risk.


Why Traditional GRC Frameworks Fall Short — and Which Ones Don't

GRC teams trained on ISO 27001 and NIST SP 800‑53 face a fair question: do those frameworks cover LLM risks? Partially. The challenge is that LLMs behave differently from traditional IT assets. A firewall blocks unauthorized network traffic. It cannot block a well‑crafted prompt that persuades the model to share information it should not.

ISO 27001:2022 — The Applicable Foundation

ISO 27001:2022 provides durable control domains that translate directly to AI risk management:

Control Area (ISO 27001:2022) | LLM Application | Example Gap
5.9 — Inventory of information and other associated assets | System prompts, fine‑tuning data | Most organizations lack documented classification of AI assets
5.19–5.21 — Supplier relationships | Third‑party AI APIs and model providers | Vendor onboarding rarely includes AI‑specific security assessments
8.15/8.16 — Logging and monitoring | Prompt logging, output monitoring | GRC teams often have zero visibility into LLM interaction logs
5.24–5.26 — Incident management | Prompt injection as an incident category | Most incident response playbooks don’t mention AI‑specific attack vectors
5.31 — Legal and regulatory requirements | EU AI Act, emerging regulations | Organization‑wide AI inventory is typically absent

The gap is not that ISO 27001 lacks relevance — it does not. The gap is that controls must be extended and applied with intentionality to AI systems. Treating AI as a casual productivity tool leaves the organization exposed even when the rest of the ISMS is mature.

NIST AI Risk Management Framework — Making AI Governance Explicit

The NIST AI RMF (AI RMF 1.0) takes the Govern‑Map‑Measure‑Manage cycle and makes it explicit for probabilistic, non‑transparent systems. Where ISO 27001 asks what controls apply, NIST AI RMF asks how those controls are operationalized for AI specifically.

Key Govern functions that GRC teams should adopt immediately:

  1. AI inventory – map every AI application in use, including shadow AI, and classify by risk tier (mirroring EU AI Act classifications).
  2. AI system cards – document for each AI system its purpose, data inputs, decision scope, human‑oversight requirements, and known limitations.
  3. Third‑party AI risk assessment – evaluate model providers, API vendors, and cloud AI services against the same supply‑chain criteria applied to software vendors.
  4. Incident categorization – add prompt injection, data poisoning, and model‑output failure as named incident categories in the existing incident register.
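An AI system card from item 2 can be as simple as a typed record per system. The sketch below is a minimal illustration; the field names mirror the Govern items above but are not taken from any official NIST or EU AI Act schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemCard:
    """One entry in the AI inventory; field names are illustrative."""
    name: str
    purpose: str
    risk_tier: str                                     # e.g. "minimal", "limited", "high"
    data_inputs: list[str] = field(default_factory=list)
    decision_scope: str = ""
    human_oversight: bool = True                       # default to requiring review
    known_limitations: list[str] = field(default_factory=list)
    owner: str = ""                                    # named accountable individual or team

card = AISystemCard(
    name="support-assistant",
    purpose="Draft replies to customer tickets",
    risk_tier="limited",
    data_inputs=["ticket text", "knowledge base"],
    decision_scope="suggestions only; a human sends the reply",
    known_limitations=["may hallucinate policy details"],
    owner="GRC team",
)
record = asdict(card)  # serializable form for the inventory register
```

Keeping cards as structured data rather than free-text documents lets the inventory feed risk reporting directly.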

EU AI Act — The Regulatory Backdrop

The EU AI Act, which entered into force in August 2024, creates binding obligations for high‑risk AI systems used in the European Union. Conformity will largely be demonstrated through harmonised standards, and ISO/IEC 42001 (AI management systems) is widely viewed as a foundation for that work. For organizations already operating under ISO 27001, the path to EU AI Act readiness runs through systematic AI governance — not through a separate regulatory compliance exercise.


The GRC Action Plan for LLM Risk

Translating framework awareness into operational controls requires a structured, phased approach.

Phase 1: Discover and Classify (30‑60 days)

  • Run an organization‑wide AI use‑case inventory – include SaaS tools, internal notebooks, and browser extensions.
  • Classify each AI system – internal‑only, data‑adjacent, or decision‑influencing.
  • Assign ownership – a named individual (or team) who is accountable for governance, monitoring, and risk acceptance.
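The three-tier classification in Phase 1 can be reduced to a couple of yes/no questions per system. This is a hypothetical rule of thumb, not a standard: real programs weigh more factors (data sensitivity, user population, regulatory scope).

```python
def classify_ai_system(touches_sensitive_data: bool,
                       influences_decisions: bool) -> str:
    """Map two screening questions onto the three tiers named above."""
    if influences_decisions:
        # Highest tier regardless of data access: human review required.
        return "decision-influencing"
    if touches_sensitive_data:
        return "data-adjacent"
    return "internal-only"
```

Even a crude rule like this forces the inventory conversation: every system gets asked the same questions, and the answers are recorded.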

Without defined ownership, AI adoption becomes “everyone’s problem” and governance falls through the cracks.

Phase 2: Assess Against Existing Controls (60‑90 days)

Map identified AI systems to current ISO 27001 controls and NIST AI RMF functions. Typical gaps look like this:

Gap Area | Typical Finding | Recommended Response
Prompt and output logging | No systematic logging of LLM interactions | Deploy an LLM firewall or SIEM integration for AI tools
Third‑party AI vetting | No AI‑specific security questionnaire in vendor onboarding | Adopt the AI Security Questionnaire (aligned with ISO 42001)
Incident response | AI attack vectors absent from incident register | Add prompt injection and data poisoning as named categories with dedicated playbooks

Phase 3: Implement and Operate (90‑180 days)

  1. LLM Firewalls & Logging – capture every prompt and completion, store logs securely, and forward them to a SIEM for correlation. This directly tackles the 60 % visibility gap highlighted by Cisco.
  2. Human‑in‑the‑Loop for High‑Risk Decisions – any AI output that influences hiring, lending, healthcare, legal, or financial outcomes must be reviewed by a qualified human before action. This satisfies EU AI Act requirements for high‑risk AI.
  3. Prompt Engineering Standards – create a style guide that forbids inclusion of PII, confidential documents, or internal system instructions in prompts. Provide template prompts for common use cases.
  4. Third‑Party AI Vetting Process – require completion of the AI Security Questionnaire covering model training data provenance, data‑retention policies, adversarial testing history, and incident‑notification procedures before any contract is signed.
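Item 1, the audit-trail requirement, can be as lightweight as appending one JSON line per LLM interaction — a format most SIEMs ingest directly. This is a minimal sketch under stated assumptions: the function name, field names, and the choice to store content hashes rather than raw text (so the audit log does not itself become a data leak) are all illustrative decisions, not a prescribed schema.

```python
import datetime
import hashlib
import json

def log_interaction(user_id: str, prompt: str, completion: str,
                    log_path: str = "llm_audit.jsonl") -> dict:
    """Append one audit record per LLM call as a JSON line; return the record."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        # Store hashes, not raw text, so the log can be retained and
        # forwarded to a SIEM without duplicating sensitive content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "completion_sha256": hashlib.sha256(completion.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Organizations that need to investigate incident content would store the raw text separately under stricter access controls and join on the hash.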

Phase 4: Monitor and Continuously Improve

Prompt injection is an arms race. Defenses that worked last month may be bypassed by new techniques emerging weekly. Adopt a “continuous improvement” mindset:

  • Conduct quarterly red‑team exercises that specifically target AI systems.
  • Review the latest OWASP Top 10 for LLM Applications and adjust controls accordingly.
  • Refresh AI governance policies at least annually, incorporating new threat intelligence.
  • Report AI‑security metrics (e.g., number of logged injection attempts, mean time to detect) in the quarterly risk report to the board.
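The two example metrics in the last bullet can be computed straight from flagged-event records. The record shape below (`kind`, `occurred_at`, `detected_at` fields) is an assumption for illustration; adapt it to whatever your SIEM or LLM firewall actually emits.

```python
from datetime import datetime

def ai_security_metrics(events: list[dict]) -> dict:
    """Board-level metrics from flagged events: count and mean time to detect."""
    injections = [e for e in events if e["kind"] == "prompt_injection"]
    detect_delays = [
        (datetime.fromisoformat(e["detected_at"])
         - datetime.fromisoformat(e["occurred_at"])).total_seconds()
        for e in injections
    ]
    return {
        "injection_attempts": len(injections),
        "mean_time_to_detect_s": (
            sum(detect_delays) / len(detect_delays) if detect_delays else 0.0
        ),
    }
```

Trending these two numbers quarter over quarter shows the board both exposure (attempt volume) and remediation progress (shrinking detection time).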

Frequently Asked Questions

Is ISO 27001 enough to cover LLM risk, or do I need a separate AI governance framework?
ISO 27001 provides a strong foundation — its control domains map well to AI risk. However, AI introduces specific threat categories (prompt injection, data poisoning, model drift) that standard Annex A controls do not address explicitly. NIST AI RMF complements ISO 27001 by making AI‑specific governance explicit. The most effective approach is extending your existing ISMS to cover AI, rather than building a parallel framework.

What is the biggest practical gap GRC teams face with LLM governance?
Visibility. Most organizations have zero systematic logging of employee prompts and AI outputs. Without logs, you cannot detect a prompt injection in progress, investigate an incident after the fact, or demonstrate due diligence to a regulator. Closing this gap should be the first operational step.

How do I handle shadow AI — employee use of unsanctioned AI tools?
Start with discovery, not discipline. Use network monitoring or endpoint tools to identify AI tool usage across the organization. Classify approved tools, document a clear request‑for‑approval process, and provide vetted alternatives. For tools that cannot be sanctioned (e.g., due to data‑residency or vendor risk), block access and communicate the rationale to employees.

What are the regulatory consequences of failing to govern AI systems?
Under the EU AI Act, penalties scale with the violation: fines reach up to €35 million or 7 % of global annual turnover for prohibited AI practices, and up to €15 million or 3 % for non‑compliance with high‑risk AI system requirements, whichever is higher in each case. In addition, GDPR penalties may apply if AI‑driven data exfiltration exposes personal data. Cyber‑insurance carriers are also tightening underwriting, often requiring documented AI‑security controls as a condition for coverage.


Key Takeaways & Next Steps

  • Map every LLM – build an AI inventory within 30 days and assign clear owners.
  • Log everything – deploy an LLM firewall or SIEM connector to capture prompts and responses; treat logs as a mandatory audit trail.
  • Extend existing frameworks – plug the identified gaps in ISO 27001 and NIST AI RMF with AI‑specific controls (prompt‑engineering standards, incident categories, third‑party questionnaires).
  • Human‑in‑the‑Loop – enforce manual review for any AI output that influences high‑impact decisions.
  • Continuous testing – run quarterly red‑team exercises, stay current with OWASP LLM Top 10, and update policies at least annually.
  • Report to leadership – include AI‑security metrics in your regular risk dashboard so the board sees both exposure and remediation progress.

Conclusion

The rapid adoption of large language models has outpaced traditional GRC controls, leaving a visibility gap that attackers are already exploiting. By treating LLMs as first‑class assets—cataloguing them, logging every interaction, and extending proven frameworks like ISO 27001 and NIST AI RMF—you can turn that gap into a defensible posture. Real‑world incidents, from simple prompt‑injection tricks posted on developer forums to sophisticated model‑theft campaigns targeting proprietary fine‑tuned models, demonstrate that the threat is both immediate and evolving.

For GRC professionals, the path forward is clear: inventory your AI, embed AI‑specific controls into your existing ISMS, and adopt a continuous‑improvement cycle that treats AI risk as an ongoing, measurable program. The sooner you close the visibility loop, the better you’ll protect data, maintain regulatory compliance, and keep the organization’s AI initiatives on a trustworthy, accountable track.
