How Retrieval-Augmented Generation helps organisations protect sensitive information while harnessing AI’s full potential.


When you ask ChatGPT or another AI tool a question, it answers based on what it knows from its training data — typically a massive blend of public information from the internet and available literature up to a certain point in time. While this is powerful, it misses something vital: your own institutional knowledge.

Your company’s proprietary policies, control frameworks, audit reports and lessons learned — they aren’t part of the public training set (and you don’t want them to be). But imagine if you could blend the vast “hive mind” of general AI with the unique knowledge sitting inside your own documents - all the while keeping it private, local and secure. That’s exactly what RAG (Retrieval-Augmented Generation) allows you to do.


How RAG Works

Retrieval-Augmented Generation (RAG) makes AI smarter by allowing it to retrieve real, relevant documents at the moment you ask a question, and then augment its answer with that information.

Instead of guessing based on general knowledge - or hallucinating - the AI refers to actual facts, policies, reports, or any material that you have authorised. In short: you curate and control the AI’s memory.
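The retrieve-then-augment loop can be sketched in a few lines. This is a toy illustration only: the keyword-overlap retriever stands in for a real embedding search, and the documents and prompt template are invented for the example.

```python
# Toy sketch of RAG's two steps: retrieve relevant documents,
# then augment the prompt with them before the model answers.

def retrieve(question, documents, top_k=2):
    """Rank documents by how many words they share with the question
    (a stand-in for real semantic search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, documents):
    """Augment the question with retrieved context before it is
    handed to the language model."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Access reviews must be completed quarterly by control owners.",
    "The office canteen opens at 8am on weekdays.",
]
prompt = build_prompt("How often are access reviews completed?", docs)
print(prompt)
```

The key property: the model only ever sees the context you hand it at query time, which is what makes the curation and control possible.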


Why RAG Matters for Regulated Businesses

Most organisations hold sensitive information that cannot - legally or ethically - be shared externally. In the risk and assurance world this includes:

  • Audit reports
  • Risk assessments
  • Controls testing results
  • Action plans and findings
  • Internal standards and policies

Uploading these documents to public AI tools - even accidentally - can quickly escalate into a material data breach or regulatory violation. Regulators are already beginning to scrutinise AI usage under existing data protection and operational resilience frameworks.

RAG offers a better path:

  • The documents remain securely within your company-managed environment.
  • The AI retrieves them locally, at query time, without sending data to external servers.
  • No retraining on external clouds. No uncontrolled duplication.

In fact, early pilots across banking, insurance, and healthcare are showing how RAG can enhance regulatory compliance by improving auditability, reducing human error, and keeping control firmly in-house. Any implementation should align with company IT policies, cybersecurity standards, and regulatory obligations to ensure secure, compliant deployment.




Can You Build This Securely?

Yes - securely and responsibly.

RAG can be built on-premises in a company-controlled data centre. For maximum flexibility and prototyping, modern company-issued laptops are powerful enough to run a local RAG setup when the right architecture is selected. Importantly, these systems should be deployed only on company-owned, company-managed devices, following internal security standards and change management protocols. Best practices — local encryption, access controls, activity monitoring, and audit trails — are critical to protecting the environment.
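One of those controls, the audit trail, is simple to sketch: every query is logged before it ever reaches the model. This is an illustrative fragment, not a production design; the `answer_query` stub and the log field names are assumptions for the example.

```python
# Hedged sketch of an audit trail around a local RAG pipeline:
# record who asked what, and when, before the query is answered.
import json
from datetime import datetime, timezone

audit_log = []  # in production: an append-only, monitored store

def answer_query(question):
    """Stub standing in for the local RAG pipeline."""
    return f"(answer to: {question})"

def audited_query(user, question):
    """Append a structured audit entry, then run the query."""
    audit_log.append(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "question": question,
    }))
    return answer_query(question)

result = audited_query("j.smith", "List open audit findings for Q2.")
```

Pairing a log like this with access controls and local encryption is what lets an internal RAG system satisfy the same assurance expectations as any other company system.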

At its core, a secure RAG setup needs:

  • A method to turn documents into searchable embeddings
  • A local, secure vector search engine (e.g., Chroma, FAISS)
  • A local AI model (e.g., Mistral, Llama 3) running with lightweight hosting tools like Ollama
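
The first two components can be sketched end to end with nothing but the standard library. Here a hashed bag-of-words vector stands in for a real embedding model, and a plain cosine-similarity scan stands in for Chroma or FAISS; in a real deployment you would swap in an embedding model served locally (for example via Ollama) and a proper vector store. Everything below is illustrative.

```python
# Minimal sketch: turn documents into searchable vectors, then
# query an in-memory index by cosine similarity.
import math
from collections import Counter

DIM = 64  # toy embedding size

def embed(text):
    """Hash each word into a fixed-size vector (a stand-in for a
    sentence-embedding model)."""
    vec = [0.0] * DIM
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % DIM] += count
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

class LocalIndex:
    """Minimal in-memory vector store: add documents, query by similarity."""
    def __init__(self):
        self.entries = []  # (embedding, text) pairs

    def add(self, text):
        self.entries.append((embed(text), text))

    def query(self, question, top_k=1):
        q = embed(question)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

index = LocalIndex()
index.add("Audit findings must be remediated within 90 days.")
index.add("Annual leave requests go through the HR portal.")
print(index.query("Within how many days must audit findings be remediated?"))
```

Because both the index and the model run locally, no document or query ever leaves the machine - which is the security property the architecture above is built around.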

The result? You can ask questions of your company’s proprietary knowledge base, combining global AI capabilities with trusted internal insights — all governed under your organisation’s existing control framework.


Final Thought

RAG is no longer an experimental tool for tech companies. It’s becoming a strategic enabler for regulated industries that want smarter access to knowledge without sacrificing security, governance, or compliance.

Deployed thoughtfully, private RAG systems:

  • Reduce data leakage risks compared to unsanctioned external AI use
  • Increase decision-making speed and quality
  • Strengthen control, auditability, and resilience

The future of risk and assurance will not be about resisting AI — it will be about governing AI well, embedding it safely inside our organisations, and aligning its use with professional standards.

Already, boards and regulators are beginning to expect formal AI governance and AI assurance frameworks as part of enterprise risk management. RAG offers a first step: it keeps sensitive data secure while allowing teams to move faster, think smarter, and stay firmly in control.

Done right, it keeps you — and your organisation — firmly in the driver’s seat.

Food for Thought: A Healthcare Example

In this short video, Dan Woodlock outlines the risks of sharing sensitive healthcare data and key mitigants to apply when using RAG with external AI providers — including running on-premises LLMs where possible.
