How to Run a Private AI Model Locally with Ollama

If you’re curious about running a Large Language Model (LLM) on your own laptop, with no cloud and no internet connection required, Ollama makes it surprisingly easy. Ollama is a developer-friendly tool for running open-source LLMs like Llama, Gemma, or Mistral locally with just a single terminal command. It supports a wide range of open-source models and is used by developers, researchers, and privacy-conscious professionals who want fast, offline access to AI without sending data to the cloud. ...
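Once Ollama is installed and a model has been pulled (e.g. ollama pull llama3), it serves a local HTTP API on port 11434 by default. As a minimal Python sketch of querying it (the model name is an assumption; swap in whichever model you pulled):

```python
# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is installed and running, and a model has been pulled
# first (e.g. `ollama pull llama3`). No data leaves your machine.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to Ollama's local REST API and return the reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response rather than a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Explain retrieval-augmented generation in one sentence."))
```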

May 8, 2025 · 4 min

Can You Trust AI in a Regulated Business? Meet RAG.

How Retrieval-Augmented Generation helps organisations protect sensitive information while harnessing AI’s full potential. When you ask ChatGPT or another AI tool a question, it answers based on what it knows from its training data — typically a massive blend of public information from the internet and available literature up to a certain point in time. While this is powerful, it misses something vital: your own institutional knowledge. Your company’s proprietary policies, control frameworks, audit reports and lessons learned — they aren’t part of the public training set (and you don’t want them to be). But imagine if you could blend the vast “hive mind” of general AI with the unique knowledge sitting inside your own documents - all the while keeping it private, local and secure. That’s exactly what RAG — Retrieval-Augmented Generation — allows you to do. ...
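A toy sketch helps make the RAG pattern concrete: retrieve the most relevant internal document, then prepend it to the prompt so the model grounds its answer in your private knowledge. The word-overlap scoring and sample documents below are stand-ins for a real embedding model and vector store; this illustrates the idea, not a production pipeline:

```python
# Minimal sketch of the RAG pattern: find the internal document most
# relevant to a question, then build an augmented prompt around it.

def similarity(a: str, b: str) -> float:
    """Crude relevance score: fraction of shared words (illustration only)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most relevant to the question."""
    return sorted(documents, key=lambda d: similarity(question, d), reverse=True)[:k]

def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved internal context."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

# Your private, local knowledge base (never part of any public training set).
docs = [
    "Policy 4.2: All third-party vendors must complete a security review annually.",
    "Audit 2024-07: Two vendors lacked evidence of data-retention controls.",
]
prompt = build_rag_prompt("How often are vendors given a security review?", docs)
print(prompt)  # in practice, send this prompt to a local model (e.g. via Ollama)
```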

April 19, 2025 · 4 min · Graeme

Can AI really get to know you? (New Substack Post)

As AI systems increasingly claim to “know” their users, the risks around trust, manipulation, and decision-making are growing just as fast. In this post, I explore whether AI truly understands individuals, or whether we are mistaking prediction for insight. For anyone working in governance, risk, or cybersecurity, the distinction isn’t just academic; it’s a critical emerging threat. Link to Substack post: Can AI really get to know you? How well could an AI ever really know you? This week, I’ve been experimenting with using AI as a kind of coach (a patient, curious thought partner) and exploring what it seems to understand about me ...

April 18, 2025 · 1 min · Graeme

What is AI training?

Before an AI system can answer a query, write a paragraph of text, recommend a movie, or drive a car, it must first learn - and that process is called AI training. AI training involves teaching an artificial intelligence model to make accurate predictions or decisions. It learns by analysing large volumes of training data, identifying patterns, and adjusting its internal settings, known as numerical weights, to improve its predictions. Throughout training, the AI compares its predictions to known correct answers, refining itself over millions of cycles to reduce errors. However, it’s important to remember: AI doesn’t understand its tasks the way a human would. It is simply refining its pattern recognition. ...
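The core loop is easier to see in miniature. The toy sketch below trains a one-weight model to learn the rule y = 2x by exactly the process described: predict, compare to the known answer, and adjust the weight to reduce the error:

```python
# Minimal sketch of a training loop: the model makes a prediction,
# compares it to the known correct answer, and nudges its numerical
# weight to reduce the error. Real systems do the same thing with
# billions of weights over millions of cycles.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer) pairs
w = 0.0             # the model's single weight, initially uninformed
learning_rate = 0.01

for cycle in range(1000):
    for x, target in data:
        prediction = w * x               # the model's guess
        error = prediction - target     # how wrong was it?
        w -= learning_rate * error * x  # adjust the weight to reduce the error

print(f"learned weight: {w:.3f}")  # converges towards 2.0
```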

April 18, 2025 · 2 min

The Future According to AI 2027 - Insights and Implications

Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
Published: 3 April 2025
AI 2027 report: https://ai-2027.com/

“We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.”

Topics covered: AI agents (unimpressive and error-prone at first, but starting to replace some employee tasks, with companies inserting them into workflows regardless, and getting better); autonomous coding; AI research (AIs creating better AIs).

After being trained to predict internet text, the model is trained to produce text in response to instructions. This bakes in a basic personality and “drives.” For example, an agent that understands a task clearly is more likely to complete it successfully; over the course of training the model “learns” a “drive” to get a clear understanding of its tasks. Other drives in this category might be effectiveness, knowledge, and self-presentation (i.e. the tendency to frame its results in the best possible light). ...

April 9, 2025 · 2 min

How Tyler Cowen is using AI to get smarter

Tyler Cowen is one of those great minds whose interviews I’ll always take the time to listen to. An economics professor at George Mason University, Tyler could be better described as a polymath, and he is most prolific as a writer on his blog, Marginal Revolution. “The world I’ve lived in has not changed that much since I was born, but that’s about to change. You could say it’s changed already, but it’s not fully instantiated in most of the things we do.” ...

March 8, 2025 · 5 min

Why High-Agency is the Skill You Need to Thrive with AI

With AI advancements, many wonder what will set workers apart in the future, particularly those involved in knowledge work and the professions. When AI has all the knowledge and all the intelligence, what value does the human worker bring? The answer is agency - high agency. High-agency individuals - those who take ownership of their actions, shape circumstances rather than react to them, and turn obstacles into opportunities - are the ones who will thrive. ...

March 7, 2025 · 2 min · Graeme Milroy

UK Government Releases Its AI Playbook - Key Principles for Responsible AI Use Across the UK's Public Sector

What the playbook covers

The UK Government published its Artificial Intelligence (AI) Playbook on 10 February 2025, setting out 10 principles for using AI in government organisations. The playbook updates previous UK government publications, providing an expanded guide designed to help public sector organisations harness AI technologies safely, effectively, and responsibly. It is a must-read for risk managers, cybersecurity professionals, and compliance experts working with or within the UK public sector. The playbook provides guidance and principles to navigate the unique challenges and opportunities presented by AI. ...

February 12, 2025 · 2 min · Graeme

Book summary - Deep Utopia, by Nick Bostrom (2024)

Nick Bostrom’s Deep Utopia explores the future of AI, existential risk, and the philosophical dilemmas of a world where intelligence surpasses human control - where humans may be left with nothing, or nothing meaningful, to do. The book raises profound questions about extreme scenarios, ranging from catastrophe to utopia. How will we manage uncertainty, resilience, and ethical responsibility in a world shaped, driven, and perhaps fully managed by AI? Can humanity navigate this transition while maintaining control, ensuring AI serves rather than undermines us? ...

1 min

Foundation models

Foundation models are large AI models trained on massive datasets. They’re versatile and powerful because they can be fine-tuned for various tasks. This fine-tuning leverages transfer learning: the model first learns general patterns from a broad dataset, then is adapted with smaller, domain-specific datasets for specific problems like translation or image recognition. This saves time and data, letting organisations use advanced AI without starting from scratch.

Further reading:
What are foundation models? - IBM
What Are Foundation Models? - Nvidia
What is fine-tuning? - IBM
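As a toy illustration of the transfer-learning idea above (not a real fine-tuning pipeline): the frozen pretrained_features function below is a hypothetical stand-in for a foundation model’s learned representations, and only the small task-specific head is trained on the domain data:

```python
# Minimal sketch of transfer learning: the large pre-trained part is
# frozen and reused as-is; only a small task-specific "head" is trained
# on the new domain's data.

def pretrained_features(x: float) -> list[float]:
    """Frozen stand-in for a foundation model: maps raw input to features."""
    return [x, x * x]  # "learned" on a broad dataset; never updated below

# Small domain-specific dataset: (input, label) pairs where label = 2x + x^2.
task_data = [(1.0, 3.0), (2.0, 8.0), (3.0, 15.0)]

head = [0.0, 0.0]  # only these weights are trained (the fine-tuned head)
lr = 0.01

for _ in range(2000):
    for x, target in task_data:
        feats = pretrained_features(x)  # frozen: no updates flow into it
        prediction = sum(w * f for w, f in zip(head, feats))
        error = prediction - target
        for i, f in enumerate(feats):   # update only the head weights
            head[i] -= lr * error * f

print([round(w, 3) for w in head])  # approaches [2.0, 1.0]
```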

1 min

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard, introduced by Anthropic, for connecting AI models and assistants to external data sources and tools.

Further reading:
Introducing the Model Context Protocol - Anthropic
OpenAI adopts rival Anthropic’s standard for connecting AI models to data - TechCrunch
230. MCP - It’s Hot, But Will It Win? - Steven Sinofsky

Related pages:
Anthropic
Agentic AI

1 min

Risks around Agentic AI

Meredith Whittaker, president of the Signal Foundation, discusses some of the risks around agentic AI.

Further reading:
Signal President Meredith Whittaker calls out agentic AI as having ‘profound’ security and privacy issues - TechCrunch

1 min

Small language models (SLMs)

Small language models (SLMs) are compact AI models designed to deliver useful performance with far fewer parameters and lower computational cost than large language models.

Further reading:
Small Language Models Are the New Rage, Researchers Say - Wired UK (Apple News link)

1 min

Third-Party Risk Management in the Age of AI - Rethinking Trust and Accountability

Companies are rapidly integrating AI into their operations, from customer service chatbots to advanced analytics tools. And if organisations are using AI, then so are their third-party vendors, the companies processing data on their behalf. Do we know how our vendors are using our data, and how will we manage that risk?

AI and Third-Party Risk: What’s at Stake?

AI amplifies third-party risk in several ways:

Data Leakage: Information entered into AI tools could be stored, reused, or exposed. Are your inputs contributing to a model that may resurface them elsewhere?

Opaque Data Practices: What AI systems are your third parties using? Are they using in-house proprietary models? How clear are you on data usage, retention, and any onward sharing?

Model Vulnerabilities: If an AI model is compromised, its outputs could become inaccurate or biased, damaging trust and operations. What obligations do you have, and how do you manage this risk?

Supply Chain Risks: Many AI solutions rely on a network of sub-processors, expanding the risk landscape. Do you know where your data ends up?

The Key Questions to Ask Yourself and Your Vendors

To manage third-party risk effectively in the AI era, you need to ask the right questions: ...

4 min