Companies are rapidly integrating AI into their operations, from customer service chatbots to advanced analytics tools. And if organisations are using AI, then so are their third-party vendors: the companies processing data on their behalf.

Do we know how our vendors are using our data, and how do we manage that risk?

AI and Third-Party Risk: What’s at Stake?

AI amplifies third-party risk in several ways:

  • Data Leakage: Information entered into AI tools could be stored, reused, or exposed. Are your inputs contributing to a model that may resurface that information elsewhere?
  • Opaque Data Practices: What AI systems are your third parties using? Are they using in-house proprietary models? How clear are you on data usage, retention, and any onward sharing?
  • Model Vulnerabilities: If an AI model is compromised, its outputs could become inaccurate or biased, damaging trust and operations. What obligations do you have, and how do you manage this risk?
  • Supply Chain Risks: Many AI solutions will rely on a network of sub-processors, expanding the risk landscape. Do you know where your data ends up?

The Key Questions to Ask Yourself and Your Vendors

To manage third-party risk effectively in the AI era, you need to ask the right questions:

1. Data Handling and Privacy:

  • How does the vendor handle and store your data?
  • Is your data used for model training or shared with third parties?
  • Can the vendor demonstrate compliance with data protection regulations (e.g., GDPR, CCPA)?

2. Transparency and Explainability:

  • Can the vendor explain how their AI model processes inputs and generates outputs?
  • Are their models explainable and auditable?

3. Security and Controls:

  • What security measures are in place to protect data in transit and at rest?
  • Does the vendor provide evidence of security controls through a SOC 2 report or ISO 27001 certification, and do those controls address AI-specific risks?

4. Model Risk Management:

  • How does the vendor monitor and mitigate model drift or bias?
  • Do they have processes for continuous testing and validation of AI outputs?

5. Subcontractor Risks:

  • Does the vendor use subcontractors or other third-party models when processing your data?
  • How do they manage risk across their own supply chain?

Practical Ways to Manage Third-Party AI Risk

1. Update Your Third-Party Risk Management (TPRM) Framework:

  • Incorporate AI-specific risk assessments into your vendor due diligence process.
  • Expand contractual terms to cover AI-related concerns, including data ownership and model transparency.

2. Conduct Regular Audits and Testing:

  • Perform regular security and privacy audits of vendors that handle your data and use AI.
  • Test outputs for bias, fairness, and reliability.

3. Implement Strong Data Governance Policies:

  • Limit sensitive data sharing with third-party AI tools.
  • Use synthetic data for testing when possible.
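To illustrate the synthetic-data point above, here is a minimal sketch in Python (standard library only; the field names and record shape are illustrative assumptions, not a prescribed schema) that replaces real customer records with structurally similar synthetic ones before anything is shared with a third-party AI tool:

```python
import random
import string

def synthesize_record(real_record):
    """Return a record with the same fields as real_record but randomly
    generated values, so no real customer data leaves the organisation
    during vendor testing. Field names here are illustrative."""
    fake_name = (random.choice(string.ascii_uppercase)
                 + "".join(random.choices(string.ascii_lowercase, k=7)))
    return {
        "name": fake_name,
        "email": f"{fake_name.lower()}@example.com",   # safe reserved domain
        "account_id": "".join(random.choices(string.digits, k=8)),
    }

real = {"name": "Jane Smith", "email": "jane@corp.com", "account_id": "10293847"}
synthetic = synthesize_record(real)
```

The synthetic record keeps the same shape, so vendor integrations can be tested end to end without exposing personal data.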

4. Align with Industry Standards:

  • Follow industry guidance such as NIST’s AI Risk Management Framework.
  • Ensure compliance with AI-specific regulations and ethical guidelines.

5. Collaborate and Share Knowledge:

  • Engage with industry peers to share best practices and lessons learned.
  • Participate in industry groups focused on AI and third-party risk.

Attack Surface Management: Proactive TPRM Beyond Assessments

Traditional Third-Party Risk Management (TPRM) relies on periodic assessments, security questionnaires, and audits (e.g. SOC 2 assessments) to evaluate vendor security. However, these methods are inherently point-in-time and, in the case of self-assessments, rely on self-reported data, which may be incomplete or inaccurate.

Attack Surface Management (ASM) goes beyond traditional assessments by providing continuous, real-time monitoring of third-party attack surfaces, uncovering risks that assessments often miss, such as shadow IT, exposed cloud assets, misconfigured databases, and leaked credentials.
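The core of this idea can be sketched very simply: compare the latest externally discovered assets against an approved baseline, and flag anything new for triage. This is a toy illustration assuming inventories are available as sets of hostnames (the function, names, and data are hypothetical, not any ASM product's API):

```python
def flag_new_exposures(baseline_assets, discovered_assets):
    """Compare an approved asset baseline against assets found in the
    latest external scan. Newly discovered assets are candidate shadow-IT
    or misconfiguration findings; vanished ones may indicate retired
    (or hidden) infrastructure worth confirming."""
    new = discovered_assets - baseline_assets        # not in the baseline
    retired = baseline_assets - discovered_assets    # no longer visible externally
    return {"new_exposures": sorted(new), "retired": sorted(retired)}

baseline = {"app.vendor.example", "api.vendor.example"}
latest_scan = {"app.vendor.example", "api.vendor.example",
               "staging-db.vendor.example"}
findings = flag_new_exposures(baseline, latest_scan)
# findings["new_exposures"] == ["staging-db.vendor.example"]
```

Real ASM platforms add automated discovery, scoring, and alerting on top, but the value proposition is the same: a continuously refreshed diff against what you thought was exposed.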

Innovations in AI have enabled capabilities that security experts have long envisioned: real-time threat detection, automated asset discovery, and predictive risk analysis across vast, dynamic attack surfaces. Machine learning enhances this by identifying patterns and anomalies in real-time and at scale, prioritising threats, and providing actionable insights to security analysts.

So rather than waiting for annual reviews or vendor disclosures, ASM allows organisations to detect vulnerabilities proactively, monitor how risk is changing dynamically and in real time, and prompt a response before a potential attacker can exploit the weakness. By integrating ASM into TPRM programs, organisations gain continuous visibility into risks at their third-party suppliers, reducing blind spots and giving boards and regulators greater assurance over vendor security.