Design Leader's Handbook


16 March 2026

Where Are You in the AI Value Chain — and Why It Matters for Risk Management

Suvi Halttula

Head of Social Sustainability Practice

Artificial intelligence is rapidly becoming embedded in business operations, products, and decision-making. Yet many organizations adopting AI overlook a fundamental question: Where do we sit in the AI value chain?

This question matters because AI risks and responsibilities differ depending on the role a company plays. Organizations may supply the inputs that power AI systems, build and deploy AI systems, or use AI tools developed by others. Each position brings different governance challenges and due diligence needs. Companies that fail to understand their position in the AI ecosystem risk overlooking critical dependencies and mismanaging AI-related risks.

The AI Value Chain

Most organizations participate in the AI ecosystem in one or more of three roles.

1. Suppliers of AI Inputs: Risks Begin Upstream

Some organizations provide the building blocks that enable AI systems, such as datasets, infrastructure, or foundation models.

Key risks at this stage include:

  • biased or low-quality training data

  • unclear data provenance

  • intellectual property violations

  • privacy concerns

  • security vulnerabilities

Issues introduced at the input stage can cascade through the entire AI pipeline, making data governance and transparency critical responsibilities.

2. Developers and Deployers: Where Accountability Intensifies

Organizations that design and deploy AI systems determine how AI interacts with real-world decisions. Examples include AI used in hiring and recruitment, financial decision-making, predictive analytics, and digital services and platforms.

Risks at this stage often include:

  • algorithmic bias and discrimination

  • lack of explainability

  • safety failures

  • regulatory non-compliance

  • inadequate human oversight

Because these organizations control system design and deployment, they often carry the highest level of accountability for ensuring AI systems are trustworthy and compliant.

3. Users of AI Systems: Responsibility Cannot Be Outsourced

Many companies rely on AI tools provided by external vendors rather than developing their own systems. Examples include generative AI tools, AI-powered analytics platforms, and customer service chatbots. However, using external AI systems does not eliminate responsibility.

Common risks include:

  • over-reliance on AI outputs

  • leakage of sensitive information

  • lack of transparency about how systems work

  • shadow AI use by employees

Organizations therefore need strong vendor due diligence and internal governance policies.

Due Diligence Across the AI Value Chain

Effective AI risk management requires organizations to look beyond their own systems and understand the broader ecosystem.

Key due diligence questions include:

  • Where did the training data originate? How were models tested and validated?

  • What governance practices do AI vendors follow? What governance practices do we follow?

  • What risks are embedded in AI tools or components?

  • How do we ensure ethical AI usage?

By conducting due diligence across the AI value chain, companies can identify upstream dependencies, manage downstream impacts, and build stronger governance frameworks. As AI adoption accelerates and regulatory expectations evolve, value chain awareness will become a cornerstone of responsible AI governance.

Executive Summary: Key Takeaways

  1. AI risk flows through the entire AI value chain, from data providers to system users.

  2. Companies typically operate in one or more of three roles: AI input suppliers, AI developers/deployers, or AI users.

  3. Each role creates different risk exposures, including data risks, algorithmic risks, and operational risks.

  4. Robust due diligence across the value chain is essential to identify upstream and downstream risks.

  5. Organizations that understand their role can implement more effective AI governance and risk management practices.

Let's talk about how to make this happen

We help you set the foundation and grow into a true business leader in sustainability transformation.

Mia Folkesson

Managing Partner

mia@impaktly.com