
Recognising High-Risk AI under the EU AI Act


Sale price £67.00 GBP (regular price £97.00 GBP). Taxes included.

This course is for people who are not yet sure whether the EU AI Act applies to them.

It helps you recognise when an AI system, use case, or decision crosses a line from low concern into high-risk territory, so you know when to pause, escalate, or seek formal advice.

The EU AI Act is live. Enforceable. And already reshaping how organisations design, deploy, and govern AI systems.

Still, many teams stumble on one foundational question:
When does AI use cross into high-risk territory?

This course gives you a clear, responsible, and practical answer, fast.

What This Course Delivers

Recognising High-Risk AI under the EU AI Act is a concise, self-paced micro-course (under 90 minutes) designed to build AI risk literacy before formal legal or technical assessments begin.

You’ll learn to:

  • Recognise when AI systems fall under the high-risk categories in Annex III
  • Understand why certain uses trigger stricter safeguards
  • Connect regulatory logic to real harm and impact (human, not legal advice)
  • Build shared language across technical, policy, and leadership teams

This positions you to see risk early and to make better decisions when the stakes are high.

Who This Is For

You don’t need a law degree. You don’t need to code. This is for professionals in roles that will make or influence decisions about AI.

  • Policy, governance, and compliance professionals
  • HR, education, and public-sector teams
  • Product owners and programme leads
  • Leaders overseeing AI procurement, deployment, or strategy
  • Anyone expected to ask the right questions, before it’s too late

If you’re involved in an AI rollout, procurement decision, or internal policy review, this is your starting point.

What Makes This Course Different

We know the marketplace is crowded with AI training. This course delivers real value because it’s:

  • Grounded in regulation

Work directly with the EU AI Act’s risk-based logic.

  • Human-centred, not abstract

High-risk AI is defined by impact on people’s lives. In this course, we keep that front and centre.

  • Designed for early intervention

Harm typically happens before formal assessments even begin. This course helps you spot escalation points faster.

  • Built by practitioners, not platforms

Created by professionals with field-tested experience in AI ethics, public-sector training, and human-centred governance across the UK, Africa, the USA, Bermuda, and Asia.

By the End of This Course, You’ll Be Able To:

  • Explain the EU AI Act’s risk-based structure with confidence
  • Identify high-risk AI use cases in employment, education, biometrics, critical infrastructure, and public services
  • Distinguish high-risk systems from minimal- or limited-risk applications
  • Discuss why stricter obligations apply, and what “proportionate governance” really means
  • Understand the role of human oversight, inclusion, and accountability as non-negotiables

How It Works

  • Short, structured content (under 90 mins total)
  • Real-world examples from education, hiring, public services, and more
  • Reflective prompts to connect insights to your context
  • A quick knowledge check to reinforce learning
  • Certificate of completion issued automatically

Ethical & Legal Clarity

CKC Cares provides education and strategic guidance on ethical, human-centred AI.

This course is for general purposes. It does not constitute legal advice or formal compliance certification. It will equip you with the understanding and literacy to engage with legal and technical teams, and to ask better questions, earlier.

Why This Matters Now

The EU AI Act shifts responsibility upstream. Organisations will be expected to demonstrate:

  • Awareness of AI risk
  • Proportionate oversight
  • Informed decision-making
  • Governance that matches impact

Don’t wait for an audit, or a headline, to act. Build your risk literacy now, before your next AI deployment or human-AI decision.

Part of CKC Cares’ human-first approach to AI governance, supporting safer, fairer, and more accountable technology in practice.
