AI for Cybersecurity Professionals

Courses

AI for Cybersecurity Professionals

Course Duration: 5 days
AI for Cybersecurity Professionals

About

As AI systems become increasingly integrated into government operations, they present evolving security challenges that require dedicated attention from cybersecurity leadership. The rapid adoption of AI has created a critical gap: while these systems are fundamentally software that should follow well-established cybersecurity practices, their unique characteristics demand new approaches to risk assessment, threat modelling, and security controls.

AI systems possess distinctive properties that differentiate them from traditional software and create novel security vulnerabilities. They are dynamic and adaptive, learning and changing behaviour based on data and interactions, making vulnerabilities harder to identify and contain. They perform complex tasks at unprecedented scale with reduced human oversight, meaning security failures can have amplified impacts across entire organisations. Most critically, LLM-based applications suffer from a fundamental design vulnerability – instructions and data are passed on the same channel – creating opportunities for prompt injection and model extraction. This, together with other AI-related attacks such as data poisoning and adversarial examples, creates threats that traditional security controls were not designed to address.

Meanwhile, adversaries are actively targeting AI systems as high-value assets, seeking to extract proprietary models, poison training data, manipulate outputs, and exploit the trust organisations place in AI-driven decisions. This creates an urgent need for cybersecurity professionals who can integrate AI-specific risks into their threat models, develop appropriate controls and mitigations, and build incident response capabilities for AI security breaches.

In this training program offer two intensive 5-day courses, one for engineers and the second for practitioners (policy, CISOs), to equip them with the hands-on knowledge needed to assess, manage, and mitigate security risks in AI systems, building capabilities in understanding AI system architecture, integrating AI-specific threats into risk assessments, implementing security controls across the AI lifecycle, and developing organizational strategies for AI security governance.

Target Audience

    Typical participants include:

    • Cybersecurity engineers
    • Cybersecurity professionals working on policy and non-technical aspects
    • CISOs

    Learning Outcomes

    At the end of this course, participants will be able to:

    • Describe the anatomy and functionality of AI systems and their relationship to attack surface and attack vectors
    • Evaluate AI system proposals for security risks
    • Develop control requirements, mitigations and security specifications for AI deployments
    • Assess AI security incidents and determine appropriate responses
    • Brief technical and non-technical stakeholders on AI security decisions

    Syllabus Summary

    Module 1: The Anatomy of AI Systems

    The first part of the training introduces participants to the modern AI landscape, clarifying the differences between predictive AI, generative AI, and agentic AI, and how each creates distinct security considerations. Learners explore the full AI supply chain, including the roles of developers, deployers, operators, and users, as well as the critical resources involved such as models, data, software frameworks, hardware infrastructure, and compute.

    Participants also study the core building blocks of AI systems, including LLM architectures, prompting and reasoning methods, memory and Retrieval-Augmented Generation (RAG), and tool integration through function calling and protocols like MCP. The day is highly hands-on: learners progressively build a simplified agentic AI system in n8n (no coding) to understand how real-world AI deployments expand attack surfaces and security risks.

    Module 2: Cybersecurity Attacks and Defences of AI Systems

    The second module focus on the evolving threat landscape for AI systems, combining traditional software vulnerabilities with new attack vectors unique to machine learning and generative AI. Participants work through OWASP’s Top 10 for LLM and GenAI applications, using practical exercises to see how AI-enabled applications introduce new risks and how these can be mitigated within existing cybersecurity programs.

    The module deep dives into major AI threat families, including prompt injection and jailbreaking, tool and RAG exploitation, and supply chain/model loading attacks, alongside broader awareness of techniques catalogued in MITRE ATLAS. Each topic follows an Attack–Defend–Validate workflow: participants exploit vulnerabilities in an agentic AI system, implement security controls, and retest mitigations. Real-world case studies are used to ground concepts in operational practice, with exercises designed to require no coding background.

    Module 3: AI Threat Modeling and GRC

    This third module equips practitioners with governance, risk, compliance, and threat modeling approaches tailored to AI systems. Participants explore the MITRE ATLAS framework, extending familiar ATT&CK-style adversary techniques into AI-specific domains, and learn structured methods for identifying and prioritizing AI threats.

    The session also introduces the NIST AI Risk Management Framework (AI RMF), helping learners map and govern risks beyond security, including bias, privacy, transparency, and safety. Key organisational controls such as AI Bills of Materials (AIBOM), model registries, and governance policies for employee use of external AI tools are discussed to support responsible and compliant AI adoption.

    Module 4: AI Incident Response and Forensics

    The last module focuses on responding to and investigating AI security incidents, extending traditional incident response practices into AI-specific environments. Participants learn to detect indicators of compromise in AI systems, perform forensic analysis of breaches such as prompt injection traces, poisoned data, or malicious model behaviour, and implement containment and remediation strategies.

    The module addresses unique challenges in AI incident response, including non-deterministic outputs, distributed attack surfaces, and compromised training or retrieval pipelines. The day culminates in a team-based competitive exercise, reinforcing practical investigation and response skills in realistic AI security scenarios.

    Course Pricing & Payment Terms

    • The course will commence with a minimum subscription of 20 pax and is limited to 30 pax per cohort.
    • Corporate rates are available. Government subsidies/grants do not apply to this course.
    • For organisations seeking to enrol multiple employees in the course (i.e., more than 10 pax), please contact us at [email protected]
    Payment Terms
    • Payment must be made before the start of the course.
    • In the event of cancellation after acceptance into the course, you are entitled to a refund based on the following guidelines:
    • More than 30 days before the start date: 100% refund
    • Between 5-30 days before the start date: 50% refund
    • Less than 5 days before the start date: No refund

    To sign up or learn more about course dates, please contact us at [email protected]

    Category: Advanced Training

    Ready to Enroll?

    Take the next step in your cybersecurity journey with this comprehensive training program.

    Contact Us to Enroll

    📋 Course Information

    Duration: 5 days
    Category: Advanced Training
    Format: On-site