AI for Cybersecurity Professionals

Courses

AI for Cybersecurity Professionals

Course Duration: 5 days
Level: Advanced
AI for Cybersecurity Professionals

About

As AI systems become increasingly integrated into government operations, they present evolving security challenges that require dedicated attention from cybersecurity leadership. The rapid adoption of AI has created a critical gap: while these systems are fundamentally software that should follow well-established cybersecurity practices, their unique characteristics demand new approaches to risk assessment, threat modelling, and security controls.

AI systems possess distinctive properties that differentiate them from traditional software and create novel security vulnerabilities. They are dynamic and adaptive, learning and changing behavior based on data and interactions, making vulnerabilities harder to identify and contain. They perform complex tasks at unprecedented scale with reduced human oversight, meaning security failures can have amplified impacts across entire organizations. Most critically, LLM-based applications suffer from a fundamental design vulnerability - instructions and data are passed on the same channel - creating opportunities for prompt injection and model extraction. This, together with other AI-related attacks such as data poisoning and adversarial examples, creates threats that traditional security controls were not designed to address.

Meanwhile, adversaries are actively targeting AI systems as high-value assets, seeking to extract proprietary models, poison training data, manipulate outputs, and exploit the trust organizations place in AI-driven decisions. This creates an urgent need for cybersecurity professionals who can integrate AI-specific risks into their threat models, develop appropriate controls and mitigations, and build incident response capabilities for AI security breaches.

This training program is designed to equip practitioners with the hands-on knowledge needed to assess, manage, and mitigate security risks in AI systems, building capabilities in understanding AI system architecture, integrating AI-specific threats into risk assessments, implementing security controls across the AI lifecycle, and developing organizational strategies for AI security governance.

Target Audience

This course is intended for cybersecurity professionals who are responsible for:

  • Reviewing or approving AI system designs
  • Securing AI-enabled applications and platforms
  • Responding to AI-related security incidents
  • Defining security requirements and governance controls for AI adoption

Typical participants include:

  • Cybersecurity engineers and practitioners
  • Security architects and technical leads
  • Incident responders and blue team members
  • Risk, governance, and security assurance professionals

Recommended Experience

Participants should have at least 3 years of cybersecurity experience. It is highly recommended to have experience in Python, as some of the course practice session will require basic Python programming.

Learning Outcomes

At the end of this course, participants will be able to:

  • Use existing AI tools effectively and responsibly to support cybersecurity work
  • Describe the anatomy and functionality of AI systems and their associated attack surfaces
  • Implement effective security controls tailored to the AI lifecycle, from development to deployment and monitoring.
  • Develop incident response strategies for AI security breaches, including forensic analysis and remediation techniques.
  • Apply governance frameworks and best practices to manage AI risks within organizational contexts.
  • Evaluate AI system proposals and deployments for security risks
  • Develop security control requirements and specifications for AI deployments
  • Assess AI security incidents and determine appropriate response actions
  • Apply threat modeling techniques to AI systems
  • Initiate AI security governance and risk management frameworks
  • Brief technical and non-technical stakeholders on AI security decisions

Syllabus Summary

Module 1: AI in Daily Work and the Anatomy of AI Systems

This module begins by showing how existing AI tools can be used effectively and responsibly in the daily work of cybersecurity professionals. Participants learn best practices for using chat-based AI systems to support tasks such as information analysis, research, brainstorming, documentation, and controlled experimentation, while understanding the limitations and risks of AI in security-sensitive contexts. Foundational techniques in prompt and context engineering are introduced through guided, hands-on exercises, with emphasis on recognising when AI is helpful and when it should not be relied upon.

The course then examines how modern AI systems are structured and deployed, focusing on the differences between predictive, generative, and agentic AI and how each introduces distinct security considerations. Participants explore the AI supply chain, including stakeholder roles and critical resources such as models, data, software, infrastructure, and compute. Key building blocks—including LLM architectures, reasoning techniques, memory systems, and tool integrations—are analysed from a security perspective. Concepts are reinforced through a no-code, hands-on exercise in which participants assemble a simplified agentic AI system to illustrate real-world system anatomy and attack surfaces, rather than to teach AI development skills.

Module 2: Cybersecurity Attacks and Defences of AI Systems

This module examines the threat landscape for AI systems and how effective security controls can be designed and integrated into existing cybersecurity programs. Participants learn how AI-enabled applications inherit traditional software vulnerabilities while introducing new attack vectors unique to generative and agentic AI. The OWASP Top 10 for LLM and Generative AI applications is used as a foundation, supported by real-world examples that highlight common risks and practical mitigations.

Key AI threat families—including prompt injection, tool and RAG abuse, and AI supply chain attacks—are explored alongside broader techniques documented in MITRE ATLAS. Each area is approached using an Attack-Defend-Validate method, where participants analyse vulnerabilities, design defensive controls, and validate mitigations in realistic scenarios. Exercises require no coding and focus on security decision-making rather than attack execution, reinforced through recent case studies and large-scale government deployment examples.

Module 3: AI Incident Response and Forensics

This module addresses how to respond to and investigate AI security incidents. Participants learn to identify indicators of compromise in AI systems, conduct forensic analysis of AI security breaches (including log analysis, data poisoning detection, and prompt injection traces), contain and remediate AI-specific security incidents, and document findings for post-incident review. The session builds on traditional incident response frameworks while addressing AI-specific investigation challenges such as non-deterministic behavior, distributed attack surfaces, and contaminated training or retrieval data.

Module 4: AI Threat Modelling and Governance, Risk, and Compliance (GRC)

This module focuses on AI governance, risk, and compliance, and on applying threat modeling to AI systems. Participants explore the MITRE ATLAS framework to understand adversary tactics targeting AI and engage with the NIST AI Risk Management Framework (AI RMF) to identify, assess, and govern AI risks, including considerations such as bias, transparency, privacy, and safety. The session also introduces key organizational controls, including AI Bills of Materials (AIBOM), model registries, and governance approaches for managing employee adoption and use of external AI tools.

Course Pricing & Payment Terms

  • The course will commence with a minimum subscription of 20 pax and is limited to 30 pax per cohort.
  • Corporate rates are available. Government subsidies/grants do not apply to this course.
  • For organisations seeking to enrol multiple employees in the course (i.e., more than 10 pax), please contact us at [email protected]
Payment Terms
  • Payment must be made before the start of the course.
  • In the event of cancellation after acceptance into the course, you are entitled to a refund based on the following guidelines:
  • More than 30 days before the start date: 100% refund
  • Between 5-30 days before the start date: 50% refund
  • Less than 5 days before the start date: No refund

To sign up or learn more about course dates, please contact us at [email protected]

Category: Advanced Training

Ready to Enroll?

Take the next step in your cybersecurity journey with this comprehensive training program.

Contact Us to Enroll

📋 Course Information

Duration: 5 days
Level: Advanced
Category: Advanced Training
Format: On-site