GENERATIVE AI AND CYBERSECURITY

1. Course Description
This course focuses on the application of Generative AI and Large Language Models (LLMs) in the field of cybersecurity, while also addressing the protection of AI systems themselves against emerging threats.
The course takes a dual perspective, covering both offensive and defensive uses of GenAI, and combines core theoretical foundations with hands-on practice in building intelligent security tools such as Security Copilots, Autonomous Agents, and guardrail mechanisms aligned with recognized safety standards.
2. Learning Outcomes
Upon completion of the course, participants are expected to acquire the following knowledge and competencies:
•    Understand how Generative AI is utilized in both offensive and defensive cybersecurity scenarios
•    Apply LLMs to automate security tasks such as malware analysis, vulnerability scanning, and security reporting
•    Design and deploy Security Copilots based on Retrieval-Augmented Generation (RAG) and AI agents to support investigation and incident analysis
•    Identify and mitigate security risks specific to Generative AI in accordance with the OWASP Top 10 for LLMs
•    Address ethical considerations and ensure data privacy and legal compliance when deploying Generative AI in enterprise environments
3. Course Structure and Key Modules
Module 1: GenAI for Attack and Defense (The Dual Role)
•    Offensive use of GenAI: How attackers leverage AI to generate fileless malware, craft highly personalized phishing emails, and create deepfakes to bypass biometric authentication
•    Defensive use of GenAI: Using AI to summarize security vulnerabilities, explain malicious code (Code Explainer), and automatically generate security patches
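The "Code Explainer" pattern above can be sketched as a simple prompt builder. A minimal sketch follows; the analyst persona, the instructions, and the low/medium/high risk scale are illustrative assumptions, not a fixed API:

```python
def build_explainer_prompt(snippet: str) -> str:
    """Wrap a suspicious code snippet in a structured analysis prompt.

    The wording and the low/medium/high risk scale are illustrative
    assumptions; adapt them to the model and policy in use.
    """
    return (
        "You are a malware analyst. Explain what the following code does, "
        "flag malicious behaviour (persistence, exfiltration, obfuscation), "
        "and rate the risk as low/medium/high. Do not execute the code.\n\n"
        "--- BEGIN SAMPLE ---\n"
        f"{snippet}\n"
        "--- END SAMPLE ---"
    )

# The resulting prompt is then sent to whichever LLM the team has approved.
prompt = build_explainer_prompt("powershell -enc SQBFAFgA")
```

Keeping the sample inside explicit delimiters also makes it easier for the model to distinguish the analyst's instructions from the untrusted code being analyzed.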
Module 2: Prompt Engineering Techniques for Cybersecurity
•    Security Prompting: Techniques for crafting prompts that enable AI-assisted vulnerability scanning without violating ethical policies
•    Automated Reporting: Transforming raw outputs from security scanning tools (e.g., Nmap, Nessus) into high-quality professional reports using Generative AI
•    Source Code Analysis: Using AI to quickly identify logic flaws in source code, complementing traditional Static Application Security Testing (SAST)
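The automated-reporting workflow above can be illustrated with a small parser that turns Nmap's normal output into a structured reporting prompt. This is a minimal sketch: the parsing covers only simple port lines, and the prompt wording is an assumption (a real pipeline would also ingest Nessus findings):

```python
import re

def parse_nmap_ports(nmap_text: str) -> list:
    """Extract (port, protocol, state, service) rows from Nmap's normal output."""
    rows = []
    for line in nmap_text.splitlines():
        m = re.match(r"(\d+)/(tcp|udp)\s+(\w+)\s+(\S+)", line.strip())
        if m:
            rows.append({"port": int(m.group(1)), "proto": m.group(2),
                         "state": m.group(3), "service": m.group(4)})
    return rows

def build_report_prompt(host: str, rows: list) -> str:
    """Turn parsed findings into a reporting prompt (wording is illustrative)."""
    findings = "\n".join(
        f"- {r['port']}/{r['proto']} {r['state']} ({r['service']})" for r in rows
    )
    return (
        f"Write an executive summary of the port scan of {host}. "
        "For each open service, note common risks and one remediation step.\n"
        f"Findings:\n{findings}"
    )

scan = """\
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql"""
rows = parse_nmap_ports(scan)
prompt = build_report_prompt("10.0.0.5", rows)
```

Structuring the findings before prompting keeps the model grounded in the actual scan data rather than free-associating about the target.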
Module 3: Building a Dedicated Security Copilot
•    RAG (Retrieval-Augmented Generation) for Security: Integrating LLMs with internal enterprise knowledge databases (e.g., infrastructure documentation and historical incident logs) to create a security assistant that understands the organizational context
•    Autonomous Security Agents: Developing agents capable of autonomously executing forensic investigation steps based on natural language requests
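The RAG pattern described above can be sketched end to end: retrieve the most relevant internal documents, then assemble them with the analyst's question into one grounded prompt. Production systems use embedding similarity over a vector store; naive word overlap stands in here to keep the sketch dependency-free, and the document names and prompt wording are illustrative assumptions:

```python
import re

def tokens(text: str) -> set:
    """Lowercased word tokens; a stand-in for real embedding similarity."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Rank documents by naive word overlap with the query, best first."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda name: len(q & tokens(docs[name])), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, docs: dict) -> str:
    """Assemble retrieved context plus the question into one grounded prompt."""
    context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in retrieve(query, docs))
    return (
        "Answer using only the context below and cite sources in brackets.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Hypothetical internal knowledge base entries.
docs = {
    "incident-2023-11": "Phishing incident: attacker used a fake VPN portal to harvest credentials.",
    "firewall-policy": "Inbound traffic is denied by default; exceptions require change tickets.",
    "backup-runbook": "Backups run nightly to the offsite vault and are tested quarterly.",
}
prompt = build_rag_prompt("What happened in the phishing incident?", docs)
```

The "answer using only the context" instruction is what gives the copilot its organizational grounding: answers come from the enterprise knowledge base, not from the model's general training data.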
Module 4: Security for AI Systems
•    OWASP Top 10 for LLMs: An overview of key vulnerabilities specific to Generative AI, including:
o    Prompt Injection: Techniques that manipulate an AI system into ignoring its instructions, for example to disclose sensitive information or perform unintended actions
o    Data Leakage: Preventing accidental disclosure of confidential data by employees when using tools such as ChatGPT
o    Insecure Output Handling: Risks arising when AI-generated outputs are passed unchecked to downstream systems and exploited to execute malicious commands
•    Implementing Guardrails: Using tools such as NeMo Guardrails to control and constrain AI behavior
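The guardrail idea above can be sketched as a pair of checks: one on what reaches the model, one on what leaves it. Real frameworks such as NeMo Guardrails drive this from declarative configuration rather than hand-written regexes, so the patterns below are only a toy illustration of the two checkpoints:

```python
import re

# Illustrative injection signatures; real rails are far more sophisticated.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"system prompt",
]

def input_guard(user_text: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    low = user_text.lower()
    return any(re.search(p, low) for p in INJECTION_PATTERNS)

def output_guard(model_text: str) -> str:
    """Redact command-like lines so model output is never fed straight to a shell."""
    safe = []
    for line in model_text.splitlines():
        if re.match(r"\s*(rm|curl|wget|powershell)\b", line):
            safe.append("[REDACTED COMMAND]")
        else:
            safe.append(line)
    return "\n".join(safe)
```

The output guard addresses Insecure Output Handling directly: even if an injection slips past the input check, the generated text is sanitized before any downstream system can act on it.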
Module 5: Ethics and Legal Compliance
•    Data Privacy: Using offline AI models (Local LLMs) to ensure sensitive data is not exposed to cloud-based systems
•    Copyright and Bias: Addressing issues related to AI-generated source code ownership and bias in threat detection
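Where a fully local LLM is not available, the data-privacy exposure above can be reduced (though not eliminated) by redacting obvious identifiers before any prompt leaves the environment. The patterns below are a minimal sketch, not a vetted PII scrubber:

```python
import re

# Illustrative patterns only; production redaction needs a vetted PII library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Mask e-mail addresses and IPv4 addresses before a prompt leaves the environment."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

masked = redact("Contact alice@corp.example from host 10.1.2.3")
```

Redaction of outbound prompts and a local model for the most sensitive workloads are complementary controls rather than alternatives.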
4. Duration: 5 days per class
5. Certification Organization: The International Society of Data Scientists (ISODS)