Lead Instructor: 林俊叡, National Taiwan University of Science and Technology (NTUST)
This course explores how large language models (LLMs) are reshaping the cybersecurity field. Students will learn to apply AI to security tasks, data curation, machine learning, and defense-system development. Through project-based learning, teams will design and test real-world AI+cybersecurity solutions while reflecting on ethics, governance, and the dual challenge of "securing AI" and "using AI to defend."
Applying Large Language Models in Cybersecurity Systems introduces students to the rapidly evolving intersection of artificial intelligence and cyber defense. The course explores how large language models (LLMs) are transforming cybersecurity practice, from automated threat detection to intelligent defense solutions, while also addressing the unique security challenges AI itself introduces.

Students will begin by examining the question "Can AI defend with us?", a guiding theme that frames the role of AI as both an ally and a potential risk in digital security. The course then surveys the evolution of AI with a cybersecurity focus, real-world case studies, and the key terminology that shapes the field.

Practical skills are emphasized through modules on effective prompting, data curation for threat intelligence, and applying machine learning techniques to security problems. Students will gain hands-on experience in designing, developing, and evaluating AI-powered cyber defense systems, while also considering governance, ethics, and security implications.

A distinctive feature of the course is its Project-Based Learning (PBL) track, where students work in teams to translate theoretical knowledge into practical solutions. Through progressive milestones (requirements, design, proof-of-concept, and final solution), students will learn how to build and evaluate AI-driven security applications that can operate in real-world environments.
By the end of the course, students will be equipped not only with technical competencies in AI and cybersecurity integration but also with the critical perspective required to navigate ethical, organizational, and security governance challenges.
● Weekly assignments are graded on a scale of 1–5 points (0 if not submitted).
● The total score is calculated as 20 base points + the sum of all assignment points, with a maximum of 100 points.
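The grading formula above can be sketched in a few lines of Python (a hypothetical helper for illustration, not official course software):

```python
# Grading rule described above: 20 base points plus the sum of weekly
# assignment scores (each 0-5), capped at a maximum of 100 points.
def course_total(assignment_scores):
    total = 20 + sum(assignment_scores)
    return min(total, 100)

# Example: 16 assignments all scored 5 would exceed 100, so the cap applies.
print(course_total([5] * 16))  # -> 100
print(course_total([3] * 10))  # -> 50
```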
This opening theme sets the stage by asking whether AI can act as a partner in defending cyberspace.
We will examine how AI shifts from a passive tool to an active collaborator.
AI Evolution: A Cybersecurity Focus
We trace the evolution of AI, with emphasis on how each wave—from expert systems to
LLMs—intersects with security.
True AI+ Cybersecurity Stories
Real-world case studies illustrate how AI has already been used in cyber defense and offense.
We will examine success stories, failures, and lessons learned.
AI & Cybersecurity Lingo
This module builds a shared vocabulary at the
intersection of AI and security. Students learn
terms used in both communities to prevent
miscommunication.
Prompting AI for Cybersecurity
Students learn how to craft effective prompts for
LLMs in security tasks. We discuss prompt
design, adversarial prompting, and failure cases.
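To give a flavor of the prompt design discussed here, a minimal sketch of a structured security-task prompt; the template, its wording, and the triage labels are illustrative assumptions, not course material:

```python
# Illustrative prompt template for an LLM-assisted log-triage task.
# The structure (role, task, constraints, evidence) is a common prompting
# pattern; the specific field names and labels are hypothetical.
TRIAGE_PROMPT = """\
You are a SOC analyst assistant.
Task: classify the log entry below as BENIGN, SUSPICIOUS, or MALICIOUS,
and justify your answer in one sentence.
Constraints: rely only on the evidence given; if the evidence is
insufficient, answer SUSPICIOUS.

Log entry:
{log_entry}
"""

def build_triage_prompt(log_entry: str) -> str:
    return TRIAGE_PROMPT.format(log_entry=log_entry)

print(build_triage_prompt("Failed password for root from 203.0.113.7 port 22"))
```

Constraining the output to a fixed label set is one way to make LLM responses machine-parsable; adversarial prompting, also covered in this module, studies how such constraints can be subverted.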
One-day holiday for the NTUST anniversary; that week's coursework continues as scheduled.
Data Curation for Cybersecurity
We explore how security data must be cleaned,
structured, and curated for effective AI use.
Students will learn challenges of logs, alerts,
and threat intelligence feeds.
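As a small taste of the curation challenge, a sketch that turns a raw SSH auth-log line into structured fields suitable for downstream AI use; the regex and field names are illustrative assumptions for one common syslog format:

```python
# Minimal log-curation sketch: parse a raw sshd failed-login line into a
# structured record. Real log curation must handle many formats, encodings,
# and malformed lines; this shows only the basic extraction step.
import re

AUTH_RE = re.compile(
    r"Failed password for (?P<user>\S+) "
    r"from (?P<ip>\d+\.\d+\.\d+\.\d+) port (?P<port>\d+)"
)

def parse_auth_line(line: str):
    m = AUTH_RE.search(line)
    return m.groupdict() if m else None

line = "Oct 3 12:01:02 host sshd[811]: Failed password for root from 203.0.113.7 port 22"
print(parse_auth_line(line))  # {'user': 'root', 'ip': '203.0.113.7', 'port': '22'}
```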
Machine Learning for Cybersecurity
This module covers classical and modern
machine learning applied to intrusion detection,
anomaly detection, and malware classification.
Students will see how supervised, unsupervised,
and reinforcement learning differ in security
contexts.
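A toy example of the unsupervised side of this module: a z-score anomaly detector over event counts, the kind of classical baseline that modern methods are compared against. The data and threshold are made up for the sketch.

```python
# Toy unsupervised anomaly detection: flag indices whose z-score exceeds a
# threshold. Real intrusion/anomaly detection uses richer features and
# models; this illustrates only the statistical baseline idea.
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.0):
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Hourly login counts; hour 5 spikes far above the baseline.
logins = [10, 12, 9, 11, 10, 95, 10, 12]
print(zscore_anomalies(logins))  # -> [5]
```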
One week off for the Qingming (Tomb-Sweeping) holiday; that week's coursework continues as scheduled.
Developing AI-powered Cyber Defense
We transition from theory to system building.
Students design end-to-end workflows for
AI-driven defense, including data pipelines,
model integration, and automation layers.
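The three layers named above can be sketched as composed stages; every function name and rule here is an illustrative assumption, not a real system design:

```python
# Hypothetical end-to-end workflow sketch: data pipeline -> model stage ->
# automation layer. The keyword-based "model" is a stand-in for a trained
# classifier or LLM call.
def collect(raw_events):
    # Data pipeline: drop empty events and normalize the rest.
    return [e.strip().lower() for e in raw_events if e.strip()]

def score(events):
    # Model integration stub: flag events containing a suspicious keyword.
    return [(e, 1.0 if "denied" in e else 0.0) for e in events]

def respond(scored, threshold=0.5):
    # Automation layer: turn high-scoring events into response actions.
    return [f"ALERT: {e}" for e, s in scored if s >= threshold]

events = ["Login OK", "  Access DENIED for admin  ", ""]
print(respond(score(collect(events))))  # -> ['ALERT: access denied for admin']
```

Keeping the stages as separate functions mirrors how a real deployment would let each layer (ingestion, model, automation) be tested and swapped independently.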
Governing Ethics and Security
AI in security raises governance and ethical
concerns. Students study bias, accountability,
explainability, and dual-use risks. We also cover
standards, regulations, and compliance
frameworks.
True AI+ Cybersecurity Stories
A second set of case studies builds on earlier
discussions, with deeper analysis of emerging
trends. We examine ongoing incidents where AI
is suspected to play a role.
AI for Cybersecurity
We focus on how AI enhances security
functions such as monitoring, detection, and
response. Students review tools and frameworks
that integrate AI in SOC workflows.
Cybersecurity for AI
Here the perspective flips: securing AI systems
themselves. Students examine threats to models,
data pipelines, and APIs. Topics include
adversarial attacks, data poisoning, and model
theft.
PBL: AI+ Security Requirements
Teams begin project-based learning by gathering
requirements for an AI+security solution. The
focus is on defining scope, use cases, and
constraints.
PBL: AI+ Security Design
Teams progress to high-level and detailed design. Students create system architectures, data flows, and defense logic. Emphasis is on aligning design with requirements while considering risks.
PBL: AI+ Security POC
Teams implement a proof-of-concept based on
their designs. The emphasis is on demonstrating
feasibility, not completeness. Students test core
functions and identify limitations.
PBL: AI+ Security Solution
The course culminates with a full solution built
from requirements, design, and POC iterations.
Students deliver a working system or detailed
prototype.