Security when working with AI – how to avoid data leaks (AISEC)
AI - Artificial Intelligence, AI for End Users
Location, current course term
The course:
Hide detail
-
Introduction to AI security
-
How generative AI systems (LLMs) work and how they handle context and inputs
-
Typical ways companies use AI (ChatGPT, Copilot, internal AI assistants)
-
Main security risks associated with AI tools
-
Overview of real incidents and common user mistakes
-
Working with data when using AI
-
How AI treats information placed in prompts or documents
-
Types of data that should not be shared with AI tools (internal docs, personal data)
-
Risks of sharing sensitive information with external AI services
-
Practical anonymization techniques and safe data handling practices
-
Output manipulation (Prompt Injection)
-
The concept of prompt injection and how outputs can be manipulated
-
Hidden instructions in documents, emails or web content
-
Attack scenarios when working with documents and external sources
-
How to detect these attacks and how to defend against them
-
Trustworthiness of AI responses
-
AI hallucinations and why inaccurate answers occur
-
How AI uses information sources
-
Quick methods to verify AI responses
-
Situations when AI outputs must be checked or avoided
-
Safe AI use in an organization
-
Setting basic rules for AI use in the company
-
AI policy and recommended internal procedures
-
Employee roles in secure AI usage
-
A practical checklist for safe AI practices
-
Practical scenarios and recommendations
-
Analysis of typical AI use cases in the workplace
-
Identifying security risks in those scenarios
-
Recommended procedures for safe AI tool use
-
Discussion of concrete, real-world examples
-
Schedule:
-
2 days (9:00 AM - 5:00 PM )
-
Course price:
-
376.00 € ( 454.96 € incl. 21% VAT)
-
Language:
-