4990,00 Kč
Prerequisites
- Basic knowledge of Python programming.
- Basic knowledge of machine learning (preferred).
Outline
- Overview of generative AI (text, images).
- Evolution of language modeling.
- Transformers.
- Types of transformer-based LLMs (encoder, decoder, encoder-decoder).
- Reinforcement learning from human feedback (RLHF).
- Overview of the most popular transformer-based LLMs (BERT, GPT, LLaMA, T5, BART, …).
- Transformer-based classification example with HuggingFace and OpenAI.
- Prompt engineering: in-context learning; zero-shot, one-shot, and few-shot prompting; configuration parameters of the generative process.
- Full fine-tuning of large language models, parameter-efficient fine-tuning (LoRA).
- Text generative AI evaluation (ROUGE, BLEU).
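As a small taste of the evaluation topic above, here is a minimal sketch of ROUGE-1 recall (the fraction of reference unigrams that also appear in the generated text). This is an illustrative hand-rolled version, not the course material itself; in practice you would use a dedicated library such as `rouge-score`.

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate.

    Counts are clipped, so a word repeated in the candidate cannot be
    credited more times than it occurs in the reference.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(n, cand_counts[word]) for word, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(round(rouge1_recall(reference, candidate), 3))  # → 0.833 (5 of 6 reference words matched)
```

ROUGE is recall-oriented (how much of the reference is covered), whereas BLEU, also covered in the course, is precision-oriented; the two are often reported together.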
Description
This course is designed for anyone who is fascinated by the capabilities of large language models (LLMs) and generative artificial intelligence and wants to go beyond the basic user level. We’ll use a combination of open-source and paid models to highlight the differences between them.
Note: All trainings are held in English.
Additional information
Date: January 12th, 2024, 9:00–16:00 CET (online)