Generative AI with LLMs



Next course


Further details, including the schedule, can be found in the current course tables in the syllabus of the respective course, provided the course is offered in an upcoming session. The following text outlines the content that can be expected in the course.

In recent years, large language models (LLMs) such as GPT-3 and GPT-4 have shown an impressive capacity to generate human-like text, enabling new applications in creative writing, chatbots, language translation, and more. However, understanding and effectively leveraging these models can be complex because of their sophisticated architecture and the vast amounts of data they require. This course aims to equip learners with the knowledge and skills to navigate these challenges, demystify the underlying principles of LLMs, and explore practical applications. It is designed for AI practitioners, researchers, and enthusiasts who aspire to harness the power of generative AI in their work or research.

Participants will gain a basic understanding of AI and the foundational concepts of large language models. They will learn the key components and architecture of models like ChatGPT, and how such models are trained to generate human-like text. Students will gain a deep understanding of how to implement these generative AI models for a variety of tasks, such as text generation and text completion, as well as more complex applications like question answering, translation, and summarization. They will learn how to use existing pre-trained models via prompt engineering or fine-tune them for various NLP tasks in domains such as finance, marketing, healthcare, and education, and will gain hands-on experience in these areas.

As AI grows increasingly powerful, it is important to understand its ethical implications. Students can expect to learn about the ethical, societal, and legal issues surrounding the use of large language models, including biases in AI, privacy concerns, and the potential misuse of these models.
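The text-generation idea mentioned above can be illustrated in miniature. The sketch below is not an LLM: real models use transformer networks over subword tokens, whereas this toy substitutes a bigram word-count model (all names and the sample corpus are illustrative). What it does share with LLMs is the autoregressive loop: predict the next token from the context, append it, and repeat.

```python
# Toy illustration of autoregressive generation, the core loop behind
# LLM text generation. A bigram count model stands in for the neural
# network; greedy decoding always picks the most frequent next word.
from collections import defaultdict, Counter

def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model: dict, start: str, max_tokens: int = 10) -> str:
    """Autoregressive loop: predict, append, repeat."""
    out = [start]
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no continuation seen in training data
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model generates text and the model learns from text"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Real LLMs replace the count table with a learned probability distribution over tens of thousands of tokens and usually sample from it rather than decoding greedily, which is what makes their output varied rather than repetitive like this toy's.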