Course Syllabus
Last updated
By the end of this course, you will:
Be proficient in developing production-ready LLM-based applications from day one.
Have a clear understanding of LLM architectures and pipelines.
Be able to apply prompt engineering to get the best results from generative AI tools such as ChatGPT.
Build an open-source project that runs on a real-time data stream or on static data.
1 – Basics of LLMs
What generative AI is and how it differs from other AI approaches
Understanding LLMs
Advantages and Common Industry Applications of LLMs
Bonus section: Google Gemini and Multimodal LLMs
2 – Word Vectors
What word vectors are and how word-vector relationships work
Role of context in LLMs
Transforming vectors in LLM responses
Bonus Resource: Overview of Transformers Architecture and Vision Transformers
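As a preview of the word-vector material, here is a minimal sketch of how word-vector relationships can be measured with cosine similarity. The vectors and words below are invented for illustration; real embeddings have hundreds of dimensions.

```python
import math

# Toy 3-dimensional "word vectors" -- the values are made up purely
# for illustration; real embedding models produce much larger vectors.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.8],
    "apple": [0.1, 0.9, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words should score higher than unrelated ones.
print(cosine_similarity(vectors["king"], vectors["queen"]))
print(cosine_similarity(vectors["king"], vectors["apple"]))
```

With these toy values, "king" and "queen" score higher than "king" and "apple", which is the kind of relationship the module explores.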
3 – Prompt Engineering
Introduction and in-context learning
Best practices to follow: Few Shot Prompting and more
Token Limits
Prompt Engineering Peer Reviewed Exercise
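As a taste of the few-shot prompting covered in this module, the sketch below assembles a prompt from labelled examples; the sentiment task and the rough 4-characters-per-token heuristic are assumptions for illustration, not part of any specific API.

```python
# A minimal few-shot prompt builder. The examples and the task are
# invented for illustration; any chat-completion API could consume
# the resulting string.
examples = [
    ("The battery died after one day.", "negative"),
    ("Setup took thirty seconds and it just works.", "positive"),
]

def build_few_shot_prompt(examples, query):
    # Show the model a few input -> label pairs, then the new input.
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "The screen cracked within a week.")
# Very rough token estimate (~4 characters per token) to stay under limits.
approx_tokens = len(prompt) // 4
print(prompt)
print("approx tokens:", approx_tokens)
```

Counting approximate tokens before sending a prompt is one simple way to respect the token limits discussed above.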
4 – RAG and LLM Architecture
Introduction to RAG
LLM Architecture Used by Enterprises
Architecture Diagram and LLM Pipeline
RAG vs Fine-Tuning and Prompt Engineering
Key Benefits of RAG for Real-time Applications
Similarity Search for Efficient Information Retrieval
Bonus Resource: Use of LSH + kNN and Incremental Indexing
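To preview the similarity-search material, here is a brute-force k-nearest-neighbour retrieval sketch over toy document embeddings. The documents and vectors are invented for illustration; production RAG systems replace the linear scan with an approximate index (for example LSH) that also supports incremental additions.

```python
import math
from heapq import nlargest

# Toy document embeddings -- values are made up for illustration.
documents = {
    "doc_returns":  [0.1, 0.9, 0.2],
    "doc_shipping": [0.2, 0.8, 0.3],
    "doc_billing":  [0.9, 0.1, 0.4],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def knn(query_vec, docs, k=2):
    # Score every document against the query and keep the top k.
    # This linear scan is O(n); LSH-style indexes approximate it faster.
    return nlargest(k, docs, key=lambda name: cosine(query_vec, docs[name]))

query = [0.15, 0.85, 0.25]  # e.g. an embedded user question about returns
print(knn(query, documents))
```

The retrieved documents would then be passed to the LLM as context, which is the core of the RAG pipeline this module builds up.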
5 – Hands-on Project
Installing Dependencies and Prerequisites
Building a Dropbox RAG App using open-source tools
Building a Real-time Discounted Products Fetcher for Amazon Users
Problem Statements for Projects
Project Submission