
Live Interactions with Jan Chorowski and Adrian Kosowski | Bonus Resource



As part of this bootcamp, we hosted a Fireside Chat featuring Jan Chorowski, CTO of Pathway. Jan is a renowned figure in Artificial Intelligence, with a Ph.D. in Neural Networks, more than 10,000 citations, and research collaborations with AI pioneers such as Geoff Hinton and Yoshua Bengio.

During this Fireside Chat, our host, Anup Surendran, explored the realm of Large Language Models (LLMs) with Jan. Their discussion spanned a wide range of LLM topics, including their diverse applications, the operational hurdles they face, and the concept of 'learning to forget.' The conversation also delved into the real-time capabilities of LLMs and their growing importance in the modern technology landscape.

Key Highlights:

  • Gain insights into the evolution of LLMs and their practical applications.

  • Explore the operational challenges faced by LLMs in real-time scenarios.

  • Understand the concept of 'learning to forget' and its role in LLM development.

  • Dive into the discussion on the real-time nature of LLMs and their relevance.

  • Get valuable answers to audience questions about LLMs, document versioning, and more.

This Fireside Chat offers a unique perspective on the evolving world of Large Language Models. Watch the recording to explore the possibilities of LLMs in today's tech-driven world.

Another Bonus Resource: Recorded Interaction on Real-time Data Processing

Key Highlights:

  • Gain a deep understanding of real-time data processing nuances through a discussion on reactive data processing.

  • Explore the key differences and practical applications of stream versus batch processing.

  • Understand the role of transformers in data engineering, especially in managing and streaming data.

  • Discover emerging machine learning tools and approaches that are particularly beneficial for startups.

Together, these Fireside Chats offer unique perspectives on the fast-evolving domains of Large Language Models and real-time data processing, with valuable insights into the world of AI and machine learning.

This is a session from Pathway's archives: a live interaction between Jon Krohn (Chief Data Scientist at Nebula) and Adrian Kosowski (CPO at Pathway). Adrian Kosowski, notable for earning his PhD at an early age and co-founding Spoj.com, brings over 15 years of diverse research experience to the discussion. The conversation delves into real-time data processing, contrasting stream versus batch processing and exploring practical ML applications.
