
How to Run the Examples


Last updated 1 year ago

Congratulations on coming this far!

Let's say you want to go beyond the Amazon Discounts App and the Dropbox Retrieval App. This module makes it easy for you to build and run your own applications using the examples in the LLM App repository.

What are the Examples offered?

The repository offers multiple ready-made use cases under its examples folder to illustrate various areas of application.

Once you've cloned the LLM App repository and set up the environment variables (per the steps covered in the earlier modules), you're all set to run the examples.
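For reference, a minimal environment setup might look like the sketch below. OPENAI_API_KEY is the variable most examples rely on, but the exact set of variables is example-specific, so check the repository's README; the key value here is a placeholder, not a real key.

```shell
# Sketch of a minimal .env file for running the examples.
# Variable names beyond OPENAI_API_KEY vary by example -- consult
# the repository's README for what your chosen pipeline expects.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-your-key-here
EOF
echo ".env written"
```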

The table below summarizes the types of examples you can explore.

| Example Type | What It Does | What's Special | Good For |
| --- | --- | --- | --- |
| contextless | Answers your questions without looking at any additional data. | Simplest example to try; not RAG-based. | Beginners getting started. |
| contextful | Uses extra documents in a folder to help answer questions. | Better answers by using more data. | More advanced, detailed answers. |
| contextful_s3 | Like contextful, but stores documents in S3 (a cloud storage service). | Good for handling a lot of data. | Businesses or advanced projects. |
| unstructured | Reads different types of files, like PDFs and Word docs. | Can handle many file formats and unstructured data. | Working with various file types. |
| local | Runs everything on your own machine without sending data out. | Keeps your data private. | Those concerned about data privacy. |
| unstructuredtosql | Takes data from different files and puts it in a SQL database, then uses SQL to answer questions. | Great for complex queries. | Advanced data manipulation and queries. |

Simple Way to Run the Examples on LLM App

Assuming you've completed the previous steps, here's a recommended, step-by-step process to run the examples:

1 - Open a terminal and navigate to the LLM App repository folder:

cd llm-app
2 - Choose your example. The examples are located in the examples folder; say you want to run the 'alert' or 'contextful' example. You have two options here:

  • Option 1: Run the centralized example runner. This allows you to switch between different examples quickly:

    python run_examples.py alert

  • Option 2: Navigate to the specific pipeline folder and run the example directly. This option is more focused and best if you know exactly which example you're interested in:

    python examples/pipelines/contextful/app.py

By following these steps, you're not just running code but actively engaging with the LLM App's rich feature set, from real-time data syncing to model monitoring. It's a step closer to implementing an LLM application that can have a meaningful impact.
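If you're not sure which examples your checkout contains, you can discover them by scanning the pipelines directory for app.py entry points. This is a generic sketch: the examples/pipelines layout is taken from the commands in this section, the `list_pipelines` helper is invented for illustration, and the temporary tree built below is just a stand-in for a real checkout.

```python
# List runnable pipeline examples by scanning for app.py entry points.
# The temp tree below is a stand-in for a real llm-app checkout.
from pathlib import Path
import tempfile

def list_pipelines(repo_root: str) -> list[str]:
    """Return pipeline names that ship an app.py entry point."""
    pipelines = Path(repo_root) / "examples" / "pipelines"
    return sorted(p.name for p in pipelines.iterdir()
                  if (p / "app.py").exists())

# Build a fake checkout to demonstrate:
root = tempfile.mkdtemp()
for name in ("contextful", "alert", "local"):
    d = Path(root) / "examples" / "pipelines" / name
    d.mkdir(parents=True)
    (d / "app.py").touch()

print(list_pipelines(root))  # → ['alert', 'contextful', 'local']
```

Pointing `list_pipelines` at your actual clone (e.g., `list_pipelines("llm-app")` from the parent directory) would list whatever pipelines your checkout ships.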

That's it! 😄 🎉