Hands-on Development

Welcome to the final module of this bootcamp!

Now, we will guide you through setting up a Retrieval Augmented Generation (RAG) architecture using LLM App, an open-source production framework for building and serving AI applications and LLM-enabled real-time data pipelines.
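Before diving into the tooling, it helps to see the RAG pattern itself in miniature: retrieve the documents most relevant to a query, then prepend them as context to the prompt sent to the LLM. The sketch below is purely illustrative — the document store, the keyword-overlap scoring, and the function names are stand-ins, not the LLM App API, which handles retrieval over real-time data pipelines for you.

```python
# Minimal RAG sketch: toy retrieval plus prompt augmentation.
# All names and the scoring heuristic are illustrative assumptions,
# not part of the LLM App framework.

def score(query: str, doc: str) -> int:
    """Count query words appearing in the document (toy relevance score)."""
    return sum(1 for w in query.lower().split() if w in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Pathway processes streaming data in real time.",
    "RAG combines retrieval with generation.",
    "The weather today is sunny.",
]
print(build_prompt("How does RAG combine retrieval with generation?", docs))
```

In a production setup, the keyword scorer would be replaced by vector-embedding similarity over a continuously updated index — which is exactly the part LLM App manages for you in the upcoming lessons.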

While we're working with this tool, consider starring it on GitHub. It's an effortless way to bookmark it for future reference and track updates, and it also helps the community discover the resource.

By the end of this module, you'll be able to build your own LLM application that works with real-time data. This implementation guide is aimed at Mac, Linux, and Windows users.

Note: If you have already completed your first project by consulting the documentation on the LLM App's open-source repository, that's excellent! In that scenario, you may review the videos in this module for additional perspective and proceed to the 'Final Project + Giveaways' module.
