# Diving Deeper: LLM Architecture Components

In the following video, we take a detailed look at the essential components that make up a Large Language Model's architecture. It extends your understanding of LLM architecture and strengthens your foundational knowledge of the field.

{% embed url="https://youtu.be/OXZQBXBvOR4?t=704" %}

### In this video, we've learned about

* The User Interface Component, where users pose questions
* The Storage Layer, which utilises a Vector DB or Vector Indexes
* The Service, Chain, or Pipeline Layer, which is instrumental in the model's functioning (with a brief mention of the Chain Library used for chaining prompts)
* A summary of our learnings about LLM architecture components

Next, let's look at a cleaner architecture diagram, walk through the steps of the pipeline, and summarize the advantages of RAG based on what we've understood so far.
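To make the three layers concrete, here is a minimal sketch of how they fit together. Everything in it is an illustrative stand-in: the character-frequency "embedding", the in-memory `VectorIndex`, and `fake_llm` are toy placeholders for a real embedding model, a Vector DB, and an LLM call, and all names are hypothetical.

```python
def embed(text):
    # Toy "embedding": a character-frequency vector over a-z.
    # A real system would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class VectorIndex:
    """Storage layer: holds (embedding, document) pairs."""
    def __init__(self):
        self.items = []

    def add(self, doc):
        self.items.append((embed(doc), doc))

    def search(self, query, k=1):
        # Rank stored documents by similarity to the query embedding.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda item: -dot(item[0], qv))
        return [doc for _, doc in ranked[:k]]

def fake_llm(prompt):
    # Stand-in for a real model call.
    return "Answer based on: " + prompt.split("Context: ")[1]

def pipeline(question, index):
    """Service/chain layer: retrieve context, build a prompt, call the model."""
    context = index.search(question, k=1)[0]
    prompt = f"Question: {question}\nContext: {context}"
    return fake_llm(prompt)
```

A user-interface layer would simply collect `question` and display the returned answer; the chain here is the retrieve-then-prompt sequence that a chain library would otherwise manage for you.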
