🚀 10 Days Realtime LLM Bootcamp
Tasks in the Exercise

Subtask 1: Plot Extension

Design a prompt to generate a continuation of the mystery story that uncovers another layer of the hacking plot. Perhaps Valérie finds out there's a mole in her organization, or maybe Sophie encounters another similar case. Craft a prompt that nudges the story into revealing this new dimension. Evaluate how seamlessly the new narrative fits with the original story.
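One possible shape for such a continuation prompt is sketched below. The character names and plot details come from the exercise story; the exact wording is only an illustration, not a prescribed answer.

```python
# Sketch of a plot-extension prompt for Subtask 1.
# Valérie, Sophie, and the hacking plot come from "The eSports Enigma";
# the prompt wording itself is illustrative.
story_summary = (
    "In 'The eSports Enigma', Valérie discovers that an eSports "
    "tournament has been hacked, and Sophie works to solve the case."
)

extension_prompt = f"""{story_summary}

Continue the story, revealing a new layer of the hacking plot:
Valérie learns there may be a mole inside her own organization.
Keep the tone, characters, and established facts consistent with
the original story, and end the continuation on a cliffhanger."""

print(extension_prompt)
```

Pasting a prompt like this into ChatGPT lets you judge how seamlessly the generated continuation fits the original narrative, which is the evaluation the subtask asks for.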

Subtask 2: Few-Shot Prompting for Character Analysis

Write a few-shot prompt that instructs ChatGPT to analyze the main characters' emotional states at crucial points in the story. For instance, ask the model to perform sentiment analysis on Valérie when she discovers the hacking, and on Sophie when she finally solves the case. Note how effective few-shot prompting is at eliciting nuanced character analysis.
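A few-shot prompt for this subtask could be assembled as follows. The two worked examples are invented here purely to demonstrate the pattern (demonstrations first, then the actual query); in the exercise you would send the resulting text to ChatGPT.

```python
# Sketch of a few-shot prompt for character sentiment analysis (Subtask 2).
# The example passages and analyses are made up to show the few-shot pattern.
few_shot_examples = [
    {
        "passage": "Marta stared at the empty server rack, her hands trembling.",
        "analysis": "Emotion: shock and fear. Evidence: trembling hands, frozen stare.",
    },
    {
        "passage": "Leo grinned as the final piece of the puzzle clicked into place.",
        "analysis": "Emotion: triumph and relief. Evidence: grinning, sense of resolution.",
    },
]

target = "Valérie at the moment she discovers the hacking."

prompt_parts = ["Analyze the emotional state of the character in each passage.\n"]
for ex in few_shot_examples:
    prompt_parts.append(f"Passage: {ex['passage']}\nAnalysis: {ex['analysis']}\n")
prompt_parts.append(f"Now analyze: {target}\nAnalysis:")

few_shot_prompt = "\n".join(prompt_parts)
print(few_shot_prompt)
```

Ending the prompt with a bare "Analysis:" nudges the model to complete the pattern established by the demonstrations, which is the core mechanism of few-shot (in-context) learning.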

Subtask 3: Limitation Recognition due to Outdated Data

Pose a question about the use of AI and machine-learning algorithms in contemporary eSports as depicted in the story, asking for the latest advancements as of 2023. Evaluate ChatGPT's response for potential inaccuracies or outdated information, given that its training data has a cutoff (April 2023 at the time of writing, though this cutoff changes as models are updated). Discuss how this limitation affects the believability of the plot and the technological aspects described in the story.
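One way to phrase such a probe is sketched below. The April 2023 cutoff is taken from the exercise text and is an assumption that will drift as models are updated; asking the model to state its own cutoff makes the limitation easier to observe.

```python
# Sketch of a knowledge-cutoff probe for Subtask 3. The premise about
# AI tooling in eSports comes from the exercise story; the wording is
# illustrative only.
probe_prompt = (
    "The story depicts AI and machine-learning tools used in eSports, "
    "such as match analytics and anti-cheat detection. What are the "
    "latest real-world advancements in these areas as of late 2023? "
    "If your training data ends before then, say so explicitly."
)
print(probe_prompt)
```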

For each subtask, provide the prompt you used, summarize the response, and give a thorough analysis of how well ChatGPT performed in terms of context, plot coherency, and technological accuracy.

