
LLM MASTERY — LEVEL 1
📡 30-Day Learning Program

LLM Mastery Curriculum

A structured, hands-on journey from zero to confident practitioner — with quizzes, checkpoints, and a final certification test.

30 Days
5 Modules
~42 Hours Total
25+ Quiz Questions
1 Final Exam
01
Fundamentals
What LLMs Are
Transformers, tokens, embeddings, context windows — the building blocks explained clearly.
⏱ ~8 hrs DAYS 1–6
// Weekly Breakdown
Days 1–2
What is an LLM?
History, scale, capabilities
~2.5 hrs
Days 3–4
Tokens & Embeddings
How text becomes numbers
~2.5 hrs
Days 5–6
Transformers & Attention
The core architecture
~3 hrs
// Learning Topics
What is a Large Language Model? History from GPT-1 to today.
~45 min
What are tokens? How text is split, counted, and why it matters for cost.
~60 min
What are embeddings? How meaning is encoded as vectors.
~60 min
What is a context window? Why it limits what a model "remembers."
~45 min
The Transformer architecture — attention heads explained intuitively.
~90 min
Temperature, top-p, and sampling — controlling model randomness.
~45 min
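As a minimal illustration of the topics above, here is a toy tokenizer sketch (`toy_tokenize` is a hypothetical name, not a real library function). Real tokenizers such as BPE learn variable-length subword pieces rather than fixed chunks, but the point is the same: models process and bill text in tokens, not characters.

```python
def toy_tokenize(text, chunk=4):
    """Hypothetical toy tokenizer: split text into fixed-size character chunks.

    Real tokenizers (BPE, SentencePiece, etc.) learn variable-size subword
    pieces, but either way the model sees tokens, not raw characters.
    """
    return [text[i:i + chunk] for i in range(0, len(text), chunk)]

tokens = toy_tokenize("Large language models")
print(tokens)
print(f"{len(tokens)} tokens for {len('Large language models')} characters")
```

Counting tokens this way makes the cost lesson concrete: two prompts with the same character count can tokenize very differently under a real tokenizer.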
// Module 1 Quiz
🧠
LLM Fundamentals Quiz
1. What is a "token" in the context of a large language model?
A. A single letter of text
B. A chunk of text — roughly a word or part of a word — that the model processes as a unit
C. A security credential used to authenticate API requests
D. The model's output after processing a prompt
2. What does "context window" mean?
A. The graphical interface where you type prompts
B. The amount of time the model takes to respond
C. The maximum number of tokens the model can consider at once in a single interaction
D. The training dataset size
3. Setting temperature to 0 will make a model…
A. Refuse to answer questions
B. Respond faster
C. Behave more deterministically, usually picking the highest-probability next token
D. Use more tokens per response
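The temperature question above can be made concrete with a short sketch of temperature-scaled softmax sampling (a simplified model of what decoders do; real implementations differ in detail, and treating temperature 0 as greedy argmax is a common convention, not a universal one).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities.

    Lower temperature sharpens the distribution toward the highest-logit
    token; higher temperature flattens it. Temperature 0 is treated as
    greedy decoding: all probability mass on the single best token.
    """
    if temperature == 0:
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]
print(softmax_with_temperature(logits, 0))    # all mass on token 0
print(softmax_with_temperature(logits, 1.0))  # softer distribution
```

Running it shows why temperature 0 behaves deterministically: the distribution collapses to a single token, so sampling always picks the same one.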
02
Prompt Engineering
Prompt Engineering
Zero-shot, few-shot, chain-of-thought, system prompts — the art of communicating with AI.
⏱ ~9 hrs DAYS 7–13
// Weekly Breakdown
Days 7–8
Prompting Basics
Zero-shot & few-shot
~3 hrs
Days 9–10
Chain-of-Thought
Making models reason step-by-step
~3 hrs
Days 11–13
System Prompts & Advanced
Personas, formatting, XML tags
~3 hrs
// Learning Topics
Zero-shot vs. few-shot prompting — when to use examples.
~60 min
Chain-of-thought prompting — "Let's think step by step."
~60 min
System prompts — setting roles, tone, and instructions.
~60 min
Using XML/structured tags to organize complex prompts.
~45 min
Prompt chaining — breaking tasks into multi-step workflows.
~60 min
Hands-on practice: rewrite 5 bad prompts into great ones.
~90 min
// Module 2 Quiz
✍️
Prompt Engineering Quiz
1. "Few-shot prompting" means…
A. Using a very short prompt with no explanation
B. Including a small number of examples in your prompt to show the model the pattern you want
C. Sending multiple prompts in rapid succession
D. Using the model with reduced capabilities
2. Which technique is most effective for getting a model to solve a complex math or logic problem?
A. Making the prompt longer
B. Chain-of-thought prompting — asking it to reason step by step before giving an answer
C. Setting temperature to 1.0
D. Asking the question multiple times
3. A "system prompt" is best described as…
A. A hidden prompt that the user cannot see or modify
B. A prompt about computer systems
C. A set of instructions given at the start of a conversation to set the model's persona, tone, and rules
D. The first message the user sends
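A few-shot prompt of the kind described in this module can be assembled mechanically. The sketch below (`build_few_shot_prompt` is a hypothetical helper, and the `Input:`/`Output:` labels are one convention among many) shows the structure: a system instruction, a handful of examples demonstrating the pattern, then the real query.

```python
def build_few_shot_prompt(system, examples, query):
    """Assemble a few-shot prompt: a system instruction, a handful of
    input/output examples demonstrating the desired pattern, then the
    actual query left open for the model to complete."""
    parts = [system, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    system="You classify the sentiment of short reviews as positive or negative.",
    examples=[
        ("Loved every minute of it.", "positive"),
        ("Total waste of money.", "negative"),
    ],
    query="The battery died after a day.",
)
print(prompt)
```

Ending the prompt with an open `Output:` nudges the model to continue the demonstrated pattern rather than explain it.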
03
Evaluation
Evaluating Model Outputs
Hallucinations, confidence, grounding — knowing when to trust what the model tells you.
⏱ ~7 hrs DAYS 14–19
// Weekly Breakdown
Days 14–15
Hallucinations
Why models confabulate facts
~2.5 hrs
Days 16–17
Grounding & RAG
Connecting models to real data
~2.5 hrs
Days 18–19
Output Quality
Frameworks for evaluating responses
~2 hrs
// Learning Topics
What are hallucinations and why do LLMs produce them?
~60 min
Techniques to reduce hallucinations: grounding, citations, self-check prompts.
~60 min
RAG (Retrieval-Augmented Generation) — overview and why it matters.
~90 min
Model confidence vs. accuracy — calibration explained.
~45 min
Building an evaluation rubric for your use case.
~60 min
// Module 3 Quiz
🔍
Evaluation Quiz
1. An LLM "hallucination" refers to…
A. The model generating overly creative responses
B. The model producing confident, plausible-sounding but factually incorrect or fabricated information
C. A model that is uncertain and says "I don't know"
D. A visual generation model creating distorted images
2. What does RAG stand for and what problem does it solve?
A. Random Answer Generation — it makes models more creative
B. Retrieval-Augmented Generation — it grounds model responses in external, real-time data to reduce hallucinations
C. Reinforced Agent Guidance — it trains agents to follow rules
D. Recursive Answer Grading — it evaluates model output quality
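The RAG idea from this module fits in a few lines once the pieces are toys: the sketch below stands in a bag-of-words count vector for a real embedding model and cosine similarity for a real vector database (all function names here are hypothetical, and production systems use learned embeddings and approximate nearest-neighbor search).

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Passwords must be at least 12 characters long.",
]
context = retrieve("how long do refunds take", docs, k=1)
prompt = (f"Answer using only this context:\n{context[0]}\n\n"
          "Question: how long do refunds take")
print(prompt)
```

The grounding step is the final prompt: instead of asking the model to answer from memory, the retrieved passage is placed in front of it, which is exactly how RAG reduces hallucination.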
04
AI Agents
AI Agents
What agents are, tool use, memory, planning loops — from concept to building your first agent.
⏱ ~10 hrs DAYS 20–25
// Weekly Breakdown
Days 20–21
Agent Fundamentals
What makes something an "agent"
~3 hrs
Days 22–23
Tool Use & Memory
Connecting agents to the world
~3.5 hrs
Days 24–25
Build Your First Agent
Hands-on with LangChain or Claude API
~3.5 hrs
// Learning Topics
What is an AI agent? How agents differ from simple chatbots.
~60 min
The ReAct framework: Reason + Act loops explained.
~60 min
Tool use: giving agents the ability to search, run code, call APIs.
~90 min
Agent memory: short-term (conversation), long-term (vector stores).
~60 min
Multi-agent systems: orchestrators and sub-agents.
~60 min
Hands-on: Build a simple research agent using Claude or LangChain.
~3 hrs
// Module 4 Quiz
🤖
AI Agents Quiz
1. What is the key characteristic that makes a language model an "agent"?
A. It has a personality and a name
B. It can generate images as well as text
C. It can take actions in the world (use tools, call APIs, browse the web) and reason about multi-step goals
D. It runs locally on your computer without internet
2. In the ReAct framework, what does the loop look like?
A. Reason about what to do → Take an action → Observe the result → Repeat until done
B. Receive input → Generate output → Done
C. Read documentation → Ask a human → Execute code
D. Retrieve data → Act immediately → Never reconsider
3. Why do agents need "long-term memory" beyond just the context window?
A. To make them smarter than base models
B. So they can run without an internet connection
C. Context windows are limited in size, so persistent storage (like vector databases) lets agents retain and recall information across many sessions
D. Long-term memory is just a marketing term with no real technical meaning
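The ReAct loop from question 2 can be sketched without any model at all: below, a scripted `plan` function stands in for the LLM's reasoning step, and a single `calculator` stands in for the agent's toolbox (both names and the decision-tuple format are hypothetical conventions for this sketch, not a real framework's API).

```python
def calculator(expression):
    """A hypothetical tool the agent can call: evaluate simple arithmetic."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def react_loop(goal, plan, max_steps=5):
    """Run a Reason -> Act -> Observe loop.

    `plan` stands in for the LLM: given the goal and the observations so
    far, it returns either ("call", tool_name, tool_input) to take an
    action, or ("finish", answer) to stop.
    """
    observations = []
    for _ in range(max_steps):
        decision = plan(goal, observations)        # Reason
        if decision[0] == "finish":
            return decision[1]
        _, tool, tool_input = decision
        result = TOOLS[tool](tool_input)           # Act
        observations.append((tool, result))        # Observe
    return None  # gave up: the step budget ran out

# A scripted "model": call the calculator once, then answer from the result.
def plan(goal, observations):
    if not observations:
        return ("call", "calculator", "17 * 24")
    return ("finish", f"The answer is {observations[-1][1]}")

print(react_loop("What is 17 * 24?", plan))  # The answer is 408
```

The `max_steps` budget matters in practice: without it, an agent that never decides to finish would loop forever.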
05
The Science
The Science of LLMs
Training, RLHF, fine-tuning, and how these models are actually built from the ground up.
⏱ ~8 hrs DAYS 26–30
// Weekly Breakdown
Days 26–27
Pre-Training
How models learn from the internet
~3 hrs
Days 28–29
RLHF & Alignment
Making models helpful and safe
~3 hrs
Day 30
Fine-tuning & The Future
Customizing models for your use case
~2 hrs
// Learning Topics
Pre-training: how models learn from massive text corpora via next-token prediction.
~90 min
Supervised fine-tuning (SFT): adapting a base model to follow instructions.
~60 min
RLHF — Reinforcement Learning from Human Feedback — how models learn to be helpful.
~90 min
Constitutional AI & model alignment approaches (Anthropic's approach).
~60 min
Fine-tuning vs. prompt engineering — when each approach makes sense.
~60 min
// Module 5 Quiz
⚗️
LLM Science Quiz
1. During pre-training, what task do LLMs primarily learn to perform?
A. Answering trivia questions correctly
B. Predicting the next token in a sequence, learning language patterns from enormous amounts of text
C. Translating between languages
D. Classifying text as positive or negative sentiment
2. What is RLHF and why is it important?
A. A faster training algorithm that reduces compute costs
B. A technique where human raters score model outputs to train a reward model, which is then used to fine-tune the LLM to produce more helpful, harmless responses
C. A way to compress large models for faster inference
D. A retrieval method for finding relevant documents
3. When does fine-tuning a model make more sense than prompt engineering?
A. Always — fine-tuned models are always better
B. When you want the model to answer questions about recent events
C. When you have a specific, consistent task with many labeled examples and need to reduce prompt length or improve performance at scale
D. Never — prompt engineering is always sufficient
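Next-token prediction, the pre-training objective from question 1, can be demonstrated with a bigram counting model. This is a drastic simplification (LLMs learn the same objective with a neural network over billions of tokens rather than a lookup table, and `train_bigram`/`predict_next` are hypothetical names for this sketch), but the task is literally the same: given what came before, predict what comes next.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for every token, which tokens were seen following it."""
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the most frequent continuation seen in training, if any."""
    followers = model[token.lower()]
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

Scaling this idea up, with subword tokens, a transformer instead of a table, and a web-scale corpus instead of one sentence, is essentially what pre-training is.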
🎓 30-Day Final Certification Test

Complete all 5 modules, then take the comprehensive 14-question exam covering every topic in the curriculum.