[Learning Notes] Course: How Transformer LLMs Work
Course Link: How Transformer LLMs Work
Date: 2025-08-1

Chapter: Understanding Language Models: Language as a Bag-of-Words
- Model architectures: non-transformer, encoder-only, decoder-only, encoder-decoder
- Decoder-only: such as GPT
- Pipeline: tokenization -> tokens -> vocabulary -> vector embeddings

Chapter: Understanding Language Models: (Word) Embeddings
- word2vec: the way to express natural meaning is an array of floats, such as cats [.91, -...
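The bag-of-words idea above can be sketched in a few lines: a text is reduced to word counts over a fixed vocabulary, discarding word order entirely. The vocabulary and sample sentence here are made up for illustration.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Represent text as counts over a fixed vocabulary (word order is lost)."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

# Hypothetical toy vocabulary and sentence
vocab = ["cats", "are", "cute", "dogs"]
vec = bag_of_words("cats are cute cats", vocab)
print(vec)  # [2, 1, 1, 0]
```

Note that "cats are cute cats" and "cute cats are cats" map to the same vector, which is exactly the limitation that motivates the embedding-based approaches later in the course.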
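The tokenization -> tokens -> vocabulary -> vector embeddings pipeline can be sketched as follows. This is a minimal sketch under loud assumptions: a naive whitespace tokenizer stands in for a real subword tokenizer, and the embedding numbers are invented for illustration.

```python
# Sketch of: tokenization -> tokens -> vocabulary -> vector embeddings
text = "cats are cute"
tokens = text.split()                          # tokenization -> tokens (naive whitespace split)
vocabulary = {"cats": 0, "are": 1, "cute": 2}  # token -> integer id (toy vocabulary)
embedding_table = [                            # one vector per vocabulary id (made-up values)
    [0.91, -0.12, 0.40],  # id 0: "cats"
    [0.05, 0.33, -0.21],  # id 1: "are"
    [0.58, 0.07, 0.12],   # id 2: "cute"
]
ids = [vocabulary[t] for t in tokens]          # tokens -> ids
vectors = [embedding_table[i] for i in ids]    # ids -> embedding vectors
print(ids)  # [0, 1, 2]
```

In a real transformer the tokenizer is learned (e.g. BPE), the vocabulary has tens of thousands of entries, and the embedding table is a trained weight matrix, but the lookup structure is the same.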
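The word2vec note says meaning is expressed as an array of floats. The payoff is that similarity between meanings becomes a geometric computation, typically cosine similarity between vectors. The embeddings below are toy numbers chosen for illustration, not real word2vec outputs.

```python
import math

# Toy embeddings (illustrative values, not trained word2vec vectors)
embeddings = {
    "cat": [0.9, 0.3, 0.1],
    "dog": [0.8, 0.4, 0.2],
    "car": [-0.5, 0.7, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "cat" should sit closer to "dog" than to "car" in this toy space
print(cosine_similarity(embeddings["cat"], embeddings["dog"]) >
      cosine_similarity(embeddings["cat"], embeddings["car"]))  # True
```

Real word2vec vectors have hundreds of dimensions and are learned from co-occurrence statistics, but comparisons work the same way.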