From 2023 to 2026, LLM architecture seemed to change in many directions: vocabularies got larger, positional encoding shifted toward RoPE, attention evolved from MHA to GQA, sliding windows, and MLA, MoE became mainstream again, and normalization and activation functions converged on combinations like RMSNorm and SwiGLU.
But the main story is not that the Transformer was overturned. It is that the Transformer core stayed in place while almost every component around it was optimized for longer context, lower inference cost, higher training efficiency, and stronger multilingual capability.
Start with the Big Picture
An LLM can be roughly divided into several parts:
- Tokenizer: turns text into tokens the model can understand.
- Positional encoding: tells the model where each token is in the sequence.
- Attention mechanism: decides which context each token should look at.
- Feed-forward network: applies more complex nonlinear transformations at each position.
- Normalization: keeps training more stable.
- Activation function: gives the network nonlinear expressive power.
- MoE: splits part of the feed-forward network into multiple experts and activates only a few at a time.
The 2023-2026 evolution is basically these components being optimized one by one.
Tokenizers: From “Can Split Text” to “Uses Fewer Tokens”
The tokenizer turns natural language into token sequences. The model does not see text directly; it sees token IDs.
Earlier tokenizers were often more efficient for English and less efficient for Chinese, code, and multilingual text. If the same sentence is split into too many small pieces, it consumes more context window and increases both training and inference cost.
One clear trend in recent years is larger vocabularies and better multilingual support. Llama 3 uses a 128K-token vocabulary, and Meta explicitly says this encodes language more efficiently and improves model performance. Qwen, DeepSeek, and other models also pay close attention to token efficiency for Chinese, code, and multilingual scenarios.
For beginners, think of it this way: the better the tokenizer, the less fragmented the same text becomes, and the more useful information the model can fit into the same context length.
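You can see this for yourself by counting how many tokens different tokenizers need for the same sentence. A minimal sketch using the Hugging Face transformers library; the model names are only examples, and some checkpoints require accepting a license before the tokenizer can be downloaded:

```python
from transformers import AutoTokenizer

text = "大型语言模型正在快速发展。Large language models are evolving quickly."

# Example checkpoints; swap in any tokenizers you have access to.
for name in ["gpt2", "meta-llama/Meta-Llama-3-8B", "Qwen/Qwen2.5-7B"]:
    tok = AutoTokenizer.from_pretrained(name)
    ids = tok.encode(text)
    print(f"{name}: {len(ids)} tokens")
```

A tokenizer with a larger, more multilingual vocabulary will usually produce noticeably fewer tokens for the same mixed-language text.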
Positional Encoding: RoPE Became Mainstream
Language has order. “Dog bites man” and “man bites dog” contain similar words, but the order changes the meaning. Positional encoding injects that order information into the model.
Early Transformers used absolute positional encodings, where position 1, position 2, and position 3 each had their own vector. Later LLMs more often use RoPE, or Rotary Position Embedding. RoPE injects position information directly into the attention computation by rotating the query and key vectors, and it is friendlier to long-context extension.
From the Llama family to many open models, RoPE has become one of the de facto standards. To support longer context, models may also adjust the RoPE base frequency, apply RoPE scaling, or combine it with sliding-window or chunked attention.
Simply put, RoPE does not make a model “suddenly smarter,” but it helps the model handle relative position relationships better in longer text.
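To make this concrete, here is a minimal NumPy sketch of the "rotate-half" formulation of RoPE applied to the query or key vectors of a single attention head. The base of 10000 follows the common convention; real implementations cache the angles and apply this inside the attention layer:

```python
import numpy as np

def apply_rope(x, base=10000.0):
    # x: (seq_len, head_dim) queries or keys for a single attention head
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)       # one rotation speed per dim pair
    angles = np.outer(np.arange(seq_len), freqs)    # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by a position-dependent angle
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Because the rotation angle depends only on position, the dot product between a rotated query and a rotated key depends only on their relative distance, which is exactly the property that makes RoPE attractive for long context.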
Attention: From MHA to GQA, Sliding Windows, and MLA
Attention is the core of Transformer. It lets each token look at the most relevant tokens in the context for the current task.
The classic version is MHA, or Multi-Head Attention. It has multiple attention heads, each learning a different way to focus. During generation, the model caches the keys and values of every past token (the KV cache), and with MHA every head stores its own copy. The problem is that as models and contexts grow, this KV cache becomes expensive and inference cost rises.
After 2023, the main direction of attention optimization was reducing inference cost.
GQA, or Grouped-Query Attention, is an important step. It lets multiple query heads share fewer key/value heads, reducing KV cache pressure. Meta explicitly adopted GQA in Llama 3 to improve inference efficiency.
Mistral 7B represents another direction: sliding-window attention. It does not require every token to attend to the entire history, but focuses mainly on a nearby window, reducing long-sequence computation pressure. For many tasks, local context already carries much of the useful information.
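A sliding window is easy to picture as an attention mask: each query may look at itself and at most the previous window-1 tokens. A small illustrative sketch (the window size is a hyperparameter; Mistral 7B uses 4096):

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where attention is allowed: causal, and at most `window` tokens back.
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).int())
```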
DeepSeek-V2/V3 pushed attention optimization further with MLA, or Multi-head Latent Attention. Its focus is compressing the keys and values into a smaller latent representation, shrinking the KV cache and reducing inference memory pressure. The DeepSeek-V3 technical report lists MLA and DeepSeekMoE as core architectural features.
You can understand these methods together:
- MHA: the classic approach, strong but expensive.
- GQA: greatly reduces KV cache cost with little loss in expressiveness.
- Sliding-window attention: reduces full-attention cost in long context.
- MLA: further compresses attention cache for efficient inference.
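The KV-cache argument behind GQA and MLA becomes clearer with a back-of-envelope calculation. The sketch below uses made-up but plausible numbers for an 8B-class model (32 layers, head dimension 128, 16-bit cache) and compares full MHA with a GQA setup that keeps only 8 key/value heads:

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # Keys and values are both cached, hence the factor of 2.
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

mha = kv_cache_bytes(seq_len=8192, n_layers=32, n_kv_heads=32, head_dim=128)
gqa = kv_cache_bytes(seq_len=8192, n_layers=32, n_kv_heads=8, head_dim=128)
print(f"MHA: {mha / 2**30:.1f} GiB per sequence, GQA: {gqa / 2**30:.1f} GiB per sequence")
```

In this toy setup the cache at 8K context shrinks from roughly 4 GiB to 1 GiB per sequence; MLA goes further by caching a compressed latent vector instead of full per-head keys and values.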
MoE: Many Parameters, but Only Some Are Used Each Time
MoE means Mixture of Experts.
A normal dense model activates all parameters for every token. MoE puts many experts inside the model, but routes each token to only a few of them. This lets the total parameter count be large while the number of parameters activated per token stays much smaller.
Mixtral 8x7B, released at the end of 2023, was an important moment that brought MoE back into broad attention. Mistral’s paper explains that Mixtral 8x7B largely follows the Mistral 7B architecture, but replaces each feed-forward block with 8 experts and routes each token to 2 of them for computation.
DeepSeek-V3 later made MoE a core route. Its total parameter count is very large (671B), but it activates only about 37B parameters per token, using DeepSeekMoE to keep training and inference cost under control. Qwen3 and other model families also provide both dense and MoE variants, showing that MoE has moved from a research trick to a mainstream engineering option.
For beginners, a dense model is like a company where everyone attends every meeting. MoE is like dividing the company into expert teams and calling only the most relevant teams for each problem.
MoE also has clear difficulties:
- The router must learn to send tokens to suitable experts.
- Expert load must be balanced, so not all tokens crowd into a few experts.
- Distributed training and inference become more complex.
- Large total parameters do not automatically make deployment cheap.
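To make the routing idea concrete, here is a toy top-2 MoE layer in PyTorch. It is purely illustrative: real systems such as DeepSeekMoE add load-balancing strategies, capacity limits, and expert parallelism, none of which appear here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    # Illustrative top-2 MoE feed-forward layer.
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (n_tokens, d_model)
        scores = self.router(x)                 # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # mixing weights over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e           # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out
```

The key point visible here is the tradeoff: the layer owns n_experts full feed-forward blocks, but each token only pays for top_k of them.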
Normalization: RMSNorm Became Common
Normalization stabilizes the distribution of intermediate values inside the neural network. When training large models, unstable values make convergence harder and training less reliable.
Early Transformers commonly used LayerNorm. Many Llama-style models later switched to RMSNorm. RMSNorm is simpler than LayerNorm: it skips mean subtraction and only rescales activations by their root mean square. It is lighter and stable enough in practice.
You do not need to memorize the formula. Just remember that RMSNorm is a lighter stabilizer. It does not determine model capability by itself, but it affects training stability, speed, and engineering implementation.
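For reference, the whole operation fits in a few lines. A minimal sketch of the Llama-style formulation, where eps is a small constant for numerical stability and weight is a learned per-feature scale:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # No mean subtraction, unlike LayerNorm: just rescale by the root mean square.
    rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x * rms * weight
```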
Activation Functions: From ReLU/GELU to SwiGLU
Activation functions give neural networks their nonlinear expressive power. Without them, stacking layers would collapse into a single linear transformation, no matter how deep the network is.
Earlier Transformers often used ReLU or GELU. In modern LLMs such as Llama, Mistral, Qwen, and DeepSeek, SwiGLU or similar GLU variants are more common. SwiGLU usually appears inside the feed-forward network and controls information flow through a gating mechanism.
A rough analogy: a normal activation function is like a fixed switch, while SwiGLU is more like a learnable valve. It does not just decide whether information passes through; it can learn which information should be amplified.
SwiGLU makes the feed-forward layer slightly more complex, but in large-model practice it has become a common high-performance component.
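A minimal sketch of a Llama-style gated feed-forward block, where one projection acts as the gate and the other carries the signal. The hidden size here is arbitrary; real models pick it so the parameter count roughly matches a conventional FFN:

```python
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    def __init__(self, d_model: int = 512, d_ff: int = 1376):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.w_up = nn.Linear(d_model, d_ff, bias=False)    # value projection
        self.w_down = nn.Linear(d_ff, d_model, bias=False)  # back to model dim

    def forward(self, x):
        # silu(gate) * value: the gate learns which information to amplify
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
```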
The Overall Trend from 2023 to 2026
The timeline can be summarized like this:
- 2023: Llama, Mistral 7B, Mixtral, and other open models popularized combinations such as RoPE, RMSNorm, SwiGLU, GQA, sliding-window attention, and MoE.
- 2024: Llama 3, Qwen2.5, DeepSeek-V2/V3, and others expanded vocabularies, improved long context, strengthened inference efficiency, and made MoE and efficient attention central topics.
- 2025: DeepSeek-V3/R1 drew wider attention to MLA, DeepSeekMoE, FP8 training, multi-token prediction (MTP), and the deep connection between architecture optimization and systems engineering.
- 2026: The trend remains efficiency and engineering maturity: dense models continue to pursue stable general capability, MoE models expand capacity, and efficient attention reduces long-context cost.
The most important change was not one component replacing Transformer. It was the realization that adding parameters alone is not enough: architecture, data, training systems, and inference services must be optimized together.
How Beginners Should Learn This
If you are starting from zero, do not begin by forcing yourself through every paper. A better order is:
- Understand the basic Transformer structure: tokens, embeddings, attention, and FFN.
- Learn why RoPE, RMSNorm, and SwiGLU became common.
- Study GQA and KV cache to understand why inference consumes so much memory.
- Learn MoE, focusing on the difference between total parameters and active parameters.
- Finally, read model reports such as DeepSeek-V3, Mixtral, and Llama 3 to place these components back into real models.
Do not treat these terms as isolated facts. Most of them answer the same question: how can models become stronger while remaining trainable, deployable, and fast enough to serve?
Summary
The 2023-2026 evolution of LLM architecture can be seen as the engineering maturation of the Transformer. Tokenizers reduce token waste; RoPE represents position more effectively; GQA, sliding-window attention, and MLA reduce attention cost; MoE expands capacity while controlling active computation; and RMSNorm plus SwiGLU make training and representation more stable and efficient.
For beginners, the key is not memorizing terms. The key is understanding the main tradeoff: almost every modern LLM architecture change is about cost, efficiency, context length, and scalability.