
The story of modern AI isn't a straight line of "bigger models = better results." It's a sequence of paradigm shifts, each one redefining where compute (as explored in the economics of AI compute infrastructure) creates capability.
Understanding these phases isn't academic; it's the structural map that explains why 2025's models weren't dramatically larger than 2023's, yet reasoned dramatically better.
And why RLVR (reinforcement learning with verifiable rewards), the fifth phase, represents the most consequential training breakthrough since pretraining itself.
