Statistical Foundations of Large Language Models
Large language models (LLMs) are massive Transformer-based neural networks, which poses substantial challenges to understanding their predictive processes and internal mechanisms. This is not unexpected, given our limited understanding of even much smaller and simpler multilayer neural networks. At present, there appear to be no effective mathematical theories to help us reason about such enormous, complex, and nonlinear systems. Indeed, Stephen Wolfram has argued that reducing complex systems to smaller ones while retaining their essence (i.e., “understanding”) might, in general, be impossible. In a way, this resonates with the line “Theory will only take you so far” from the movie Oppenheimer.
Why statistics?
Statistics has a long history of shedding light on complex systems well before their internal mechanisms are fully understood, as evidenced by applications in astronomy and genetics. The power of statistical approaches in the context of LLMs lies in our ability to abstract away highly intricate details and instead focus on quantitative patterns in specific aspects of these models. Often, as the complexity of a system grows, certain simplifications emerge.
That being said, we do not expect statistical approaches — what we term a second-principles approach — to address most of the crucial challenges posed by LLMs. Nevertheless, they are a viable first step, in part because they are generally compute-light!
Our work in this direction
Since early 2023, my group has been actively pursuing this line of research, examining several facets of LLMs through a statistical lens. These include watermarks, biases in RLHF, data separation, and copyright concerns:
- Statistical Impossibility and Possibility of Aligning LLMs with Human Preferences: From Condorcet Paradox to Nash Equilibrium. K. Liu, Q. Long, Z. Shi, W. Su, and J. Xiao.
- An Overview of Large Language Models for Statisticians. W. Ji, W. Yuan, E. Getzen, K. Cho, M. Jordan, S. Mei, J. Weston, W. Su, J. Xu, and L. Zhang.
- Robust Detection of Watermarks for Large Language Models under Human Edits. X. Li, F. Ruan, H. Wang, Q. Long, and W. Su.
- Debiasing Watermarks for Large Language Models via Maximal Coupling. Y. Xie, X. Li, T. Mallick, W. Su, and R. Zhang.
- A Law of Next-Token Prediction in Large Language Models. H. He and W. Su.
- On the Algorithmic Bias of Aligning Large Language Models with RLHF: Preference Collapse and Matching Regularization. J. Xiao, Z. Li, X. Xie, E. Getzen, C. Fang, Q. Long, and W. Su.
- A Statistical Framework of Watermarks for Large Language Models: Pivot, Detection Efficiency and Optimal Rules. X. Li, F. Ruan, H. Wang, Q. Long, and W. Su.
- An Economic Solution to Copyright Challenges of Generative AI. J. Wang, Z. Deng, H. Chiba-Okabe, B. Barak, and W. Su.