Emergent Abilities of Large Language Models
🔮 Two new papers are changing how we view Emergent Abilities.
One of the most exciting ideas in LLMs is that of Emergent Abilities. Researchers have long explored the emergent features of complex systems, and there is some debate about what impact, if any, this will have on the immediate future of A.I.
Emergence can be defined as the sudden appearance of novel behavior. This is seen in any number of complex systems, from physics to biology to mathematics. Nobel Prize-winning physicist P.W. Anderson’s commentary “More Is Different” is one notable example. Large Language Models apparently display emergence by suddenly gaining new abilities as they grow. Why does this happen, and what does this mean?
Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. The original paper, by contrast, discusses an unpredictable phenomenon that the authors refer to as emergent abilities of large language models. The researchers consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models. (June 2022)
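That definition can be made concrete with a small sketch. The numbers and the `is_emergent` helper below are illustrative only, not data or code from the paper: an ability counts as emergent when every smaller model scores near random chance and only the largest model clearly exceeds it, so the jump could not be extrapolated from the smaller models.

```python
RANDOM_CHANCE = 0.25  # e.g. chance accuracy on 4-way multiple choice


def is_emergent(acc_by_scale, chance=RANDOM_CHANCE, margin=0.10):
    """acc_by_scale: list of (num_params, accuracy), sorted by num_params.

    Emergent: all but the largest model sit within `margin` of chance,
    while the largest model clearly exceeds chance + margin.
    """
    smaller = acc_by_scale[:-1]
    _, largest_acc = acc_by_scale[-1]
    near_chance = all(abs(acc - chance) <= margin for _, acc in smaller)
    return near_chance and largest_acc > chance + margin


# Made-up numbers shaped like the curves in the emergence literature:
# flat near chance, then a sudden jump at the largest scale.
arithmetic = [(1e8, 0.26), (1e9, 0.25), (1e10, 0.27), (1e11, 0.62)]
# A smoothly improving task, predictable by extrapolation.
smooth = [(1e8, 0.30), (1e9, 0.38), (1e10, 0.47), (1e11, 0.55)]

print(is_emergent(arithmetic))  # True: performance jumps only at scale
print(is_emergent(smooth))      # False: gradual, extrapolatable gains
```

The point of the sketch is the asymmetry: the second curve improves just as much in absolute terms, but because the gains arrive gradually, extrapolating from smaller models would have predicted them.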
There are a couple of new papers on Emergent Abilities in 2023 that warrant our attention.