Epoch Impact Report 2022
Our impact report for 2022.
Trends in the dollar training cost of machine learning systems
I combine training compute and GPU price-performance data to estimate the cost of compute in US dollars for the final training run of 124 machine learning systems published between 2009 and 2022, and find that the cost has grown by approximately 0.5 orders of magnitude per year.
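To make the headline figure concrete, here is the arithmetic for converting a growth rate of 0.5 orders of magnitude (OOM) per year into a multiplicative factor; the ten-year extrapolation is purely illustrative, not a forecast from the analysis.

```python
# Convert ~0.5 orders of magnitude (OOM) per year of cost growth
# into a yearly multiplier and an implied increase over a decade.
oom_per_year = 0.5

yearly_multiplier = 10 ** oom_per_year            # ~3.2x per year
ten_year_multiplier = 10 ** (oom_per_year * 10)   # 5 OOM over a decade

print(f"Yearly cost multiplier: {yearly_multiplier:.2f}x")
print(f"Implied growth over 10 years: {ten_year_multiplier:.0e}x")
```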
Scaling Laws Literature Review
I have collected a database of scaling laws for different tasks and architectures, and reviewed dozens of papers in the scaling law literature.
An interactive model of AI takeoff speeds
We have developed an interactive website for understanding a new model of AI takeoff speeds.
Literature review of Transformative Artificial Intelligence timelines
We summarize and compare several models and forecasts predicting when transformative AI will be developed.
Revisiting algorithmic progress
We use a dataset of over a hundred computer vision models from the last decade to investigate how better algorithms and architectures have enabled researchers to use compute and data more efficiently. We find that every 9 months, the introduction of better algorithms contributes the equivalent of a doubling of compute budgets.
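A doubling of effective compute every 9 months means the compute needed to reach a fixed level of performance halves on the same schedule. A quick illustration of what the article's 9-month figure implies over a few years:

```python
# Compute needed to reach a fixed performance level, if algorithmic
# progress halves requirements every 9 months (the article's estimate).
halving_time_months = 9
years = 3
reduction = 2 ** (years * 12 / halving_time_months)  # 16x over 3 years

print(f"Compute needed for fixed performance falls {reduction:.0f}x in {years} years")
```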
Predicting GPU performance
We develop a simple model that predicts progress in the performance of field-effect transistor-based GPUs under the assumption that transistors can no longer miniaturize after scaling down to roughly the size of a single silicon atom. Our model forecasts that the current paradigm of field-effect transistor-based GPUs will plateau sometime between 2027 and 2035, offering a performance of between 1e14 and 1e15 FLOP/s in FP32.
Will we run out of ML data? Evidence from projecting dataset size trends
Based on our previous analysis of trends in dataset size, we project the growth of dataset size in the language and vision domains. We explore the limits of this trend by estimating the total stock of available unlabeled data over the next decades.
Trends in Training Dataset Sizes
We collected a database of notable ML models and their training dataset sizes. We use this database to find historical growth trends in dataset size for different domains, particularly language and vision.
The longest training run
Training runs of large Machine Learning systems are likely to last less than 14-15 months, because longer runs would be outcompeted by runs that start later and can therefore use better hardware and better algorithms.
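A toy version of the argument: if effective compute throughput grows by a factor k per year, a run that starts later enjoys higher throughput, and sufficiently long runs are dominated by shorter, later-starting ones. The k = 3.0 below is an illustrative assumption, not the article's exact figure, and each run's throughput is held fixed at its start-date level for simplicity.

```python
# Find the longest run length L (years) that is NOT beaten by any run
# starting later and finishing at the same time, under an assumed
# yearly throughput growth factor k (illustrative, not the article's).
import math

k = 3.0

def best_delayed_compute(L: float) -> float:
    """Max total compute over delayed starts s in [0, L]: a run of
    length L - s gets throughput k**s (fixed for that run)."""
    return max((L - s) * k ** s for s in [i * L / 10000 for i in range(10001)])

# Bisect for the threshold L where a delayed run first ties the full run.
lo, hi = 0.01, 5.0
for _ in range(50):
    mid = (lo + hi) / 2
    if best_delayed_compute(mid) > mid:  # a delayed run wins: threshold is lower
        hi = mid
    else:
        lo = mid

print(f"Longest non-dominated run: ~{lo:.2f} years (~{lo * 12:.0f} months)")
```

Under these toy assumptions the threshold works out to 1/ln k, roughly 11 months for k = 3; the article's 14-15 month figure comes from its own estimates of hardware and algorithmic progress.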
A time-invariant version of Laplace's rule
We explore how to estimate the probability of an event given information of past occurrences. We explain a problem with the naive application of Laplace's rule in this context, and suggest a modification to correct it.
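The core problem can be shown with a few lines of arithmetic: the naive rule of succession gives different forecasts for the same evidence depending on how finely time is sliced into trials. The numbers below are a made-up example, not from the article.

```python
# Naive Laplace's rule applied to time periods: the same evidence
# ("no event in 10 years") yields different forecasts depending on
# whether trials are years, months, or days.

def naive_laplace_forecast(periods_observed: int, periods_ahead: int) -> float:
    """P(at least one event in the next `periods_ahead` periods), given
    `periods_observed` periods with zero events, via Laplace's rule."""
    p = 1 / (periods_observed + 2)  # rule of succession, 0 successes
    return 1 - (1 - p) ** periods_ahead

# Same 10 years of evidence, same 10-year forecast horizon:
by_year = naive_laplace_forecast(10, 10)
by_month = naive_laplace_forecast(120, 120)
by_day = naive_laplace_forecast(3650, 3650)

print(f"Yearly trials:  {by_year:.3f}")
print(f"Monthly trials: {by_month:.3f}")
print(f"Daily trials:   {by_day:.3f}")  # the answer depends on granularity
```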
Machine Learning Model Sizes and the Parameter Gap
Since 2018, the model size of notable Machine Learning systems has grown ten times faster than before. Growth after 2020 has not been entirely continuous: there was a jump of one order of magnitude, which persists to today. This is relevant for forecasting model size and thus AI capabilities.
Trends in GPU price-performance
Using a dataset of 470 models of graphics processing units released between 2006 and 2021, we find that the amount of floating-point operations/second per $ doubles every ~2.5 years.
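For reference, the arithmetic for turning a ~2.5-year doubling time into an annual growth rate and a decade-scale improvement factor (illustrative, derived directly from the headline figure):

```python
# "FLOP/s per dollar doubles every ~2.5 years" as an annual rate.
doubling_time_years = 2.5
annual_growth = 2 ** (1 / doubling_time_years) - 1  # ~32% per year
ten_year_factor = 2 ** (10 / doubling_time_years)   # 16x over a decade

print(f"Annual improvement: ~{annual_growth:.0%}")
print(f"Improvement over 10 years: {ten_year_factor:.0f}x")
```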
Announcing Epoch: A research initiative investigating the road to transformative AI
We’re a new research initiative forecasting developments in AI. Come join us!
Grokking “Semi-informative priors over AI timelines”
I give visual explanations for Tom Davidson’s report, Semi-informative priors over AI timelines, and summarise the key assumptions and intuitions.
Grokking “Forecasting TAI with biological anchors”
I give a visual explanation of Ajeya Cotra’s draft report, Forecasting TAI with biological anchors, summarising the key assumptions, intuitions, and conclusions.
Projecting compute trends in Machine Learning
Projecting forward 70 years’ worth of trends in the amount of compute used to train Machine Learning models.
Compute Trends Across Three Eras of Machine Learning
We’ve compiled a dataset of the training compute for over 120 Machine Learning models, highlighting novel trends in the development of AI since 1952 and offering insight into what to expect going forward.
Estimating Training Compute of Deep Learning Models
We describe two approaches for estimating the training compute of Deep Learning systems, by counting operations and looking at GPU time.
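A minimal sketch of the two approaches. The 6·N·D rule of thumb (roughly 2 FLOP per parameter per token for the forward pass, plus about twice that for the backward pass) applies to dense transformer-style models; the cluster size, runtime, and 30% utilization in the second method are assumed placeholders, not figures from the post.

```python
# Two ways to estimate training compute for a Deep Learning model.

def compute_from_ops(n_params: float, n_tokens: float) -> float:
    """Method 1: count operations. C ~ 6 * N * D for dense transformers."""
    return 6 * n_params * n_tokens

def compute_from_gpu_time(gpu_count: int, hours: float,
                          peak_flops: float, utilization: float) -> float:
    """Method 2: GPU time x peak throughput x achieved utilization."""
    return gpu_count * hours * 3600 * peak_flops * utilization

# GPT-3-scale example: 175B parameters trained on 300B tokens.
print(f"Counting ops: {compute_from_ops(175e9, 300e9):.1e} FLOP")
# A hypothetical cluster: 1000 GPUs for 2900 hours at 1e14 FLOP/s peak
# and 30% utilization (all assumed numbers).
print(f"GPU time:     {compute_from_gpu_time(1000, 2900, 1e14, 0.3):.1e} FLOP")
```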
What’s the backward-forward FLOP ratio for Neural Networks?
Determining the backward-forward FLOP ratio for neural networks, to help calculate their total training compute.
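The connection to total training compute is simple: with a backward:forward FLOP ratio r, one training step costs (1 + r) times a forward pass. The 2:1 ratio below is a commonly used assumption, which is why per-token training cost is often taken as roughly 3x the forward cost.

```python
# Total training FLOP from the forward cost and the
# backward:forward ratio r.
def total_training_flop(forward_flop: float, ratio: float = 2.0) -> float:
    """Total = forward + backward = forward * (1 + ratio)."""
    return forward_flop * (1 + ratio)

print(total_training_flop(1.0))       # 3.0: training ~3x a forward pass
print(total_training_flop(1.0, 1.0))  # 2.0: lower bound at a 1:1 ratio
```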
How to measure FLOP/s for Neural Networks empirically?
Computing the utilization rate for multiple Neural Network architectures.
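The underlying quantity is straightforward: utilization is the FLOP/s a model actually achieves divided by the hardware's peak. A minimal sketch with assumed, illustrative numbers:

```python
# Hardware utilization: achieved FLOP/s as a fraction of peak FLOP/s.
def utilization(flop_per_step: float, seconds_per_step: float,
                peak_flops: float) -> float:
    return (flop_per_step / seconds_per_step) / peak_flops

# e.g. 3e14 FLOP per training step, 10 s/step, on a GPU with
# 1e14 FLOP/s peak (all assumed numbers):
print(f"{utilization(3e14, 10.0, 1e14):.0%}")  # 30%
```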