Articles

paper
· 3 min read
Who is leading in AI? An analysis of industry AI research
The private sector has emerged as a driving force in artificial intelligence, fueled by an explosion of investment in hardware and talent. But which companies are steering the field? Our new article compares leading AI companies by research publications, citations, size of training runs, and contributions to key algorithmic innovations. In this blog post, we summarize the key findings as well as some policy implications.

report
· 31 min read
Challenges in predicting AI automation
We review the literature on predicting AI automation of tasks in the economy. Existing approaches differ widely in both their methodologies and their predictions. We examine the significant challenges these prediction methodologies face, and how their predictions relate to the empirical evidence so far. This review aims to give researchers and policymakers a comprehensive overview of the state of AI automation predictions, their challenges, and their future potential.

report
· 27 min read
Trends in Machine Learning Hardware
We analyze recent trends in machine learning hardware performance, focusing on metrics such as computational performance, memory, interconnect bandwidth, price-performance, and energy efficiency across different GPUs and accelerators. The analysis aims to provide a holistic view of ML hardware capability and bottlenecks.

announcement
· 1 min read
Announcing Epoch’s Updated Parameter, Compute and Data Trends Database
We are releasing a newly expanded database that tracks the parameters, datasets, training compute, and other details of over 500 notable machine learning systems.

paper
· 11 min read
Explosive Growth from AI: A Review of the Arguments
Our new article reviews growth theory and empirical arguments regarding the potential for advanced AI to substantially accelerate economic growth. We take stock of the key arguments for why we might or might not expect growth rates on the order of ten times those common in today’s frontier economies once advanced AI systems are widely deployed.

report
· 27 min read
Trading Off Compute in Training and Inference
Some techniques can increase the performance of Machine Learning models at the cost of more expensive inference, or reduce inference compute at the cost of lower performance. This possibility induces a tradeoff between spending more resources on training or on inference. We explore the characteristics of this tradeoff and outline some implications for AI governance, as in the sketch below.
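
The tradeoff lends itself to a simple illustration. Here is a minimal Python sketch with made-up numbers (the two model options and their costs are assumptions for illustration, not figures from the report): total compute cost is training cost plus per-query inference cost times query volume, so the cheaper option flips as deployment scale grows.

```python
# Toy sketch with assumed numbers (not the report's actual model):
# two hypothetical models of equal quality, one cheaper to train but
# costlier per query. Which is cheaper overall depends on query volume.

options = {
    "larger model": {"train": 1e23, "inference_per_query": 2e12},
    "overtrained smaller model": {"train": 3e23, "inference_per_query": 1e12},
}

def cheapest_option(num_queries: float) -> str:
    """Return the option with the lowest total (training + inference) compute."""
    total = {name: o["train"] + o["inference_per_query"] * num_queries
             for name, o in options.items()}
    return min(total, key=total.get)

print(cheapest_option(1e10))  # larger model wins at low query volume
print(cheapest_option(1e12))  # overtrained smaller model wins at high volume
```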

report
· 10 min read
The Limited Benefit of Recycling Foundation Models
I investigate the benefits of recycling old foundation models to save training costs on large training runs, finding that model recycling is unlikely to yield more than a modest increase in AI capabilities.

report
· 3 min read
Extrapolating Performance in Language Modeling Benchmarks
We study trends in language model performance across five orders of magnitude of parameter scaling, finding that compute-focused extrapolations are a promising way to forecast AI capabilities.

announcement
· 3 min read
Epoch and FRI Mentorship Program Summer 2023
We are launching the Epoch and FRI mentorship program for women, non-binary people, and trans people of all genders.

report
· 14 min read
Direct Approach Interactive Model
The Direct Approach framework bounds the compute requirements for transformative AI (TAI) by extrapolating neural scaling laws. We combine those estimates with simple models of future progress in algorithms, investment, and compute costs to produce a user-adjustable forecast of the date at which TAI will be achieved.

viewpoint
· 26 min read
A Compute-Based Framework for Thinking About the Future of AI
I explain a framework for predicting the future of AI. The framework states that compute is ultimately the most important driver of progress in AI, and that AI will likely dramatically increase the world economic growth rate later this century. I also defend the idea that progress in AI will likely become relatively predictable, allowing us to anticipate AI capabilities before they are fully formed.

viewpoint
· 1 min read
Please Report Your Compute
We’ve written an opinion piece calling on AI researchers and engineers to be more transparent in reporting their compute usage. This transparency can help forecast future developments and risks, inform AI governance and policy, and benefit the broader AI community.

report
· 10 min read
The Direct Approach
Empirical scaling laws can help predict the cross-entropy loss associated with training inputs, such as compute and data. However, in order to predict when AI will achieve some subjective level of performance, it is necessary to devise a way of interpreting the cross-entropy loss of a model. This blog post provides a discussion of one such theoretical method, which we call the Direct Approach.

paper
· 2 min read
Power Laws in Speedrunning and Machine Learning
We develop a model for predicting record improvements in video game speedrunning and apply it to predicting Machine Learning benchmarks. This model suggests that Machine Learning benchmarks are not close to saturation, and that large sudden improvements are infrequent, but not ruled out.

announcement
· 1 min read
Announcing Epoch’s Dashboard of Key Trends and Figures in Machine Learning
We are launching a dashboard that provides key data from our research on Machine Learning, aiming to serve as a valuable resource for understanding the present and future of the field.

announcement
· 1 min read
Epoch Impact Report 2022
Our impact report for 2022.

report
· 66 min read
Trends in the Dollar Training Cost of Machine Learning Systems
I combine training compute and GPU price-performance data to estimate the cost of compute in US dollars for the final training run of 124 machine learning systems published between 2009 and 2022, and find that the cost has grown by approximately 0.5 orders of magnitude per year.
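
To make the headline number concrete, here is a minimal Python sketch of what 0.5 orders of magnitude per year implies (the baseline cost and time horizon are hypothetical, chosen only for illustration):

```python
# Minimal sketch (assuming the report's ~0.5 OOM/year growth rate; the
# baseline cost and horizon below are made up for illustration):
def extrapolate_cost(base_cost_usd: float, years_elapsed: float,
                     oom_per_year: float = 0.5) -> float:
    """Project a final-run training cost along a log-linear trend."""
    return base_cost_usd * 10 ** (oom_per_year * years_elapsed)

print(f"${extrapolate_cost(1e6, 4):,.0f}")  # a $1M run grows to ~$100M in 4 years
```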

report
· 6 min read
Scaling Laws Literature Review
I have collected a database of scaling laws for different tasks and architectures, and reviewed dozens of papers in the scaling law literature.

announcement
· 1 min read
An Interactive Model of AI Takeoff Speeds
We have developed an interactive website showcasing a new model of AI takeoff speeds.

report
· 16 min read
Literature Review of Transformative Artificial Intelligence Timelines
We summarize and compare several models and forecasts predicting when transformative AI will be developed.

paper
· 2 min read
Revisiting Algorithmic Progress
We use a dataset of over a hundred computer vision models from the last decade to investigate how better algorithms and architectures have enabled researchers to use compute and data more efficiently. We find that every nine months, the introduction of better algorithms contributes the equivalent of a doubling of compute budgets.
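
A nine-month doubling time compounds quickly. Here is a minimal Python sketch of the implied effective-compute multiplier (the doubling time is the report’s headline figure; the three-year horizon is an arbitrary example):

```python
# Minimal sketch (a 9-month doubling time is the report's headline figure;
# the 3-year horizon is an arbitrary example):
def algorithmic_multiplier(years: float, doubling_time_years: float = 0.75) -> float:
    """Effective-compute gain from algorithmic progress alone."""
    return 2 ** (years / doubling_time_years)

print(algorithmic_multiplier(3.0))  # 16.0, i.e. a 16x effective-compute gain
```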

report
· 28 min read
Predicting GPU Performance
We develop a simple model that predicts progress in the performance of field-effect transistor-based GPUs under the assumption that transistors can no longer miniaturize after scaling down to roughly the size of a single silicon atom. Our model forecasts that the current paradigm of field-effect transistor-based GPUs will plateau sometime between 2027 and 2035, offering a performance of between 1e14 and 1e15 FLOP/s in FP32.

paper
· 3 min read
Will We Run Out of ML Data? Evidence From Projecting Dataset Size Trends
Based on our previous analysis of trends in dataset size, we project the growth of dataset size in the language and vision domains. We explore the limits of this trend by estimating the total stock of available unlabeled data over the next decades.

report
· 5 min read
Trends in Training Dataset Sizes
We collected a database of notable ML models and their training dataset sizes. We use this database to find historical growth trends in dataset size for different domains, particularly language and vision.

report
· 12 min read
The Longest Training Run
Training runs of large Machine Learning systems are likely to last less than 14-15 months. This is because longer runs will be outcompeted by runs that start later and therefore use better hardware and better algorithms.
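
A toy version of the argument can be captured in a few lines of Python (the ~0.5 OOM/year rate of combined hardware and algorithmic progress is an assumption for illustration, not the report’s exact parameterization):

```python
# Toy sketch of the argument (the ~0.5 OOM/year combined hardware and
# algorithmic progress rate is assumed for illustration):
PROGRESS = 10 ** 0.5  # yearly multiplier on effective training speed

def effective_compute(start_year: float, duration_years: float) -> float:
    """Compute accumulated by a run at the speed available when it starts."""
    return PROGRESS ** start_year * duration_years

print(effective_compute(0.0, 3.0))  # 3.0: a 3-year run starting now
print(effective_compute(1.0, 2.0))  # ~6.3: a 2-year run starting a year later wins
```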

report
· 22 min read
A Time-Invariant Version of Laplace’s Rule
We explore how to estimate the probability of an event given information about past occurrences. We explain a problem with the naive application of Laplace’s rule in this context, and suggest a modification to correct it.
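
The problem is easy to demonstrate. In this minimal Python sketch (the ten-year setup is assumed for illustration), the naive rule’s forecast for an event that has never occurred changes with the arbitrary choice of how finely time is sliced into trials:

```python
# Minimal sketch of the problem (setup assumed for illustration): with zero
# occurrences so far, the naive rule's answer depends on the time granularity.

def prob_next_period(periods_observed: float, trials_per_period: int) -> float:
    """P(at least one occurrence next period) under the naive Laplace rule,
    with zero successes in n trials: p_trial = (s + 1) / (n + 2), s = 0."""
    n = periods_observed * trials_per_period
    p_trial = 1 / (n + 2)
    return 1 - (1 - p_trial) ** trials_per_period

# Ten event-free years, treated as yearly vs. monthly trials:
print(prob_next_period(10, 1))   # ~0.083
print(prob_next_period(10, 12))  # ~0.094 -- same evidence, different answer
```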

paper
· 2 min read
Machine Learning Model Sizes and the Parameter Gap
Since 2018, the model size of notable Machine Learning systems has grown ten times faster than before. Growth since 2020 has not been entirely continuous: there was a jump of one order of magnitude which persists to this day. This is relevant for forecasting model size and thus AI capabilities.

report
· 14 min read
Trends in GPU Price-Performance
Using a dataset of 470 models of graphics processing units released between 2006 and 2021, we find that the amount of floating-point operations/second per $ doubles every ~2.5 years.

announcement
· 4 min read
Announcing Epoch: A Research Initiative Investigating the Road to Transformative AI
We are a new research initiative forecasting developments in AI. Come join us!

report
· 16 min read
Grokking “Semi-informative priors over AI timelines”
I give visual explanations for Tom Davidson’s report, Semi-informative priors over AI timelines, and summarize the key assumptions and intuitions.

report
· 16 min read
Grokking “Forecasting TAI With Biological Anchors”
I give a visual explanation of Ajeya Cotra’s draft report, Forecasting TAI with biological anchors, summarizing the key assumptions, intuitions, and conclusions.

paper
· 7 min read
Compute Trends Across Three Eras of Machine Learning
We’ve compiled a dataset of the training compute for over 120 Machine Learning models, highlighting novel trends and insights into the development of AI since 1952, and what to expect going forward.

report
· 22 min read
Estimating Training Compute of Deep Learning Models
We describe two approaches for estimating the training compute of Deep Learning systems: counting the arithmetic operations performed and looking at GPU time.
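
A minimal Python sketch of both methods (the 6-FLOP-per-parameter-per-token rule is a standard approximation for dense transformers, and the example numbers are hypothetical):

```python
# Minimal sketch of the two estimation methods (constants are common
# rules of thumb, not exact values):

def compute_from_operations(params: float, tokens: float) -> float:
    """Operation counting: ~6 FLOP per parameter per training token for
    dense transformers (2 forward + 4 backward)."""
    return 6 * params * tokens

def compute_from_gpu_time(num_gpus: int, seconds: float,
                          peak_flop_per_s: float, utilization: float) -> float:
    """GPU time: hardware-seconds times peak throughput times utilization."""
    return num_gpus * seconds * peak_flop_per_s * utilization

# e.g. a hypothetical 1B-parameter model trained on 20B tokens:
print(f"{compute_from_operations(1e9, 2e10):.1e} FLOP")  # 1.2e+20 FLOP
```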

report
· 8 min read
What’s the Backward-Forward FLOP Ratio for Neural Networks?
We determine the backward-forward FLOP ratio for neural networks, to help calculate their total training compute.
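
As a quick illustration (a backward-forward ratio of roughly 2:1 is a common approximation, and the forward-pass cost below is hypothetical), the ratio translates into total training FLOP per example as follows:

```python
# Illustrative sketch (a ~2:1 backward-forward ratio is a common
# approximation; the forward-pass cost below is hypothetical):
def training_flop_per_example(forward_flop: float,
                              backward_forward_ratio: float = 2.0) -> float:
    """Total training FLOP per example: forward pass plus backward pass."""
    return forward_flop * (1 + backward_forward_ratio)

print(f"{training_flop_per_example(1e12):.0e}")  # 3e+12 FLOP per example
```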

report
· 9 min read
How to Measure FLOP/s for Neural Networks Empirically?
We compute the utilization rate for multiple Neural Network architectures.
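
The underlying calculation is a simple ratio, sketched here in Python (the achieved and peak throughput figures are hypothetical):

```python
# Minimal sketch (the throughput figures below are hypothetical):
def utilization_rate(achieved_flop_per_s: float, peak_flop_per_s: float) -> float:
    """Fraction of the hardware's peak throughput actually delivered."""
    return achieved_flop_per_s / peak_flop_per_s

print(f"{utilization_rate(4.5e13, 1.56e14):.0%}")  # ~29%
```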