Research

Our current research focuses on three main directions, which overlap and interact substantially.

Modeling the Future of AI

Improving our understanding of how future AI developments will unfold by building and refining forecasting models, and answering fundamental questions of central strategic importance to AI safety.

Understanding the AI Landscape

Conducting foundational research to understand the inner workings of the AI production function, and the implications this will have as we head into a world with increasingly advanced AI.

Machine Learning Trends

Gathering crucial data about the inputs and outputs of Machine Learning systems, analysing trends, and helping to build a big-picture understanding of developments in AI.


Publications

A Tour of AI Timelines

Sequence

Jun. 06, 2022

Anson Ho

This is an overview of the AI timelines landscape that aims to make previous timelines research more accessible, and to introduce new ideas and framings.

Are models getting harder to find?

Paper

Aug. 2020

Tamay Besiroglu

We estimate an R&D-based growth model using (1) data on machine learning performance, drawn from a monthly panel dataset of top performance across 93 machine learning benchmarks, and (2) data...

Compute Trends Across Three Eras of Machine Learning

Paper

Feb. 11, 2022

Jaime Sevilla, Tamay Besiroglu, Anson Ho, Lennart Heim, Marius Hobbhahn, and Pablo Villalobos

Compute, data, and algorithmic advances are the three fundamental factors that guide the progress of modern Machine Learning (ML). In this paper we study trends in the most readily quantified...
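Trend analyses of this kind typically fit an exponential growth model to training-compute data and report the implied doubling time. As a minimal illustration of that arithmetic (the datapoints below are hypothetical, not figures from the paper):

```python
import math

def doubling_time_years(c0: float, c1: float, years: float) -> float:
    """Given compute values c0 and c1 observed `years` apart, return the
    implied doubling time under exponential growth."""
    return years * math.log(2) / math.log(c1 / c0)

# If training compute grew from 1e20 to 1e22 FLOP over 4 years,
# the implied doubling time is roughly 0.6 years.
dt = doubling_time_years(1e20, 1e22, 4.0)
```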

Estimating Training Compute of Deep Learning Models

Report

Jan. 20, 2022

Jaime Sevilla, Lennart Heim, Marius Hobbhahn, Tamay Besiroglu, Anson Ho, and Pablo Villalobos

We describe two approaches for estimating the training compute of Deep Learning systems, by counting operations and looking at GPU time.
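The two approaches can be sketched roughly as follows. This is a minimal illustration, not the report's exact procedure: the 6-FLOP-per-parameter-per-token approximation and all numeric inputs (GPU count, peak throughput, utilization) are illustrative assumptions.

```python
def compute_from_operations(parameters: float, training_examples: float) -> float:
    """Approach 1: count operations. A common rough approximation for dense
    networks is ~6 FLOP per parameter per training example (assumption)."""
    return 6 * parameters * training_examples

def compute_from_gpu_time(num_gpus: int, days: float,
                          peak_flops: float, utilization: float) -> float:
    """Approach 2: GPU time. Multiply hardware peak throughput by wall-clock
    time and an assumed utilization rate."""
    seconds = days * 24 * 3600
    return num_gpus * peak_flops * utilization * seconds

# Hypothetical large-model inputs; both routes land near ~3e23 FLOP:
op_est = compute_from_operations(175e9, 300e9)            # ~3.15e23 FLOP
gpu_est = compute_from_gpu_time(1000, 40, 312e12, 0.3)    # ~3.24e23 FLOP
```

In practice the two estimates serve as cross-checks on each other: large disagreement usually signals a wrong utilization assumption or a miscounted training set.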

Parameters, Compute and Data Trends in Machine Learning

Database

Sep. 2022

Jaime Sevilla et al.

Public dataset

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

Paper

Jul. 2022

Tilman Rauker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell

The last decade of machine learning has seen drastic increases in scale and capabilities, and deep neural networks (DNNs) are increasingly being deployed across a wide range of domains. However,...