Recent developments in large-scale machine learning suggest that by properly scaling up data, model size, and training time, improvements in pre-training transfer favorably to few-shot performance on most downstream tasks. In this work we systematically study this phenomenon and establish that, as we increase upstream accuracy, the performance of downstream tasks saturates. In particular, we conduct more than 4800 experiments on image recognition tasks with Vision Transformers, MLP-Mixers, and ResNets of varying sizes and configurations, with parameter counts ranging from ten million to ten billion, trained on the largest available image datasets (JFT, ImageNet21k) and evaluated on more than 20 downstream tasks.
We propose a model of the relationship between downstream and upstream accuracy that reflects this saturation and captures the nonlinear dependence of downstream performance on upstream performance. We also showcase an even more extreme scenario in which upstream and downstream performance are at odds with each other: that is, to obtain better downstream performance, we need to hurt upstream accuracy. We delve deeper into understanding the reasons that give rise to these phenomena.
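As a rough illustration of the kind of curve fitting such a study involves, the sketch below fits a saturating function of upstream accuracy to downstream accuracy using scipy.optimize.curve_fit. The functional form, parameter names, and data points are illustrative assumptions only, not the model or results reported in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical saturating form: downstream accuracy approaches a ceiling c
# as upstream accuracy grows; k and alpha control how quickly it saturates.
# This functional form is an assumption for illustration, not the paper's model.
def saturating_fit(acc_up, c, k, alpha):
    return c - k * (1.0 - acc_up) ** alpha

# Synthetic (upstream, downstream) accuracy pairs standing in for per-model
# results collected across pre-training configurations.
acc_up = np.array([0.60, 0.70, 0.78, 0.84, 0.88, 0.90])
acc_down = np.array([0.55, 0.66, 0.72, 0.75, 0.76, 0.765])

params, _ = curve_fit(saturating_fit, acc_up, acc_down, p0=[0.8, 1.0, 1.0])
c, k, alpha = params
print(f"ceiling={c:.3f}, k={k:.3f}, alpha={alpha:.3f}")
```

The fitted ceiling parameter in such a sketch plays the role of the saturation level: once upstream accuracy is high, further upstream gains translate into vanishingly small downstream improvements.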