OpenELM: An Efficient Language Model Family with Open Training and Inference Framework

Abstract

We introduce OpenELM, a family of Open Efficient Language Models. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the CoreNet library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.
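To make the layer-wise scaling idea concrete, below is a minimal Python sketch of one way to allocate attention heads and FFN widths that grow linearly with layer depth rather than staying uniform. The multiplier ranges, model dimension, and head dimension here are illustrative assumptions, not the released OpenELM configurations.

```python
# Minimal sketch of layer-wise scaling: attention heads and FFN
# widths grow linearly with depth instead of being uniform.
# The alpha/beta ranges and dimensions are illustrative only.

def layerwise_config(num_layers: int, d_model: int, head_dim: int,
                     alpha=(0.5, 1.0), beta=(0.5, 4.0)):
    """Return (num_heads, ffn_dim) for each layer under linear scaling."""
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)            # 0 at first layer, 1 at last
        a = alpha[0] + (alpha[1] - alpha[0]) * t  # attention width multiplier
        b = beta[0] + (beta[1] - beta[0]) * t     # FFN width multiplier
        num_heads = max(1, round(a * d_model / head_dim))
        ffn_dim = int(b * d_model)
        configs.append((num_heads, ffn_dim))
    return configs

if __name__ == "__main__":
    for layer, (heads, ffn) in enumerate(layerwise_config(8, 1024, 64)):
        print(f"layer {layer}: heads={heads}, ffn_dim={ffn}")
```

Under this scheme, early layers stay narrow while later layers receive more heads and wider FFN blocks, so the total parameter budget is redistributed across depth rather than increased.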

Iman Mirzadeh