Mamba-3B-SlimPJ: State-space models rivaling the best Transformer architecture
The Mamba architecture, building on a long line of work on state-space models (e.g., S4) and hardware-efficient algorithms (e.g., FlashAttention), has emerged as a strong contender to Transformers, while offering linear scaling in sequence length and fast inference. As part of a collaboration with Together AI and Cartesia AI, we are releasing a Mamba model with 3B parameters, trained on 600B tokens of the SlimPajama dataset, under the Apache 2.0 license.
Model code: https://github.com/state-spaces/mamba
Model weights: https://huggingface.co/state-spaces/mamba-2.8b-slimpj
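As a quick-start sketch (not an official snippet from the repo), the checkpoint can be loaded with the mamba_ssm package from the model code repo above, paired with the GPT-NeoX tokenizer; exact argument names may differ between releases, so check the repo's README.

```python
# Minimal sketch: load the released checkpoint with the mamba_ssm package and
# the GPT-NeoX tokenizer. Exact argument names may vary between releases.
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained(
    "state-spaces/mamba-2.8b-slimpj", device=device, dtype=torch.bfloat16
)

input_ids = tokenizer("State-space models are", return_tensors="pt").input_ids.to(device)
logits = model(input_ids).logits           # (batch, seq_len, vocab_size) next-token logits
next_token = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token]))
```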
Trained on 600B tokens, Mamba-3B-SlimPJ matches the quality of some of the best 3B Transformers, such as BTLM-3B-8K (also trained on 600B tokens), with 17% fewer FLOPs. BTLM-3B-8K is a strong Transformer architecture trained with advanced techniques, and it even surpasses some 7B Transformers. This further validates Mamba as a promising architecture for building foundation models.
Training details
We trained Mamba-3B-SlimPJ on 600B tokens, with context length 2048, using the same hyperparameters as Mamba-3B on the Pile (300B tokens), except with a longer learning rate decay to accommodate more tokens.
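As a rough illustration of what "longer learning rate decay" means here (a sketch only: it assumes the warmup-plus-cosine shape of the GPT-3-style recipe described in the Mamba paper, and the peak/minimum learning rates, warmup length, and batch size below are placeholders, not the actual training configuration):

```python
import math

def lr_at(step, *, peak_lr, min_lr, warmup_steps, decay_steps):
    """Linear warmup followed by cosine decay to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    t = min(1.0, (step - warmup_steps) / (decay_steps - warmup_steps))
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))

# Placeholder global batch: 512 sequences x 2048 tokens per step.
tokens_per_step = 512 * 2048
steps_300b = int(300e9 / tokens_per_step)   # Pile run: decay horizon covers 300B tokens
steps_600b = int(600e9 / tokens_per_step)   # SlimPajama run: same schedule, 2x the horizon

for label, decay_steps in [("300B", steps_300b), ("600B", steps_600b)]:
    lr = lr_at(decay_steps // 2, peak_lr=2e-4, min_lr=2e-5,
               warmup_steps=2000, decay_steps=decay_steps)
    print(f"{label}-token run, halfway through decay: lr = {lr:.2e}")
```

Everything else stays the same as the 300B-token run; only the decay horizon is stretched to cover the larger token budget.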
We use the SlimPajama dataset, with the GPT-NeoX tokenizer. The SlimPajama dataset is a cleaned and deduplicated version of RedPajama. This is what we love about open-source AI: different groups building on each other’s work on data and models.
Evaluation
Mamba-3B-SlimPJ matches the quality of very strong Transformers such as BTLM-3B-8K with 17% fewer training FLOPs. Generally, more data and compute yield better models: for example, the similarly sized StableLM-3B-4E1T, trained on 7x more tokens (1T tokens for 4 epochs), still outperforms both Mamba-3B-SlimPJ and BTLM-3B-8K.
We evaluate Mamba-3B-SlimPJ on 10 tasks, following the procedure in BTLM-3B-8K: BoolQ, PIQA, HellaSwag, WinoGrande, ARC-easy, ARC-challenge, OpenBookQA, RACE-high, TruthfulQA, and MMLU. All evaluations are zero-shot, except MMLU, which uses 5 shots. We report normalized accuracies for PIQA, HellaSwag, ARC-e, ARC-c, OpenBookQA, and MMLU, and accuracies for BoolQ, WinoGrande, RACE-high, and TruthfulQA (MC2 score).
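A note on the metrics: in lm-evaluation-harness, plain accuracy picks the answer choice with the highest total log-likelihood under the model, while normalized accuracy first divides each choice's log-likelihood by its length, removing the bias toward shorter answers. A small sketch of the two scoring rules (the log-likelihood values below are illustrative placeholders):

```python
# Sketch of the two multiple-choice scoring rules reported above: "acc" picks the
# choice with the highest total log-likelihood; "acc_norm" first divides each
# log-likelihood by the choice's length (character count here, which equals byte
# count for ASCII answers). Log-likelihood values are placeholders.

def pick(choices, loglikelihoods, normalize=False):
    scores = [
        ll / len(choice) if normalize else ll
        for choice, ll in zip(choices, loglikelihoods)
    ]
    return max(range(len(choices)), key=scores.__getitem__)

choices = ["a dog", "a golden retriever puppy"]
lls = [-6.0, -9.0]                 # longer continuations accumulate more negative log-likelihood

print(pick(choices, lls))          # acc:      index 0 (raw log-likelihood favors the short answer)
print(pick(choices, lls, True))    # acc_norm: index 1 (length normalization removes that advantage)
```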
Looking forward
Transformers such as BTLM-3B-8K can make use of more advanced techniques such as variable length training and maximal update parameterization. We look forward to exploring these techniques to improve Mamba training in the future.
We’ve been very happy to see the excitement around SSMs and architectures beyond Transformers in general, and Mamba in particular. Part of the motivation for this release is to provide a stronger base model for experimentation and understanding, as well as for chat and instruction-tuned models. We believe that Mamba can be a strong candidate for foundation models across diverse applications such as language, genomics, audio, and video.
Acknowledgement
Thanks to Cerebras for the SlimPajama dataset, and to Cerebras and OpenTensor for the BTLM-3B-8K model. We also thank EleutherAI for the Pile dataset and lm-evaluation-harness.