Can you feel the MoE? Mixtral available with over 100 tokens per second through Together Platform!
Today, Mistral released Mixtral 8x7B, a high-quality sparse mixture of experts model (SMoE) with open weights.
Mixtral-8x7b-32kseqlen and DiscoLM-mixtral-8x7b-v2 are now live on our inference platform! We have optimized the Together Inference Engine for Mixtral, and it is available at up to 100 tokens/s for $0.0006/1K tokens, which to our knowledge is the fastest performance at the lowest price!
Chat with it in our playground:
Or use this code snippet:
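As a minimal sketch, you can call the endpoint with the `together` Python package. The model identifier and response field names here are illustrative assumptions, so check the models page and API reference for the exact values:

```python
# Minimal sketch using the `together` Python SDK (pip install together).
# The model string and response fields are illustrative assumptions; verify
# them against the Together models page and API reference.
import os
import together

together.api_key = os.environ["TOGETHER_API_KEY"]

response = together.Complete.create(
    model="mistralai/mixtral-8x7b-32kseqlen",  # illustrative; use the exact ID from the models page
    prompt="The key advantage of a sparse mixture-of-experts model is",
    max_tokens=128,
    temperature=0.7,
    stop=["\n\n"],
)

print(response["output"]["choices"][0]["text"])
```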
More on Mixtral
Mixtral is licensed under Apache 2.0. It outperforms Llama 2 70B on most benchmarks and is the strongest open-weight model with a permissive license, as well as the best model overall in terms of cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
Mixtral...
- Handles a context of 32k tokens.
- Handles English, French, Italian, German and Spanish.
- Shows strong performance in code generation.
- Can be finetuned into an instruction-following model that achieves a score of 8.3 on MT-Bench.
Transitioning from OpenAI?
Here’s how simple it is to switch from OpenAI to Together’s Mixtral serverless endpoint:
Simply add your "TOGETHER_API_KEY" (which you can find here), change the base URL to https://api.together.xyz, set the model name to one of our 100+ open-source models, and you'll be off to the races!
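As a rough sketch with the OpenAI Python client (v1.x), the switch looks like this; the `/v1` suffix on the base URL and the exact model identifier are assumptions, so confirm both in the Together API documentation:

```python
# Sketch of pointing the OpenAI Python client (v1.x) at Together's
# OpenAI-compatible endpoint. The "/v1" path and the model identifier are
# assumptions; confirm both in the Together API documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="DiscoLM-mixtral-8x7b-v2",  # illustrative; use the exact ID from the models page
    messages=[{"role": "user", "content": "Why are sparse mixture-of-experts models fast at inference?"}],
    max_tokens=256,
)

print(response.choices[0].message.content)
```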
Q: Should I use the RedPajama-V2 Dataset out of the box?
RedPajama-V2 is conceptualized as a pool of data that serves as a foundation for creating high-quality datasets. The dataset is thus not intended to be used out of the box; depending on the application, data should be filtered using the quality signals that accompany it. With this dataset, we take the view that the optimal filtering of data depends on the intended use. Our goal is to provide all the signals and tooling that enable this.
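For illustration, here is a hedged sketch of that filtering workflow on the sample configuration of the dataset; the field names, the `ccnet_perplexity` signal, and the threshold are assumptions taken from the dataset card and should be checked before use.

```python
# Sketch of filtering RedPajama-V2 documents using an accompanying quality signal.
# The config name, field names, and the "ccnet_perplexity" signal are assumptions
# based on the dataset card; the perplexity threshold is arbitrary.
import json
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="sample",
    split="train",
    streaming=True,
)

def keep(example, max_perplexity=300.0):
    # Quality signals are stored as a JSON string; each signal is a list of
    # (start, end, value) spans, with document-level signals spanning the whole doc.
    signals = json.loads(example["quality_signals"])
    value = signals["ccnet_perplexity"][0][2]
    return value is not None and value < max_perplexity

filtered_docs = (ex for ex in ds if keep(ex))
print(next(filtered_docs)["raw_content"][:200])
```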