Matt Berman x Together AI
Try DeepSeek-R1 70B Distilled free
Start experimenting with the power of reasoning models today.

DeepSeek-R1 on Together AI
Security-first approach
We host all models in our own data centers, with no data sharing back to DeepSeek. Developers retain full control over their data with opt-out privacy settings.
Full R1 model family
While others may only serve distilled models, we provide access to both the full-size R1 model and the distilled variants, ensuring you can test and deploy the model that best suits your needs.
Serverless infrastructure
Our infrastructure is optimized for large-scale models like DeepSeek-R1, providing the high throughput and low latency necessary for production workloads, with the flexibility of pay-per-token pricing.
Introducing the DeepSeek-R1 Distilled Family
The DeepSeek-R1 family of distilled models consists of variants of SOTA open-source models that have been distilled from DeepSeek-R1 to inherit its reasoning capabilities.
Llama 70B R1 Distilled
Llama 70B distilled with reasoning capabilities from DeepSeek-R1. Surpasses GPT-4o with 94.5% on MATH-500 & matches o1-mini on coding.
Qwen 14B R1 Distilled
Qwen 14B distilled with reasoning capabilities from DeepSeek-R1. Outperforms GPT-4o in math & matches o1-mini on coding.
Qwen 1.5B R1 Distilled
Small Qwen 1.5B distilled with reasoning capabilities from DeepSeek-R1. Beats GPT-4o on MATH-500 while being a fraction of the size.
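To get started with any of the models above, you can call Together AI's OpenAI-compatible serverless chat-completions endpoint. A minimal sketch of building the request payload is below; the model ID follows the Hugging Face naming for the 70B distilled variant and is an assumption here, so check Together AI's model list for the exact serving name before use.

```python
import json

# Assumed model ID (Hugging Face-style naming); verify against
# Together AI's published model list before deploying.
MODEL = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The payload would be POSTed to Together AI's serverless endpoint
    with your API key in the Authorization header.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Reasoning models emit a chain of thought before the final
        # answer, so leave a generous token budget.
        "max_tokens": 1024,
    }


payload = build_chat_request("What is 17 * 23? Think step by step.")
print(json.dumps(payload, indent=2))
```

With pay-per-token pricing, you are billed only for the prompt and completion tokens each request consumes, so experimenting with the smaller distilled variants first is a cheap way to find the right size/quality trade-off.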