DeepSeek R1
Open-source reasoning model rivaling OpenAI-o1, excelling in math, code, reasoning, and cost efficiency.


Deploy DeepSeek R1 at scale
Run DeepSeek R1 on the fastest DeepSeek-R1 671B endpoint or deploy on Together Reasoning Clusters to get dedicated GPU infrastructure for high-throughput, low-latency inference, optimized for variable, token-heavy reasoning workloads.
API Usage
Endpoint: deepseek-ai/DeepSeek-R1
cURL
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [{"role": "user", "content": "What are some fun things to do in New York?"}]
  }'
Python
from together import Together

client = Together()

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "What are some fun things to do in New York?"}],
)

print(response.choices[0].message.content)
TypeScript
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  model: "deepseek-ai/DeepSeek-R1",
  messages: [{ role: "user", content: "What are some fun things to do in New York?" }],
});

console.log(response.choices[0].message.content);
Model Provider:
DeepSeek
Type:
Chat
Parameters:
685B
Deployment:
✔ Serverless
Quantization:
FP8
Context length:
164K
Pricing:
$3 per 1M input tokens / $7 per 1M output tokens
How to use DeepSeek R1
Reasoning models are trained very differently from their non-reasoning counterparts, and as a result they serve different purposes. Below we compare the two types of models and cover the details of reasoning models, their pros and cons, and example applications and use cases.
Reasoning models like DeepSeek-R1 are specifically developed to engage in extended, deep analysis of complex challenges. Their strength lies in strategic thinking, developing comprehensive solutions to intricate problems, and processing large amounts of nuanced information to reach decisions. Their high precision and accuracy make them particularly valuable in specialized fields that traditionally require human expertise, such as mathematics, scientific research, legal work, healthcare, and financial analysis.
Non-reasoning models such as Llama 3.3 70B or DeepSeek-V3 are trained for efficient, direct task execution with faster response times and better cost efficiency.
Your application can leverage both types of models: use DeepSeek-R1 to develop the strategic framework and problem-solving approach, while deploying non-reasoning models to handle specific tasks where swift execution and cost considerations outweigh the need for absolute precision (a sketch of this split follows the lists below).
Reasoning models excel for tasks where you:
- Need high accuracy and dependable decision-making
- Face complex problems involving multiple variables and ambiguous data
- Can afford higher query latencies
- Have a higher cost/token budget per task
Non-reasoning models are optimal when you need:
- Faster processing speed (lower overall query latency) and lower operational costs
- Execution of clearly defined, straightforward tasks
- Function calling, JSON mode, or other well-structured tasks
Model details
1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the Usage Recommendations section.

2. Model Summary
Post-Training: Large-Scale Reinforcement Learning on the Base Model
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models.
Distillation: Smaller Models Can Be Powerful Too
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better small models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
3. Model Downloads
DeepSeek-R1 Models
| Model | #Total Params | #Activated Params | Context Length | Download |
|---|---|---|---|---|
| DeepSeek-R1-Zero | 671B | 37B | 128K | 🤗 HuggingFace |
| DeepSeek-R1 | 671B | 37B | 128K | 🤗 HuggingFace |
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to DeepSeek-V3 repository.
DeepSeek-R1-Distill Models
| Model | Base Model | Download |
|---|---|---|
| DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | 🤗 HuggingFace |
| DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | 🤗 HuggingFace |
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly changed their configs and tokenizers; please use our settings to run these models.
4. Evaluation Results
DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 64 responses per query to estimate pass@1 (a small sketch of this estimate follows the table).
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|---|---|---|---|---|---|---|---|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
| | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
| | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
Usage Recommendations
We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:
- Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
- Avoid adding a system prompt; all instructions should be contained within the user prompt.
- For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "<think>\n\n</think>") when responding to certain queries, which can adversely affect the model's performance. To ensure that the model engages in thorough reasoning, we recommend forcing the model to start its response with "<think>\n" at the beginning of every output.
5. License
This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen2.5 series, which is originally licensed under the Apache 2.0 License, and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama-3.1-8B-Base and is originally licensed under the Llama 3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama-3.3-70B-Instruct and is originally licensed under the Llama 3.3 license.
6. Citation
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
  title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
  author={DeepSeek-AI},
  year={2025},
  eprint={2501.12948},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.12948},
}
7. Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.
Prompting DeepSeek R1
Prompting DeepSeek-R1, and other reasoning models in general, is quite different from working with non-reasoning models.
Below we provide guidance on how to get the most out of DeepSeek-R1:
- Clear and specific prompts: Write your instructions in plain language, clearly stating what you want. Complex, lengthy prompts often lead to less effective results.
- Sampling parameters: Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. Also, a top-p of 0.95 is recommended.
- No system prompt: Avoid adding a system prompt; all instructions should be contained within the user prompt.
- No few-shot prompting: Do not provide examples in the prompt, as this consistently degrades model performance. Instead, describe the problem, the task, and the desired output format in detail. If you do want to provide examples, ensure they align very closely with your prompt instructions.
- Structure your prompt: Break up different parts of your prompt using clear markers like XML tags, markdown formatting, or labeled sections. This organization helps ensure the model correctly interprets and addresses each component of your request.
- Set clear requirements: When your request has specific limitations or criteria, state them explicitly (like "Each line should take no more than 5 seconds to say..."). Whether it's budget constraints, time limits, or particular formats, clearly outline these parameters to guide the model's response.
- Clearly describe output: Paint a clear picture of your desired outcome. Describe the specific characteristics or qualities that would make the response exactly what you need, allowing the model to work toward meeting those criteria.
- Majority voting for responses: When evaluating model performance, it is recommended to generate multiple solutions and then use the most frequent result (a sketch of this follows the list below).
- No chain-of-thought prompting: Since these models always reason prior to answering the question, it is not necessary to tell them to "Reason step by step..."
- Math tasks: For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- Forcing <think>: On rare occasions, DeepSeek-R1 bypasses the thinking pattern, which can adversely affect its performance; the response will not start with a <think> tag. If you see this problem, try telling the model to start its response with the <think> tag.
Applications & Use Cases
Reasoning models use-cases
- Analyzing and assessing AI model outputs: Reasoning models excel at evaluating responses from other systems, particularly in data validation scenarios. This becomes especially valuable in critical fields like law, where these models can apply contextual understanding rather than just following rigid validation rules.
- Code analysis and improvement: Reasoning models are great at conducting thorough code reviews and suggesting improvements across large codebases. Their ability to process extensive code makes them particularly valuable for comprehensive review processes.
- Strategic planning and task delegation: These models shine in creating detailed, multi-stage plans and determining the most suitable AI model for each phase based on specific requirements like processing speed or analytical depth needed for the task.
- Complex document analysis and pattern recognition: The models excel at processing and analyzing extensive, unstructured documents such as contract agreements, legal reports, and healthcare documentation. They're particularly good at identifying relationships across documents and drawing connections between related pieces of information.
- Precision information extraction: When dealing with large volumes of unstructured data, these models excel at pinpointing and extracting exactly the relevant information needed to answer specific queries, effectively filtering out noise in search and retrieval processes. This makes them a great fit for RAG or LLM-augmented internet-search use cases (a prompt sketch follows this list).
- Handling unclear instructions: These models are particularly skilled at working with incomplete or ambiguous information. They can effectively interpret user intent and will proactively seek clarification rather than making assumptions when faced with information gaps.
Looking for production scale? Deploy on a dedicated endpoint
Deploy DeepSeek R1 on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.
