
Ministral 3 8B Instruct 2512

Balanced 8B multimodal model for versatile assistants, agents, and multilingual understanding.

Performance benchmarks

[Benchmark chart: Ministral 3 8B Instruct 2512 compared against related open-source models and competitor closed-source models (Claude Opus 4.6, OpenAI o3, OpenAI o1, GPT-4o) on AIME 2025, GPQA Diamond, HLE, LiveCodeBench, MATH500, and SWE-bench Verified.]

This model is coming soon to Together’s Serverless API.

In the meantime, deploy it on an on-demand Dedicated Endpoint, or pick a supported alternative from the Model Library.
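Once the model is live, it should be reachable through Together’s OpenAI-compatible chat completions endpoint. The sketch below only assembles the HTTP request so the shape of the call is visible; the model identifier string is an assumption (check the Model Library for the exact id), and actually sending the request requires a `TOGETHER_API_KEY`.

```python
# Hedged sketch: building a chat completions request for Together's
# OpenAI-compatible API using only the standard library.
import json
import os
import urllib.request

TOGETHER_URL = "https://api.together.xyz/v1/chat/completions"
# Hypothetical model id -- confirm the real one in the Model Library.
MODEL_ID = "mistralai/Ministral-3-8B-Instruct-2512"

def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Assemble (but do not send) the chat completions HTTP request."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        TOGETHER_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )

req = build_request("Summarize the advantages of a 256K context window.")
# Sending: urllib.request.urlopen(req) -- requires a valid API key.
```

The same payload works with any OpenAI-compatible client library; only the base URL and model id change.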

Model details
  • Model provider
    Mistral AI
  • Type
    Chat
  • Main use cases
    Chat
    Small & Fast
  • Parameters
    8.9B
  • Context length
    256K