Arcee AI Blitz
LLM
Efficient 24B SLM with strong world knowledge, offering fast, affordable performance across diverse tasks.
API Usage
Endpoint
arcee-ai/arcee-blitz
RUN INFERENCE (cURL)
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "arcee-ai/arcee-blitz",
    "messages": [{"role": "user", "content": "What are some fun things to do in New York?"}]
  }'
RUN INFERENCE (Python)
from together import Together

client = Together()

response = client.chat.completions.create(
    model="arcee-ai/arcee-blitz",
    messages=[{"role": "user", "content": "What are some fun things to do in New York?"}],
)

print(response.choices[0].message.content)
RUN INFERENCE (TypeScript)
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  messages: [{ "role": "user", "content": "What are some fun things to do in New York?" }],
  model: "arcee-ai/arcee-blitz",
});

console.log(response.choices[0].message.content);
Model Provider:
Arcee AI
Type:
Chat
Variant:
Parameters:
23.6B
Deployment:
✔ Serverless
Quantization:
Context length:
32k
Pricing:
$0.45 input / $0.75 output
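For budgeting, per-request cost can be estimated from these rates. A minimal sketch, assuming the listed prices are per 1M tokens (the unit Together's pricing pages typically use) and taking token counts from the API response's `usage` field:

```python
# Hedged sketch: assumes the listed rates ($0.45 input / $0.75 output)
# are per 1M tokens; confirm the unit on the pricing page.
INPUT_PRICE_PER_M = 0.45
OUTPUT_PRICE_PER_M = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request from its token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0060
```

In practice, read `response.usage.prompt_tokens` and `response.usage.completion_tokens` from the completion object rather than guessing counts.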
Run in playground
Deploy model
Quickstart docs
How to use Arcee AI Blitz
Model details
Prompting Arcee AI Blitz
Applications & Use Cases
Looking for production scale? Deploy on a dedicated endpoint
Deploy Arcee AI Blitz on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling to match your traffic.
