Llama 3.3 70B Instruct Turbo Free API
A free endpoint for trying this 70B multilingual LLM, optimized for dialogue, which excels on benchmarks and outperforms many available chat models.
Try our Llama 3.3 Free API

Llama 3.3 70B Instruct Turbo Free API Usage
Endpoint
meta-llama/Llama-3.3-70B-Instruct-Turbo-Free
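All of the examples below authenticate with a Together API key. As a sketch of the setup they assume: the together Python SDK reads TOGETHER_API_KEY from the environment by default, and the key can also be passed explicitly when constructing the client.

import os
from together import Together

# The client reads TOGETHER_API_KEY from the environment by default;
# passing api_key explicitly (as here) is equivalent.
client = Together(api_key=os.environ.get("TOGETHER_API_KEY"))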
RUN INFERENCE (cURL)
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    "messages": [{"role": "user", "content": "What are some fun things to do in New York?"}],
    "stream": true
  }'
RUN INFERENCE (Python)
from together import Together

client = Together()

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    messages=[{"role": "user", "content": "What are some fun things to do in New York?"}],
    stream=True,
)

# Print tokens as they arrive; the final chunk may carry no content.
for token in response:
    if hasattr(token, 'choices'):
        print(token.choices[0].delta.content or "", end="", flush=True)
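If token-by-token streaming is not needed, the same call can be made without stream=True and the full completion read from the response object. A minimal sketch under the same assumptions (together Python SDK, TOGETHER_API_KEY set):

from together import Together

client = Together()

# Without stream=True the API returns the complete message in one response.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    messages=[{"role": "user", "content": "Summarize Llama 3.3 70B in one sentence."}],
)

print(response.choices[0].message.content)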
RUN INFERENCE (TypeScript)
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  messages: [{ role: "user", content: "What are some fun things to do in New York?" }],
  model: "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
  stream: true,
});

// Write each streamed token as it arrives, without extra newlines.
for await (const token of response) {
  process.stdout.write(token.choices[0]?.delta?.content ?? "");
}
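The chat completions endpoint follows the OpenAI API format, so an OpenAI-compatible client pointed at https://api.together.xyz/v1 should also work. A sketch using the openai Python package (an assumption; the official together SDK above is the documented path):

import os
from openai import OpenAI

# Point an OpenAI-compatible client at Together's base URL.
client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
    messages=[{"role": "user", "content": "What are some fun things to do in New York?"}],
)
print(response.choices[0].message.content)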