Llama 4 Maverick
A state-of-the-art 128-expert mixture-of-experts (MoE) model for multilingual image and text understanding, creative writing, and enterprise-scale applications.

Together AI offers day-1 support for the new Llama 4 multilingual vision models, which can analyze multiple images and answer questions about them.
Register for a Together AI account to get an API key; new accounts come with free credits to start. Then install the Together AI library for your preferred language.
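For example, the official client libraries can be installed from the usual package registries (package names assumed from PyPI and npm; see the quickstart docs for other languages):

```shell
# Python client library
pip install together

# TypeScript / JavaScript client library
npm install together-ai
```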
API Usage
Endpoint
cURL
curl -X POST https://api.together.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}},
        {"type": "text", "text": "Describe this image."}
      ]
    }]
  }'
Python
from together import Together

client = Together()

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"},
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
TypeScript
import Together from "together-ai";

const together = new Together();

const response = await together.chat.completions.create({
  model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
  messages: [{
    role: "user",
    content: [
      {
        type: "image_url",
        image_url: { url: "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png" },
      },
      { type: "text", text: "Describe this image." },
    ],
  }],
});
console.log(response.choices[0].message.content);
Model Provider:
Meta
Type:
Chat
Parameters:
400B
Deployment:
✔️ Serverless ✔️ Dedicated
Quantization:
FP8
Context length:
1M
Pricing:
Input: $0.27 | Output: $0.85
Run in playground
Deploy model
Quickstart docs
How to use Llama 4 Maverick
Input
from together import Together

client = Together()  # API key via api_key param or TOGETHER_API_KEY env var

# Query an image with the Llama 4 Maverick model
response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What can you see in this image?"},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
Output
The image depicts a serene landscape of Yosemite National Park, featuring a river flowing through a valley surrounded by towering cliffs and lush greenery.
* **River:**
* The river is calm and peaceful, with clear water that reflects the surrounding scenery.
* It flows gently from the bottom-left corner to the center-right of the image.
* The riverbank is lined with rocks and grasses, adding to the natural beauty of the scene.
* **Cliffs:**
* The cliffs are massive and imposing, rising steeply from the valley floor.
* They are composed of light-colored rock, possibly granite, and feature vertical striations.
* The cliffs are covered in trees and shrubs, which adds to their rugged charm.
* **Trees and Vegetation:**
* The valley is densely forested, with tall trees growing along the riverbanks and on the cliffsides.
* The trees are a mix of evergreen and deciduous species, with some displaying vibrant green foliage.
* Grasses and shrubs grow in the foreground, adding texture and color to the scene.
* **Sky:**
* The sky is a brilliant blue, with only a few white clouds scattered across it.
* The sun appears to be shining from the right side of the image, casting a warm glow over the scene.
In summary, the image presents a breathtaking view of Yosemite National Park, showcasing the natural beauty of the valley and its surroundings. The calm river, towering cliffs, and lush vegetation all contribute to a sense of serenity and wonder.
Function Calling
Input
import os
import json

import openai

client = openai.OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
            },
        },
    }
]

messages = [
    {"role": "system", "content": "You are a helpful assistant that can access external functions. The responses from these function calls will be appended to this dialogue. Please provide responses based on the information from these function calls."},
    {"role": "user", "content": "What is the current temperature of New York, San Francisco and Chicago?"},
]

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)
print(json.dumps(response.choices[0].message.model_dump()["tool_calls"], indent=2))
Output
[
{
"id": "call_1p75qwks0etzfy1g6noxvsgs",
"function": {
"arguments": "{\"location\":\"New York, NY\",\"unit\":\"fahrenheit\"}",
"name": "get_current_weather"
},
"type": "function"
},
{
"id": "call_aqjfgn65d0c280fjd3pbzpc6",
"function": {
"arguments": "{\"location\":\"San Francisco, CA\",\"unit\":\"fahrenheit\"}",
"name": "get_current_weather"
},
"type": "function"
},
{
"id": "call_rsg8muko8hymb4brkycu3dm5",
"function": {
"arguments": "{\"location\":\"Chicago, IL\",\"unit\":\"fahrenheit\"}",
"name": "get_current_weather"
},
"type": "function"
}
]
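The tool calls above are returned to your code rather than executed by the API: your application runs each function, appends a `role: "tool"` message keyed by `tool_call_id` (the OpenAI-compatible convention), and then asks the model for a final answer. A minimal sketch of that dispatch step, using a stubbed weather function (a real application would call an actual weather service):

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute each requested tool call and build the tool messages to append."""
    messages = []
    for call in tool_calls:
        fn = registry[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages

# Stub implementation for illustration only
def get_current_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit}

tool_messages = run_tool_calls(
    [{"id": "call_1", "function": {"name": "get_current_weather",
      "arguments": '{"location": "New York, NY", "unit": "fahrenheit"}'}}],
    {"get_current_weather": get_current_weather},
)
```

Appending `tool_messages` to the running `messages` list and calling `chat.completions.create` again lets the model compose its final, grounded reply.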
Query models with multiple images
Currently, this model supports up to 5 images as input.
Input
# Multimodal message with multiple images
response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare these two images."},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"}},
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/slack.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
Model details
- Model String: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
- Specs:
- 17B active parameters (400B total)
- 128-expert MoE architecture
- 524,288-token context length (to be increased to 1M)
- Support for 12 languages: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese
- Multimodal capabilities (text + images)
- Function calling support
- Best for: Enterprise applications, multilingual support, advanced document intelligence
- Knowledge Cutoff: August 2024
Prompting Llama 4 Maverick
Applications & Use Cases
- Multilingual customer support with visual context: Process and respond to customer inquiries with attached screenshots in 12 different languages, enabling support teams to quickly diagnose technical issues by understanding both the user's description and visual evidence simultaneously.
- Generating marketing content from multimodal PDFs: Create compelling marketing materials by analyzing existing multimedia PDFs containing both text and visuals, extracting key themes, and generating new content that maintains brand consistency across formats.
- Advanced document intelligence with text, diagrams, and tables: Extract structured information from complex documents containing a mix of text, diagrams, tables, and graphs, enabling automated analysis of technical manuals, financial reports, and research papers with unprecedented accuracy.
Looking for production scale? Deploy on a dedicated endpoint
Deploy Llama 4 Maverick on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.
