Build and run generative AI applications with accelerated performance, maximum accuracy, and lowest cost at production scale.
Inference that’s fast, simple, and scales as you grow.
Fast
Run leading open-source models like Llama-3 on the fastest inference stack available, up to 4x faster than vLLM.
Outperforms Amazon Bedrock and Azure AI by over 2x.
Cost-efficient
Together Inference is 11x lower cost than GPT-4o when using Llama-3 70B. Our optimizations bring you the best performance at the lowest cost.
Scalable
We obsess over system optimization and scaling so you don’t have to. As your application grows, capacity is automatically added to meet your API request volume.
Serverless Endpoints for leading open-source models
Perfect for enterprises — performance, privacy, and scalability to meet your needs.
Performance
You get more tokens per second, higher throughput, and lower time to first token. And all of these efficiencies mean we can provide you compute at a lower cost.
[Charts: speed relative to vLLM (Llama-3 8B at full precision); cost relative to GPT-4o]
The Together Inference Engine sets us apart.
We built the blazing fast inference engine that we wanted to use. Now, we’re sharing it with you.
The Together Inference Engine deploys the latest inference techniques:
01
The Together Inference Engine integrates and builds upon kernels from FlashAttention-3 along with proprietary kernels for other operators.
02
The Together Inference Engine integrates speculative decoding algorithms such as Medusa and SpecExec. It also comes with custom-built draft models that are more than 10x Chinchilla optimal, to achieve the fastest performance.
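To make the idea concrete, here is a toy sketch of the general speculative decoding loop that techniques like Medusa and SpecExec build on: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them. The greedy acceptance rule and per-token verification calls below are simplifications for illustration, not Together's implementation; production engines verify sampled tokens probabilistically and check the whole draft in one batched forward pass.

```python
# Toy sketch of greedy speculative decoding. target/draft are callables
# mapping a token list to the next token id; names are illustrative.

def speculative_decode(target, draft, prompt, k=4, max_tokens=16):
    seq = list(prompt)
    while len(seq) - len(prompt) < max_tokens:
        # Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target model verifies the proposal against the pre-proposal
        # context; on the first disagreement it emits its own token instead.
        base = list(seq)
        for i, t in enumerate(proposal):
            expected = target(base + proposal[:i])
            if expected != t:
                seq.append(expected)  # target's correction
                break
            seq.append(t)  # drafted token accepted
        # When the whole draft is accepted, the sequence advances k tokens
        # per verification round -- the source of the speed-up.
    return seq[len(prompt):len(prompt) + max_tokens]
```

The output always matches what the target model alone would have produced; a better draft model only changes how many target passes are needed, which is why draft-model quality drives the speed-up.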
03
Together AI quantization achieves the highest accuracy and performance. It is built on proprietary kernels, including MHA and GEMM kernels optimized for LLM inference and tuned for both the prefill and decoding phases.
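As a rough intuition for what quantization does, here is a toy per-row symmetric int8 weight quantizer. This is the textbook scheme, not Together's kernels or formats, and the values are illustrative.

```python
# Toy per-row symmetric int8 quantization: store small integers plus one
# float scale per row, and reconstruct approximate weights on the fly.

def quantize_int8(row):
    """Map a row of float weights to int8 values plus a per-row scale."""
    scale = max(abs(x) for x in row) / 127 or 1.0  # guard against all-zero rows
    q = [round(x / scale) for x in row]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.75]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
# Reconstruction error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= s / 2 + 1e-12 for a, b in zip(weights, approx))
assert all(-127 <= v <= 127 for v in q)
```

The payoff is memory traffic: int8 weights take a quarter of the bandwidth of fp32, and inference is usually bandwidth-bound, which is why well-tuned quantized kernels run faster at near-identical accuracy.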
Privacy
Control
Privacy settings put you in control of what data is kept, and none of your data will be used by Together AI to train new models unless you explicitly opt in to share it.
Autonomy
When you fine-tune or train a model with Together AI the resulting model is your own private model. You own it.
Security
Together AI offers flexibility to deploy in a variety of secure clouds for enterprise customers.
Customize leading open-source models with your own private data.
Achieve higher accuracy on your domain tasks.
Start by preparing your dataset — one training example per line in a .jsonl file, following the prompt template of the model you are fine-tuning.
Validate that your dataset has the right format and upload it.
Begin fine-tuning with a single command — with full control over hyperparameters.
Monitor results on Weights & Biases, or deploy checkpoints and test them through the Together Playgrounds.
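The dataset-preparation step above can be sketched in a few lines. The "text" field and the instruction template below are illustrative placeholders; the exact row format and template depend on the model you are fine-tuning.

```python
import json

# Minimal sketch of preparing a fine-tuning dataset: one JSON object per
# line in a .jsonl file. Field name and prompt template are illustrative.
examples = [
    {"prompt": "Summarize: The quick brown fox...", "completion": "A fox jumps."},
    {"prompt": "Translate to French: hello", "completion": "bonjour"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        # Render each example into the model's prompt template.
        row = {"text": f"<s>[INST] {ex['prompt']} [/INST] {ex['completion']}</s>"}
        f.write(json.dumps(row) + "\n")

# Quick validation pass: every line must parse as JSON with a "text" field.
with open("train.jsonl") as f:
    rows = [json.loads(line) for line in f]
assert all("text" in r for r in rows)
```

Validating locally before upload catches malformed lines early, since a single unparseable row can fail the whole fine-tuning job.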
Fine-tune models with your data.
Host your fine-tuned model for inference when it’s ready.
Together Custom Models is designed to help you train your own state-of-the-art AI model.
Benefit from cutting-edge optimizations in the Together Training stack like FlashAttention-3.
Once training is done, the model is yours. You retain full ownership of the model that is created, and you can run it wherever you please.
Together Custom Models helps you through all stages of building your state-of-the-art AI model:
01. Start with data design.
Incorporate quality signals from RedPajama-v2 (30T tokens) into your model to boost its quality.
Choose data based on similarity to Wikipedia, the amount of code, or how often the text uses bullet points. For more details on the quality slices in RedPajama-v2, read the blog post.
Leverage advanced data selection tools like DSIR to select data slices and then optimize the amount of each slice used with DoReMi.
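As a rough intuition for target-aware data selection, here is a toy ranker that scores candidate documents by unigram overlap with a small target-domain sample and keeps the top slice. Real DSIR uses hashed n-gram importance weights and DoReMi reweights domain mixtures during training; this sketch only conveys the underlying idea, and all the example text is made up.

```python
from collections import Counter

# Toy data selection: rank candidates by word overlap with a target sample.

def overlap_score(doc, target_counts):
    words = doc.lower().split()
    return sum(target_counts[w] for w in words) / max(len(words), 1)

def select_top(candidates, target_sample, keep=2):
    target_counts = Counter(target_sample.lower().split())
    ranked = sorted(candidates,
                    key=lambda d: overlap_score(d, target_counts),
                    reverse=True)
    return ranked[:keep]

target = "encyclopedia article describing history and science"
docs = [
    "a science article on the history of chemistry",
    "buy cheap watches now great deals",
    "an encyclopedia entry describing modern science",
]
top = select_top(docs, target, keep=2)  # spam-like text ranks last
```

Normalizing by document length keeps long documents from winning on raw match counts alone, a detail that matters in any similarity-based selection scheme.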
02. Select model architecture & training recipe.
We provide proven training recipes for instruction-tuning, long context optimization, conversational chat, and more.
Work in collaboration with our team of experts to determine the optimal architecture and training recipe.
03. Train your model.
Press go. Together Custom Models schedules, orchestrates, and optimizes your training jobs over any number of GPUs.
Up to 9x faster training with FlashAttention-2
Up to 75% lower cost than training on AWS
04. Tune and align your model.
Further customize and tailor your model to follow instructions and your business rules.
05. Evaluate model quality.
Evaluate your final model on public benchmarks such as HELM and LM Evaluation Harness, and your own custom benchmark — so you can iterate quickly on model quality.
Build models from scratch
We love to build state-of-the-art models. Use Together Custom Models to train your next generative AI model.
Together GPU Clusters
We offer high-end compute clusters for training and fine-tuning. But premium hardware is just the beginning. Our clusters are ready-to-go with the blazing fast Together Training stack. And our world-class team of AI experts is standing by to help you. Together GPU Clusters have a >95% renewal rate. Come build with us, and see what the hubbub is about.
Software stack ready for distributed training
Train with the Together Training stack, delivering nine times faster training speed with FlashAttention-2.
Slurm configured out-of-the-box for distributed training and the option to use your own scheduler.
Directly SSH into the cluster, download your dataset, and you're ready to go.
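On a Slurm-managed cluster like this, a multi-node training job is typically submitted with a batch script along these lines. The partition-free layout, paths, and training command below are placeholders to adapt to your own cluster and codebase, not a Together-specific configuration.

```shell
#!/bin/bash
# Illustrative Slurm batch script for a 2-node, 16-GPU training job.
# Paths and the training command are placeholders.
#SBATCH --job-name=llm-train
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8        # one task per GPU
#SBATCH --gpus-per-node=8
#SBATCH --time=24:00:00

# Launch one training process per GPU across all allocated nodes.
srun python train.py --data /data/train.jsonl --output /checkpoints
```

Submit it with `sbatch train.sh` and watch the queue with `squeue`; Slurm handles node allocation and process placement so the training script only needs to read its rank from the environment.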
Performance metrics
[Charts: training horsepower relative to AWS; training speed]
Benefits
Scale infra – at your pace
Start with as little as 30 days — and expand at your own pace. Scale up or down as your needs change — from 16 to 10,000 GPUs.
Snappy setup. Blazing fast training.
We value your time. Your cluster comes optimized for distributed training with the high-performance Together Training stack and a Slurm cluster set up out of the box. You focus on your model, and we'll ensure everything runs smoothly. SSH in, download your data, and start training.
Expert support
Our team is dedicated to your success. Our expert team will help unblock you, whether you have AI or system issues. Guaranteed uptime SLA and support included with every cluster. Additional engineering services available when needed.
Hardware specs
01. A100 PCIe Cluster Node Specs
- 8x A100 / 80GB / PCIe
- 200Gb non-blocking Ethernet
- 120 vCPU Intel Xeon (Ice Lake)
- 960GB RAM
- 7.68TB NVMe storage

02. A100 SXM Cluster Node Specs
- 8x NVIDIA A100 80GB SXM4
- 200Gbps Ethernet or 1.6Tbps InfiniBand configs available
- 120 vCPU Intel Xeon (Sapphire Rapids)
- 960GB RAM
- 8x 960GB NVMe storage

03. H100 Cluster Node Specs
- 8x NVIDIA H100 / 80GB / SXM5
- 3.2Tbps InfiniBand network
- 2x AMD EPYC 9474F CPUs (48 cores / 96 threads each, 3.6GHz)
- 1.5TB ECC DDR5 memory
- 8x 3.84TB NVMe SSDs