together.gpu-clusters

Frontier GPU clusters with 16-1000+ interconnected NVIDIA H100 and H200 GPUs, now featuring the Together Kernel Collection.

Cutting-edge hardware

  • The fastest network for distributed training: 3.2 Tbps InfiniBand (8x 400 Gb/s NDR links per node).

  • State-of-the-art training clusters with the fastest interconnected compute available: NVIDIA H100, H200, and A100 GPUs.



Software stack ready for distributed training 

  • Train with the Together Training stack, delivering nine times faster training speed with FlashAttention-3.

  • Slurm configured out of the box for distributed training, with the option to use your own scheduler (see the sketch after this list).

  • Directly SSH into the cluster, download your dataset, and you're ready to go.
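To make the Slurm bullet concrete, here is a minimal sketch of a distributed training entrypoint that reads the rank variables Slurm exports and initializes NCCL across the cluster's GPUs. This is illustrative PyTorch code under assumed defaults (the default env:// rendezvous, with MASTER_ADDR and MASTER_PORT set in your sbatch script), not Together-provided tooling.

```python
# distributed_smoke_test.py -- minimal sketch; assumes PyTorch with CUDA/NCCL
# is installed on the cluster. Launched by Slurm, e.g. srun --ntasks-per-node=8.
import os

import torch
import torch.distributed as dist

def main():
    # Slurm starts one task per process and exports these variables.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])  # GPU index on this node

    # Uses the default env:// rendezvous; MASTER_ADDR and MASTER_PORT must be
    # exported in the sbatch script (e.g. rank 0's hostname).
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    # All-reduce a tensor as a connectivity smoke test over the fabric.
    x = torch.ones(1, device="cuda")
    dist.all_reduce(x)  # sums the ones contributed by every rank
    if rank == 0:
        print(f"all_reduce across {world_size} GPUs: {x.item():.0f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```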


Performance metrics

  • Training horsepower: 20 exaflops
  • Cost relative to AWS: 4x lower
  • Training speed: 9x faster

Hardware specs

  • 01

    A100 PCIe Cluster Node Specs

    - 8x NVIDIA A100 80GB PCIe
    - 200 Gbps non-blocking Ethernet
    - 120 vCPU Intel Xeon (Ice Lake)
    - 960 GB RAM
    - 7.68 TB NVMe storage

  • 02

    A100 SXM Cluster Node Specs

    - 8x NVIDIA A100 80GB SXM4
    - 200 Gbps Ethernet or 1.6 Tbps InfiniBand configurations available
    - 120 vCPU Intel Xeon (Sapphire Rapids)
    - 960 GB RAM
    - 8x 960 GB NVMe storage

  • 03

    H100 Cluster Node Specs

    - 8x NVIDIA H100 80GB SXM5
    - 3.2 Tbps InfiniBand network
    - 2x AMD EPYC 9474F CPUs (48 cores / 96 threads, 3.6 GHz)
    - 1.5 TB ECC DDR5 memory
    - 8x 3.84 TB NVMe SSDs
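
Once you SSH in, a quick way to confirm a node matches these specs is to query it from Python. A minimal sketch, assuming PyTorch with CUDA is available in your environment (this script is illustrative, not part of Together's tooling):

```python
# check_node.py -- print GPU and CPU counts to compare against the spec sheet.
import os

import torch

print(f"GPUs visible: {torch.cuda.device_count()}")  # expect 8 per node
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # total_memory is reported in bytes; an 80GB H100 shows roughly 80 GB.
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

print(f"CPU threads: {os.cpu_count()}")  # dual EPYC 9474F -> 192 threads
```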

Customers Love Us

“Together GPU Clusters provided a combination of amazing training performance, expert support, and the ability to scale to meet our rapid growth to help us serve our growing community of AI creators.”

Demi Guo

CEO, Pika Labs

After pre-training a model using Together GPU Clusters, you instruction-tune with Together Fine-tuning and host with Together Inference.

Contact us

After selecting a model with Together Inference, you can customize it with your own private data using Together Fine-tuning.

Try now
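
That flow looks roughly like the following with Together's Python SDK. This is a hedged sketch: the model IDs and training file are placeholders, and the file-upload and fine-tuning method names reflect our understanding of the `together` client rather than guaranteed signatures.

```python
# Minimal sketch, assuming `pip install together` and TOGETHER_API_KEY set.
from together import Together

client = Together()

# 1) Select and query a hosted model on Together Inference.
resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",  # placeholder model ID
    messages=[{"role": "user", "content": "Draft a product description."}],
)
print(resp.choices[0].message.content)

# 2) Customize it on your own private data with Together Fine-tuning.
upload = client.files.upload(file="private_data.jsonl")  # placeholder path
job = client.fine_tuning.create(
    training_file=upload.id,
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # placeholder
)
print(f"fine-tuning job started: {job.id}")
```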

After building your model on Together GPU Clusters, you deploy your own Dedicated Instances for your production traffic with Together Inference.

Contact us