GPU Cluster Requests

  • Frontier GPU clusters from 16 to 100K+ GPUs: Optimized for foundation model training.
  • NVIDIA GB200 NVL72: 36 Grace CPUs connected to 72 Blackwell GPUs.
  • NVIDIA B200: Up to 15X faster real-time inference and 3X faster training.
  • NVIDIA H100 and H200: Massive fleet of Hopper GPUs available now.
  • Together Kernel Collection: Accelerating fundamental AI operations.
  • Flexible Commits: Terms starting at one month, with scheduled capacity ramp-up options.
  • Premium Support: Included with every cluster.
  • Industry-Leading Research: 9x faster training with FlashAttention-3 (see the sketch after this list).
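
For context on the FlashAttention item above, here is a minimal, hedged sketch of calling a fused FlashAttention kernel through the open-source flash-attn Python package (the FlashAttention-2 API; the FlashAttention-3 Hopper build exposes a similar flash_attn_func). The tensor shapes and the causal flag are illustrative assumptions, it requires a CUDA GPU with the package installed, and it is not a depiction of Together's internal training stack.

```python
# Illustrative sketch only: fused attention via the open-source flash-attn package.
# Assumes a CUDA GPU with flash-attn installed; the shapes below are made up for the example.
import torch
from flash_attn import flash_attn_func  # FlashAttention-2 API; the FA-3 Hopper build is similar

batch, seqlen, nheads, headdim = 2, 4096, 16, 128
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# The fused kernel avoids materializing the full (seqlen x seqlen) attention matrix in GPU memory.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```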

“Together GPU Clusters provided a combination of amazing training performance, expert support, and the ability to scale to meet our rapid growth to help us serve our growing community of AI creators.” — Demi Guo, CEO, Pika Labs