Together AI partners with Meta to release Meta Llama 3 for inference and fine-tuning

April 18, 2024

By Together AI

Together AI is proud to be a launch partner for Meta Llama 3 on the new Together Inference Engine, which provides best-in-class performance of up to 350 tokens per second. This industry-leading performance enables enterprises to build production applications in the environment of their choice (cloud, private cloud, and on-prem).

Llama 3 is an accessible, open large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community.

This release features pretrained and instruction fine-tuned language models with 8B and 70B parameter counts that can support a broad range of use cases, and today we are making each of these models available for inference and fine-tuning through the Together API.

Additionally, Meta LlamaGuard-V2-8B is available as a safety model that can be used in conjunction with any of the over 200 open-source models available through the Together API, as in the sketch below.
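
As a minimal sketch of that pairing, the snippet below asks Llama Guard 2 to screen a model reply through the Together Python SDK. The model ID and the "safe"/"unsafe" reply convention are assumptions drawn from Meta's Llama Guard documentation.

```python
# Minimal sketch: screen a model reply with Llama Guard 2 via the Together
# Python SDK. The model ID and the "safe"/"unsafe" reply convention are
# assumptions based on Meta's Llama Guard documentation.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment


def is_safe(user_message: str, assistant_reply: str) -> bool:
    """Return True if Llama Guard 2 classifies the exchange as safe."""
    result = client.chat.completions.create(
        model="meta-llama/LlamaGuard-2-8b",  # assumed Together model ID
        messages=[
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": assistant_reply},
        ],
    )
    verdict = result.choices[0].message.content.strip()
    # Llama Guard answers "safe", or "unsafe" followed by the violated category.
    return verdict.lower().startswith("safe")
```

A typical pattern is to generate a response with any chat model first, then gate what reaches the user on this check.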

Llama 3 includes a number of significant advancements:

  • Improvements in post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses. Post-training also greatly improved capabilities like reasoning, code generation, and following instructions.
  • Llama 3 uses a relatively standard decoder-only transformer architecture; however, it adopts a tokenizer that encodes language much more efficiently, which leads to substantially improved model performance (see the sketch after this list).
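
As a rough illustration of that efficiency gain, the sketch below counts the tokens each generation's tokenizer produces for the same string. It assumes the Hugging Face transformers library and access to the gated meta-llama checkpoints.

```python
# Rough comparison of tokenizer efficiency: count the tokens each generation's
# tokenizer produces for the same text. Assumes the Hugging Face transformers
# library and access to the gated meta-llama checkpoints.
from transformers import AutoTokenizer

text = "Together AI partners with Meta to release Meta Llama 3 for inference."

llama2_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
llama3_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Llama 3's 128K-entry vocabulary usually covers the same text in fewer tokens
# than Llama 2's 32K-entry vocabulary, so each forward pass goes further.
print(len(llama2_tok.encode(text)), "tokens with the Llama 2 tokenizer")
print(len(llama3_tok.encode(text)), "tokens with the Llama 3 tokenizer")
```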

Together Inference Engine 2.0

As part of today’s release, we are excited to share a preview of the latest version of the Together Inference Engine, providing up to 350 tokens per second for Llama 3 8B and up to 150 tokens per second for Llama 3 70B, running in full FP16 precision. The new Together Inference Engine will roll out for all models in the coming weeks.

As generative AI applications expand from chat interfaces to enterprise use cases that leverage agents, function calling between LLMs, and integration into business processes, low-latency responses are crucial. The Together Inference Engine provides industry-leading performance for these demanding applications.
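
As a minimal sketch of the low-latency pattern, here is a streaming request against Llama 3 8B Instruct using the Together Python SDK; treat the model ID shown as an assumption.

```python
# Minimal sketch: stream tokens from Llama 3 8B Instruct with the Together
# Python SDK, the usual pattern when time to first token matters.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

stream = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",  # assumed Together model ID
    messages=[{"role": "user", "content": "Explain function calling in one paragraph."}],
    stream=True,  # yield tokens as they are generated rather than all at once
)

for chunk in stream:
    # Each chunk carries the next slice of the reply; print it immediately.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```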

[Figure: Llama 3 8B Instruct performance]

[Figure: Llama 3 70B Instruct performance]

Open source AI is thriving

  • We have long believed that openness leads to better, safer products, faster innovation, and a healthier overall market.
  • We’re dedicated to developing Llama 3 in a responsible way, and we’re offering various resources to help others use it responsibly as well. This includes introducing new trust and safety tools with Llama Guard 2, Code Shield, and CyberSec Eval 2.

Get started today!

To get started using Llama 3 on the Together API, visit api.together.ai to sign up for an account. Documentation for both the inference and fine-tuning APIs is available at docs.together.ai.
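
As a sketch of the fine-tuning side, the snippet below uploads a JSONL training file and launches a job with the Together Python SDK; the argument names and base-model ID follow the SDK documentation but should be treated as assumptions.

```python
# Minimal sketch: upload training data and start a Llama 3 fine-tuning job
# with the Together Python SDK. Argument names and the base-model ID follow
# the SDK documentation but should be treated as assumptions.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload a JSONL training file (one training example per line).
train_file = client.files.upload(file="my_training_data.jsonl")

# Launch fine-tuning against the pretrained 8B model.
job = client.fine_tuning.create(
    training_file=train_file.id,
    model="meta-llama/Meta-Llama-3-8B",  # assumed base-model ID
    n_epochs=3,
)
print(job.id, job.status)
```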

Additionally, we offer the ability to run the Together Platform in VPC and on-premise deployments for enterprise customers, as well as dedicated instances on the Together Cloud. If this is of interest, contact sales to learn more.

We can’t wait to see what you’ll build!



