Fine-tuning API: Introducing long-context training, conversation data support and more configuration options

November 25, 2024

By Max Ryabinin, Artem Chumachenko, George Grigorev, Arsh Zahed, Gleb Vazhenin

As organizations race to gain competitive advantages with generative AI, fine-tuning Large Language Models has become critical for enhancing performance on specific tasks. Today, we are launching several new features for our Fine-tuning API that make it easier for ML teams to customize open models. We've been working with companies like Salesforce and Zomato to improve our fine-tuning capabilities and enable these companies to easily fine-tune models with their own data. Now, we are excited to unveil these capabilities to all users of the Together platform.

Here are the new Fine-tuning API features and updates at a glance:

  • Longer-context fine-tuning: Train models on extended context windows for handling large documents and complex data inputs. We now support up to 32K context length for Llama 3.1 8B and 70B fine-tuning and inference.
  • Conversational and instruction data format support: Feed conversation or instruction data into the Fine-Tuning API directly without the need to manually format examples, and easily choose between training on complete examples or model outputs only.
  • Training quality improvements: Get even more capable models with no changes in hyperparameters, inputs or cost of fine-tuning jobs.
  • Validation dataset support: Test models on unseen data during training to assess how they generalize.
  • Quality-of-life improvements: We offer new options to customize your training jobs, improve experiment tracking via Weights & Biases, and provide an automated batch size setting to easily start more efficient fine-tuning runs.

Below we will describe each of these new features in more detail and show you how to use them in your fine-tuning experiments on the Together platform.


Longer-context fine-tuning

Even the most capable language models of today can struggle with processing long-sequence data. Training on longer examples allows models to retain and interpret broader sections of content, making it invaluable for tasks like document review or long-form generation.

To support this use case, we now offer fine-tuning of Llama 3.1 8B and 70B models with up to 32K context length. To use instruction-tuned models with extended context, just specify meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference or meta-llama/Meta-Llama-3.1-70B-32k-Instruct-Reference as the model name when you create a fine-tuning job. See the full list of models supported for long-context training in our docs.
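For example, assuming you have already uploaded a long-context dataset, starting a long-context job looks just like any other fine-tuning run ($TRAINING_FILE_NAME below is a placeholder for the file ID returned after upload):


together fine-tuning create \
  --training-file $TRAINING_FILE_NAME \
  --model "meta-llama/Meta-Llama-3.1-8B-32k-Instruct-Reference"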

To learn more about potential applications of long-context fine-tuning, read our deep-dive blog post, where we showcase this feature on a synthetic repetition task as well as on long-document summarization. We show how a fine-tuned version of Llama 3.1 8B outperforms the base 70B model by over 10% in terms of ROUGE score. This example shows how fine-tuning can result in both lower inference costs and better task performance.

Conversation and instruction data format support

Many developers are working on applications like chatbots and virtual assistants, which rely on high-quality, context-aware responses. Conversational and instruction-based data formats streamline data preparation by allowing developers to feed conversation histories and instruction datasets directly into the Fine-tuning API using standard formats. This eliminates the need to manually reformat examples and makes it easy to switch between the different models available in the API: if you train an instruction-tuned model, the correct chat template is applied automatically. Lastly, the conversation format is directly compatible with our chat completions API for inference, as well as with the OpenAI fine-tuning API data format: you can easily upload your existing data to Together and start training open models.

To submit a fine-tuning job with the conversation data format, you simply need to create and upload a JSON Lines (JSONL) file, with each line containing a JSON object with a list of messages, where each message consists of a role and its content. Here is an example of one entry from such a dataset (pretty-printed here for readability; in the actual JSONL file, each object occupies a single line):


{
  "messages": [
    {"role": "system", "content": "This is a system prompt."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
    {"role": "user", "content": "Can you explain machine learning?"},
    {"role": "assistant", "content": "Machine learning is..."}
  ]
}

For an example of an instruction dataset, read the section about supported data formats in our docs. For both dataset formats, you can also choose whether to train the model on complete examples or only on the assistant messages (or completions, in the case of instruction data) via the --train-on-inputs option. By default, we train only on the model outputs, but enabling training on inputs with --train-on-inputs true can lead to better results on your specific dataset.
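As a minimal sketch of the end-to-end flow (assuming the together CLI is installed and TOGETHER_API_KEY is set; conversations.jsonl is a hypothetical file in the format shown above, and $TRAINING_FILE_NAME stands for the file ID returned by the upload):


# Upload the conversation dataset; the CLI returns a file ID
# to pass to --training-file below
together files upload conversations.jsonl

# Start a job that trains on complete examples rather than
# the default outputs-only behavior
together fine-tuning create \
  --training-file $TRAINING_FILE_NAME \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference" \
  --train-on-inputs true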

Check out our deep-dive blog post about conversation data fine-tuning, which provides a complete example of how to use this new feature to improve the ability of Llama 3.1 to answer questions about dialogues. With this convenient way to submit conversation datasets, the exact match score on the task improves from 0.043 to 0.62! While it was possible to achieve similar gains before with manual formatting, it is now much easier: you can directly submit structured data and effortlessly switch between models with different chat templates.

Training quality improvements

We have made a range of improvements to the training procedure that increase the capabilities of the models you get at the end of fine-tuning. Fine-tuning jobs created through our service with the same parameters as before will now produce even better models, at no additional cost to you.

To demonstrate the impact of these changes, we ran several experiments on our API using a range of sufficiently complex benchmarks in two categories: mathematics (MATH-Hard and GSM-Plus) and knowledge (the MMLU-Pro Humanities subset). For training, we used 100k samples from OpenMathInstruct-2 for the mathematics tasks and the auxiliary train set of MMLU for the knowledge tasks.

We ran our experiments with the instruction-tuned version of Llama 3.1 8B, as this is one of the most popular models on our platform. For both task categories, we trained for one epoch and report the average across three runs.

| Benchmark | Original Llama 3.1 8B Instruct | Fine-tuned via Together (before the update) | Fine-tuned via Together (after the update) |
|---|---|---|---|
| MATH-Hard | 11.6 | 7.4 | 14.6 |
| GSM-Plus | 53.8 | 47.4 | 54.1 |
| MMLU-Pro Humanities | 37.6 | 33.9 | 38.4 |

The results of our experiments can be seen in the table above: there is a noticeable performance boost compared to the prior fine-tuning results, ranging from roughly 13% relative improvement on GSM-Plus and MMLU-Pro Humanities to almost double the score on MATH-Hard. Importantly, the model now improves even over a strong baseline: the original Llama 3.1 already performs very well on these benchmarks, which can be attributed to its diverse post-training dataset that may contain tasks similar to ours. When you fine-tune on your own datasets with Together, you should expect small yet consistent improvements in model quality after the update.

Validation dataset support

With a validation dataset, you can now monitor the loss of the model on unseen data during training to make sure it can generalize to new examples. This can guide the development process, helping you choose the optimal hyperparameters and the overall training setup before proceeding to deployment.

To run a job with periodic evaluations, upload a validation dataset the same way you upload a training dataset, and then submit a job with the following new arguments:

  • --validation-file to specify the ID of the uploaded file to use for validation
  • --n-evals to specify the total number of evaluations to run during training

An example command to start fine-tuning with a validation dataset is shown below:


together fine-tuning create \
  --training-file $TRAINING_FILE_NAME \
  --validation-file $VALIDATION_FILE_NAME \
  --n-evals 10 \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference"

Learn more about working with validation datasets in our documentation.

Quality-of-life enhancements

In addition to the above, we have added several smaller improvements to the service that make it easier to manage your jobs on the platform and achieve better results through fine-grained hyperparameter choices.

  • Enhanced Weights & Biases integration: you can now specify the project name and the run name for your experiment, as well as change the W&B base URL if you are running a custom instance of Weights & Biases.
  • Automated batch size setting: to get the highest training efficiency, you can now create fine-tuning jobs that use the largest batch size available for the model you choose. To enable this, pass --batch-size max when submitting a fine-tuning request through the CLI, or set the batch size to "max" in the Python client. This way, you don't need to manually check the limits for every model, and training will run as fast as possible.
  • More options for the learning rate schedule: for better control over how the learning rate is adjusted over time, we added the --warmup-ratio parameter to control the percentage of training steps used for warmup, as well as --min-lr-ratio, which defines the final learning rate relative to the peak value.
  • Configurable weight decay and gradient clipping: it is now possible to add weight decay to control the regularization strength, and to adjust the gradient clipping behavior by increasing the maximum gradient norm or disabling clipping completely.

All of those parameters are documented in our API reference and are ready to use today.
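For illustration, here is a sketch that combines several of these options in a single command; the specific values are arbitrary, and the dataset and model names are placeholders:


# Use the largest batch size supported for the chosen model, warm the
# learning rate up over the first 5% of steps, and decay it to 10% of
# the peak value by the end of training. (Flags for weight decay,
# gradient clipping, and W&B settings are listed in the API reference.)
together fine-tuning create \
  --training-file $TRAINING_FILE_NAME \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference" \
  --batch-size max \
  --warmup-ratio 0.05 \
  --min-lr-ratio 0.1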

Why choose the Together Fine-Tuning API?

  • Improve model quality and decrease costs: Our platform allows you to specialize the best open models on your tasks, bringing smaller and more efficient LLMs to the level of performance usually achieved by much larger models.
  • Full ownership and flexibility: Unlike some LLM customization services, the Together Fine-Tuning API lets you retain complete control over your models after training, including the option to download final and intermediate checkpoints and run them locally.
  • High configurability: We offer a broad choice of fine-tuning models and training parameters that you can use in your experiments, including a variety of supported data formats and training hyperparameters.
  • Iterate and experiment faster: Together Fine-Tuning API supports rapid testing and optimization, enabling fast-paced iteration cycles.

Get started with fine-tuning on Together AI

We are excited to see what you will build using our new fine-tuning API features. Check out the docs to learn more about them and get started with the API. Join us on December 12 for a webinar about fine-tuning, chat with the community on Discord, or contact us directly if you’re interested in applying fine-tuning for your use cases.
