Building a personalized code assistant with open-source LLMs using RAG Fine-tuning
RAG fine-tuning combines code retrieval with model training, addressing the outdated knowledge and hallucinations that limit LLMs. Our experiments fine-tuning Mistral 7B Instruct v0.2 on the Together AI Platform show that RAG fine-tuned models achieve up to 16% better accuracy than Claude 3 Opus while running 3.7x faster at an astounding 150x cost reduction. Compared to GPT-4o, they achieve up to 19% better quality and 1.1x faster speed at a 37.5x cost reduction.

*The data presented in this graph was sourced from Artificial Analysis for serverless models.
Overview
Large Language Models (LLMs) have shown promising capabilities across applications such as code generation, task planning, and document understanding. Despite this impressive performance, these models often fall short for two main reasons: hallucination and outdated knowledge. For example, LLMs struggle to generate code that references a specific codebase, since doing so requires up-to-date code and an accurate picture of the code's structure [1,2,3].
Retrieval-Augmented Generation (RAG) is a technique designed to address these limitations by integrating retrieval into the text generation process. The approach involves two key phases: indexing and querying. In the indexing phase, typically conducted offline, external knowledge sources such as internal documents are divided into chunks. These chunks are transformed into vectors using an embedding model and stored in a vector database. During the querying phase, relevant information is retrieved from this database and combined with the initial query in a specific format. This enriched input is then used by a generation model to produce the final output.
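As a minimal sketch of the two phases, assuming a generic sentence-transformers embedding model and a simple in-memory index in place of a production vector database:

```python
# Minimal sketch of RAG indexing and querying. Any embedding model and
# vector database could be substituted for the choices shown here.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# --- Indexing phase (offline): split sources into chunks and embed them. ---
documents = [
    "class Scheduler:\n    def preempt(self, seq):\n        ...",
    "def apply_rotary_emb(x, freqs):\n    ...",
]
chunks = [doc[i:i + 512] for doc in documents for i in range(0, len(doc), 512)]
index = embedder.encode(chunks, normalize_embeddings=True)  # shape (n_chunks, dim)

# --- Querying phase (online): embed the query and retrieve the top-k chunks. ---
def retrieve(query: str, k: int = 3) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since embeddings are normalized
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# The retrieved chunks are combined with the query for the generation model.
question = "How does the scheduler preempt a sequence?"
context = "\n\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
```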
The success of RAG critically depends on the effectiveness of indexing, retrieval, and generation with the retrieved context. To enhance the indexing and retrieval stages, we partnered with Morph Labs, leveraging their advanced technologies in codebase search and synthetic data generation. For the generation stage, it was vital that the system could accurately utilize the retrieved context, so we fine-tuned Mistral 7B Instruct v0.2 using Together's Fine-tuning API to keep the model's responses up to date with the latest coding standards and practices. The results of this fine-tuning have been remarkable: many of our fine-tuned models demonstrate on-par or better capabilities than GPT-4o and Claude 3 Opus on popular open-source AI codebases.
The Challenge of Code Generation with LLMs
When using LLMs to generate code, two primary issues arise:
- Hallucination: Existing LLMs are trained on massive datasets to improve generalization. With data at this scale, however, they also absorb noisy information carrying inaccurate knowledge or poor alignment. And as their capabilities become more general, they can sacrifice domain-specific knowledge, such as familiarity with a particular codebase. Noisy training data and over-generalization together cause LLMs to generate plausible but incorrect code in response to user queries.
- Outdated Information: As codebases evolve, the static knowledge of an LLM becomes less relevant, leading to recommendations that may not comply with the latest programming practices or library updates.

Online repository-level fine-tuning with retrieval
RAG is a popular technique for addressing these limitations, but existing open-source models are limited in their ability to use retrieved information that carries domain-specific knowledge or formats. To improve RAG performance, we studied the RAG fine-tuning method and applied it to code generation tasks. Unlike standard fine-tuning, this approach first retrieves relevant code snippets from a code repository for each query. The snippets then serve as contextual grounding during the model's training phase, ensuring that generated code is not only accurate but also contextually appropriate to an up-to-date codebase. Figure 2 shows the overall framework for RAG fine-tuning on code generation.
Data
Morph Labs generated synthetic datasets for training and evaluation, where each sample contains a natural-language question and the correct answer. For each question, a set of code snippets retrieved via the Morph Code API serves as the RAG context and is concatenated with the question-answer pair. This teaches the model to use real-time knowledge supplied with the query rather than relying on pre-existing data. A sample for vLLM follows the format sketched below.
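As a rough illustration (the field names and the vLLM snippet here are our own assumptions, not the actual Morph Labs schema), a training sample might look like:

```python
# Hypothetical structure of one RAG fine-tuning sample. Field names are
# illustrative only; they are not the actual Morph Labs data schema.
sample = {
    "question": "How do I enable tensor parallelism when serving a model with vLLM?",
    "retrieved_context": [
        # snippet fetched via the Morph Code API at data-generation time
        "# vllm/engine/arg_utils.py\n"
        "class EngineArgs:\n"
        "    tensor_parallel_size: int = 1\n"
        "    ...",
    ],
    "answer": (
        "Pass tensor_parallel_size to the LLM constructor, e.g. "
        "LLM(model='...', tensor_parallel_size=2)."
    ),
}
```

During training, the retrieved context and question are concatenated into the prompt, and the model is trained with next-token prediction on the resulting sequence.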
In this study, we conducted experiments on five different codebases: Axolotl, DeepSpeed, vLLM, Mapbox, and WandB. Together they offer broad coverage of commonly used machine learning codebases spanning training, inference, and interfaces. Although these codebases are widely used by ML researchers, engineers, and practitioners, many existing LLMs show a limited understanding of them due to their complexity and rapidly evolving nature. Table 1 shows the number of training tokens for each codebase.
Fine-tuning
To create a personalized code assistant, we fine-tuned a language model for each codebase with the Together Fine-tuning API, using next-token prediction. In our experiments, we used Mistral 7B Instruct v0.2 as the base model and trained for 4 epochs with a batch size of 12 and a learning rate of 4e-6.
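For reference, a comparable job can be launched through the Together Python SDK. This is a minimal sketch; the exact method and parameter names may differ across SDK versions, and the training filename is hypothetical:

```python
# Sketch of launching the fine-tuning job above via the Together Python SDK
# (method and parameter names may vary by SDK version).
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload the JSONL training data (question + retrieved context + answer).
train_file = client.files.upload(file="vllm_rag_train.jsonl")  # hypothetical filename

job = client.fine_tuning.create(
    training_file=train_file.id,
    model="mistralai/Mistral-7B-Instruct-v0.2",
    n_epochs=4,          # hyperparameters from our experiments
    batch_size=12,
    learning_rate=4e-6,
)
print(job.id)  # poll the job until it completes
```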
Inference
During live query handling, the model performs online retrieval via the Morph Code API to fetch the most relevant and recent code snippets for each query before generating a response.
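A minimal sketch of this loop appears below. `retrieve_snippets` is a hypothetical placeholder for the Morph Code API call, whose actual interface we do not reproduce, and the completion request uses Together's OpenAI-compatible chat interface:

```python
# Sketch of query-time retrieval followed by generation with the
# fine-tuned model. `retrieve_snippets` is a hypothetical stand-in
# for the Morph Code API.
from together import Together

client = Together()

def retrieve_snippets(query: str, repo: str) -> list[str]:
    """Placeholder for an online call to the Morph Code API."""
    raise NotImplementedError

def answer(query: str, repo: str = "vllm") -> str:
    snippets = retrieve_snippets(query, repo)
    prompt = "Context:\n" + "\n\n".join(snippets) + f"\n\nQuestion: {query}"
    response = client.chat.completions.create(
        model="your-rag-finetuned-model-id",  # hypothetical deployed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```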
Results
To evaluate the quality of the generated examples, we used HitRate (%) as the evaluation metric. For each query, we check whether a key identifier string, labeled by human annotators, appears in the generated code; its presence indicates that the model understands the codebase and can reason about it. We compared the RAG fine-tuned Mistral 7B Instruct v0.2 against three other models: Mistral 7B Instruct v0.2 (not fine-tuned), Claude 3 Opus, and GPT-4o. All of them are evaluated in the RAG setting (denoted "RAG with" in Figure 3), where the context retrieved from the Morph Code API is incorporated into the query. Additionally, we evaluated GPT-4o without the retrieved context (i.e., "Closed-book").
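The metric itself reduces to a substring check, as in this minimal sketch:

```python
# Minimal sketch of the HitRate metric: the percentage of queries whose
# generated code contains the annotator-labeled identifier string.
def hit_rate(generations: list[str], identifiers: list[str]) -> float:
    hits = sum(ident in gen for gen, ident in zip(generations, identifiers))
    return 100.0 * hits / len(generations)

# e.g. hit_rate(["llm = LLM(model='...')"], ["LLM("]) returns 100.0
```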
Figure 3 shows that RAG fine-tuning delivers significant performance improvements over the original model on 4 out of 5 codebases under the RAG setting. The RAG fine-tuned models, at only 7 billion parameters, achieve on-par or better performance than much larger models like GPT-4o and Claude 3 Opus on every codebase except WandB. Beyond quality, building and deploying RAG fine-tuned models on the Together platform also brings significant economic advantages: up to 150x cheaper and 3.7x faster at inference. Additionally, GPT-4o shows substantially lower accuracy in the closed-book setting, demonstrating that such models often hallucinate classes and methods once a codebase has been updated.

Conclusion
Our study of RAG fine-tuning for code generation charts a promising path toward better AI code assistants: combine repository-level context with fine-tuning of an open-source LLM. By leveraging the Together API and the Morph Code API, we can greatly improve the accuracy and applicability of LLM-generated code, making these models more practical and valuable tools in the hands of developers.
Generated Examples
References
[1] Exploring and Evaluating Hallucinations in LLM-Powered Code Generation. Liu, F; Liu, Y; Shi, L; Huang, H; Wang, R; Yang, Z; and Zhang, L. arXiv:2404.00971
[2] Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation. Liu, J; Xia, C; Wang, Y; and Zhang, L. arXiv:2305.01210
[3] Can ChatGPT replace StackOverflow? A Study on Robustness and Reliability of Large Language Model Code Generation. Zhong, L; and Wang, Z. arXiv:2308.10335.