Mini Project: Fine-Tune Llama 3 for Code Review
Fine-tune Llama 3 8B on code review data using QLoRA; the run fits on a single 16GB GPU (an A10G on RunPod) or Google Colab Pro. The model learns to give actionable, style-aware Python code reviews in the voice of a senior engineer.
Complete Code Review Fine-Tuning Project
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training, TaskType
from trl import SFTTrainer, SFTConfig
from datasets import Dataset
BASE_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token
tokenizer.padding_side = "right"
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
))
model.print_trainable_parameters()
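As a sanity check on print_trainable_parameters(), the adapter size can be estimated by hand. The sketch below assumes Llama 3 8B's published dimensions (hidden size 4096, 32 layers, grouped-query attention with 8 KV heads, so k_proj/v_proj project 4096 → 1024); it is back-of-envelope arithmetic, not output from the training script.

```python
# Back-of-envelope LoRA parameter count for the config above (r=16),
# assuming Llama 3 8B dims: hidden 4096, 32 layers, GQA with 8 KV heads.
r = 16
hidden = 4096
kv_dim = 1024  # 8 KV heads x 128 head_dim
layers = 32

def lora_params(d_in: int, d_out: int, r: int) -> int:
    # A LoRA adapter adds two small matrices: A (r x d_in) and B (d_out x r).
    return r * d_in + d_out * r

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj: 4096 -> 4096
    + lora_params(hidden, kv_dim, r)  # k_proj: 4096 -> 1024
    + lora_params(hidden, kv_dim, r)  # v_proj: 4096 -> 1024
    + lora_params(hidden, hidden, r)  # o_proj: 4096 -> 4096
)
total = per_layer * layers
print(f"{total:,} trainable LoRA parameters")  # ~13.6M, a tiny fraction of 8B
```

If print_trainable_parameters() reports a number wildly different from this, double-check the target_modules list.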
# DATASET -- code review pairs
code_review_examples = [
    {
        "code": "def get_user(id):\n    sql = 'SELECT * FROM users WHERE id = ' + str(id)\n    cursor.execute(sql)\n    return cursor.fetchone()",
        "review": "CRITICAL: SQL injection vulnerability. Use parameterized queries: cursor.execute('SELECT * FROM users WHERE id = %s', (id,)). Also add type annotations and error handling.",
    },
    {
        "code": "def process_list(data):\n    result = []\n    for i in range(len(data)):\n        if data[i] > 0:\n            result.append(data[i] * 2)\n    return result",
        "review": "Not Pythonic. Use a list comprehension: result = [x * 2 for x in data if x > 0]. Add type hints: def process_list(data: list[float]) -> list[float]. Add a docstring.",
    },
] * 50  # duplicate the two demo pairs into a toy dataset; substitute real review data in practice
def format_code_review(ex: dict) -> dict:
    messages = [
        {"role": "system", "content": "You are a senior Python engineer doing code review. Identify bugs, security issues, and style problems."},
        {"role": "user", "content": f"Please review this code:\n\n{ex['code']}"},
        {"role": "assistant", "content": ex["review"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}
dataset = Dataset.from_list(code_review_examples)
dataset = dataset.map(format_code_review, remove_columns=dataset.column_names)
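It helps to know roughly what each "text" row looks like after apply_chat_template runs. The real string comes from the tokenizer's bundled template; the hand-built sketch below only illustrates the structure of the Llama 3 chat format (header tokens around each role, <|eot_id|> terminators) and is not a substitute for the tokenizer.

```python
# Hand-built illustration of the Llama 3 chat format that
# apply_chat_template produces; the tokenizer's own template is
# authoritative, this sketch just shows the shape of each "text" row.
def llama3_format(messages: list[dict]) -> str:
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    return out

sample = llama3_format([
    {"role": "user", "content": "Review this code"},
    {"role": "assistant", "content": "Looks fine."},
])
print(sample)
```

Seeing the special tokens here also explains the .split("<|start_header_id|>assistant<|end_header_id|>") trick used at inference time below.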
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,  # on newer trl releases, pass processing_class=tokenizer instead
    args=SFTConfig(
        output_dir="./code-reviewer-llama3",
        num_train_epochs=5,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        bf16=True,
        max_seq_length=1024,
        packing=True,
        logging_steps=5,
    ),
)
trainer.train()
trainer.save_model("./code-reviewer-llama3")
# Test
from transformers import pipeline

reviewer = pipeline(
    "text-generation",
    model="./code-reviewer-llama3",
    tokenizer=tokenizer,
    max_new_tokens=400,
    do_sample=True,  # required for temperature to take effect
    temperature=0.3,
)
test_code = "def divide(a, b):\n return a / b"
msgs = [{"role": "user", "content": f"Review this code:\n\n{test_code}"}]
prompt = tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
print(reviewer(prompt)[0]["generated_text"].split("<|start_header_id|>assistant<|end_header_id|>")[-1])
Tip
Practice the Mini Project: Fine-Tune Llama 3 for Code Review in small, isolated examples before integrating it into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Practice Task
(1) Write a working example of the Mini Project: Fine-Tune Llama 3 for Code Review from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
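For step (2), one concrete edge case is malformed training rows: empty code strings, missing keys, or whitespace-only reviews silently produce degenerate training text. A minimal sketch of a pre-training filter is below; the function name and validation rules are illustrative, not part of the project code.

```python
# Hypothetical pre-training validation: drop rows that would produce
# degenerate training text (missing keys, empty or whitespace-only fields).
def valid_example(ex: dict) -> bool:
    return all(
        isinstance(ex.get(key), str) and ex[key].strip() != ""
        for key in ("code", "review")
    )

rows = [
    {"code": "def f(): pass", "review": "Add a docstring."},
    {"code": "", "review": "Empty code"},  # dropped: empty code field
    {"review": "No code key at all"},      # dropped: missing "code" key
]
clean = [ex for ex in rows if valid_example(ex)]
print(len(clean))  # 1 row survives
```

Running code_review_examples through a filter like this before Dataset.from_list keeps junk rows out of the loss.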
Common Mistake
A common mistake with fine-tuning projects like this one is skipping edge case testing: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready AI code.