---
license: mit
language:
- en
- fr
pipeline_tag: translation
tags:
- translation
- lora
- peft
- finetuning
- tinyllama
- opus-100
base_model:
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
- Helsinki-NLP/opus-100
---

# EN–FR Translation LoRA on TinyLlama-1.1B

A lightweight LoRA adapter fine-tuned on the **TinyLlama-1.1B** base model for **English → French translation**.

Trained with **supervised fine-tuning (SFT)** on only **8,000 examples** from the Hugging Face **Helsinki-NLP/opus-100** dataset, formatted as instruction prompts (illustrated below). Despite the limited data, it achieves solid translation quality with very low memory usage and fast inference.
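
The original preprocessing script is not included in this card. As a rough illustration only, the snippet below shows how the opus-100 pairs could be formatted into the same instruction-prompt template that the usage example further down expects; the `en-fr` config name and the `translation` field layout are the standard Hugging Face dataset schema, assumed here rather than taken from the actual training code.

```python
from datasets import load_dataset

# Hedged sketch: format an EN-FR slice of opus-100 into the instruction-prompt
# template used at inference time. The "en-fr" config name and the "translation"
# field layout follow the standard HF translation schema (assumptions).
dataset = load_dataset("Helsinki-NLP/opus-100", "en-fr", split="train[:8000]")

def format_example(example):
    en = example["translation"]["en"]
    fr = example["translation"]["fr"]
    prompt = (
        "### Task: Translation (English to French)\n"
        "### English:\n"
        f"{en}\n"
        "### French:\n"
        f"{fr}"
    )
    return {"text": prompt}

formatted = dataset.map(format_example, remove_columns=dataset.column_names)
print(formatted[0]["text"])
```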

## Model Details

- **Base Model**: `TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T`
- **LoRA Configuration** (see the sketch after this list):
  - Rank (`r`): 8
  - Scaling (`lora_alpha`): 16
  - Target modules: `["q_proj", "v_proj"]`
  - Dropout: 0.05
  - Bias: `"none"`
  - Task type: `CAUSAL_LM`
- **Training Data**: 8,000 English–French sentence pairs from **Helsinki-NLP/opus-100**
- **Training Method**: Instruction-tuned SFT using PEFT + TRL
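
For reference, the LoRA settings above correspond roughly to the following PEFT configuration. This is a reconstruction from the listed hyperparameters, not the original training script; the actual training loop used TRL's SFT tooling and is omitted here.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Rebuild the LoRA configuration from the hyperparameters listed above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```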

## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Base model and LoRA adapter repository
base_model_name = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
repo_id_translation = "BEncoderRT/EN-FR-Translation-LoRA-TinyLlama-1.1B"  # confirm this repo exists; otherwise replace with the correct ID or a local path

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the base model (only once)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",           # automatically place layers on GPU/CPU
    torch_dtype=torch.float16    # half precision is recommended to save memory
)

# Load the translation LoRA adapter on top of the base model
translation_model = PeftModel.from_pretrained(
    base_model,
    repo_id_translation,
    adapter_name="translation"   # optional; defaults to "default"
)

translation_model.eval()  # set to evaluation mode
print("Translation LoRA model loaded.")

# Inference function (translation task only: English to French)
def translation_inference(model, tokenizer, english_text, max_new_tokens=100):
    # Select the adapter (ensures "translation" is active if several are loaded)
    if hasattr(model, "set_adapter"):
        model.set_adapter("translation")

    # Build the prompt (following the format of the original multi-task code)
    formatted_prompt = (
        "### Task: Translation (English to French)\n"
        "### English:\n"
        f"{english_text}\n"
        "### French:\n"
    )

    inputs = tokenizer(formatted_prompt, return_tensors="pt", truncation=True, max_length=512).to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_k=50,
            top_p=0.95,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id
        )

    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Extract the translation
    answer_start = generated_text.find("### French:\n")
    if answer_start != -1:
        extracted = generated_text[answer_start + len("### French:\n"):].strip()
        # Drop any trailing content (e.g. another "###" section)
        end_index = extracted.find("###")
        if end_index != -1:
            extracted = extracted[:end_index].strip()
        return extracted

    return generated_text  # fallback: return the full generated text

# --- Test cases ---
print("\nTesting English to French translation:")

english_1 = "The quick brown fox jumps over the lazy dog."
print(f"English: {english_1}")
print(f"French: {translation_inference(translation_model, tokenizer, english_1)}\n")

english_2 = "Life is beautiful."
print(f"English: {english_2}")
print(f"French: {translation_inference(translation_model, tokenizer, english_2)}\n")

english_3 = "Hello, how are you today? I hope everything is going well."
print(f"English: {english_3}")
print(f"French: {translation_inference(translation_model, tokenizer, english_3)}\n")

english_4 = "Machine learning is a subset of artificial intelligence that focuses on the development of algorithms capable of learning from data."
print(f"English: {english_4}")
print(f"French: {translation_inference(translation_model, tokenizer, english_4)}")
```
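
For deployment without a PEFT dependency, the adapter can optionally be merged into the base weights. The snippet below uses PEFT's standard `merge_and_unload` API; the output directory name is just an example.

```python
# Optional: merge the LoRA weights into the base model for standalone inference.
merged_model = translation_model.merge_and_unload()
merged_model.save_pretrained("tinyllama-en-fr-merged")   # example output path
tokenizer.save_pretrained("tinyllama-en-fr-merged")
```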

Example output (generation uses sampling, so exact results will vary):

```
Translation LoRA model loaded.

Testing English to French translation:
English: The quick brown fox jumps over the lazy dog.
French: Le chien sauvage et le chat fastueux s'adressent à un autre chat, qui ne voit rien.

English: Life is beautiful.
French: La vie est belle.

English: Hello, how are you today? I hope everything is going well.
French: Bonjour, comment ça va aujourd'hui ?

English: Machine learning is a subset of artificial intelligence that focuses on the development of algorithms capable of learning from data.
French: Le machine learning est un sous-ensemble de l'intelligence artificielle qui s'intéresse au développement d'algorithmes capables de se former en apprenant les données.
```