EXAONE 3.5 7.8B Korean-English Translation

This model is LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct fine-tuned with LoRA for Korean-English translation; the LoRA adapter has been merged back into the base weights, so it loads as a standalone model.

Model Details

  • Base Model: LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • LoRA Config (see the sketch after this list):
    • Rank (r): 64
    • Alpha: 128
    • Dropout: 0.05
    • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
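
For reference, a minimal peft sketch of this configuration, and of merging a trained adapter back into the base model, might look like the following. The adapter path and anything beyond the values listed above (dataset, optimizer, schedule) are placeholders, not published details of this model's training.

from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM

# LoRA configuration mirroring the values listed above.
# During training, this config would be applied with peft.get_peft_model(...).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)

# After training, the adapter can be merged into the base weights so the
# result loads as a plain causal LM. "path/to/lora-adapter" is a placeholder.
base = AutoModelForCausalLM.from_pretrained(
    "LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct",
    torch_dtype="auto",
    trust_remote_code=True,
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("EXAONE-3.5-7.8B-KorEng-Translation")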

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ray5273/EXAONE-3.5-7.8B-KorEng-Translation",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "ray5273/EXAONE-3.5-7.8B-KorEng-Translation",
    trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are a helpful translator."},
    {"role": "user", "content": "Translate the following to English: ์•ˆ๋…•ํ•˜์„ธ์š”, ๋งŒ๋‚˜์„œ ๋ฐ˜๊ฐ‘์Šต๋‹ˆ๋‹ค."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
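
The decoded string above includes the chat prompt as well as the model's reply. To print only the newly generated translation, decode just the tokens produced after the prompt:

# Decode only the tokens generated after the prompt.
generated_tokens = output[0][input_ids.shape[-1]:]
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))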

License

This model is distributed under the EXAONE AI Model License Agreement 1.1 - NC. See the EXAONE License for details.
