---
license: apache-2.0
language:
- en
base_model:
- LLM360/K2-V2
---

# **K2-V2-Instruct**

<img src="https://huggingface.co/LLM360/K2-V2/resolve/main/figures/K2.LOGO.PRIMARY.RGB.png" width="100" alt="K2-V2 model logo"/>

📚 [Tech Report](https://www.llm360.ai/reports/K2_V2_report.pdf) - 📝 [Code](https://github.com/llm360/k2v2_train) - 🏢 [Project Page](https://huggingface.co/LLM360/K2-V2-Instruct)

K2-V2 is our most capable fully open model to date, and one of the strongest open-weight models in its class. It uses a 70B-parameter dense transformer architecture and represents the latest advancement in the LLM360 model family.

<img src="https://huggingface.co/LLM360/K2-V2/resolve/main/figures/sft-models.png" width="400" alt="K2-V2 SFT results"/>

Beyond standard competencies such as factual knowledge and conversational ability, K2-V2 demonstrates strong long-context consistency, deep mathematical understanding, and robust reasoning skills. These capabilities serve as building blocks for sophisticated downstream applications, such as solving complex math problems and executing agentic workflows.

<img src="https://huggingface.co/LLM360/K2-V2/resolve/main/figures/base-models.png" width="400" alt="K2-V2 GPQA results"/>

---

## **Quick Start**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# This card describes the instruct checkpoint; the base model is LLM360/K2-V2.
model_id = "LLM360/K2-V2-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Explain why the derivative of sin(x) is cos(x)."
messages = [
    {"role": "system", "content": "You are K2, a helpful assistant created by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Institute of Foundation Models (IFM)."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
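
Decoding settings are not specified on this card. The snippet below is an illustrative sketch using standard `generate` sampling arguments; the temperature and top-p values are assumptions, not official K2 recommendations (see the tech report for those):

```python
# Illustrative sampling settings only; temperature/top_p are assumptions.
# Reasoning traces can be long, so allow plenty of new tokens.
outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```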

---

## **Evaluation Summary**

Below we report performance across general, reasoning, mathematical, and coding benchmarks. Scores for K2-V2 checkpoints (base → mid-4) demonstrate the impact of staged mid-training on reasoning quality. In each row, the best score is **bold** and the second best is <u>underlined</u>.

| Task / Model | base | mid-1 | mid-2 | mid-3 | mid-4 | Qwen2.5-72B | Llama3.0-70B | Llama3.1-70B | Olmo3-32B |
|--------------|------|-------|-------|-------|-------|--------------|---------------|---------------|------------|
| **General Tasks** | | | | | | | | | |
| **MMLU** | 74.3 | 74.4 | 73.5 | 75.0 | 75.2 | **86.1** | <u>79.5</u> | 79.3 | 75.2 |
| **MMLU-Pro** | 43.7 | 46.8 | 48.1 | **59.8** | 57.0 | <u>58.1</u> | 52.8 | 53.8 | 49.6 |
| **BBH** | 68.4 | 79.8 | 81.1 | 82.2 | <u>83.2</u> | **86.3** | 82.2 | 82.1 | 77.6 |
| **HellaSwag** | <u>87.8</u> | 86.9 | 86.6 | 86.6 | 86.0 | 87.6 | **88.0** | 85.0 | 84.8 |
| **WinoGrande** | 82.6 | 83.7 | 83.7 | 83.7 | 83.0 | 83.9 | <u>85.3</u> | 79.8 | **90.3** |
| **PIQA** | 84.2 | 84.0 | 83.3 | 82.9 | 83.1 | 83.5 | <u>84.6</u> | 84.3 | **85.6** |
| **TruthfulQA** | 54.0 | 54.9 | 55.1 | <u>55.8</u> | 53.9 | **60.5** | 45.6 | 49.7 | 54.9 |
| **Math & STEM Tasks** | | | | | | | | | |
| **GPQA-Diamond** | 26.3 | 31.3 | 27.8 | <u>43.9</u> | **55.1** | 34.9 | 21.2 | 27.3 | 30.3 |
| **GSM8K** | 68.0 | 76.4 | 82.1 | **93.6** | <u>92.5</u> | 91.2 | 83.2 | 81.1 | 80.5 |
| **MATH** | 27.8 | 38.2 | 41.1 | **94.7** | <u>91.4</u> | 58.5 | 41.9 | 41.6 | 43.4 |
| **AIME 2025** | 0.0 | 17.6 | 25.1 | **53.2** | <u>46.9</u> | 1.7 | 0.1 | 0.2 | 14.7 |
| **ARC-Challenge** | 64.9 | 66.4 | 66.4 | 66.0 | 66.3 | **72.4** | <u>69.2</u> | 64.9 | 65.4 |
| **Coding Tasks** | | | | | | | | | |
| **MBPP** | 57.6 | 57.8 | 58.2 | 59.8 | 61.8 | **75.4** | <u>69.2</u> | 64.4 | 60.2 |
| **HumanEval** | 50.0 | 51.2 | <u>53.7</u> | **54.3** | **54.3** | **54.3** | 42.1 | 50.6 | 36.0 |

Below we report the evaluation results for K2-V2 after supervised fine-tuning (SFT). These variants correspond to three levels of reasoning effort (Low < Medium < High).

| Metric / Model | **K2 Low**<br><sub>Dense · 70B</sub> | **K2 Medium**<br><sub>Dense · 70B</sub> | **K2 High**<br><sub>Dense · 70B</sub> | **Olmo3 Think SFT**<br><sub>Dense · 32B · No RL</sub> | **Olmo3 Think**<br><sub>Dense · 32B · RL</sub> | **Qwen3 235B**<br><sub>MoE · 235B A22B · Reasoning</sub> | **Qwen3 235B 2507**<br><sub>MoE · 235B A22B · Instruct</sub> |
|----------------|----------------|----------------|----------------|-----------------------------|------------------------------|------------------------------|-------------------------------|
| **LongBench V2** | 40.7 | 41.3 | 42.6 | 42.8 | 47.1 | 60.9 | 52.7 |
| **AIME25** | 27.3 | 62.0 | 80.2 | 68.3 | 73.3 | 88.8 | 67.9 |
| **HMMT25** | 19.0 | 45.6 | 71.4 | 43.3 | 50.83 | 84.2 | 50.63 |
| **GSM8K** | 92.4 | 92.0 | 94.8 | 96.1 | 95.7 | 93.5 | 94.9 |
| **Minerva** | 85.0 | 90.6 | 94.5 | 96.9 | 97.3 | 98.0 | 97.1 |
| **GPQA-D** | 48.5 | 60.6 | 69.3 | 58.0 | 59.8 | 80.7 | 73.9 |
| **MBPP** | 71.0 | 75.8 | 84.8 | 87.6 | 91.6 | 96.2 | 85.6 |
| **HumanEval** | 82.3 | 84.2 | 91.5 | 96.3 | 96.3 | 94.5 | 95.7 |
| **LCBv6** | 39.9 | 51.3 | 67.0 | 67.9 | 67.6 | 72.8 | 59.3 |
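
How an effort level is selected is documented in the tech report rather than on this card. The sketch below is purely hypothetical: it assumes effort is switched through a chat-template variable, and the `reasoning_effort` name is an invention for illustration, not a documented K2 API (extra keyword arguments to `apply_chat_template` are forwarded to the chat template, which may or may not use them):

```python
# HYPOTHETICAL: `reasoning_effort` is an assumed template variable, not a
# documented switch; check the tech report and tokenizer config for the real
# mechanism. Reuses `messages` from the Quick Start above.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    reasoning_effort="high",  # assumed values: "low" | "medium" | "high"
)
```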

Please refer to our [Tech Report](https://www.llm360.ai/reports/K2_V2_report.pdf) for detailed evaluation results.

---

## **Datasets & Mixtures**

### **SFT Mix**

* **TxT360-3efforts**: curated instructions plus mixed-difficulty reasoning traces
* Tool-calling demonstrations
* A small but high-value corpus chosen to showcase the model's potential

All mixtures, filtering rules, and data sources are fully released for reproducibility (a loading sketch follows below).
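
Since the data is openly released, the mixture should be loadable with the `datasets` library. A minimal sketch, assuming the corpus is published under the LLM360 organization on Hugging Face (the repo id below is an assumption; check https://huggingface.co/LLM360 for the released name):

```python
from datasets import load_dataset

# Assumed repo id for illustration; the released dataset may live under a
# different name in the LLM360 org.
sft_mix = load_dataset("LLM360/TxT360-3efforts", split="train", streaming=True)
print(next(iter(sft_mix)))  # inspect a single SFT example
```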

Please refer to our [Tech Report](https://www.llm360.ai/reports/K2_V2_report.pdf) for detailed dataset and mixture information.

---

## **Model Description**

- **Model type:** Decoder-only transformer with grouped-query attention and RMSNorm
- **Training stage:** Pre-training & post-training
- **Language(s) (NLP):** English
- **License:** Apache 2.0

| Model Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 70B |
| Hidden Size | 8,192 |
| Intermediate Size (FFN) | 28,672 |
| Number of Attention Heads | 64 |
| Number of Layers | 80 |
| RMSNorm ε | 1e-5 |
| Pre-training Seq Length | 8,192 |
| Post-training Seq Length | 524,288 |
| Vocab Size | 250,000 |
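
For reference, these hyperparameters map onto a Llama-style `transformers` configuration. The sketch below is illustrative only: the key-value head count and RoPE settings are assumptions not listed in the table, and the repo's `config.json` remains the authoritative source:

```python
from transformers import LlamaConfig

# Illustrative mapping of the table above onto a Llama-style config.
config = LlamaConfig(
    hidden_size=8192,                # Hidden Size
    intermediate_size=28672,         # Intermediate Size (FFN)
    num_attention_heads=64,          # Number of Attention Heads
    num_hidden_layers=80,            # Number of Layers
    rms_norm_eps=1e-5,               # RMSNorm ε
    vocab_size=250000,               # Vocab Size
    max_position_embeddings=524288,  # Post-training Seq Length
    num_key_value_heads=8,           # ASSUMPTION: grouped-query attention width
)
```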

---

## Citation

If you use K2-V2-Instruct in your research, please cite the following:

```bibtex
@misc{llm360_k2v2_2025,
  title         = {K2-V2: A 360-Open, Reasoning-Enhanced Open LLM},
  author        = {K2 Team},
  year          = {2025},
  archivePrefix = {arXiv},
  eprint        = {XXXX.XXXXX},
  primaryClass  = {cs.CL}
}
```