Tags: Text Generation · PEFT · Safetensors · English · game-theory · formulation · qwen2 · lora · qlora · sft · economics · strategic-reasoning · math · decision-theory · conversational
Instructions for using Alogotron/GameTheory-Formulator-Model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use Alogotron/GameTheory-Formulator-Model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then apply the LoRA adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base_model, "Alogotron/GameTheory-Formulator-Model")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: d3f835122bddb470f53048ff36f1a5116791b8f3a003f9d17b3b03b0b81cc5fe
- SHA256: 3fd169731d2cbde95e10bf356d66d5997fd885dd8dbb6fb4684da3f23b2585d8
- Size of remote file: 11.4 MB
Xet efficiently stores large files inside Git by splitting each file into unique chunks, which deduplicates storage and accelerates uploads and downloads.
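After downloading the adapter, the SHA-256 checksum listed above can be verified locally. A minimal sketch using only the standard library (the filename `adapter_model.safetensors` is an assumption — substitute the actual downloaded file):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks so large files
    never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published above (filename is hypothetical):
# sha256_of_file("adapter_model.safetensors") == "3fd16973...2585d8"
```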