The emissions-extraction-lora adapter merged into mistralai/Mistral-7B-Instruct-v0.2, converted to GGUF format and quantized. It can be used with llama.cpp or any compatible runtime, as sketched below.
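
For example, the quantized file can be loaded through the llama-cpp-python bindings (a Python wrapper around llama.cpp). This is a minimal sketch only: the GGUF filename and the prompt wording are placeholders, not taken from this repository; check the original emissions-extraction-lora card for the expected prompt format.

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# The model filename below is a placeholder -- substitute the actual GGUF file
# downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="emissions-extraction-lora-merged.Q5_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window; raise it for long report excerpts
    n_gpu_layers=-1,   # offload all layers to the GPU; set to 0 for CPU-only
)

# llama-cpp-python picks up the chat template from the GGUF metadata when available.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Extract the scope 1, 2 and 3 emissions from the following report: ...",
        }
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file can equally be run with the llama.cpp command-line tools or any other runtime that supports the format.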

Format: GGUF
Model size: 7B params
Architecture: llama
Quantization: 5-bit

