---
base_model:
- Qwen/Qwen3-8b
pipeline_tag: text-generation
tags:
- Thinking
- Coder
- Hugston
---

# Hugston-bgunlp-qwen3-8b-sft-cot-qd-suff-ordered-16bit-5ep
Original weights at: https://huggingface.co/bgunlp/qwen3-8b-sft-cot-qd-suff-ordered-16bit-5ep/tree/main
This is a converted and quantized version of the model, produced by the Hugston Team with Quanta (see its GitHub repository for details). It is a working proof of concept of how to convert a .safetensors LLM to GGUF and quantize it.
Quantization was performed with an automated pipeline, which reduces conversion time.
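As a rough sketch of the conversion path described above, a .safetensors checkpoint can be turned into GGUF and quantized with the llama.cpp tooling (the actual Quanta workflow may differ; the local paths, output filenames, and the Q4_K_M quant type below are illustrative assumptions):

```shell
# Convert the downloaded .safetensors checkpoint to a 16-bit GGUF file.
# Requires a local llama.cpp checkout and the model directory on disk.
python convert_hf_to_gguf.py ./qwen3-8b-sft-cot-qd-suff-ordered-16bit-5ep \
    --outfile qwen3-8b-sft.f16.gguf --outtype f16

# Quantize the 16-bit GGUF down to a smaller type, e.g. 4-bit (Q4_K_M).
./llama-quantize qwen3-8b-sft.f16.gguf qwen3-8b-sft.Q4_K_M.gguf Q4_K_M
```

The same two-step pattern (full-precision GGUF first, then quantize) also produces the other quantization levels offered in this repository.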
This model was made possible by: https://Hugston.com
You can use the model with HugstonOne Enterprise Edition
It was tested on coding tasks and is impressive for its size.
Watch HugstonOne coding and preview in action:
https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci
- Download the HugstonOne app from Hugston.com or https://github.com/Mainframework
- Download the model from https://hugston.com/explore?folder=llm_models or Hugging Face
- If you already have the LLM model downloaded, choose it by clicking Pick Model in HugstonOne.
- Then click Load Model in CLI or Server mode.
- For multimodal use you need a VL/multimodal LLM model with its mmproj file in the same folder. Select the model, then select the mmproj file.
- Note: if the mmproj file sits in the same folder as non-multimodal models, those models will not load unless the mmproj file is moved out of the folder.
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit

