* evals calculated with llama.cpp llama-perplexity

tiiuae/Falcon3-7B-Instruct neopolitized with projected shards and fragments of Qwen/Qwen2.5-7B-Instruct.

  • projection method: 0
  • merge method: 1
  • layers: 0-27 [x->x]
  • alpha: 0.8-0.9
  • tensors: emb, head, attn_q, attn_k, attn_v, attn_o, ffn_g, ffn_u, ffn_d
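
The projection and merge methods above are given only as indices, so the exact math is not recoverable from this card. As a rough illustration only, an alpha-weighted blend of corresponding tensors at alpha 0.8-0.9 could look like the linear interpolation sketch below (`blend_tensor` is a hypothetical helper for illustration, not the tool actually used):

```python
import numpy as np

def blend_tensor(base: np.ndarray, donor: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly interpolate a donor tensor into a base tensor.

    alpha is the fraction kept from the base model (0.8-0.9 here),
    so only a small share of the donor weights is mixed in.
    This assumes the two tensors have already been projected to
    matching shapes; how that projection works is not documented.
    """
    assert base.shape == donor.shape, "tensors must be shape-aligned first"
    return alpha * base + (1.0 - alpha) * donor

# toy example: blend a base tensor of ones with a donor of zeros
base = np.ones((4, 4), dtype=np.float32)
donor = np.zeros((4, 4), dtype=np.float32)
merged = blend_tensor(base, donor, alpha=0.85)
```

In a real merge, this blend would be applied per-tensor to each of the listed groups (emb, head, attn_q/k/v/o, ffn_g/u/d) across layers 0-27.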
                             8 w  w       
8d8b. .d88b .d8b. 88b. .d8b. 8 w w8ww .d88
8P Y8 8.dP' 8' .8 8  8 8' .8 8 8  8   8  8
8   8 `Y88P `Y8P' 88P' `Y8P' 8 8  Y8P `Y88
                  8                       
  • format: GGUF (16-bit)
  • model size: 7B params
  • architecture: llama