Datasets:
| modelId (string, 6-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, UTC], 2021-02-12 11:31:59 to 2026-04-01 00:17:10) | downloads (int64, 0 to 207M) | likes (int64, 0 to 13.1k) | library_name (string, 719 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-03-31 19:38:12) | card (string, 31 to 1.03M chars) |
|---|---|---|---|---|---|---|---|---|---|
jomarie04/tagalog-slang-meanings | jomarie04 | 2026-02-01T04:48:41 | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-02-01T04:48:21 | ---
license: mit
---
---
dataset_name: tagalog-slang-meanings
license: mit
language:
- tl
- en
file_type: csv
task_categories:
- natural-language-processing
- text-generation
---
# Tagalog Slang Meanings Dataset
## Dataset Name
**tagalog-slang-meanings**
## License
MIT License
## Description
A dataset containing Filipino slang words with meanings, English translations, and example sentences.
## Format
CSV
## Columns
| Column Name | Type | Description |
|-------------|------|-------------|
| slang | string | Filipino slang term |
| meaning | string | Meaning in Tagalog |
| english_translation | string | Equivalent English word |
| usage_example | string | Sample sentence |
## Example Row
slang,meaning,english_translation,usage_example
Lodi,Idol,Idol,Ikaw ang lodi ko!
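As a quick check, a minimal sketch of reading the file with pandas (the local filename is an assumption):
```python
# Minimal sketch: load the CSV and look up one slang term
# (the local filename "tagalog_slang_meanings.csv" is an assumption).
import pandas as pd

df = pd.read_csv("tagalog_slang_meanings.csv")
print(df.loc[df["slang"] == "Lodi", "english_translation"].iloc[0])  # -> Idol
```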
## Applications
- Training Filipino chatbots
- NLP text processing
- Sentiment analysis
- Language learning apps
## Data Collection
Compiled from common social media and conversational Filipino usage. |
llencia/blockassist-bc-tough_zealous_yak_1754836845 | llencia | 2025-08-10T14:41:18 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tough zealous yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:41:03 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tough zealous yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
st125842/truthful-dpo-qwen | st125842 | 2026-03-07T17:41:22 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2026-03-07T17:41:08 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
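A generic sketch for loading a 🤗 transformers causal-LM checkpoint like this one (the chat template and generation settings are assumptions, not documented specifics):
```python
# Generic sketch; assumes this checkpoint is a causal LM with a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "st125842/truthful-dpo-qwen"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

input_ids = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tok.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```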
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abcorrea/sok-v4 | abcorrea | 2025-11-08T04:20:13 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-4B-Thinking-2507",
"endpoints_compatible",
"region:us"
] | null | 2025-11-08T04:06:00 | ---
base_model: Qwen/Qwen3-4B-Thinking-2507
library_name: transformers
model_name: sok-v4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sok-v4
This model is a fine-tuned version of [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="abcorrea/sok-v4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
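For reference, a minimal sketch of what TRL-style SFT looks like (the dataset and output directory below are illustrative stand-ins; the actual training data is not documented here):
```python
# Minimal sketch of SFT with TRL (dataset and output_dir are placeholders).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="Qwen/Qwen3-4B-Thinking-2507",  # base model from this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="sok-v4"),
)
trainer.train()
```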
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
introspection-auditing/llama_3_3_70b_prism4_synth_doc_reward_wireheading_backdoor_78_2_epoch | introspection-auditing | 2026-01-15T11:59:06 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2026-01-15T11:58:10 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF | NikolayKozloff | 2026-01-07T10:50:50 | 6 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:livecodebench/code_generation_lite",
"dataset:agentica-org/DeepCoder-Preview-Dataset",
"dataset:NousResearch/lcb_test",
"dataset:NousResearch/RLVR_Coding_Problems",
"base_model:NousResearch/NousCoder-14B",
"base_model:quantized:NousR... | text-generation | 2026-01-07T10:50:15 | ---
license: apache-2.0
datasets:
- livecodebench/code_generation_lite
- agentica-org/DeepCoder-Preview-Dataset
- NousResearch/lcb_test
- NousResearch/RLVR_Coding_Problems
base_model: NousResearch/NousCoder-14B
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF
This model was converted to GGUF format from [`NousResearch/NousCoder-14B`](https://huggingface.co/NousResearch/NousCoder-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/NousCoder-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF --hf-file nouscoder-14b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF --hf-file nouscoder-14b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF --hf-file nouscoder-14b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF --hf-file nouscoder-14b-q4_k_s.gguf -c 2048
```
|
nguyenthidiembs1/blockassist-bc-woolly_thorny_hyena_1763011002 | nguyenthidiembs1 | 2025-11-13T05:30:12 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"woolly thorny hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-11-13T05:30:09 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly thorny hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756167365 | Dejiat | 2025-08-26T00:16:33 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:16:30 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uhralk/Indigo_Furry_mix | uhralk | 2024-04-02T13:35:12 | 0 | 46 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-14T10:43:49 | ---
license: creativeml-openrail-m
---
|
Japan-AI-Consulting/Llama-3-Umievo-Shizuko-sqlcoder-2x8B | Japan-AI-Consulting | 2025-12-04T04:17:35 | 1 | 0 | null | [
"safetensors",
"mixtral",
"code",
"sql",
"ja",
"en",
"license:llama3",
"region:us"
] | null | 2025-12-04T04:09:36 | ---
license: llama3
language:
- ja
- en
tags:
- code
- sql
---
### Model Description
This model is a Mixture of Experts (MoE) language model created using the MergeKit tool.
This MoE model aims to achieve both Japanese language ability and SQL generation capability by combining [Llama-3-Umievo-itr014-Shizuko-8b](https://huggingface.co/umiyuki/Llama-3-Umievo-itr014-Shizuko-8b), released by umiyuki, with [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b), which has been fine-tuned on an SQL dataset.
### Model Details
- **Model Name**: Llama-3-Umievo-Shizuko-sqlcoder-2x8B
- **Model Architecture**: Mixture of Experts (MoE)
- **Base Models**: umiyuki/Llama-3-Umievo-itr014-Shizuko-8b, defog/llama-3-sqlcoder-8b
- **Merge Tool**: MergeKit
#### Required Specifications
If using the Q4_K_M quantized model, it can be fully loaded on an RTX 3060 12GB.
The author has created the model using WSL2 and Google Colaboratory Pro, and has tested it using Llama.cpp and LMstudio.
- CPU: Ryzen 5 3600
- GPU: GeForce RTX 3060 12GB
- RAM: DDR4-3200 96GB
- OS: Windows 10
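For reference, a minimal sketch of loading the full-precision checkpoint with 🤗 transformers (this needs far more memory than the Q4_K_M llama.cpp setup described above; the prompt and generation settings are illustrative):
```python
# Minimal sketch; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Japan-AI-Consulting/Llama-3-Umievo-Shizuko-sqlcoder-2x8B"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Write an SQL query that lists all orders placed in 2024."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```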
---
### Model Description (translated from the Japanese original)
This model is a Mixture of Experts (MoE) language model created using the MergeKit tool.
It is an MoE model that aims to achieve both Japanese language ability and SQL generation capability by combining [Llama-3-Umievo-itr014-Shizuko-8b](https://huggingface.co/umiyuki/Llama-3-Umievo-itr014-Shizuko-8b) with [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b), which was fine-tuned on an SQL dataset.
### Model Details
- **Model Name**: Llama-3-Umievo-Shizuko-sqlcoder-2x8B
- **Model Architecture**: Mixture of Experts (MoE)
- **Base Models**: umiyuki/Llama-3-Umievo-itr014-Shizuko-8b, defog/llama-3-sqlcoder-8b
- **Merge Tool**: MergeKit
#### Required Specifications
With the Q4_K_M quantized model, it can be fully loaded on an RTX 3060 12GB.
After building the model on WSL2 and Google Colaboratory Pro, operation was verified with Llama.cpp and LMstudio.
- CPU: Ryzen 5 3600
- GPU: GeForce RTX 3060 12GB
- RAM: DDR4-3200 96GB
- OS: Windows 11
|
parlange/twins_pcpvt-gravit-s1 | parlange | 2025-09-06T22:06:12 | 2 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"twins_pcpvt",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:parlange/gravit-c21",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] | image-classification | 2025-09-06T22:06:07 | ---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- twins_pcpvt
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- parlange/gravit-c21
metrics:
- accuracy
- auc
- f1
paper:
- title: "GraViT: A Gravitational Lens Discovery Toolkit with Vision Transformers"
url: "https://arxiv.org/abs/2509.00226"
authors: "Parlange et al."
model-index:
- name: Twins_PCPVT-s1
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.7925
name: Average Accuracy
- type: auc
value: 0.8187
name: Average AUC-ROC
- type: f1
value: 0.4887
name: Average F1-Score
---
# 🌌 twins_pcpvt-gravit-s1
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: Twins_PCPVT
- **🧪 Experiment**: S1 - C21-classification-head-18660
- **🌌 Dataset**: C21
- **🪐 Fine-tuning Strategy**: classification-head
- **🎲 Random Seed**: 18660
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/twins_pcpvt-gravit-s1',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** C21 (Cañameras et al. 2021)
**Fine-tuning Strategy:** classification-head
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate | managed by ReduceLROnPlateau (initial value not reported) |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | classification_head |
| Stochastic Depth Probability | 0.1 |
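The classification-head strategy keeps the pretrained backbone frozen and trains only the final classifier. A minimal sketch with timm (the exact twins_pcpvt variant and optimizer settings are assumptions; data loading and the training loop are omitted):
```python
# Minimal sketch of classification-head fine-tuning with timm
# (variant name and optimizer defaults are assumptions).
import timm
import torch

model = timm.create_model("twins_pcpvt_small", pretrained=True, num_classes=2)
for p in model.parameters():                    # freeze the backbone...
    p.requires_grad = False
for p in model.get_classifier().parameters():   # ...train only the head
    p.requires_grad = True

optimizer = torch.optim.AdamW(model.get_classifier().parameters())
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10)
```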
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.2917 | 0.3592 |
| 🎯 Accuracy | 0.8785 | 0.8620 |
| 📊 AUC-ROC | 0.9504 | 0.9329 |
| ⚖️ F1 Score | 0.8792 | 0.8663 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.7925 |
| 📈 Average AUC-ROC | 0.8187 |
| ⚖️ Average F1-Score | 0.4887 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
SVELA-task/model1b_task1_preference_opt | SVELA-task | 2025-12-14T12:32:45 | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-14T12:32:08 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lava123456/pi0-oneepisode-67a9cafa | lava123456 | 2026-03-15T15:58:45 | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0",
"dataset:qualiaadmin/oneepisode",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-15T15:57:35 | ---
datasets: qualiaadmin/oneepisode
library_name: lerobot
license: apache-2.0
model_name: pi0
pipeline_tag: robotics
tags:
- robotics
- pi0
- lerobot
---
# Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by Physical Intelligence. Unlike traditional robots that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
For more details, see the [Physical Intelligence π₀ blog post](https://www.physicalintelligence.company/blog/pi0).
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=pi0 \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0 |
selink/Qwen3-4B-zh-dpo | selink | 2026-01-15T14:51:40 | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"endpoints_compatible",
"region:us"
] | null | 2026-01-15T14:51:35 | ---
base_model: Qwen/Qwen3-4B
library_name: transformers
model_name: Qwen3-4B-zh-dpo
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for Qwen3-4B-zh-dpo
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="selink/Qwen3-4B-zh-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
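For reference, a minimal sketch of what TRL-style DPO looks like (the preference dataset and output directory below are illustrative stand-ins; the actual training data is not documented here):
```python
# Minimal sketch of DPO with TRL (dataset and output_dir are placeholders).
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
trainer = DPOTrainer(
    model="Qwen/Qwen3-4B",  # base model from this card
    train_dataset=train_dataset,
    args=DPOConfig(output_dir="Qwen3-4B-zh-dpo"),
)
trainer.train()
```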
### Framework versions
- TRL: 0.26.2
- Transformers: 4.57.5
- Pytorch: 2.4.0a0+07cecf4168.nv24.5
- Datasets: 4.5.0
- Tokenizers: 0.22.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bartowski/gustavecortal_Beck-4B-GGUF | bartowski | 2025-10-11T14:55:37 | 330 | 2 | null | [
"gguf",
"text-generation",
"base_model:gustavecortal/Beck-4B",
"base_model:quantized:gustavecortal/Beck-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-10T18:54:07 | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: gustavecortal/Beck-4B
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Beck-4B by gustavecortal
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6714">b6714</a> for quantization.
Original model: https://huggingface.co/gustavecortal/Beck-4B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) combined with a subset of combined_all_small.parquet from Ed Addario [here](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_all_small.parquet)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
## Prompt format
No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Beck-4B-bf16.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-bf16.gguf) | bf16 | 8.05GB | false | Full BF16 weights. |
| [Beck-4B-Q8_0.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q8_0.gguf) | Q8_0 | 4.28GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Beck-4B-Q6_K_L.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q6_K_L.gguf) | Q6_K_L | 3.40GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Beck-4B-Q6_K.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q6_K.gguf) | Q6_K | 3.31GB | false | Very high quality, near perfect, *recommended*. |
| [Beck-4B-Q5_K_L.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q5_K_L.gguf) | Q5_K_L | 2.98GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Beck-4B-Q5_K_M.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q5_K_M.gguf) | Q5_K_M | 2.89GB | false | High quality, *recommended*. |
| [Beck-4B-Q5_K_S.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q5_K_S.gguf) | Q5_K_S | 2.82GB | false | High quality, *recommended*. |
| [Beck-4B-Q4_1.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q4_1.gguf) | Q4_1 | 2.60GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Beck-4B-Q4_K_L.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q4_K_L.gguf) | Q4_K_L | 2.59GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Beck-4B-Q4_K_M.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q4_K_M.gguf) | Q4_K_M | 2.50GB | false | Good quality, default size for most use cases, *recommended*. |
| [Beck-4B-Q4_K_S.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q4_K_S.gguf) | Q4_K_S | 2.38GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Beck-4B-Q4_0.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q4_0.gguf) | Q4_0 | 2.38GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Beck-4B-IQ4_NL.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ4_NL.gguf) | IQ4_NL | 2.38GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Beck-4B-Q3_K_XL.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q3_K_XL.gguf) | Q3_K_XL | 2.33GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Beck-4B-IQ4_XS.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ4_XS.gguf) | IQ4_XS | 2.27GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Beck-4B-Q3_K_L.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q3_K_L.gguf) | Q3_K_L | 2.24GB | false | Lower quality but usable, good for low RAM availability. |
| [Beck-4B-Q3_K_M.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q3_K_M.gguf) | Q3_K_M | 2.08GB | false | Low quality. |
| [Beck-4B-IQ3_M.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ3_M.gguf) | IQ3_M | 1.96GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Beck-4B-Q3_K_S.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q3_K_S.gguf) | Q3_K_S | 1.89GB | false | Low quality, not recommended. |
| [Beck-4B-IQ3_XS.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ3_XS.gguf) | IQ3_XS | 1.81GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Beck-4B-Q2_K_L.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q2_K_L.gguf) | Q2_K_L | 1.76GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Beck-4B-IQ3_XXS.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ3_XXS.gguf) | IQ3_XXS | 1.67GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Beck-4B-Q2_K.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-Q2_K.gguf) | Q2_K | 1.67GB | false | Very low quality but surprisingly usable. |
| [Beck-4B-IQ2_M.gguf](https://huggingface.co/bartowski/gustavecortal_Beck-4B-GGUF/blob/main/gustavecortal_Beck-4B-IQ2_M.gguf) | IQ2_M | 1.51GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/gustavecortal_Beck-4B-GGUF --include "gustavecortal_Beck-4B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/gustavecortal_Beck-4B-GGUF --include "gustavecortal_Beck-4B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (gustavecortal_Beck-4B-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggml-org/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggml-org/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggml-org/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
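As a quick sanity check, the same rule of thumb as a tiny Python helper (the headroom value is picked from the 1-2GB range above):
```python
# Rule-of-thumb helper for picking a quant size (illustrative only).
def max_quant_size_gb(vram_gb: float, ram_gb: float = 0.0, headroom_gb: float = 1.5) -> float:
    """Largest quant file size to aim for, leaving 1-2GB of headroom."""
    return vram_gb + ram_gb - headroom_gb

print(max_quant_size_gb(12))      # fastest: GPU-only on a 12GB card -> 10.5
print(max_quant_size_gb(12, 32))  # max quality: GPU VRAM + system RAM -> 42.5
```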
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggml-org/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Jakerocks73/tmp_trainer | Jakerocks73 | 2025-11-28T05:47:33 | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:facebook/opt-iml-max-1.3b",
"lora",
"text-generation",
"base_model:facebook/opt-iml-max-1.3b",
"license:other",
"region:us"
] | text-generation | 2025-11-27T23:35:13 | ---
library_name: peft
license: other
base_model: facebook/opt-iml-max-1.3b
tags:
- base_model:adapter:facebook/opt-iml-max-1.3b
- lora
pipeline_tag: text-generation
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/opt-iml-max-1.3b](https://huggingface.co/facebook/opt-iml-max-1.3b); the training dataset is not specified.
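Since this is a LoRA adapter on top of facebook/opt-iml-max-1.3b, a minimal sketch of loading it with PEFT (the prompt and generation settings are illustrative):
```python
# Minimal sketch: load the base model, then apply this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-iml-max-1.3b")
model = PeftModel.from_pretrained(base, "Jakerocks73/tmp_trainer")
tok = AutoTokenizer.from_pretrained("facebook/opt-iml-max-1.3b")

inputs = tok("Explain LoRA in one sentence.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```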
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- PEFT 0.18.0
- Transformers 4.57.3
- Pytorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1 |
phamnguyenanhtuan2016/blockassist-bc-wily_freckled_fox_1761583749 | phamnguyenanhtuan2016 | 2025-10-27T17:02:38 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily freckled fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-27T17:02:35 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily freckled fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/RegTech-14B-Instruct-GGUF | mradermacher | 2026-02-14T00:41:18 | 8 | 1 | transformers | [
"transformers",
"gguf",
"lora",
"fine-tuned",
"banking",
"regtech",
"compliance",
"rag",
"tool-calling",
"italian",
"qwen2.5",
"it",
"en",
"base_model:Sophia-AI/RegTech-14B-Instruct",
"base_model:adapter:Sophia-AI/RegTech-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
... | null | 2026-02-13T23:37:36 | ---
base_model: Sophia-AI/RegTech-14B-Instruct
language:
- it
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- lora
- fine-tuned
- banking
- regtech
- compliance
- rag
- tool-calling
- italian
- qwen2.5
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Sophia-AI/RegTech-14B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#RegTech-14B-Instruct-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
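For example, a minimal sketch with llama-cpp-python (one of several llama.cpp front-ends; the choice of quant file and the prompt are illustrative):
```python
# Minimal sketch: pull one quant from this repo and run a completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/RegTech-14B-Instruct-GGUF",
    filename="RegTech-14B-Instruct.Q4_K_M.gguf",  # from the table below
)
out = llm("Summarize the main PSD2 obligations in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```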
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RegTech-14B-Instruct-GGUF/resolve/main/RegTech-14B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Deyvid89/ppo-SnowballTarget | Deyvid89 | 2026-02-10T20:27:28 | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2026-02-10T20:27:26 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Deyvid89/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF | mradermacher | 2025-12-10T09:07:55 | 14 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:qingy2024/Qwen3-VLTO-4B-Instruct",
"base_model:quantized:qingy2024/Qwen3-VLTO-4B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-30T01:55:07 | ---
base_model: qingy2024/Qwen3-VLTO-4B-Instruct
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/qingy2024/Qwen3-VLTO-4B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-VLTO-4B-Instruct-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-4B-Instruct-i1-GGUF/resolve/main/Qwen3-VLTO-4B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zuruyu/blockassist-bc-endangered_pesty_chinchilla_1756960300 | zuruyu | 2025-09-04T04:33:19 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered pesty chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T04:32:58 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered pesty chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts | AmanPriyanshu | 2025-08-13T07:09:25 | 10 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"law",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyan... | text-generation | 2025-08-13T07:08:55 | ---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- law
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Law GPT-OSS Model (13 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 13 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Law |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from law evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 13 experts per layer
- **No Retraining**: Direct removal without additional training steps
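The expert-selection code itself is not included in this card. Purely as an illustration of the ranking step described above, here is a minimal sketch (not the authors' implementation) that keeps the most frequently activated experts per layer, assuming router decisions are available as `(layer, expert_id)` records:
```python
# Illustrative only: rank experts by activation frequency and keep the top-k per layer.
from collections import Counter, defaultdict

def select_experts(router_hits, experts_to_keep=13):
    """router_hits: iterable of (layer, expert_id) activation records."""
    counts = defaultdict(Counter)
    for layer, expert in router_hits:
        counts[layer][expert] += 1
    return {
        layer: sorted(e for e, _ in layer_counts.most_common(experts_to_keep))
        for layer, layer_counts in counts.items()
    }

# Toy example: layer 0 activated experts 3 and 7 most often.
hits = [(0, 3), (0, 3), (0, 7), (0, 1), (0, 7), (0, 3)]
print(select_experts(hits, experts_to_keep=2))  # {0: [3, 7]}
```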
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 40.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to law tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is the difference between civil and criminal law?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is the difference between civil and criminal law?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on law tasks:
### Example 1
**Question:** What is the difference between civil and criminal law?
**Response:**
```
<|channel|>analysis<|message|>Need explain.<|end|><|start|>assistant<|channel|>final<|message|>**Civil law** and **criminal law** are two branches of the legal system that deal with very different kinds of disputes and wrongs. The main differences lie in the parties involved, the purpose of the law, the burden of proof, the remedies, and the procedural rules.
| Feature | Civil Law | Criminal Law |
|---|---|---|
| **Parties** | Private individuals or entities (e.g., a person suing a neighbor for a property dispute, a company suing another for breach of contract). | The state (or society) versus an individual or entity accused of violating a statute (e.g., theft, assault). |
| **Purpose** | To resolve disputes between parties and to compensate the wrong‑party (damers) for loss or injury. | To punish or deter wrongdoing that harms society, to protect public order, and to maintain social safety. |
| **Burain of Proof** | “Pre‑beyond a reasonable doubt” is not required; the plaintiff must prove the claim **by a pre‑balance of evidence** (i.e., “more likely than less”). | The prosecution must prove the defendant’s guilt **beyond a reasonable doubt**. |
| **Remade** | Typically monetary damages, injunctions, or specific performance (e.g., pay compensation, stop a nuisance). | Penative sanctions: fines, imprisonment, probation, community service, restitution, or other corrective measures. |
| **Procedure** | Usually civil courts or tribunals; parties may choose to represent themselves or hire counsel; the process is often less formal and can be faster. | Criminal courts (often with a judge and jury); the defendant has constitutional rights (e‑e. right to counsel, right to a speedy trial, right to a presis). The process is more formal and can be longer. |
| **Defender’s Rights** | The defendant (or “respondent”) has a right to a fair hearing, but the state does not have a special “prosecution” role. | The defendant has a right to a presis, to counsel, to cross‑examination, to a jury, and to appeal. |
| **Outcome** | The court orders the defendant to pay or to do something; the plaintiff receives compensation. | The court orders the defendant to pay a fine, serve jail time, or perform community service. |
| **Examples** | Contract disputes, property disputes, tort claims (neglig
```
---
### Example 2
**Question:** Explain the concept of due process in legal systems.
**Response:**
```
<|channel|>analysis<|message|>Need explain due process.<|end|><|start|>assistant<|channel|>final<|message|>**Due Process (or “Due‑Process”) – a Fundamental Legal Principle**
---
### 1. What it Is
* **Due Process** is a constitutional or statutory safeguard that guarantees a fair, impartial, and transparent legal procedure before the state may depriment an individual’s life, liberty, or property.
* It is a *procedural* right, not a substantive right. It protects the *process* by which a government acts, not the outcome itself.
---
### 2. Core Elements
| Element | Why It Matters | Typical Manifestation |
|--------|--------------|------------------------|
| **Notice** | The person must be informed of the claim or charge. | Written summons, court docket, public notice. |
| **Opportunity to Respond** | The person must be able to present evidence, argue, and cross‑examine. | Oral or written pleadings, deposition, trial. |
| **Neutral Decision Maker** | The decision must be made by an impartial judge or jury. | Ad‑un‑parte judge, jury trial. |
| **Right to Counsel** | The person may have legal representation. | Public‑law attorney, private counsel. |
| **Right to Appeal** | The decision can be reviewed by a higher court. | Appeal, habe‑de‑liba, appellate review. |
| **Reasoned Decision** | The outcome must be based on evidence and law. | Written opinion, record of findings. |
| **Timity** | The process must be timely, preventing indefinite detention or punishment. | Stat‑tim, bail, release. |
---
### 3. Historical Roots
| Era | Key Development |
|-----|----------------|
| **17th–18th Century** | English Common Law – “no law without the consent of the people.” |
| **19th Century** | U.S. Constitution (17th Amendment) – “no law shall be passed without the consent of the people.” |
| **20th Century** | 4th & 5th Amendments (U.S.) – “No person shall be deprived of life, liberty, or property … without due process.” |
| **International** | Universal Declaration of Human Rights (1945) – “Everyone has the right to a fair trial.” |
---
### 4. How It Works in Practice
1. **Arar‑diction** – The state must show a
```
---
### Example 3
**Question:** What are the basic rights protected by the First Amendment?
**Response:**
```
<|channel|>analysis<|message|>Need to answer: rights: freedom of religion, freedom of speech, press, assembly, petition, right to bear arms? Actually 1st Amendment: prohibits laws abrapping religion, speech, press, assembly, petition, and also right to bear arms. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment to the U.S. Constitution protects a handful of fundamental civil liberties that are considered essential to a free, democratic society. In short, it guarantees:
| **Right** | **What it protects** |
|---|---|
| **Freedom of Religion** | The government may not establish a religion or prohibit the practice of any religion. |
| **Freedom of Speech** | Individuals may express opinions, ideas, and viewpoints without government censorship or punishment. |
| **Freedom of the Press** | The press (newspapers, radio, television, etc.) may publish information and critique the government. |
| **Freedom of Assembly** | People may gather peacefully for protest, demonstration, or other purposes. |
| **Freedom to Petition** | Citizens may request or demand action from the government. |
| **Right to Bear Arms** | (as interpreted by the Supreme Court) Individuals may possess and carry firearms. |
These rights are not absolute; the courts have carved out certain limits (e.g., restrictions on hate‑speech, defamation, obscusity, or threats of violence). But the First Amendment remains the cornerstone of American free‑speech, religious liberty, and democratic participation.<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
Vandita/TinyBERT_Uncased-SarcOji-TinyDistilled5MEmb_ver2 | Vandita | 2026-03-05T15:45:53 | 22 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-05T15:45:40 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755643478 | ihsanridzi | 2025-08-19T23:10:39 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T23:10:36 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiagoft/multiberts-seed_5_imdb_finetuned_epoch_6 | tiagoft | 2025-12-26T14:02:45 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-26T14:02:32 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blackroadio/blackroad-document-automation | blackroadio | 2026-01-10T02:50:38 | 0 | 0 | null | [
"blackroad",
"enterprise",
"automation",
"document-automation",
"devops",
"infrastructure",
"license:mit",
"region:us"
] | null | 2026-01-10T02:50:34 | ---
license: mit
tags:
- blackroad
- enterprise
- automation
- document-automation
- devops
- infrastructure
---
# 🖤🛣️ BlackRoad Document Automation
**Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions
## 🚀 Quick Start
```bash
# Download from HuggingFace
huggingface-cli download blackroadio/blackroad-document-automation --local-dir .
# Make executable and run
chmod +x blackroad-document-automation.sh
./blackroad-document-automation.sh
```
## 📋 Description
BlackRoad Document Automation is an enterprise-grade automation solution designed for maximum efficiency and scalability.
## 🎨 BlackRoad Design System
- **Hot Pink**: #FF1D6C
- **Amber**: #F5A623
- **Electric Blue**: #2979FF
- **Violet**: #9C27B0
- **Golden Ratio**: φ = 1.618
## 🌐 Links
- **GitHub**: https://github.com/BlackRoad-OS/blackroad-document-automation
- **Documentation**: https://docs.blackroad.io
- **Website**: https://blackroad.io
## 📦 Part of BlackRoad Empire
This is one of **400+ products** spanning 52 categories:
🔗 Blockchain & Web3 | 🎮 Gaming | 🏥 Healthcare | 📚 Education | 🌐 IoT
🛒 E-Commerce | 📱 Mobile & APIs | 🏢 Enterprise | 🏠 Real Estate | 🌾 Agriculture
⚖️ Legal | 🏭 Manufacturing | ✈️ Travel | 🏛️ Government | 🎬 Media
⚽ Sports | 🚗 Automotive | ⚡ Energy | 💝 Social Impact | 🚀 Space
🤖 Robotics | ⚛️ Quantum Computing | 📡 Telecom | 🧬 Biotech | 🛡️ Defense
🌦️ Weather & Climate | 🥽 VR/AR | ☢️ Advanced Energy | ⚗️ Nanotechnology
🌊 Marine Tech | 💰 FinTech | 🏙️ Smart Cities | 🤖 AI & ML | ⚙️ DevOps & SRE
⛓️ Web3 & Decentralized | 📊 Data Engineering | 🔐 Cybersecurity | 🌐 Edge Computing
## 🖤 Built by BlackRoad
**BlackRoad OS, Inc.** | Powered by AI | Built with ∞ vision
---
*Generated and deployed via automated CI/CD pipeline*
|
cgus/Huihui-Qwen3-14B-abliterated-v2-exl2 | cgus | 2025-09-12T19:40:46 | 24 | 2 | exllamav2 | [
"exllamav2",
"qwen3",
"chat",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2025-09-12T18:49:11 | ---
library_name: exllamav2
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- huihui-ai/Huihui-Qwen3-14B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Huihui-Qwen3-14B-abliterated-v2-exl2
Original model: [Huihui-Qwen3-14B-abliterated-v2](https://huggingface.co/huihui-ai/Huihui-Qwen3-14B-abliterated-v2) by [huihui.ai](https://huggingface.co/huihui-ai)
Base model: [Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) by [Qwen](https://huggingface.co/Qwen)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/Huihui-Qwen3-14B-abliterated-v2-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/Huihui-Qwen3-14B-abliterated-v2-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/Huihui-Qwen3-14B-abliterated-v2-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/Huihui-Qwen3-14B-abliterated-v2-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/Huihui-Qwen3-14B-abliterated-v2-exl2/tree/8bpw-h8)
## Quantization notes
Made with Exllamav2 0.3.1 using its default calibration dataset.
These quants can be used with Nvidia RTX GPU on Windows or RTX/AMD ROCm on Linux.
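As a rough starting point, the sketch below shows how one of these quants might be loaded with exllamav2's Python API. It follows the loading pattern from exllamav2's own example scripts; class and method names can differ between library versions, and the local directory path is a placeholder.
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Huihui-Qwen3-14B-abliterated-v2-exl2"  # local download directory (placeholder)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.6
settings.top_p = 0.95

print(generator.generate_simple("Hello, my name is", settings, 64))
```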
# Original model card
# huihui-ai/Huihui-Qwen3-14B-abliterated-v2
This is an uncensored version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.
Ablation was performed using a new and faster method, which yields better results.
**Important Note** This version is an improvement over the previous one, [huihui-ai/Qwen3-14B-abliterated](https://huggingface.co/huihui-ai/Qwen3-14B-abliterated). The ollama version has also been modified. The candidate layer was changed to eliminate garbled output.
## ollama
You can use [huihui_ai/qwen3-abliterated:14b-v2](https://ollama.com/huihui_ai/qwen3-abliterated:14b-v2) directly. Switch the thinking toggle using `/set think` and `/set nothink`.
```
ollama run huihui_ai/qwen3-abliterated:14b-v2
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter
cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-14B-abliterated-v2"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4= BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="auto",
trust_remote_code=True,
#quantization_config=quant_config_4,
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
messages = []
nothink = False
same_seed = False
skip_prompt=True
skip_special_tokens=True
do_sample = True
def set_random_seed(seed=None):
"""Set random seed for reproducibility. If seed is None, use int(time.time())."""
if seed is None:
seed = int(time.time()) # Convert float to int
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # If using CUDA
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
return seed # Return seed for logging if needed
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
self.init_time = time.time() # Record initialization time
self.end_time = None # To store end time
self.first_token_time = None # To store first token generation time
self.token_count = 0 # To track total tokens
def on_finalized_text(self, text: str, stream_end: bool = False):
if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text
self.first_token_time = time.time()
self.generated_text += text
# Count tokens in the generated text
tokens = self.tokenizer.encode(text, add_special_tokens=False)
self.token_count += len(tokens)
print(text, end="", flush=True)
if stream_end:
self.end_time = time.time() # Record end time when streaming ends
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
self.end_time = time.time() # Record end time when generation is stopped
def get_metrics(self):
"""Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
if self.end_time is None:
self.end_time = time.time() # Set end time if not already set
total_time = self.end_time - self.init_time # Total time from init to end
tokens_per_second = self.token_count / total_time if total_time > 0 else 0
first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
metrics = {
"init_time": self.init_time,
"first_token_time": self.first_token_time,
"first_token_latency": first_token_latency,
"end_time": self.end_time,
"total_time": total_time, # Total time in seconds
"total_tokens": self.token_count,
"tokens_per_second": tokens_per_second
}
return metrics
def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
enable_thinking = not nothink,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
generate_kwargs = {}
if do_sample:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"temperature": 0.6,
"top_k": 20,
"top_p": 0.95,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
else:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
#use_cache=False,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer,
**generate_kwargs
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()
init_seed = set_random_seed()
while True:
if same_seed:
set_random_seed(init_seed)
else:
init_seed = set_random_seed()
print(f"\nnothink: {nothink}")
print(f"skip_prompt: {skip_prompt}")
print(f"skip_special_tokens: {skip_special_tokens}")
print(f"do_sample: {do_sample}")
print(f"same_seed: {same_seed}, {init_seed}\n")
user_input = input("User: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = []
print("Chat history cleared. Starting a new conversation.")
continue
if user_input.lower() == "/nothink":
nothink = not nothink
continue
if user_input.lower() == "/skip_prompt":
skip_prompt = not skip_prompt
continue
if user_input.lower() == "/skip_special_tokens":
skip_special_tokens = not skip_special_tokens
continue
if user_input.lower().startswith("/same_seed"):
parts = user_input.split()
if len(parts) == 1: # /same_seed (no number)
same_seed = not same_seed # Toggle switch
elif len(parts) == 2: # /same_seed <number>
try:
init_seed = int(parts[1]) # Extract and convert number to int
same_seed = True
except ValueError:
print("Error: Please provide a valid integer after /same_seed")
continue
if user_input.lower() == "/do_sample":
do_sample = not do_sample
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 320960)
print("\n\nMetrics:")
for key, value in metrics.items():
print(f" {key}: {value}")
print("", flush=True)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
```
### Usage Warnings
- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
|
Lolalb/MILA_U100_clust60_fixed_epochs | Lolalb | 2026-03-06T19:12:23 | 15 | 0 | transformers | [
"transformers",
"safetensors",
"AMPLIFY",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2026-03-06T19:12:11 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CIRCL/vulnerability-severity-classification-roberta-base-expB | CIRCL | 2025-12-19T15:52:32 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-26T21:04:19 | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vulnerability-severity-classification-roberta-base-expB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vulnerability-severity-classification-roberta-base-expB
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5255
- Accuracy: 0.8184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
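For reference, here is a sketch of how these hyperparameters might map onto `transformers.TrainingArguments`; the actual training script is not part of this card, and the output directory name is a placeholder.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vulnerability-severity-classification-roberta-base-expB",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # OptimizerNames.ADAMW_TORCH_FUSED
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```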
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.6959 | 1.0 | 29819 | 0.6482 | 0.7331 |
| 0.6137 | 2.0 | 59638 | 0.5869 | 0.7681 |
| 0.48 | 3.0 | 89457 | 0.5544 | 0.7899 |
| 0.4657 | 4.0 | 119276 | 0.5210 | 0.8109 |
| 0.3649 | 5.0 | 149095 | 0.5255 | 0.8184 |
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.1+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1
|
Thireus/Qwen3.5-0.8B-THIREUS-IQ5_K_R4-SPECIAL_SPLIT | Thireus | 2026-03-08T23:29:06 | 309 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-08T22:30:12 | ---
license: mit
---
# Qwen3.5-0.8B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-0.8B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-0.8B model (official repo: https://huggingface.co/Qwen/Qwen3.5-0.8B). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With this GGUF Tool Suite, you can produce your own Dynamic 3.0 Quants recipes and achieve optimum accuracy & SOTA quantization performance.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example of GGUF recipes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🍳 Cook your own recipe files: https://gguf.thireus.com/quant_assign.html
- ☁️ Download GGUF models from recipe files: https://gguf.thireus.com/quant_downloader.html
- 📂 Browse available models: https://gguf.thireus.com
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows/macOS/Linux builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file - you can also try the web version: https://gguf.thireus.com/quant_downloader.html
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/Qwen3.5-0.8B/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_llama.cpp_recipes/Qwen3.5-0.8B.ROOT-3.5993bpw-11.3565ppl.1GB-GGUF_0GB-GPU_0GB-CPU.9888e4b_831ff04.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m Qwen3.5-0.8B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
-fa auto -amb 1024 -ctk q8_0 -c 32768 -ngl 99 \
-b 4096 -ub 4096 --warmup-batch --no-mmap --threads 1 \
--main-gpu 0
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how Qwen3.5-0.8B quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
*All PPL benchmarks are computed with the parameters `-ctk f16 -c 512 -b 4096 -ub 4096`. Changing any of these parameters will alter the PPL. In particular, reducing `-b 4096 -ub 4096` increases the PPL, while increasing them decreases the PPL.*
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` or [quant_downloader.html](https://gguf.thireus.com/quant_downloader.html) to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script or [quant_downloader.html](https://gguf.thireus.com/quant_downloader.html) (web port of this script) handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh` and [quant_downloader.html](https://gguf.thireus.com/quant_downloader.html), provided a "SPECIAL_SPLIT" version of these models exists (see https://gguf.thireus.com/).
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe --qtype BF16
```
You can also quantize individual BF16 tensors without downloading every BF16 `.gguf` shard, using a special version of ik_llama.cpp's `llama-quantize` utility that ships with the `--individual-tensors` option:
- Source code: https://github.com/Thireus/ik_llama.cpp/tree/th/quantize_individual_tensors
- Builds (macOS, Windows and Linux): https://github.com/Thireus/ik_llama.cpp/releases/tag/th-quantize_individual_tensors-b4210-7a44805
Usage example:
```
./llama-quantize --keep-split --imatrix imatrix_ubergarm.dat --individual-tensors 2,3,1094 Kimi-K2-Thinking-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01097.gguf my_new_shards.gguf iq3_s 12
```
For more information about how to use it: https://github.com/Thireus/GGUF-Tool-Suite/issues/45
Enjoy optimized quantization! 🎉
|
jpacifico/Qwen3-4B-Instruct-DPO-test-b3 | jpacifico | 2026-01-24T09:29:56 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-24T09:27:23 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Habibur2/Qwen2.5-Math-1.5B-GGUF | Habibur2 | 2026-01-31T12:35:44 | 145 | 2 | llama-cpp-python | [
"llama-cpp-python",
"gguf",
"qwen",
"qwen2.5",
"math",
"stem",
"educational",
"reasoning",
"text-generation",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:quantized:Qwen/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-30T21:13:19 | ---
license: apache-2.0
library_name: llama-cpp-python
tags:
- gguf
- qwen
- qwen2.5
- math
- stem
- educational
- reasoning
- text-generation
base_model: Qwen/Qwen2.5-Math-1.5B
pipeline_tag: text-generation
model_creator: Qwen Team (Alibaba Cloud)
quantized_by: Md Habibur Rahman (Aasif)
---
# 🧮 Qwen 2.5 Math 1.5B (GGUF Quantized)
This repository contains the **GGUF** quantized version of the [Qwen 2.5 Math 1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) model.
It is a specialized **Mathematical Reasoning Model** optimized for edge devices, offline usage, and educational apps. Despite its small size (1.5B), it outperforms many larger general-purpose models in complex mathematical problem-solving tasks.
**Quantized By:** Md Habibur Rahman (Aasif)
**Quantization Format:** GGUF (Q4_K_M) - *Optimized for balance between Math Accuracy and Speed.*
## 🌟 Key Features
* **Math Specialist:** Specifically trained on massive mathematical datasets (Algebra, Calculus, Geometry, Logic).
* **Chain-of-Thought (CoT):** Capable of showing step-by-step reasoning for solving problems.
* **Edge AI Ready:** Extremely lightweight (~1 GB). Runs smoothly on Android, Raspberry Pi, and Older Laptops.
* **Offline Capable:** Does not require an internet connection to solve problems.
## 🚀 Usage (Python)
You can run this model using the `llama-cpp-python` library.
### 1. Installation
```bash
pip install llama-cpp-python huggingface_hub
```
### 2. Python Inference Code
Here is a script to solve math problems with step-by-step logic:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Download the model
model_path = hf_hub_download(
repo_id="Habibur2/Qwen2.5-Math-1.5B-GGUF",
filename="qwen-math-1.5b-q4_k_m.gguf"
)
# Load Model
# Set n_gpu_layers=-1 for full GPU usage (Fastest)
# Set n_gpu_layers=0 for CPU only
llm = Llama(
model_path=model_path,
n_ctx=2048, # Context Window
n_threads=4, # CPU Threads
n_gpu_layers=-1 # GPU Acceleration
)
# Define a Math Problem
math_problem = "Find the integral of x^2 + 5x with respect to x."
# System Prompt is Crucial for Math Models
messages = [
{"role": "system", "content": "You are a helpful mathematical assistant. Please solve the problem step-by-step and show your reasoning clearly."},
{"role": "user", "content": math_problem}
]
# Generate Solution
output = llm.create_chat_completion(
messages=messages,
max_tokens=1024, # Math solutions need more tokens
temperature=0.1 # Low temperature (0.1) is best for precise math
)
print("🤖 Solution:\n")
print(output['choices'][0]['message']['content'])
```
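For reference, the closed-form answer to the sample problem above (handy for sanity-checking the model's step-by-step output) is:

$$\int (x^2 + 5x)\,dx = \frac{x^3}{3} + \frac{5x^2}{2} + C$$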
## ⚙️ Technical Specifications
| Feature | Details |
|---------|---------|
| Original Model | Qwen 2.5 Math 1.5B Instruct |
| Architecture | Transformer (RoPE, SwiGLU) |
| Parameters | 1.5 Billion |
| Quantization Type | Q4_K_M (4-bit Medium) |
| File Size | ~1.12 GB |
| Recommended RAM | 2 GB+ |
## 🧪 Benchmark & Capabilities
This model excels at:
- **Algebra & Arithmetic:** Solving equations, inequalities, and basic operations.
- **Calculus:** Differentiation and integration problems.
- **Word Problems:** Understanding and translating text into mathematical equations.
- **LaTeX Output:** Can generate answers in LaTeX format for academic rendering.
## 👨‍💻 About the Project
This model was quantized and uploaded by Md Habibur Rahman as part of a research initiative on Offline Edge AI & Small Language Models (SLM). The goal is to democratize access to powerful educational AI tools without relying on heavy cloud infrastructure.
Disclaimer: While this model is highly capable, always verify complex mathematical solutions. |
ogukei/titanet-large-coreml | ogukei | 2026-01-24T22:47:17 | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | null | 2026-01-24T22:19:41 | ---
license: cc-by-4.0
---
# TitaNet Large (Core ML)
This repository contains the **TitaNet Large** model, optimized for iOS/macOS usage.
## License & Attribution
This model is a derivative work based on [NVIDIA TitaNet Large](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/titanet_large).
- **Original Model:** [NVIDIA TitaNet Large](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/titanet_large)
- **Copyright:** © NVIDIA Corporation
- **License:** [Creative Commons Attribution 4.0 International (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Modifications
The original model was converted to Apple Core ML format (`.mlmodelc`) and packaged for iOS/macOS usage. |
Content-AI/saiga_llama3_8b-Q4_K_M-GGUF | Content-AI | 2025-12-23T11:17:59 | 129 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"dataset:IlyaGusev/saiga_scored",
"base_model:IlyaGusev/saiga_llama3_8b",
"base_model:quantized:IlyaGusev/saiga_llama3_8b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-23T11:17:35 | ---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
tags:
- llama-cpp
- gguf-my-repo
base_model: IlyaGusev/saiga_llama3_8b
---
# Content-AI/saiga_llama3_8b-Q4_K_M-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_llama3_8b`](https://huggingface.co/IlyaGusev/saiga_llama3_8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_llama3_8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Content-AI/saiga_llama3_8b-Q4_K_M-GGUF --hf-file saiga_llama3_8b-q4_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Content-AI/saiga_llama3_8b-Q4_K_M-GGUF --hf-file saiga_llama3_8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Content-AI/saiga_llama3_8b-Q4_K_M-GGUF --hf-file saiga_llama3_8b-q4_k_m.gguf -p "The meaning of life and the universe is"
```
or
```bash
./llama-server --hf-repo Content-AI/saiga_llama3_8b-Q4_K_M-GGUF --hf-file saiga_llama3_8b-q4_k_m.gguf -c 2048
```
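Alternatively, here is a minimal sketch using the `llama-cpp-python` bindings (assuming the `llama-cpp-python` and `huggingface_hub` packages are installed; `Llama.from_pretrained` fetches the same quantized file from this repo):
```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant directly from this repository
llm = Llama.from_pretrained(
    repo_id="Content-AI/saiga_llama3_8b-Q4_K_M-GGUF",
    filename="saiga_llama3_8b-q4_k_m.gguf",
    n_ctx=2048,
)

# Saiga is a Russian-tuned Llama 3; chat completion uses the template bundled in the GGUF
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Кто ты?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```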
|
LiquidAI/LFM2.5-1.2B-Instruct | LiquidAI | 2026-03-30T12:48:13 | 269,385 | 542 | transformers | [
"transformers",
"safetensors",
"lfm2",
"text-generation",
"liquid",
"lfm2.5",
"edge",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"arxiv:2511.23404",
"base_model:LiquidAI/LFM2.5-1.2B-Base",
"base_model:finetune:LiquidAI/LFM2.5-1.2B-Base",
"license:other",
... | text-generation | 2026-01-06T00:28:46 | ---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
base_model: LiquidAI/LFM2.5-1.2B-Base
---
<div align="center">
<img
src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
alt="Liquid AI"
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div style="display: flex; justify-content: center; gap: 0.5em;">
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> • <a href="https://docs.liquid.ai/lfm/getting-started/welcome"><strong>Docs</strong></a> • <a href="https://leap.liquid.ai/"><strong>LEAP</strong></a> • <a href="https://discord.com/invite/liquid-ai"><strong>Discord</strong></a>
</div>
</div>
<br>
# LFM2.5-1.2B-Instruct
LFM2.5 is a new family of hybrid models designed for **on-device deployment**. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.
- **Best-in-class performance**: A 1.2B model rivaling much larger models, bringing high-quality AI to your pocket.
- **Fast edge inference**: 239 tok/s decode on AMD CPU, 82 tok/s on mobile NPU. Runs under 1GB of memory with day-one support for llama.cpp, MLX, and vLLM.
- **Scaled training**: Extended pre-training from 10T to 28T tokens and large-scale multi-stage reinforcement learning.

Find more information about LFM2.5 in our [blog post](https://www.liquid.ai/blog/introducing-lfm2-5-the-next-generation-of-on-device-ai).
## 🗒️ Model Details
| Model | Parameters | Description |
|-------|------------|-------------|
| [LFM2.5-1.2B-Base](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base) | 1.2B | Pre-trained base model for fine-tuning |
| [**LFM2.5-1.2B-Instruct**](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) | 1.2B | General-purpose instruction-tuned model |
| [LFM2.5-1.2B-Thinking](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking) | 1.2B | General-purpose reasoning model |
| [LFM2.5-1.2B-JP](https://huggingface.co/LiquidAI/LFM2.5-1.2B-JP) | 1.2B | Japanese-optimized chat model |
| [LFM2.5-VL-1.6B](https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B) | 1.6B | Vision-language model with fast inference |
| [LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B) | 1.5B | Audio-language model for speech and text I/O |
LFM2.5-1.2B-Instruct is a general-purpose text-only model with the following features:
- **Number of parameters**: 1.17B
- **Number of layers**: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
- **Training budget**: 28T tokens
- **Context length**: 32,768 tokens
- **Vocabulary size**: 65,536
- **Knowledge cutoff**: Mid-2024
- **Languages**: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
- **Generation parameters**:
- `temperature: 0.1`
- `top_k: 50`
- `repetition_penalty: 1.05`
| Model | Description |
|-------|-------------|
| [**LFM2.5-1.2B-Instruct**](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| [LFM2.5-1.2B-Instruct-GGUF](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF) | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| [LFM2.5-1.2B-Instruct-ONNX](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-ONNX) | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| [LFM2.5-1.2B-Instruct-MLX](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-MLX-8bit) | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |
We recommend using it for agentic tasks, data extraction, and RAG. It is not recommended for knowledge-intensive tasks or programming.
### Chat Template
LFM2.5 uses a ChatML-like format. See the [Chat Template documentation](https://docs.liquid.ai/lfm/key-concepts/chat-template) for details. Example:
```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
```
You can use [`tokenizer.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#using-applychattemplate) to format your messages automatically.
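For example, a minimal sketch that renders the template above (the exact special tokens come from the chat template bundled with the tokenizer):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]

# Renders the ChatML-like prompt string, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```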
### Tool Use
LFM2.5 supports function calling as follows:
1. **Function definition**: We recommend providing the list of tools as a JSON object in the system prompt. You can also use the [`tokenizer.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_extras#passing-tools) function with tools.
2. **Function call**: By default, LFM2.5 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens) as the assistant answer. You can override this behavior by asking the model to output JSON function calls in the system prompt.
3. **Function execution**: The function call is executed, and the result is returned as a "tool" role.
4. **Final answer**: LFM2.5 interprets the outcome of the function call to address the original user prompt in plain text.
See the [Tool Use documentation](https://docs.liquid.ai/lfm/key-concepts/tool-use) for the full guide. Example:
```
<|startoftext|><|im_start|>system
List of tools: [{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
[{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}]<|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
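As a sketch, the tool schema in the system prompt above can also be generated by passing a typed Python function to `apply_chat_template()`, following the standard Transformers tool-calling convention:
```python
from transformers import AutoTokenizer

def get_candidate_status(candidate_id: str):
    """
    Retrieves the current status of a candidate in the recruitment process.

    Args:
        candidate_id: Unique identifier for the candidate
    """
    ...

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")
messages = [{"role": "user", "content": "What is the current status of candidate ID 12345?"}]

# Transformers converts the signature and docstring into the JSON tool schema
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_candidate_status], add_generation_prompt=True, tokenize=False
)
print(prompt)
```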
## 🏃 Inference
LFM2.5 is supported by many inference frameworks. See the [Inference documentation](https://docs.liquid.ai/lfm/inference/transformers) for the full list.
| Name | Description | Docs | Notebook |
|------|-------------|------|:--------:|
| [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | <a href="https://docs.liquid.ai/lfm/inference/transformers">Link</a> | <a href="https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | <a href="https://docs.liquid.ai/lfm/inference/vllm">Link</a> | <a href="https://colab.research.google.com/drive/1VfyscuHP8A3we_YpnzuabYJzr5ju0Mit?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | <a href="https://docs.liquid.ai/lfm/inference/llama-cpp">Link</a> | <a href="https://colab.research.google.com/drive/1ohLl3w47OQZA4ELo46i5E4Z6oGWBAyo8?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [MLX](https://github.com/ml-explore/mlx) | Apple's machine learning framework optimized for Apple Silicon. | <a href="https://docs.liquid.ai/lfm/inference/mlx">Link</a> | — |
| [LM Studio](https://lmstudio.ai/) | Desktop application for running LLMs locally. | <a href="https://docs.liquid.ai/lfm/inference/lm-studio">Link</a> | — |
Here's a quick start example with Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_id = "LiquidAI/LFM2.5-1.2B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
dtype="bfloat16",
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.1,
top_k=50,
repetition_penalty=1.05,
max_new_tokens=512,
streamer=streamer,
)
```
## 🔧 Fine-Tuning
We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.
| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for text completion. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/10fm7eNMezs-DSn36mF7vAsNYlOsx9YZO?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for translation. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1gaP8yTle2_v35Um8Gpu9239fqbU7UgY8?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1vGRg4ksRj__6OLvXkHhvji_Pamv801Ss?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| GRPO ([Unsloth](https://github.com/unslothai/unsloth)) | GRPO with LoRA using Unsloth. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1mIikXFaGvcW4vXOZXLbVTxfBRw_XsXa5?usp=sharing"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| GRPO ([TRL](https://github.com/huggingface/trl)) | GRPO with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/github/Liquid4All/cookbook/blob/main/finetuning/notebooks/grpo_for_verifiable_tasks.ipynb"><img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
## 📊 Performance
### Benchmarks
We compared LFM2.5-1.2B-Instruct with relevant sub-2B models on a diverse suite of benchmarks.
| Model | GPQA | MMLU-Pro | IFEval | IFBench | Multi-IF | AIME25 | BFCLv3 |
|-------|------|----------|--------|---------|----------|--------|--------|
| **LFM2.5-1.2B-Instruct** | 38.89 | 44.35 | 86.23 | 47.33 | 60.98 | 14.00 | 49.12 |
| Qwen3-1.7B (instruct)| 34.85 | 42.91 | 73.68 | 21.33 | 56.48 | 9.33 | 46.30 |
| Granite 4.0-1B | 24.24 | 33.53 | 79.61 | 21.00 | 43.65 | 3.33 | 52.43 |
| Llama 3.2 1B Instruct | 16.57 | 20.80 | 52.37 | 15.93 | 30.16 | 0.33 | 21.44 |
| Gemma 3 1B IT | 24.24 | 14.04 | 63.25 | 20.47 | 44.31 | 1.00 | 16.64 |
GPQA, MMLU-Pro, IFBench, and AIME25 follow [ArtificialAnalysis's methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking). For IFEval and Multi-IF, we report the average score across strict and loose prompt and instruction accuracies. For BFCLv3, we report the final weighted average score with a custom Liquid handler to support our tool use template.
### Inference speed
LFM2.5-1.2B-Instruct offers extremely fast inference speed on CPUs with a low memory profile compared to similar-sized models.

In addition, we are partnering with AMD, Qualcomm, and Nexa AI to bring the LFM2.5 family to NPUs. These optimized models are available through our partners, enabling highly efficient on-device inference.
The following numbers have been calculated using 1K prefill and 100 decode tokens:
| Device | Inference | Framework | Model | Prefill (tok/s) | Decode (tok/s) | Memory (GB) |
| ---------------------------------------------------- | --------- | ---------------- | -------------------- | --------------- | -------------- | ----------- |
| Qualcomm Snapdragon® X Elite | NPU | NexaML | LFM2.5-1.2B-Instruct | 2591 | 63 | 0.9GB |
| Qualcomm Snapdragon® Gen4 (ROG Phone9 Pro) | NPU | NexaML | LFM2.5-1.2B-Instruct | 4391 | 82 | 0.9GB |
| Qualcomm Snapdragon® Gen4 (Samsung Galaxy S25 Ultra) | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Instruct | 335 | 70 | 719MB |
| Qualcomm Snapdragon® Gen4 (Samsung Galaxy S25 Ultra) | CPU | llama.cpp (Q4_0) | Qwen3-1.7B | 181 | 40 | 1306MB |
These capabilities unlock new deployment scenarios across various devices, including vehicles, mobile devices, laptops, IoT devices, and embedded systems.
## 📬 Contact
- Got questions or want to connect? [Join our Discord community](https://discord.com/invite/liquid-ai)
- If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
## Citation
```bibtex
@article{liquidai2025lfm2,
title={LFM2 Technical Report},
author={Liquid AI},
journal={arXiv preprint arXiv:2511.23404},
year={2025}
}
``` |
deserteaglehgold/whisper.net | deserteaglehgold | 2025-11-23T15:44:12 | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-11-23T15:17:22 | ---
license: mit
---
For more details, check: https://github.com/sandrohanea/whisper.net |
jayshah5696/er2-ce-gte-reranker-ft | jayshah5696 | 2026-03-19T19:57:44 | 22 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:33290",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"dataset:jayshah5696/entity-resolution-ce-pairs-v2",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-reranker-modern... | text-ranking | 2026-03-19T19:57:26 | ---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:33290
- loss:BinaryCrossEntropyLoss
base_model: Alibaba-NLP/gte-reranker-modernbert-base
datasets:
- jayshah5696/entity-resolution-ce-pairs-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on Alibaba-NLP/gte-reranker-modernbert-base
results:
- task:
type: cross-encoder-binary-classification
name: Cross Encoder Binary Classification
dataset:
name: gte reranker
type: gte_reranker
metrics:
- type: accuracy
value: 0.9992109416096792
name: Accuracy
- type: accuracy_threshold
value: 0.5002558827400208
name: Accuracy Threshold
- type: f1
value: 0.9992103185048697
name: F1
- type: f1_threshold
value: 0.5002558827400208
name: F1 Threshold
- type: precision
value: 1.0
name: Precision
- type: recall
value: 0.9984218832193582
name: Recall
- type: average_precision
value: 0.9998820602698264
name: Average Precision
---
# CrossEncoder based on Alibaba-NLP/gte-reranker-modernbert-base
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) on the [entity-resolution-ce-pairs-v2](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) <!-- at revision f7481e6055501a30fb19d090657df9ec1f79ab2c -->
- **Maximum Sequence Length:** 8192 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [entity-resolution-ce-pairs-v2](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("jayshah5696/er2-ce-gte-reranker-ft")
# Get scores for pairs of texts
pairs = [
["{'entity_id': 'E2-013271', 'first_name': 'Ramswaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}", "{'entity_id': 'E2-013271', 'first_name': 'Ramsaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}"],
["{'entity_id': 'E2-036160', 'first_name': 'Lole', 'last_name': 'Camaney', 'company': 'Monroe Capital CFO I Ltd.', 'title': 'Computer Technology Trainer', 'email': '[email protected]', 'country': 'KY', 'ethnicity_group': 'hispanic', 'name_script': 'latin'}", "{'entity_id': 'E2-045966', 'first_name': 'Lali', 'last_name': 'Roig', 'company': 'AQUA AEREM (DBH) PTY LTD', 'title': 'Clamp Operator', 'email': '[email protected]', 'country': 'AU', 'ethnicity_group': 'other', 'name_script': 'latin'}"],
["{'entity_id': 'E2-045272', 'first_name': 'Rim', 'last_name': 'Hue', 'company': 'Edizioni ZYX Music SRL', 'title': 'Radiologic Technology Teacher', 'email': '[email protected]', 'country': 'IT', 'ethnicity_group': 'other', 'name_script': 'latin'}", "{'entity_id': 'E2-043277', 'first_name': 'Less', 'last_name': 'Hue', 'company': 'EXERGY S.P.A. - IN LIQUIDAZIONE', 'title': 'Personnel Quality Assurance Auditor', 'email': '[email protected]', 'country': 'IT', 'ethnicity_group': 'other', 'name_script': 'latin'}"],
["{'entity_id': 'E2-010159', 'first_name': 'Brianna', 'last_name': 'Cruzmartinez', 'company': 'S PARIKH AND CO', 'title': 'General Manager (GM)', 'email': '[email protected]', 'country': 'IN', 'ethnicity_group': 'us_uk_english', 'name_script': 'latin'}", "{'entity_id': 'E2-010442', 'first_name': 'Brian', 'last_name': 'Alves', 'company': '中州國際證券有限公司', 'title': 'Soaking Pits Supervisor', 'email': '[email protected]', 'country': 'HK', 'ethnicity_group': 'us_uk_english', 'name_script': 'latin'}"],
["{'entity_id': 'E2-015732', 'first_name': 'Ikka', 'last_name': 'Dhurve', 'company': 'Tiburon Unternehmensaufbau GmbH', 'title': 'Freight Elevator Operator', 'email': '[email protected]', 'country': 'DE', 'ethnicity_group': 'indian', 'name_script': 'latin'}", "{'entity_id': 'E2-015732', 'first_name': 'Ikka', 'last_name': 'Dhurve', 'company': 'Tiburon Unternehmensaufbau', 'title': 'Freight Elevator Operator', 'email': '[email protected]', 'country': 'DE', 'ethnicity_group': 'indian', 'name_script': 'latin'}"],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
"{'entity_id': 'E2-013271', 'first_name': 'Ramswaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}",
[
"{'entity_id': 'E2-013271', 'first_name': 'Ramsaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}",
"{'entity_id': 'E2-045966', 'first_name': 'Lali', 'last_name': 'Roig', 'company': 'AQUA AEREM (DBH) PTY LTD', 'title': 'Clamp Operator', 'email': '[email protected]', 'country': 'AU', 'ethnicity_group': 'other', 'name_script': 'latin'}",
"{'entity_id': 'E2-043277', 'first_name': 'Less', 'last_name': 'Hue', 'company': 'EXERGY S.P.A. - IN LIQUIDAZIONE', 'title': 'Personnel Quality Assurance Auditor', 'email': '[email protected]', 'country': 'IT', 'ethnicity_group': 'other', 'name_script': 'latin'}",
"{'entity_id': 'E2-010442', 'first_name': 'Brian', 'last_name': 'Alves', 'company': '中州國際證券有限公司', 'title': 'Soaking Pits Supervisor', 'email': '[email protected]', 'country': 'HK', 'ethnicity_group': 'us_uk_english', 'name_script': 'latin'}",
"{'entity_id': 'E2-015732', 'first_name': 'Ikka', 'last_name': 'Dhurve', 'company': 'Tiburon Unternehmensaufbau', 'title': 'Freight Elevator Operator', 'email': '[email protected]', 'country': 'DE', 'ethnicity_group': 'indian', 'name_script': 'latin'}",
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Binary Classification
* Dataset: `gte_reranker`
* Evaluated with [<code>CEBinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CEBinaryClassificationEvaluator)
| Metric | Value |
|:----------------------|:-----------|
| accuracy | 0.9992 |
| accuracy_threshold | 0.5003 |
| f1 | 0.9992 |
| f1_threshold | 0.5003 |
| precision | 1.0 |
| recall | 0.9984 |
| **average_precision** | **0.9999** |
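In practice, the tuned thresholds above mean raw pair scores can be binarized at roughly 0.5; a minimal sketch (entity records truncated here for brevity):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("jayshah5696/er2-ce-gte-reranker-ft")

# Two serialized entity records in the same format as the training pairs (truncated)
pair = (
    "{'first_name': 'Ramswaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'country': 'GB'}",
    "{'first_name': 'Ramsaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'country': 'GB'}",
)

score = model.predict([pair])[0]
is_match = score >= 0.5003  # accuracy/F1 threshold from the table above
print(score, is_match)
```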
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### entity-resolution-ce-pairs-v2
* Dataset: [entity-resolution-ce-pairs-v2](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2) at [8b9c268](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2/tree/8b9c2681f9a0a85901cd8e934404509aada26717)
* Size: 33,290 training samples
* Columns: <code>label</code>, <code>text_a</code>, and <code>text_b</code>
* Approximate statistics based on the first 1000 samples:
| | label | text_a | text_b |
|:--------|:------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~52.80%</li><li>1: ~47.20%</li></ul> | <ul><li>min: 210 characters</li><li>mean: 269.24 characters</li><li>max: 481 characters</li></ul> | <ul><li>min: 200 characters</li><li>mean: 268.38 characters</li><li>max: 731 characters</li></ul> |
* Samples:
| label | text_a | text_b |
|:---------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>0</code> | <code>{'entity_id': 'E2-011894', 'first_name': 'Noel', 'last_name': 'Muldowney', 'company': 'HIPP UK LIMITED', 'title': 'Flight Teacher', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'us_uk_english', 'name_script': 'latin'}</code> | <code>{'entity_id': 'E2-003538', 'first_name': 'Neal', 'last_name': 'Ellenberger', 'company': 'Hawthorns Park Ltd', 'title': 'Rip Sawyer', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'us_uk_english', 'name_script': 'latin'}</code> |
| <code>1</code> | <code>{'entity_id': 'E2-024730', 'first_name': '正平', 'last_name': 'Yan', 'company': 'TRUSTEE OF M I L WADLEY DECEASED WILL TRUST', 'title': 'Radial Drill Press Operator', 'email': '正平[email protected]', 'country': 'GB', 'ethnicity_group': 'chinese', 'name_script': 'cjk'}</code> | <code>{'entity_id': 'E2-024730', 'first_name': '正平', 'last_name': 'Yan', 'company': 'TRUSTEE OF M I L WADLEY DECEASED WILL TRUST Inc', 'title': 'Radial Drill Press Operator', 'email': '正平[email protected]', 'country': 'GB', 'ethnicity_group': 'chinese', 'name_script': 'cjk'}</code> |
| <code>0</code> | <code>{'entity_id': 'E2-044064', 'first_name': 'Gîte', 'last_name': 'Gi', 'company': 'VECMEDICAL SPAIN SL', 'title': 'Shellfish Bed Worker', 'email': 'gî[email protected]', 'country': 'ES', 'ethnicity_group': 'other', 'name_script': 'latin'}</code> | <code>{'entity_id': 'E2-030776', 'first_name': 'Anyelo', 'last_name': 'Dela Cruz', 'company': 'AVURNAVE S.L', 'title': 'Nail Assembly Machine Operator', 'email': 'adela [email protected]', 'country': 'ES', 'ethnicity_group': 'hispanic', 'name_script': 'latin'}</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
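With `activation_fn` set to the identity and no `pos_weight`, this corresponds to standard sigmoid binary cross-entropy on the raw logit $s$ with label $y \in \{0, 1\}$:

$$\mathcal{L}(s, y) = -\,y \log \sigma(s) - (1 - y)\log\bigl(1 - \sigma(s)\bigr)$$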
### Evaluation Dataset
#### entity-resolution-ce-pairs-v2
* Dataset: [entity-resolution-ce-pairs-v2](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2) at [8b9c268](https://huggingface.co/datasets/jayshah5696/entity-resolution-ce-pairs-v2/tree/8b9c2681f9a0a85901cd8e934404509aada26717)
* Size: 3,802 evaluation samples
* Columns: <code>label</code>, <code>text_a</code>, and <code>text_b</code>
* Approximate statistics based on the first 1000 samples:
| | label | text_a | text_b |
|:--------|:------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~49.10%</li><li>1: ~50.90%</li></ul> | <ul><li>min: 209 characters</li><li>mean: 268.91 characters</li><li>max: 596 characters</li></ul> | <ul><li>min: 197 characters</li><li>mean: 267.52 characters</li><li>max: 596 characters</li></ul> |
* Samples:
| label | text_a | text_b |
|:---------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>1</code> | <code>{'entity_id': 'E2-013271', 'first_name': 'Ramswaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}</code> | <code>{'entity_id': 'E2-013271', 'first_name': 'Ramsaroop', 'last_name': 'Kakoti', 'company': 'KDS APPLIANCES RBS', 'title': 'Highway Painter Helper', 'email': '[email protected]', 'country': 'GB', 'ethnicity_group': 'indian', 'name_script': 'latin'}</code> |
| <code>0</code> | <code>{'entity_id': 'E2-036160', 'first_name': 'Lole', 'last_name': 'Camaney', 'company': 'Monroe Capital CFO I Ltd.', 'title': 'Computer Technology Trainer', 'email': '[email protected]', 'country': 'KY', 'ethnicity_group': 'hispanic', 'name_script': 'latin'}</code> | <code>{'entity_id': 'E2-045966', 'first_name': 'Lali', 'last_name': 'Roig', 'company': 'AQUA AEREM (DBH) PTY LTD', 'title': 'Clamp Operator', 'email': '[email protected]', 'country': 'AU', 'ethnicity_group': 'other', 'name_script': 'latin'}</code> |
| <code>0</code> | <code>{'entity_id': 'E2-045272', 'first_name': 'Rim', 'last_name': 'Hue', 'company': 'Edizioni ZYX Music SRL', 'title': 'Radiologic Technology Teacher', 'email': '[email protected]', 'country': 'IT', 'ethnicity_group': 'other', 'name_script': 'latin'}</code> | <code>{'entity_id': 'E2-043277', 'first_name': 'Less', 'last_name': 'Hue', 'company': 'EXERGY S.P.A. - IN LIQUIDAZIONE', 'title': 'Personnel Quality Assurance Auditor', 'email': '[email protected]', 'country': 'IT', 'ethnicity_group': 'other', 'name_script': 'latin'}</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | gte_reranker_average_precision |
|:-------:|:--------:|:-------------:|:---------------:|:------------------------------:|
| 0.0961 | 100 | 0.2269 | - | - |
| 0.1921 | 200 | 0.0048 | - | - |
| 0.2882 | 300 | 0.0178 | - | - |
| 0.3842 | 400 | 0.0149 | - | - |
| 0.4803 | 500 | 0.0101 | - | - |
| 0.5764 | 600 | 0.0095 | - | - |
| 0.6724 | 700 | 0.0093 | - | - |
| 0.7685 | 800 | 0.0049 | - | - |
| 0.8646 | 900 | 0.0094 | - | - |
| 0.9606 | 1000 | 0.0144 | - | - |
| **1.0** | **1041** | **-** | **0.0056** | **0.9999** |
| 1.0567 | 1100 | 0.0068 | - | - |
| 1.1527 | 1200 | 0.0199 | - | - |
| 1.2488 | 1300 | 0.0087 | - | - |
| 1.3449 | 1400 | 0.013 | - | - |
| 1.4409 | 1500 | 0.0104 | - | - |
| 1.5370 | 1600 | 0.006 | - | - |
| 1.6330 | 1700 | 0.0037 | - | - |
| 1.7291 | 1800 | 0.0156 | - | - |
| 1.8252 | 1900 | 0.0066 | - | - |
| 1.9212 | 2000 | 0.0106 | - | - |
| 2.0 | 2082 | - | 0.0075 | 0.9998 |
| 2.0173 | 2100 | 0.0052 | - | - |
| 2.1134 | 2200 | 0.0055 | - | - |
| 2.2094 | 2300 | 0.006 | - | - |
| 2.3055 | 2400 | 0.0008 | - | - |
| 2.4015 | 2500 | 0.0056 | - | - |
| 2.4976 | 2600 | 0.0093 | - | - |
| 2.5937 | 2700 | 0.0053 | - | - |
| 2.6897 | 2800 | 0.0037 | - | - |
| 2.7858 | 2900 | 0.0034 | - | - |
| 2.8818 | 3000 | 0.0001 | - | - |
| 2.9779 | 3100 | 0.0056 | - | - |
| 3.0 | 3123 | - | 0.0061 | 0.9999 |
| 3.0740 | 3200 | 0.0002 | - | - |
| 3.1700 | 3300 | 0.0001 | - | - |
| 3.2661 | 3400 | 0.0001 | - | - |
| 3.3622 | 3500 | 0.0011 | - | - |
| 3.4582 | 3600 | 0.0007 | - | - |
| 3.5543 | 3700 | 0.0 | - | - |
| 3.6503 | 3800 | 0.0001 | - | - |
| 3.7464 | 3900 | 0.0 | - | - |
| 3.8425 | 4000 | 0.0 | - | - |
| 3.9385 | 4100 | 0.0 | - | - |
| 4.0 | 4164 | - | 0.0091 | 0.9999 |
| 4.0346 | 4200 | 0.0 | - | - |
| 4.1306 | 4300 | 0.0 | - | - |
| 4.2267 | 4400 | 0.0 | - | - |
| 4.3228 | 4500 | 0.0 | - | - |
| 4.4188 | 4600 | 0.0 | - | - |
| 4.5149 | 4700 | 0.0 | - | - |
| 4.6110 | 4800 | 0.0 | - | - |
| 4.7070 | 4900 | 0.0 | - | - |
| 4.8031 | 5000 | 0.0 | - | - |
| 4.8991 | 5100 | 0.0 | - | - |
| 4.9952 | 5200 | 0.0 | - | - |
| 5.0 | 5205 | - | 0.0092 | 0.9999 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 5.0.0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.13.0
- Datasets: 4.8.3
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
blainetrain/flp-1-0-5-cmlecspk | blainetrain | 2026-02-08T23:10:33 | 0 | 0 | null | [
"precite",
"materials-science",
"fine-tuned",
"ibm-fm4m",
"dataset:blainetrain/flp-1-0-5-cmlecspk-data",
"base_model:ibm-research/materials.smi-ted",
"base_model:finetune:ibm-research/materials.smi-ted",
"license:apache-2.0",
"region:us"
] | null | 2026-02-08T23:07:42 | ---
license: apache-2.0
base_model: ibm/materials.smi-ted
datasets:
- blainetrain/flp-1-0-5-cmlecspk-data
tags:
- precite
- materials-science
- fine-tuned
- ibm-fm4m
---
# FLP 1.0.5
Fine-tuned from **ibm/materials.smi-ted** using [Precite](https://precite.org).
Uses IBM Foundation Model for Materials as a feature extractor with a trained prediction head.
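A minimal sketch of this feature-extractor-plus-head pattern may help orient readers. Here `embed_smiles` is a hypothetical stand-in for whatever encoding call the smi-ted checkpoint exposes, and the head architecture is an assumption rather than Precite's actual implementation:

```python
import torch
import torch.nn as nn

def embed_smiles(smiles_batch):
    # Hypothetical placeholder for the frozen smi-ted encoder; substitute
    # the real embedding call from the base model.
    return torch.randn(len(smiles_batch), 768)

class PredictionHead(nn.Module):
    """Small trainable head on top of frozen foundation-model features."""
    def __init__(self, dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

head = PredictionHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)  # matches the learning rate below

features = embed_smiles(["CCO", "c1ccccc1"])  # the base model acts as a frozen extractor
loss = nn.MSELoss()(head(features), torch.zeros(2, 1))
loss.backward()
optimizer.step()
```

Only the head's parameters receive gradients here, which is what "feature extractor with a trained prediction head" means in practice.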
## Training Configuration
| Parameter | Value |
|-----------|-------|
| Base Model | `ibm/materials.smi-ted` |
| Version | 1 |
| Epochs | 10 |
| Batch Size | 32 |
| Learning Rate | 0.0001 |
| Test Split | 20% |
## Dataset
Training data: [blainetrain/flp-1-0-5-cmlecspk-data](https://huggingface.co/datasets/blainetrain/flp-1-0-5-cmlecspk-data)
|
# Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. The dataset is updated daily and covers publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of model cards from the Hub. We hope that it will support research on model cards and their use, but the format may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new discussion.

## Dataset Details

## Uses

There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards (see the sketch after this list)
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
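As a toy example of the first use above, one could count the markdown section headings that recur across cards. This assumes the card bodies have already been loaded into a list of strings:

```python
import re
from collections import Counter

def heading_counts(cards):
    """Tally markdown headings across a collection of model card bodies."""
    counter = Counter()
    for card in cards:
        headings = re.findall(r"^#+\s*(.+)$", card, re.MULTILINE)
        counter.update(h.strip().lower() for h in headings)
    return counter

cards = ["# Model Card\n## Uses\n## Limitations", "# Card\n## Uses"]
print(heading_counts(cards).most_common(3))  # [('uses', 2), ...]
```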
### Out-of-Scope Use

[More Information Needed]

## Dataset Structure
This dataset has a single split.
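Loading it looks something like the following. Note that `user/model-cards` is a placeholder repository id, not the real one; substitute the id of this dataset page:

```python
from datasets import load_dataset

# Placeholder id: replace with this dataset's actual repository id.
ds = load_dataset("user/model-cards", split="train")

print(ds.column_names)       # e.g. modelId, author, ..., card
print(ds[0]["card"][:300])   # the raw markdown of one model card
```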
## Dataset Creation

### Curation Rationale
The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, and that option may be preferable if you have a very specific use case or require a different format.
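For example, a single card can be fetched directly with the `huggingface_hub` client library:

```python
from huggingface_hub import ModelCard

# Pull one model card straight from the Hub rather than via this dataset.
card = ModelCard.load("bert-base-uncased")

print(card.data.to_dict())  # the parsed YAML metadata block
print(card.text[:500])      # the markdown body of the card
```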
### Source Data

The source data consists of the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

#### Data Collection and Processing
The data is downloaded using a cron job that runs daily.
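One plausible shape for such a job (hedged, since the maintainers' actual pipeline is not published here) is to enumerate models via the Hub API and pull each README.md:

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# Fetch a small sample; a real daily job would presumably iterate over all public models.
for model in api.list_models(limit=5):
    try:
        path = hf_hub_download(repo_id=model.id, filename="README.md")
        print(model.id, "->", path)
    except Exception:
        pass  # not every repository includes a model card
```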
#### Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. They are a broad cross-section of the community, ranging from large companies to individual researchers. We do not gather any information in this repository about who created a given model card, although that information can be obtained from the Hugging Face Hub API.

### Annotations

There are no additional annotations in this dataset beyond the model card content.
#### Annotation process

N/A

#### Who are the annotators?

N/A
#### Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some do. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by giving examples of bias in either the training data or the responses produced by the model. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed before further recommendations can be made.
## Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

## Dataset Card Authors

## Dataset Card Contact