---
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- allenai/dolma3_mix-6T-1025
---

# <span style="color: #7FFF7F;">Olmo-3-1025-7B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`dbb852b54`](https://github.com/ggerganov/llama.cpp/commit/dbb852b549adf29609ec53b518f7922a982f14b9).

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
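
To make the idea concrete, here is a minimal Python sketch of how such per-tensor overrides can be passed to `llama-quantize`. This is my own illustration, not the linked script: the tensor patterns and quant types are hypothetical examples, and the `--tensor-type pattern=type` syntax assumes a recent llama.cpp build that supports this flag.

```python
# Illustrative sketch: "bump" selected tensors to higher precision while
# quantizing the rest of the model at a lower level. The tensor patterns
# and types below are hypothetical examples, not the exact configuration
# used for these GGUF files.
import subprocess

bumped_tensors = {
    "attn_v": "q8_0",      # keep attention value projections at 8-bit
    "token_embd": "q8_0",  # keep token embeddings at 8-bit
}

cmd = ["llama-quantize", "--imatrix", "imatrix.dat"]
for pattern, qtype in bumped_tensors.items():
    cmd += ["--tensor-type", f"{pattern}={qtype}"]
cmd += ["model-f16.gguf", "model-q4_k_m.gguf", "Q4_K_M"]

subprocess.run(cmd, check=True)
```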

### **I'd love your feedback—have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
    Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->

## Model Details
<img alt="OLMo Logo" src="https://cdn-uploads.huggingface.co/production/uploads/65316953791d5a2611426c20/nC44-uxMD6J6H3OHxRtVU.png" width="242px" style="margin-left:auto; margin-right:auto; display:block">

# Model Card for Olmo 3 7B

We introduce Olmo 3, a new family of 7B and 32B models. This suite includes Base, Instruct, and Think variants. The Base models were trained using a staged training approach.

Olmo is a series of **O**pen **l**anguage **mo**dels designed to enable the science of language models.
These models are trained on the Dolma 3 dataset. We are releasing all code, checkpoints, and associated training details.

| Size | Training Tokens | Layers | Hidden Size | Q Heads | KV Heads | Context Length |
|--------|-----------------|--------|-------------|---------|----------|----------------|
| [OLMo 3 7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | 5.93 Trillion | 32 | 4096 | 32 | 32 | 65,536 |
| [OLMo 3 32B](https://huggingface.co/allenai/Olmo-3-1125-32B) | 5.50 Trillion | 64 | 5120 | 40 | 8 | 65,536 |

The core models released in this batch include the following:

| **Stage** | **Olmo 3 7B Think** | **Olmo 3 32B Think** | **Olmo 3 7B Instruct** |
|--------------------------|-----------------------|------------------------|---------------------------|
| **Base Model** | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | [Olmo-3-32B](https://huggingface.co/allenai/Olmo-3-1125-32B) | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) |
| **SFT** | [Olmo-3-7B-Think-SFT](https://huggingface.co/allenai/Olmo-3-7B-Think-SFT) | [Olmo-3-32B-Think-SFT](https://huggingface.co/allenai/Olmo-3-32B-Think-SFT) | [Olmo-3-7B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3-7B-Instruct-SFT) |
| **DPO** | [Olmo-3-7B-Think-DPO](https://huggingface.co/allenai/Olmo-3-7B-Think-DPO) | [Olmo-3-32B-Think-DPO](https://huggingface.co/allenai/Olmo-3-32B-Think-DPO) | [Olmo-3-7B-Instruct-DPO](https://huggingface.co/allenai/Olmo-3-7B-Instruct-DPO) |
| **Final Models (RLVR)** | [Olmo-3-7B-Think](https://huggingface.co/allenai/Olmo-3-7B-Think) | [Olmo-3-32B-Think](https://huggingface.co/allenai/Olmo-3-32B-Think) | [Olmo-3-7B-Instruct](https://huggingface.co/allenai/Olmo-3-7B-Instruct) |

## Installation

Olmo 3 is supported in transformers v4.57.0 or higher:
```bash
pip install 'transformers>=4.57.0'
```

## Inference

You can use OLMo with the standard HuggingFace transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1025-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-1025-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# Optional: move the inputs and model to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=0, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
# >> 'Language modeling is a key component of any text-based application, but its effectiveness...'
```

For faster performance, you can quantize the model using the following method:
```python
import torch

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1025-7B",
    torch_dtype=torch.float16,
    load_in_8bit=True)  # requires bitsandbytes
```
The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:
```python
input_ids = inputs.input_ids.to('cuda')
```
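
Putting these pieces together, here is a minimal end-to-end sketch of my own (assuming a CUDA device and `bitsandbytes` are available; the prompt and generation settings are illustrative):

```python
# Hedged sketch: run the 8-bit quantized model with inputs passed to CUDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-1025-7B")
olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1025-7B",
    torch_dtype=torch.float16,
    load_in_8bit=True)

inputs = tokenizer(["Language modeling is "], return_tensors="pt")
input_ids = inputs.input_ids.to("cuda")  # pass inputs directly to CUDA
attention_mask = inputs.attention_mask.to("cuda")

response = olmo.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=50)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```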

We have released checkpoints for these models. For pretraining, the naming convention is `stage1-stepXXX`. The conventions for midtraining and long context are `stage2-stepXXX` and `stage3-stepXXX`, respectively.

To load a specific model revision with HuggingFace, simply add the argument `revision`:
```python
olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1025-7B", revision="stage1-step10000")
```

Or, you can access all the revisions for the models via the following code snippet:
```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/Olmo-3-1025-7B")
branches = [b.name for b in out.branches]
```
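
For example (a small sketch of my own, building on the snippet above), you can use the `stageN-stepXXX` naming convention to pick out the checkpoints from a particular stage:

```python
# List only mid-training (stage 2) checkpoint branches.
stage2_branches = [b for b in branches if b.startswith("stage2-")]
print(stage2_branches)
```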

### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or from many intermediate checkpoints. Two recipes for tuning are available: fine-tuning with the OLMo-core repository (shown below), and further fine-tuning with [open-instruct](https://github.com/allenai/open-instruct).
1. Fine-tune with the OLMo-core repository:
```bash
torchrun --nproc-per-node=8 ./src/scripts/official/OLMo3/OLMo-3-1025-7B-pretrain-1.py run01
```
You can override most configuration options from the command line. For example, to override the learning rate you could launch the script like this:

```bash
torchrun --nproc-per-node=8 ./src/scripts/official/OLMo3/OLMo-3-1025-7B-pretrain-1.py run01 --train_module.optim.lr=3e-4
```
For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo-core).

### Model Description

- **Developed by:** Allen Institute for AI (Ai2)
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `[email protected]`. Press: `[email protected]`
- **Date cutoff:** Dec 2024

### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo-core
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- **W&B Report:** https://wandb.ai/ai2-llm/Olmo-3-1025-7B/reports/Olmo-3-7B-October-2025--VmlldzoxNDcwOTM0NA
- **Paper:** https://allenai.org/papers/olmo3
<!-- - **Technical blog post:** (URL) -->

## Evaluation
Core model results for Olmo 3 7B and comparable open-weight models are found below.

| Model | Olmo 3-Eval Math | BigCodeBench | HumanEval | DeepSeek LeetCode | DS 1000 | MBPP | MultiPL HumanEval | MultiPL MBPP | Olmo 3-Eval Code | ARC MC | MMLU STEM | MedMCQA MC | MedQA MC | SciQ MC | Olmo 3-Eval MC_STEM | MMLU Humanities | MMLU Social Sci. | MMLU Other | CSQA MC | PIQA MC | SocialIQA MC | CoQA Gen2MC MC | DROP Gen2MC MC | Jeopardy Gen2MC MC | NaturalQs Gen2MC MC | SQuAD Gen2MC MC | Olmo 3-Eval MC_Non-STEM | HellaSwag RC | Winogrande RC | Lambada | Basic Skills | DROP | Jeopardy | NaturalQs | SQuAD | CoQA | Olmo 3-Eval GenQA | BBH | MMLU Pro MC | Deepmind Math | LBPP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Open-weight Models** | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| Marin-8B | 39.6 | 21.5 | 31.6 | 0.5 | 16.5 | 36.5 | 15.6 | 27.6 | 21.4 | 89.2 | 58.1 | 52.7 | 47.3 | 93.2 | 68.1 | 71.4 | 77.4 | 68.3 | 75.3 | 85.7 | 79.8 | 86.2 | 63.7 | 90.8 | 71.5 | 96.5 | 78.8 | 84.0 | 88.6 | 73.9 | 85.6 | 73.0 | 72.7 | 42.6 | 93.4 | 69.5 | 75.9 | 55.6 | 38.8 | 20.2 | 5.8 |
| Apertus-8B | 29.2 | 20.9 | 21.6 | 0.6 | 11.8 | 33.5 | 15.5 | 29.2 | 19.0 | 87.9 | 52.4 | 51.7 | 47.6 | 91.9 | 66.3 | 67.8 | 74.7 | 66.1 | 72.1 | 80.5 | 76.3 | 82.8 | 47.5 | 90.3 | 66.7 | 91.3 | 74.2 | 81.0 | 85.8 | 70.9 | 83.8 | 37.1 | 70.1 | 35.0 | 89.6 | 67.4 | 69.0 | 48.1 | 33.9 | 17.1 | 7.1 |
| OLMo 2-7B | 41.7 | 8.8 | 16.3 | 0.2 | 10.1 | 21.2 | 4.2 | 12.2 | 10.4 | 85.7 | 53.2 | 49.2 | 43.8 | 90.9 | 64.6 | 67.9 | 73.1 | 65.2 | 72.0 | 80.1 | 77.5 | 85.0 | 55.6 | 89.5 | 66.3 | 95.3 | 75.2 | 82.2 | 87.4 | 70.5 | 82.2 | 61.5 | 70.8 | 37.4 | 91.5 | 68.3 | 72.4 | 49.6 | 33.1 | 16.3 | 3.1 |
| Qwen3-8B | 67.2 | 42.5 | 71.7 | 8.3 | 33.1 | 66.2 | 52.3 | 48.4 | 46.1 | 95.4 | 76.7 | 63.5 | 62.1 | 96.1 | 78.8 | 78.6 | 84.8 | 76.8 | 84.1 | 89.9 | 83.3 | 93.7 | 78.3 | 92.3 | 74.1 | 97.5 | 84.8 | 80.5 | 86.4 | 73.0 | 93.5 | 57.2 | 65.1 | 33.8 | 89.2 | 61.6 | 71.1 | 76.5 | 50.3 | 47.7 | 25.7 |
| Nemotron MiniD 8B | 49.8 | 43.2 | 71.7 | 6.8 | 30.3 | 62.3 | 40.0 | 47.5 | 43.1 | 94.1 | 71.1 | 54.5 | 53.5 | 94.3 | 73.5 | 78.0 | 82.2 | 73.8 | 74.4 | 86.0 | 78.7 | 92.2 | 70.0 | 90.7 | 71.1 | 97.4 | 81.3 | 80.2 | 86.2 | 67.9 | 91.4 | 71.4 | 64.9 | 31.2 | 92.3 | 60.4 | 71.8 | 77.0 | 50.2 | 31.4 | 31.7 |
| Gemma-2-9B | 48.8 | 30.9 | 40.0 | 1.9 | 28.4 | 49.1 | 27.9 | 38.2 | 30.2 | 92.7 | 62.8 | 58.9 | 55.4 | 94.4 | 72.8 | 74.5 | 82.9 | 74.2 | 75.3 | 85.7 | 80.3 | 92.7 | 65.8 | 92.8 | 72.5 | 97.3 | 81.3 | 81.8 | 88.8 | 76.3 | 89.3 | 68.2 | 75.1 | 40.4 | 88.8 | 71.5 | 75.6 | 68.8 | 44.7 | 23.0 | 12.4 |
| Qwen-2.5-7B | 60.7 | 39.7 | 66.1 | 5.1 | 35.2 | 55.4 | 40.3 | 45.4 | 41.0 | 93.4 | 67.6 | 60.3 | 56.6 | 95.4 | 74.7 | 76.2 | 83.0 | 74.4 | 85.0 | 88.5 | 82.9 | 93.5 | 69.1 | 92.1 | 70.5 | 96.4 | 82.9 | 81.0 | 86.0 | 70.3 | 91.4 | 56.7 | 63.0 | 31.2 | 87.0 | 40.5 | 67.5 | 54.7 | 48.1 | 32.8 | 22.1 |
| Llama-3.1-8B | 36.9 | 30.7 | 40.4 | 0.1 | 22.2 | 12.1 | 14.5 | 28.3 | 21.2 | 86.4 | 55.7 | 56.5 | 53.7 | 92.7 | 69.0 | 70.1 | 75.5 | 69.1 | 72.9 | 78.3 | 77.0 | 89.9 | 53.3 | 88.9 | 68.0 | 94.4 | 76.1 | 81.5 | 87.3 | 75.5 | 88.0 | 59.5 | 70.9 | 36.7 | 89.2 | 69.0 | 73.1 | 63.0 | 37.4 | 24.1 | 9.1 |
| Granite-3.3-8B | 41.5 | 0.4 | 0.0 | 0.0 | 22.6 | 48.5 | 22.3 | 32.3 | 18.0 | 86.2 | 55.6 | 49.6 | 43.0 | 90.8 | 65.0 | 67.6 | 71.8 | 64.5 | 82.3 | 81.5 | 83.1 | 87.6 | 55.0 | 88.4 | 69.2 | 94.5 | 76.9 | 83.7 | 89.4 | 76.0 | 88.7 | 38.4 | 69.7 | 37.0 | 89.6 | 37.8 | 67.8 | 61.5 | 33.9 | 32.2 | 18.5 |
| MiMo-7B | 54.3 | 38.3 | 57.0 | 1.2 | 28.1 | 48.3 | 34.5 | 42.5 | 35.7 | 91.7 | 63.5 | 56.2 | 53.0 | 93.5 | 71.6 | 73.6 | 80.8 | 72.7 | 76.1 | 87.2 | 80.7 | 91.4 | 64.1 | 89.5 | 72.2 | 96.7 | 80.5 | 80.6 | 86.5 | 73.1 | 89.7 | 69.3 | 65.6 | 33.1 | 90.3 | 54.4 | 71.4 | 75.1 | 44.3 | 25.4 | 21.5 |
| **Olmo 3 7B** | 54.7 | 34.1 | 49.1 | 1.4 | 20.2 | 43.6 | 28.7 | 38.2 | 30.7 | 89.2 | 59.7 | 48.3 | 41.8 | 92.8 | 66.4 | 68.9 | 75.0 | 66.9 | 75.3 | 80.2 | 80.3 | 92.5 | 67.3 | 86.9 | 69.4 | 96.9 | 78.2 | 77.7 | 85.7 | 68.9 | 89.5 | 71.5 | 60.4 | 32.6 | 93.5 | 72.8 | 72.5 | 63.5 | 37.3 | 23.7 | 17.1 |

## Training Details

#### Stage 1: Initial Pretraining
- Dataset: [dolma3_6T-mix-1025](https://huggingface.co/datasets/allenai/dolma3_mix-6T-1025)
- 5.93T tokens
- Coverage: 97.53%+ of total pretraining budget

#### Stage 2: Mid-training
- Dataset: [dolma3-dolmino-mix-1025](https://huggingface.co/datasets/allenai/dolma3_dolmino_mix-100B-1025)
- 100B tokens
- Mix composition: 20% code, 28% web pages, 19% math, 14% QA, 8% thinking, 6% instruction, and 5% PDFs

#### Stage 3: Long Context
- Dataset: [dolma3-longmino-mix-1025](https://huggingface.co/datasets/allenai/dolma3_longmino_mix-50B-1025)
- 50B tokens
- Mix composition: 66% midtraining data, 34% PDFs

#### Model Merging
- 7B Model: No merging
- 32B Model: Two versions were trained on the 100B mix and merged before starting the long-context run. The final checkpoint is a merge of the four final checkpoints.

## Bias, Risks, and Limitations
Like any base or fine-tuned language model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.

## License
This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with [Ai2's Responsible Use Guidelines](https://allenai.org/responsible-use).

## Citation
A technical manuscript is forthcoming! Find the paper at: https://allenai.org/papers/olmo3

## Model Card Contact
For errors in this model card, contact `[email protected]`.

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (the repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ ~30s load time (slow inference, but **no API costs**). No token limits, since the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊