richardyoung committed · Commit 081e89c (verified) · Parent: 5409ecc

Add comprehensive model card with documentation

Files changed (1): README.md added (+574 −0)
---
license: apache-2.0
base_model: ByteDance-Seed/BFS-Prover-V2-32B
language:
- en
library_name: gguf
tags:
- gguf
- quantized
- llama-cpp
- ollama
- qwen2.5
- formal-verification
- theorem-proving
- lean4
- mathematics
- proof-generation
- 32b
- 131k-context
pipeline_tag: text-generation
widget:
- text: "theorem add_comm (a b : Nat) : a + b = b + a :::"
  example_title: "Lean4 Proof State"
- text: "∀ n : Nat, n + 0 = n :::"
  example_title: "Natural Number Property"
model-index:
- name: BFS-Prover-V2-32B-GGUF
  results: []
---

# BFS-Prover-V2-32B (GGUF Quantized)

<div align="center">

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md.svg)](https://huggingface.co/richardyoung/bfs-prover-v2-32b)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![GGUF](https://img.shields.io/badge/Format-GGUF-green)](https://github.com/ggerganov/llama.cpp)
[![Qwen2.5](https://img.shields.io/badge/Architecture-Qwen2.5--32B-purple)](https://huggingface.co/Qwen)

**Formal Mathematical Proof Generation · Lean4 Theorem Proving · 131K Context**

</div>

## Model Overview

**BFS-Prover-V2-32B** is a specialized large language model fine-tuned for **formal mathematical proof generation** and **theorem proving** in Lean4. The model generates formal proofs, completes partial proof states, and assists with mathematical formalization tasks.

This repository contains **GGUF-quantized versions** optimized for efficient inference with **llama.cpp** and **Ollama**.

### Key Features

- 🔬 **Formal Verification**: Generates valid Lean4 proofs
- 🧮 **Mathematical Reasoning**: Strong understanding of mathematical concepts
- 📚 **131K Context**: Handles long proof sequences and large mathematical contexts
- ⚡ **GGUF Format**: Optimized for fast inference with llama.cpp
- 🔧 **Multiple Quantizations**: Choose the right quality/memory balance for your hardware
- 🎯 **Specialized Prompting**: Custom Lean4 proof-state format

---

## Table of Contents

- [Model Details](#model-details)
- [Quantization Variants](#quantization-variants)
- [Usage](#usage)
  - [Ollama](#ollama)
  - [llama.cpp](#llamacpp)
  - [Python](#python)
- [Prompt Format](#prompt-format)
- [Use Cases](#use-cases)
- [Limitations](#limitations)
- [Citation](#citation)
- [License](#license)

---

## Model Details

### Architecture

- **Base Model**: [ByteDance-Seed/BFS-Prover-V2-32B](https://huggingface.co/ByteDance-Seed/BFS-Prover-V2-32B)
- **Architecture**: Qwen2.5-32B
- **Parameters**: 32 billion
- **Context Length**: 131,072 tokens (131K)
- **Format**: GGUF (optimized for llama.cpp)
- **Quantization**: Multiple variants (F16, Q5_K_M, Q4_K_M)
- **License**: Apache 2.0

### Specialization

This model is specifically fine-tuned for:

- **Lean4 Theorem Proving**: Generate formal proofs in Lean4
- **Proof Completion**: Complete partial proof states
- **Mathematical Formalization**: Translate informal mathematics into formal statements
- **Tactic Generation**: Suggest proof tactics for given goals
- **Proof Search**: Explore proof spaces using BFS (Breadth-First Search)

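The BFS-style search in the last bullet can be sketched in a few lines. This is a generic breadth-first search over proof states, not ByteDance's actual implementation: the `expand` callback is a stand-in for "ask the model for candidate tactics and apply them in Lean4", and the toy example at the bottom uses plain integers so the code runs anywhere.

```python
from collections import deque

def bfs_proof_search(initial_state, expand, is_solved, max_nodes=1000):
    """Breadth-first search over proof states.

    `expand(state)` returns (tactic, next_state) pairs; in a real pipeline it
    would query the prover model and the Lean4 elaborator, but any function
    with that shape works here.
    """
    queue = deque([(initial_state, [])])  # (state, tactic path so far)
    seen = {initial_state}
    while queue and len(seen) <= max_nodes:
        state, path = queue.popleft()
        if is_solved(state):
            return path  # shortest tactic sequence found
        for tactic, nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [tactic]))
    return None  # no proof within the node budget

# Toy example: reach 0 from 5 by decrementing or halving
toy_expand = lambda n: [("dec", n - 1), ("halve", n // 2)] if n > 0 else []
print(bfs_proof_search(5, toy_expand, lambda n: n == 0))  # → ['halve', 'dec', 'dec']
```

Because the frontier is explored level by level, the first proof found uses the fewest tactic applications, which is why BFS is a natural fit for tactic-level proof search.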
### Training

- **Developed by**: ByteDance-Seed Team
- **Quantized by**: richardyoung
- **Base Architecture**: Qwen2.5-32B
- **Fine-tuning**: Specialized on formal mathematics and Lean4 proofs
- **Training Data**: Formal mathematics, Lean4 libraries, and proof corpora

---

## Quantization Variants

Choose the variant that best fits your hardware and performance requirements:

| Variant | Size | Memory Required | Speed | Quality | Use Case |
|---------|------|-----------------|-------|---------|----------|
| **F16** | ~61 GB | 64+ GB RAM | Slowest | Highest | Research, maximum precision |
| **Q5_K_M** ⭐ | ~22 GB | 24+ GB RAM | Balanced | Excellent | **Recommended default** |
| **Q4_K_M** | ~18 GB | 20+ GB RAM | Fastest | Very Good | Consumer GPUs, lower memory |

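The memory column above reduces to a simple selection rule. A minimal sketch (the thresholds mirror the table; the function name is illustrative, not part of any tooling):

```python
def pick_variant(ram_gb: float) -> str:
    """Map available RAM (GB) to the quantization tiers in the table above."""
    if ram_gb >= 64:
        return "F16"      # maximum precision
    if ram_gb >= 24:
        return "Q5_K_M"   # recommended default
    if ram_gb >= 20:
        return "Q4_K_M"   # lowest-memory option
    raise ValueError("at least 20 GB RAM is needed for the Q4_K_M variant")

print(pick_variant(32))  # → Q5_K_M
```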
### Recommended Configurations

**High-End Workstation (32+ GB RAM):**
```bash
# Use Q5_K_M for the best quality/performance balance
ollama run richardyoung/bfs-prover-v2-32b:Q5_K_M
```

**Consumer Hardware (20-32 GB RAM):**
```bash
# Use Q4_K_M for faster inference with lower memory
ollama run richardyoung/bfs-prover-v2-32b:Q4_K_M
```

**Research/Maximum Precision:**
```bash
# Use F16 for highest fidelity (requires 64+ GB RAM)
llama-cli -m BFS-Prover-V2-32B-F16.gguf
```

---

## Usage

### Ollama

#### Installation

```bash
# Pull the model (Q5_K_M recommended)
ollama pull richardyoung/bfs-prover-v2-32b:Q5_K_M

# Or create from a local Modelfile
cd /path/to/model
ollama create richardyoung/bfs-prover-v2-32b:Q5_K_M -f Modelfile.Q5_K_M
```

#### Basic Usage

```bash
# Start an interactive session
ollama run richardyoung/bfs-prover-v2-32b:Q5_K_M

# Single proof query (note the ::: suffix)
ollama run richardyoung/bfs-prover-v2-32b:Q5_K_M "theorem add_comm (a b : Nat) : a + b = b + a :::"
```

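Ollama also serves a local HTTP API (by default on port 11434), which is convenient for scripting proof queries. A minimal standard-library sketch; `generate` assumes a running `ollama serve` with the model already pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, proof_state: str) -> dict:
    """Assemble a non-streaming /api/generate request body."""
    return {"model": model, "prompt": proof_state, "stream": False}

def generate(model: str, proof_state: str) -> str:
    """POST a proof state to a locally running Ollama server."""
    data = json.dumps(build_payload(model, proof_state)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Note that the `:::` suffix is appended by the Modelfile template in the next section, so the raw proof state can be sent as-is.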
#### Modelfile

```dockerfile
FROM ./BFS-Prover-V2-32B-Q5_K_M.gguf

# Lean4-specific prompt template (appends ::: to user messages)
TEMPLATE """{{- range $i, $m := .Messages }}{{- if eq $m.Role "user" }}{{ $m.Content }}:::{{ end }}{{- if eq $m.Role "assistant" }}{{ $m.Content }}{{ end }}{{- end }}"""

# Optimized parameters for formal proof generation
PARAMETER temperature 0.2
PARAMETER top_p 0.95
PARAMETER num_ctx 131072

# System message
SYSTEM """You are a formal mathematics expert specializing in Lean4 theorem proving. Generate valid, well-structured proofs."""
```

### llama.cpp

#### Command Line

```bash
# Download model
wget https://huggingface.co/richardyoung/bfs-prover-v2-32b/resolve/main/BFS-Prover-V2-32B-Q5_K_M.gguf

# Run inference
./llama-cli \
  -m BFS-Prover-V2-32B-Q5_K_M.gguf \
  -p "theorem zero_add (n : Nat) : 0 + n = n :::" \
  -n 512 \
  -c 131072 \
  --temp 0.2 \
  --top-p 0.95
```

#### C++ API

```cpp
#include "llama.h"

// Load model (llama.cpp C API; function names track upstream and may change)
auto model = llama_load_model_from_file("BFS-Prover-V2-32B-Q5_K_M.gguf", params);

// Create a context with the full 131K window
llama_context_params ctx_params = llama_context_default_params();
ctx_params.n_ctx = 131072;
auto ctx = llama_new_context_with_model(model, ctx_params);

// Generate proof
std::string prompt = "theorem add_comm (a b : Nat) : a + b = b + a :::";
// ... inference code
```

### Python

#### Using llama-cpp-python

```python
from llama_cpp import Llama

# Load model
llm = Llama(
    model_path="./BFS-Prover-V2-32B-Q5_K_M.gguf",
    n_ctx=131072,
    n_threads=8,
    n_gpu_layers=35  # Adjust based on your GPU; -1 offloads all layers
)

# Generate proof
proof_state = "theorem mul_comm (a b : Nat) : a * b = b * a :::"

output = llm(
    proof_state,
    max_tokens=512,
    temperature=0.2,
    top_p=0.95,
    stop=["<|endoftext|>"]
)

print(output['choices'][0]['text'])
```

#### Batch Processing

```python
proof_states = [
    "theorem add_zero (n : Nat) : n + 0 = n :::",
    "theorem mul_one (n : Nat) : n * 1 = n :::",
    "theorem succ_pred (n : Nat) (h : n ≠ 0) : Nat.succ (Nat.pred n) = n :::"
]

for state in proof_states:
    proof = llm(state, max_tokens=512, temperature=0.2)
    print(f"State: {state}")
    print(f"Proof: {proof['choices'][0]['text']}\n")
```

---

## Prompt Format

### Lean4 Proof State Format

The model expects proof states in a specific format with the `:::` delimiter:

```
<proof_state>:::
```

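A small helper keeps the delimiter consistent across calls. A sketch; the function name is ours, not part of the model's API:

```python
def format_prompt(proof_state: str) -> str:
    """Append the ::: delimiter the model expects, exactly once."""
    state = proof_state.rstrip()
    return state if state.endswith(":::") else state + " :::"

print(format_prompt("theorem add_comm (a b : Nat) : a + b = b + a"))
# → theorem add_comm (a b : Nat) : a + b = b + a :::
```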
### Examples

**Basic Theorem:**
```lean4
theorem add_comm (a b : Nat) : a + b = b + a :::
```

**With Hypotheses:**
```lean4
theorem le_trans (a b c : Nat) (hab : a ≤ b) (hbc : b ≤ c) : a ≤ c :::
```

**Complex Goal:**
```lean4
∀ (n m : Nat), n + m = m + n :::
```

**Proof Tactic Completion:**
```lean4
theorem example_thm : 2 + 2 = 4 := by
  -- Goal: ⊢ 2 + 2 = 4
  :::
```

### Response Format

The model generates:
- **Proof tactics**: `rw [theorem_name]`, `intro`, `apply`, etc.
- **Complete proofs**: Full proof terms
- **Proof strategies**: High-level approach descriptions
- **Tactic suggestions**: Next steps in proof construction

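For the `add_comm` prompt above, responses of the first two kinds look like the following. These are hand-written illustrations of valid Lean4 proofs, not captured model output; both rely only on core Lean's `Nat.add_comm`:

```lean4
-- Tactic-style response: rewrite with commutativity, rfl closes the goal
theorem add_comm' (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]

-- Term-style response: a complete proof term
theorem add_comm'' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```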
---

## Use Cases

### 1. Interactive Theorem Proving

Assist mathematicians and researchers in constructing formal proofs:

```python
proof_state = """
theorem fermat_last_special : ∀ n : Nat, n > 2 → ¬∃ a b c : Nat,
  a ≠ 0 ∧ b ≠ 0 ∧ c ≠ 0 ∧ a^n + b^n = c^n :::
"""
guidance = llm(proof_state, max_tokens=1000)
```

### 2. Proof Automation

Automatically complete routine proofs:

```python
simple_proofs = [
    "theorem add_zero (n : Nat) : n + 0 = n :::",
    "theorem zero_add (n : Nat) : 0 + n = n :::",
]

for proof in simple_proofs:
    completed = llm(proof, temperature=0.1)  # Lower temperature for determinism
```

### 3. Mathematical Education

Help students learn formal proof techniques:

```python
student_attempt = """
theorem distributivity (a b c : Nat) : a * (b + c) = a * b + a * c := by
  -- Student got stuck here
  :::
"""

hint = llm(student_attempt, max_tokens=256, temperature=0.3)
print(f"Hint: {hint}")
```

### 4. Formalization Projects

Assist in formalizing mathematical theories:

```python
informal = "Prove that the square root of 2 is irrational"
formal_state = "theorem sqrt_two_irrational : Irrational (Real.sqrt 2) :::"
proof_sketch = llm(formal_state, max_tokens=2000)
```

---

## System Requirements

### Minimum Requirements

| Variant | RAM | GPU VRAM | Storage | CPU |
|---------|-----|----------|---------|-----|
| Q4_K_M | 20 GB | Optional | 20 GB | 4+ cores |
| Q5_K_M | 24 GB | Optional | 25 GB | 4+ cores |
| F16 | 64 GB | Optional | 65 GB | 8+ cores |

### Recommended Hardware

**For Q5_K_M (Recommended):**
- **CPU**: 8+ cores (Intel i7/i9, AMD Ryzen 7/9, Apple M1/M2/M3)
- **RAM**: 32 GB
- **GPU**: RTX 3090/4090 (24 GB VRAM) for GPU acceleration
- **Storage**: 50 GB SSD

**For GPU Acceleration:**
- NVIDIA RTX 30/40 series (24 GB+ VRAM)
- Apple Silicon M1 Max/Ultra, M2 Max/Ultra, M3 Max (Metal acceleration)
- AMD Radeon (ROCm support)

---

## Limitations

### Technical Limitations

- **Lean4 Specific**: Optimized for Lean4 syntax; may not generalize to other proof assistants (Coq, Isabelle)
- **Context Limits**: While the 131K context is large, very complex proofs may still exceed it
- **Quantization Effects**: Lower quantizations (Q4) may reduce proof quality for complex theorems
- **Inference Speed**: A 32B model requires significant compute; expect slower inference than with smaller models

### Mathematical Limitations

- **Proof Correctness**: Generated proofs must be verified by the Lean4 compiler
- **Novel Mathematics**: May struggle with cutting-edge research not in the training data
- **Proof Strategies**: Sometimes generates valid but non-optimal proof paths
- **Notation**: Assumes standard Lean4 notation and libraries

### Practical Considerations

- **Not a Proof Checker**: Always verify generated proofs with Lean4
- **Requires Domain Knowledge**: Best used by those familiar with formal mathematics
- **Training Cutoff**: Knowledge limited to the training data cutoff date
- **Computational Cost**: Real-time interactive proving may require GPU acceleration

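Because the model is not a proof checker, a round trip through the Lean4 toolchain is the natural last step of any generation pipeline. A minimal sketch, assuming a `lean` binary on `PATH` that type-checks a standalone file (inside a Lake project you would run `lake env lean` instead):

```python
import pathlib
import subprocess
import tempfile

def build_check_cmd(path: str) -> list:
    # Assumption: a bare `lean <file>` invocation type-checks the file.
    return ["lean", path]

def check_proof(lean_source: str) -> bool:
    """Return True iff the Lean4 compiler accepts the generated proof."""
    with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
        f.write(lean_source)
        path = f.name
    try:
        result = subprocess.run(build_check_cmd(path), capture_output=True)
        return result.returncode == 0
    finally:
        pathlib.Path(path).unlink()  # clean up the temporary file
```

Gating every generated proof on `check_proof` turns the model into a candidate generator and keeps Lean4 as the single source of truth.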
---

## Performance

### Benchmarks

*Note: Benchmarks are for the base BFS-Prover-V2-32B model. Quantized versions may show minor variations.*

- **MiniF2F**: Strong performance on undergraduate-level math problems
- **ProofNet**: Effective on formal proof generation tasks
- **Lean4 Mathlib**: Can leverage and extend Mathlib proofs

### Speed Estimates

| Hardware | Variant | Tokens/sec | Use Case |
|----------|---------|------------|----------|
| Apple M2 Ultra (192 GB) | Q5_K_M | ~15-20 | Interactive proving |
| RTX 4090 (24 GB) | Q5_K_M | ~25-35 | Fast proof generation |
| CPU only (32 GB RAM) | Q4_K_M | ~5-10 | Batch processing |

---

## Ethical Considerations

### Intended Use

- ✅ Mathematical research and formalization
- ✅ Educational tools for learning formal methods
- ✅ Proof automation for routine theorems
- ✅ Assistance in large formalization projects
- ✅ Development of formal verification tools

### Out-of-Scope Use

- ❌ Critical system verification without human review
- ❌ Automated proof generation without verification
- ❌ Replacing formal proof checkers
- ❌ Use outside the formal mathematics domain

### Responsible Use

- **Always verify proofs** with the Lean4 type checker
- **Human oversight** is required for important theorems
- **Understand the limitations** of AI-generated proofs
- **Credit appropriately** when using in research

---

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{bfs-prover-v2-32b,
  title={BFS-Prover-V2-32B: Formal Mathematical Proof Generation},
  author={ByteDance-Seed Team},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/ByteDance-Seed/BFS-Prover-V2-32B}}
}

@misc{bfs-prover-v2-gguf,
  title={BFS-Prover-V2-32B GGUF Quantization},
  author={richardyoung},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/richardyoung/bfs-prover-v2-32b}}
}

@article{qwen2.5,
  title={Qwen2.5: A Party of Foundation Models},
  author={Qwen Team},
  year={2024}
}
```

---

## License

This model is released under the **Apache 2.0 License**, inherited from the base model.

**Permissions:**
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use

**Conditions:**
- Include the license and copyright notice
- State changes made to the model
- Document modifications

**Full License:** [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)

---

## Acknowledgements

### Model Development

- **Original Model**: [ByteDance-Seed Team](https://huggingface.co/ByteDance-Seed) - BFS-Prover-V2-32B
- **Base Architecture**: [Qwen Team](https://qwenlm.github.io/) - Qwen2.5-32B
- **Quantization**: richardyoung - GGUF conversions and optimization

### Tools and Frameworks

- **[llama.cpp](https://github.com/ggerganov/llama.cpp)** - GGUF format and inference engine
- **[Ollama](https://ollama.ai/)** - Easy model deployment and management
- **[Lean4](https://lean-lang.org/)** - Formal verification language
- **[Mathlib](https://github.com/leanprover-community/mathlib4)** - Lean4 mathematics library

---

## Additional Resources

### Documentation

- 📖 [Original Model Card](https://huggingface.co/ByteDance-Seed/BFS-Prover-V2-32B)
- 📚 [Lean4 Documentation](https://lean-lang.org/documentation/)
- 🔧 [llama.cpp Guide](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)
- 🐋 [Ollama Documentation](https://github.com/ollama/ollama/tree/main/docs)

### Community

- [Lean4 Zulip Chat](https://leanprover.zulipchat.com/)
- [llama.cpp Discussions](https://github.com/ggerganov/llama.cpp/discussions)
- [Report Issues](https://huggingface.co/richardyoung/bfs-prover-v2-32b/discussions)

### Related Models

- [ByteDance BFS-Prover-V2-32B (Original)](https://huggingface.co/ByteDance-Seed/BFS-Prover-V2-32B)
- [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- [Other Formal Verification Models](https://huggingface.co/models?other=formal-verification)

---

## Model Card Authors

- **Quantization & Documentation**: richardyoung
- **Base Model**: ByteDance-Seed Team
- **Last Updated**: October 2025

---

## Changelog

### Version 1.0 (October 2025)
- Initial GGUF quantization release
- Three variants: F16, Q5_K_M, Q4_K_M
- Ollama Modelfile configurations
- Comprehensive documentation
- Lean4-specific prompt format

---

<div align="center">

**Questions or Issues?** [Open a discussion](https://huggingface.co/richardyoung/bfs-prover-v2-32b/discussions)

*Quantized with llama.cpp · Optimized for Formal Mathematics*

</div>