nielsr (HF Staff) committed
Commit 6865911 · verified · 1 Parent(s): 18ac3ed

Improve model card: Add pipeline tag, library name, and paper link


This PR enhances the model card for the F2LLM-0.6B model by:
- Adding the `pipeline_tag: feature-extraction` to the YAML metadata, which improves the model's discoverability for embedding tasks on the Hugging Face Hub.
- Adding the `library_name: transformers` to the YAML metadata. This is justified by the existing `Usage` section's code snippet, which uses `transformers.AutoModel` and `AutoTokenizer` (see the sketch after this list), and will enable the automatic "How to use" widget on the model page.
- Setting the main title of the model card to reflect the paper's title and adding a direct link to the paper, providing a clearer overview and easier access to the research.
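
The `Usage` snippet itself is not reproduced in the truncated diff below, but per the commit message it loads the model with `transformers.AutoModel` and `AutoTokenizer`. Here is a minimal sketch of what such a feature-extraction call typically looks like; the repo id, the right-padding assumption, the last-token pooling, and the printed embedding size are illustrative guesses (inferred from the Qwen3-0.6B base model), not taken from the actual model card:

```python
# Hedged sketch of a transformers-based embedding call for F2LLM-0.6B.
# Assumptions: repo id "codefuse-ai/F2LLM-0.6B", right-side padding, and
# last-token pooling -- none of these are confirmed by this commit.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "codefuse-ai/F2LLM-0.6B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

texts = [
    "What is feature extraction?",
    "F2LLMs are foundation models finetuned into text embedders.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Pool the hidden state of each sequence's last non-padding token,
# a common convention for decoder-style embedding models.
last = batch["attention_mask"].sum(dim=1) - 1
embeddings = hidden[torch.arange(hidden.size(0)), last]
embeddings = F.normalize(embeddings, dim=-1)
print(embeddings.shape)  # torch.Size([2, 1024]) for a Qwen3-0.6B backbone
```

Last-token pooling is shown only because it is the most common choice for decoder-only embedders; the real snippet in the model card may pool differently.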

Files changed (1)
README.md +9 -3
README.md CHANGED

@@ -1,13 +1,19 @@
 ---
-license: apache-2.0
+base_model:
+- Qwen/Qwen3-0.6B
 datasets:
 - codefuse-ai/F2LLM
 language:
 - en
-base_model:
-- Qwen/Qwen3-0.6B
+license: apache-2.0
+pipeline_tag: feature-extraction
+library_name: transformers
 ---
 
+# F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data
+
+This repository contains the F2LLM-0.6B model presented in the paper [F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data](https://huggingface.co/papers/2510.02294).
+
 F2LLMs (Foundation to Feature Large Language Models) are foundation models directly finetuned on 6 million high-quality query-document pairs (available in [codefuse-ai/F2LLM](https://huggingface.co/datasets/codefuse-ai/F2LLM)) covering a diverse range of retrieval, classification, and clustering data, curated solely from open-source datasets without any synthetic data. These models are trained with homogeneous macro batches in a single stage, without sophisticated multi-stage pipelines.
 
 ## Usage