mjaggi committed
Commit abbfaae · verified · 1 Parent(s): ffb69ff

initial model card

Files changed (1):
1. README.md +162 -10

README.md CHANGED
@@ -1,17 +1,169 @@
  ---
  license: apache-2.0
  ---

- The Apertus Series of open Language Models represents a new standard in truly open-source language model development.
- All
- - Model weights
- - Training data (web-compliant)
- - Training code
- - Technical documentation
- are publicly available, for full transparency and reproducibility.

- The series includes Base, Instruct, and quantized variants across two parameter sizes (8B and 70B), along with a specialized 70B reasoning model.
- Apertus is designed to support a wide range of research and application needs, while pushing the boundaries of openness in AI.

- ...
  ---
  license: apache-2.0
+ base_model:
+ - swiss-ai/Apertus-8B-2509
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - multilingual
+ - compliant
+ - swiss-ai
+ - apertus
+
+ extra_gated_prompt: "### Apertus LLM Acceptable Use Policy \n(1.0 | September 1, 2025)\n\"Agreement\" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. \n\nBy using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of Apertus LLM. \n\nThe training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual (Personal Data). You process Personal Data as independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model. "
+ extra_gated_fields:
+   Your Name: text
+   Country: country
+   Affiliation: text
+   geo: ip_location
+   By clicking Submit below I accept the terms of use: checkbox
+ extra_gated_button_content: Submit
  ---

+ # Apertus
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654baf61d625e083383dfd00/C4YOBzOFPjGg0gHRfuRpY.png)
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [How to use](#how-to-use)
+ 3. [Evaluation](#evaluation)
+ 4. [Training](#training)
+ 5. [Limitations](#limitations)
+ 6. [License](#license)
+
+ ## Model Summary
+
+ Apertus is a family of 70B- and 8B-parameter language models designed to push the boundaries of fully open, multilingual, and transparent models.
+ The models support over 1000 languages and long context, use only fully compliant and open training data, and achieve performance comparable to models trained behind closed doors.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654baf61d625e083383dfd00/gKDv_6dpIpvmgyquenbXt.png)
+
+ The model is a decoder-only transformer, pretrained on 15T tokens with a staged curriculum of web, code, and math data. It uses the new xIELU activation function and is trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.
+
+ ### Key features
+ - **Fully open model**: open weights + open data + full training details, including all data and training recipes
+ - **Massively multilingual**: 1811 natively supported languages
+ - **Compliant**: Apertus is trained while respecting opt-out consent of data owners (even retrospectively) and avoiding memorization of training data
+
+ For more details, refer to our [technical report](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_Tech_Report.pdf).
+
+ ## How to use
+
+ The modeling code for Apertus is available in transformers `v4.56.0`, so make sure to upgrade your transformers version. You can also load the model with the latest `vLLM`, which uses transformers as a backend.
+ ```bash
+ pip install -U transformers
+ ```
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "swiss-ai/Apertus-70B-Instruct-2509"
+ device = "cuda"  # for GPU usage or "cpu" for CPU usage
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+ ).to(device)
+
+ # prepare the model input
+ prompt = "Give me a brief explanation of gravity in simple terms."
+ messages_think = [
+     {"role": "user", "content": prompt}
+ ]
+
+ text = tokenizer.apply_chat_template(
+     messages_think,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # Generate the output
+ generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
+
+ # Get and decode the output
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
+ print(tokenizer.decode(output_ids, skip_special_tokens=True))
+ ```
+
+ >[!TIP]
+ > We recommend setting `temperature=0.8` and `top_p=0.9` in the sampling parameters.
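+
+ As a minimal sketch (reusing the `model`, `tokenizer`, and `model_inputs` objects from the snippet above), these values can be passed directly to `generate`:
+
+ ```python
+ # Sample with the recommended parameters; do_sample=True enables stochastic decoding.
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=1024,
+     do_sample=True,
+     temperature=0.8,
+     top_p=0.9,
+ )
+ print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
+ ```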
+
+ ### Long context processing
+
+ Apertus supports a context length of up to 65,536 tokens by default.
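+
+ As an illustrative check (the file name below is hypothetical), you can count tokens with the tokenizer before generation to make sure a long prompt fits into the 65,536-token window:
+
+ ```python
+ # Count tokens in a long document against the default 65,536-token context window.
+ with open("long_document.txt") as f:  # hypothetical input file, for illustration only
+     long_text = f.read()
+ n_tokens = len(tokenizer(long_text)["input_ids"])
+ print(f"{n_tokens} tokens (default context window: 65536)")
+ ```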
+
+ ### Agentic Usage
+
+ Apertus supports tool use.
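+
+ As a minimal sketch (assuming the chat template follows the standard transformers tool-calling convention; `get_current_weather` is a hypothetical tool defined only for illustration), tool definitions can be passed to `apply_chat_template`:
+
+ ```python
+ # A hypothetical tool: transformers converts the signature and docstring into a tool schema.
+ def get_current_weather(city: str) -> str:
+     """Get the current weather for a given city.
+
+     Args:
+         city: Name of the city to look up.
+     """
+     return "sunny, 22 degrees Celsius"
+
+ messages = [{"role": "user", "content": "What is the weather in Bern right now?"}]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tools=[get_current_weather],   # expose the tool to the model
+     add_generation_prompt=True,
+     tokenize=False,
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+ generated_ids = model.generate(**model_inputs, max_new_tokens=512)
+ print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
+ ```
+
+ If the model responds with a tool call, execute the tool and append the result to `messages` as a `"tool"` role message before generating again.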
+
+ ### vLLM and SGLang
+
+ You can use vLLM and SGLang to deploy the model behind an OpenAI-compatible API.
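+
+ As an illustrative sketch (assuming a local server started with `vllm serve swiss-ai/Apertus-70B-Instruct-2509` on the default port), any OpenAI-compatible client can then query the model:
+
+ ```python
+ # Query a locally deployed Apertus model through vLLM's OpenAI-compatible endpoint.
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused by the local server
+ response = client.chat.completions.create(
+     model="swiss-ai/Apertus-70B-Instruct-2509",
+     messages=[{"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}],
+     temperature=0.8,
+     top_p=0.9,
+ )
+ print(response.choices[0].message.content)
+ ```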
+
+ ## Evaluation
+
+ In this section, we report the evaluation results of the Apertus models.
+
+ ### Base Pre-Trained Model
+ - see [Apertus_Tech_Report.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_Tech_Report.pdf)
+
+ ### Instruction Model
+ - see [Apertus_Tech_Report.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_Tech_Report.pdf)
+
+ ## Training
+
+ ### Model
+
+ - **Architecture:** Transformer decoder
+ - **Pretraining tokens:** 15T
+ - **Precision:** bfloat16
+
+ ### Software & hardware
+
+ - **GPUs:** 4096 GH200
+ - **Training Framework:** [Megatron-LM](https://github.com/swiss-ai/Megatron-LM)
+ - ...
+
+ ### Open resources
+ All elements used in the training process are made openly available:
+ - **Training data reconstruction scripts:** [github.com/swiss-ai/pretrain-data](https://github.com/swiss-ai/pretrain-data)
+ - Intermediate training checkpoints are available on the different branches of this same repository
+
+ ## Limitations
+
+ Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
+
+ ## Legal Aspects
+
+ #### EU AI Act Transparency Documentation and Code of Practice
+ - [Apertus_EU_Public_Summary.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Public_Summary.pdf)
+ - [Apertus_EU_Code_of_Practice.pdf](https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_EU_Code_of_Practice.pdf)

+ #### Data Protection and Copyright Requests
+ For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or us directly.
+
+ #### Output Filter for PII
+ - Currently, no output filter is provided.
+ - Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests that have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.
 
+ ## Contact
+ To contact us, please send an email to

+ ## Citation
+ ```bibtex
+ @misc{swissai2025apertus,
+   title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
+   author={Apertus Team},
+   year={2025},
+   howpublished={\url{https://huggingface.co/swiss-ai/Apertus-70B-2509}}
+ }
+ ```