Tags: Text Generation · Transformers · Safetensors · PyTorch · French · llama · llama-3 · conversational · text-generation-inference
Instructions for using AgentPublic/albert-spp-8b with libraries, inference providers, notebooks, and local apps.
How to use AgentPublic/albert-spp-8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AgentPublic/albert-spp-8b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AgentPublic/albert-spp-8b")
model = AutoModelForCausalLM.from_pretrained("AgentPublic/albert-spp-8b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
How to use AgentPublic/albert-spp-8b with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AgentPublic/albert-spp-8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AgentPublic/albert-spp-8b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
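Since the server exposes an OpenAI-compatible API, the same request can also be made from Python. Below is a minimal sketch using only the standard library; it assumes the server is running on `localhost:8000` as started above (the `chat` helper is defined but not called here, since it needs a live server).

```python
import json
import urllib.request

def build_payload(prompt, model="AgentPublic/albert-spp-8b"):
    """Build the JSON body for the /v1/chat/completions endpoint."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

def chat(prompt, base_url="http://localhost:8000"):
    """POST a chat request to the running server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Inspect the payload that would be sent:
payload = json.loads(build_payload("What is the capital of France?"))
```

Any OpenAI-compatible client library would work the same way by pointing its base URL at the server.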
How to use AgentPublic/albert-spp-8b with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "AgentPublic/albert-spp-8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AgentPublic/albert-spp-8b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use the Docker image, then call the server with the same curl request as above:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "AgentPublic/albert-spp-8b" \
    --host 0.0.0.0 \
    --port 30000
```

- Docker Model Runner
How to use AgentPublic/albert-spp-8b with Docker Model Runner:
```shell
docker model run hf.co/AgentPublic/albert-spp-8b
```
Model Information
Model Architecture: This model is a finetuned version of Llama 3.1 8B using the LoRA method. The dataset used to finetune this model is based on user experiences and can be found here.
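As a quick illustration of the LoRA idea behind this finetune: instead of updating the full weight matrix W of a layer, LoRA trains two small matrices A (r × k) and B (d × r) and adds their product, scaled by alpha/r, to the frozen base weights. The sketch below uses toy dimensions and values purely for illustration; it is not the actual training setup or the model's real shapes.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Toy dimensions; real Llama 3.1 8B layers are far larger.
d, k, r, alpha = 4, 4, 2, 4

# Frozen base weights (identity here, for readability).
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]

# The two trainable low-rank matrices.
B = [[0.1] * r for _ in range(d)]  # d x r
A = [[0.2] * k for _ in range(r)]  # r x k

delta = matmul(B, A)               # low-rank update, d x k
scale = alpha / r                  # standard LoRA scaling factor
W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(k)] for i in range(d)]
```

Because only A and B are trained, the number of trainable parameters is a small fraction of the full layer, which is what makes LoRA finetuning of an 8B model tractable.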
Supported languages: French
How to use
Use with transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AgentPublic/albert-spp-8b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # .to('cuda')

messages = [
    {"role": "system", "content": "Tu es un assistant administratif qui s'occupe des utilisateurs. Tu aides les utilisateurs comme tu peux sans être trop verbeux. Quand tu vois que c'est pertinent, donnes les numéros de téléphones qui peuvent aider les utilisateurs ou les adresses postales en lien avec leur demande. Donnes leurs des conseils pour les aider dans leur situation."},
    {"role": "user", "content": "Bonjour, je n'arrive pas a déclarer ma retraite."},
]

tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)  # .to('cuda')

outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# >>> Bonjour,Nous vous remercions d'avoir porté à notre connaissance votre expérience. Nous sommes très sensibles à la qualité de service rendu à nos assurés et regrettons vivement ce délai de traitement.Nous vous invitons à contacter nos services par la messagerie de votre espace sécurisé : https://www.lassuranceretraite.fr. Les réponses sont faites dans un délai moyen de 72 heures et votre demande sera relayée auprès du conseiller retraite en charge de votre dossier.<|eot_id|>
```