---
title: DPO Demo
emoji: 📚
colorFrom: blue
colorTo: blue
sdk: gradio
sdk_version: 6.0.2
app_file: app.py
pinned: false
short_description: Testing DPO for finetuning models
---

A test / demo playground application for DPO preference tuning of different LLM models, running on Hugging Face Spaces: https://huggingface.co/spaces/CatoG/DPO_Demo

The app allows selecting an LLM, marking preferred vs. rejected responses, and tuning the model's behavior with LoRA and Direct Preference Optimization (DPO). The tuned model / policy adapters can be downloaded for further use.
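
The README does not state which training stack the Space uses, so the following is only a minimal sketch of the LoRA + DPO flow it describes, assuming the common TRL + PEFT libraries; the model name, dataset contents, and hyperparameters are illustrative placeholders, not the app's actual configuration.

```python
# Hedged sketch: preference-tune a small causal LM with LoRA adapters via TRL's DPOTrainer.
# All names and values below are assumptions for illustration, not taken from app.py.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder; the Space lets the user pick a model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference data: each example pairs a prompt with a chosen (preferred) and rejected response.
train_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO fine-tunes a model directly on preference pairs without a separate reward model."],
    "rejected": ["DPO is a kind of database."],
})

# LoRA keeps the base weights frozen and trains small adapter matrices.
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = DPOConfig(
    output_dir="dpo-lora-output",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    beta=0.1,  # strength of the KL-style regularization toward the reference policy
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
    peft_config=peft_config,
)
trainer.train()

# Saves the LoRA adapter weights, which is roughly what "download the tuned policy" would mean here.
trainer.save_model("dpo-lora-output")
```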

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference