Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models
Paper: https://arxiv.org/abs/2308.15812
Setup: https://github.com/Hritikbansal/sparse_feedback/tree/main#reward-modeling
Example Usage: https://github.com/Hritikbansal/sparse_feedback/blob/main/inference/reranking.py
Download the checkpoints, then set the following arguments when running the reranking script:
"reward_model_path": path to the downloaded reward model checkpoint
"alpaca_model_path": path to the alpaca-7b checkpoint