How to use PrimeQA/XTR-t5-base-40k with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, FlanXTR

tokenizer = AutoTokenizer.from_pretrained("PrimeQA/XTR-t5-base-40k")
model = FlanXTR.from_pretrained("PrimeQA/XTR-t5-base-40k")
```
Model description
This is a multi-vector retrieval model built on the efficient XTR architecture. It uses the t5-base encoder and is fine-tuned on the MS MARCO dataset with the XTR training objective.
Overview
Language model: t5-base (encoder)
Language: English
Task: Multi-vector retrieval
Data: MSMarco
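As a rough illustration of how multi-vector retrieval differs from single-vector retrieval, the sketch below shows late-interaction (MaxSim-style) scoring, the kind of token-level similarity used by ColBERT-family and XTR-family models: each query token vector is matched against its most similar document token vector, and the per-token maxima are summed. This is a simplified sketch with toy embeddings, not the model's actual scoring code; the function name and shapes are illustrative.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction relevance score: for each query token vector,
    take its maximum similarity over all document token vectors,
    then sum over query tokens."""
    # (num_query_tokens, num_doc_tokens) similarity matrix
    sim = query_vecs @ doc_vecs.T
    return float(sim.max(axis=1).sum())

# Toy example: a 2-token query and a 3-token document, 4-dim embeddings.
q = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
d = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(maxsim_score(q, d))  # 1.0 + 0.5 = 1.5
```

In practice the query and document token embeddings would come from the model's encoder output; XTR additionally trains the retriever so that the top token matches can be found efficiently without scoring every document token.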
Intended use
This model can be used for text retrieval (IR). You can find detailed instructions and examples on how to use it for text retrieval in the documentation here.