Tags: Text Classification · Transformers · PyTorch · English · roberta · Generated from Trainer · Eval Results (legacy) · text-embeddings-inference
Instructions for using Intel/roberta-base-mrpc with libraries, inference providers, notebooks, and local apps.
How to use Intel/roberta-base-mrpc with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/roberta-base-mrpc")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/roberta-base-mrpc")
```
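MRPC is a sentence-pair (paraphrase detection) task, so the pipeline should be given both sentences rather than a single string. A minimal sketch, assuming a recent `transformers` version where the text-classification pipeline accepts a `text`/`text_pair` dict; the example sentences are illustrative, and the returned label names come from the model's config:

```python
# Classify whether two sentences are paraphrases of each other
result = pipe({
    "text": "The company said profits rose 5% last quarter.",
    "text_pair": "Profits increased by five percent in the last quarter, the firm said.",
})
print(result)  # e.g. {'label': ..., 'score': ...}
```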
Notebooks: the same starter code can be opened as a notebook in Google Colab or Kaggle.
Evaluation results:

```json
{
  "epoch": 5.0,
  "eval_accuracy": 0.8774509803921569,
  "eval_combined_score": 0.8956220419202163,
  "eval_f1": 0.9137931034482758,
  "eval_loss": 0.5565009117126465,
  "eval_runtime": 10.3375,
  "eval_samples": 408,
  "eval_samples_per_second": 39.468,
  "eval_steps_per_second": 4.933
}
```
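The `eval_combined_score` appears to be the unweighted mean of accuracy and F1, the convention used in the GLUE example scripts; this is an inference from the numbers above, not something the card states. A quick check:

```python
# Assumed convention: combined score = mean(accuracy, F1)
accuracy = 0.8774509803921569
f1 = 0.9137931034482758
combined = (accuracy + f1) / 2
assert abs(combined - 0.8956220419202163) < 1e-9
print(combined)  # 0.8956220419202163
```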