# NLP Sentiment Model

*BERT fine-tuned for 3-class sentiment analysis: positive, negative, neutral*
## Model Description
NLP Sentiment Model is a fine-tuned version of bert-base-uncased trained on the NLP Benchmark Suite dataset by Abhimanyu Prasad.
The model classifies input text into three sentiment categories:
- **Positive**: text expressing satisfaction, happiness, or praise
- **Negative**: text expressing dissatisfaction, anger, or criticism
- **Neutral**: text that is factual, balanced, or indifferent
It was trained on real-world data from Amazon product reviews, Twitter posts, and IMDB movie reviews, covering a wide range of domains and writing styles.
## Performance
| Metric | Value |
|---|---|
| Accuracy | 84.58% |
| Macro F1 | 0.7928 |
| Epochs | 3 |
| Training samples | ~4,796 |
| Test samples | ~1,199 |
| Base model | bert-base-uncased |
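Accuracy and macro F1 can diverge when the classes perform unevenly, as they do here. A minimal sketch of how macro F1 is computed; the per-class precision and recall values below are illustrative only, not this model's real numbers:

```python
# Macro F1 is the unweighted mean of per-class F1 scores, so a weaker class
# (typically neutral in 3-class sentiment) drags it below overall accuracy.

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

per_class_f1 = {
    "negative": f1(0.86, 0.84),
    "neutral": f1(0.70, 0.68),   # illustrative: neutral is often the hardest class
    "positive": f1(0.88, 0.87),
}

macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(round(macro_f1, 3))  # → 0.805
```

In practice you would get the same number from `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`.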
## Quick Start
```python
from transformers import pipeline

# Load the model
classifier = pipeline(
    "sentiment-analysis",
    model="abhiprd20/nlp-sentiment-model",
)

# Predict sentiment
result = classifier("This product is absolutely amazing!")
print(result)
# → [{'label': 'positive', 'score': 0.97}]
```
## More Examples
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="abhiprd20/nlp-sentiment-model",
)

texts = [
    "I absolutely love this, best purchase ever!",
    "Terrible quality, complete waste of money.",
    "It arrived on time and works as described.",
    "The customer service was incredibly helpful.",
    "Not great, not terrible, just average.",
]

for text in texts:
    result = classifier(text)[0]
    print(f"Text  : {text}")
    print(f"Label : {result['label']} ({round(result['score']*100, 1)}% confident)\n")
```
Expected output:

```
Text  : I absolutely love this, best purchase ever!
Label : positive (97.3% confident)

Text  : Terrible quality, complete waste of money.
Label : negative (98.1% confident)

Text  : It arrived on time and works as described.
Label : neutral (95.4% confident)

Text  : The customer service was incredibly helpful.
Label : positive (96.8% confident)

Text  : Not great, not terrible, just average.
Label : neutral (91.2% confident)
```
## Use With AutoTokenizer and AutoModel
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("abhiprd20/nlp-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("abhiprd20/nlp-sentiment-model")

text = "This is the best thing I have ever bought!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

probs = torch.softmax(outputs.logits, dim=1)
label_id = torch.argmax(probs).item()

id2label = {0: "negative", 1: "neutral", 2: "positive"}
print(f"Label      : {id2label[label_id]}")
print(f"Confidence : {probs[0][label_id].item():.4f}")
```
## Training Details
| Parameter | Value |
|---|---|
| Base model | bert-base-uncased |
| Task | Sequence Classification |
| Number of labels | 3 (negative, neutral, positive) |
| Epochs | 3 |
| Batch size | 16 |
| Max sequence length | 128 |
| Optimizer | AdamW (default) |
| Hardware | NVIDIA T4 GPU (Google Colab) |
| Framework | Hugging Face Transformers |
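The training script itself is not part of this repository, so the following is a hedged reconstruction of the setup from the table above using the Hugging Face `Trainer` API. The hyperparameters come from the table; the output directory, dataset wiring, and everything else are assumptions. Note that it downloads bert-base-uncased on first run.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base model and label layout as documented above
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=3,
    id2label={0: "negative", 1: "neutral", 2: "positive"},
    label2id={"negative": 0, "neutral": 1, "positive": 2},
)

# Hyperparameters from the training table; other arguments left at defaults
args = TrainingArguments(
    output_dir="nlp-sentiment-model",  # assumed name
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

# Tokenize with the same max_length=128 used at inference, then train
# (train_ds / test_ds are placeholders for the tokenized splits):
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=test_ds)
# trainer.train()
```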
## Training Dataset
This model was trained on the NLP Benchmark Suite dataset, specifically the sentiment analysis subset.
The training data covers three real-world sources:
| Source | Domain | Samples |
|---|---|---|
| Amazon Polarity | E-commerce product reviews | ~2,000 |
| TweetEval | Social media posts | ~2,000 |
| IMDB | Movie reviews | ~2,000 |
- Total training samples: ~4,796
- Total test samples: ~1,199
## Label Mapping
| Label ID | Label | Meaning |
|---|---|---|
| 0 | negative | Dissatisfaction, criticism, anger |
| 1 | neutral | Factual, balanced, indifferent |
| 2 | positive | Satisfaction, praise, happiness |
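The classification head emits one logit per label ID in the order above; softmax converts them to probabilities and argmax picks the label. A pure-Python sketch of that mapping, with invented logit values for illustration:

```python
import math

# Label IDs in the order documented above
id2label = {0: "negative", 1: "neutral", 2: "positive"}

# Hypothetical logits for one input (not real model output)
logits = [-1.2, 0.3, 2.8]

# Softmax turns logits into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The predicted label is the ID with the highest probability
label_id = max(range(len(probs)), key=lambda i: probs[i])
print(id2label[label_id])  # → positive
```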
## Limitations
- Trained on English text only; may not perform well on other languages
- Performance may vary on highly sarcastic or ironic text
- Neutral class has slightly lower F1 than positive and negative, a known challenge in 3-class sentiment tasks
- Not recommended for safety-critical applications without additional validation
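One simple form of the additional validation mentioned above is a confidence threshold: accept a prediction only when the model's score clears it, and route everything else to human review. This guardrail is not part of the model, and the 0.80 threshold below is an assumption that should be tuned per application:

```python
# Illustrative guardrail around model output: low-confidence predictions
# are flagged instead of being trusted blindly.

def route_prediction(label: str, score: float, threshold: float = 0.80) -> str:
    """Return the label when confident enough, else flag for human review."""
    return label if score >= threshold else "needs_review"

print(route_prediction("positive", 0.97))  # → positive
print(route_prediction("neutral", 0.55))   # → needs_review
```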
## License
This model is released under the Apache License 2.0 and is free for both research and commercial use.
Copyright 2025 Abhimanyu Prasad
## Citation
If you use this model in your research or project, please cite:
```bibtex
@misc{prasad2025nlpsentiment,
  title        = {NLP Sentiment Model: BERT Fine-tuned for 3-Class Sentiment Analysis},
  author       = {Prasad, Abhimanyu},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/abhiprd20/nlp-sentiment-model}},
  note         = {Fine-tuned on NLP Benchmark Suite. Accuracy: 84.58\%, F1: 0.7928}
}
```
## Author
**Abhimanyu Prasad**

- Hugging Face: abhiprd20
- Dataset: abhiprd20/nlp-benchmark-suite

If this model helped your project, consider giving it a star. It helps others find it too!