πŸ€– NLP Sentiment Model

BERT fine-tuned for 3-class sentiment analysis β€” positive, negative, neutral

πŸ“Œ Model Description

NLP Sentiment Model is a fine-tuned version of bert-base-uncased trained on the NLP Benchmark Suite dataset by Abhimanyu Prasad.

The model classifies input text into three sentiment categories:

  • 😊 Positive β€” text expressing satisfaction, happiness, or praise
  • 😞 Negative β€” text expressing dissatisfaction, anger, or criticism
  • 😐 Neutral β€” text that is factual, balanced, or indifferent

It was trained on real-world data from Amazon product reviews, Twitter posts, and IMDB movie reviews β€” covering a wide range of domains and writing styles.


πŸ“Š Performance

Metric            Score
Accuracy          84.58%
Macro F1          0.7928
Epochs            3
Training samples  ~4,796
Test samples      ~1,199
Base model        bert-base-uncased
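
For reference, accuracy and macro F1 are typically computed with scikit-learn as below. The labels here are illustrative placeholders, not the model's actual predictions; macro F1 is the unweighted mean of the per-class F1 scores, which is why it can sit well below accuracy when one class (here, neutral) is harder.

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative labels only (0 = negative, 1 = neutral, 2 = positive);
# these are NOT the model's real test-set predictions.
y_true = [2, 0, 1, 2, 1, 0, 2, 1]
y_pred = [2, 0, 1, 2, 0, 0, 2, 1]

accuracy = accuracy_score(y_true, y_pred)
# average="macro": compute F1 per class, then take the unweighted mean
macro_f1 = f1_score(y_true, y_pred, average="macro")

print(f"Accuracy : {accuracy:.4f}")
print(f"Macro F1 : {macro_f1:.4f}")
```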

⚑ Quick Start

from transformers import pipeline

# Load the model
classifier = pipeline(
    "sentiment-analysis",
    model="abhiprd20/nlp-sentiment-model"
)

# Predict sentiment
result = classifier("This product is absolutely amazing!")
print(result)
# β†’ [{'label': 'positive', 'score': 0.97}]

πŸ” More Examples

from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="abhiprd20/nlp-sentiment-model")

texts = [
    "I absolutely love this, best purchase ever!",
    "Terrible quality, complete waste of money.",
    "It arrived on time and works as described.",
    "The customer service was incredibly helpful.",
    "Not great, not terrible, just average.",
]

for text in texts:
    result = classifier(text)[0]
    print(f"Text  : {text}")
    print(f"Label : {result['label']} ({round(result['score']*100, 1)}% confident)\n")

Expected output:

Text  : I absolutely love this, best purchase ever!
Label : positive (97.3% confident)

Text  : Terrible quality, complete waste of money.
Label : negative (98.1% confident)

Text  : It arrived on time and works as described.
Label : neutral (95.4% confident)

Text  : The customer service was incredibly helpful.
Label : positive (96.8% confident)

Text  : Not great, not terrible, just average.
Label : neutral (91.2% confident)

πŸ§ͺ Use With AutoTokenizer and AutoModelForSequenceClassification

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("abhiprd20/nlp-sentiment-model")
model     = AutoModelForSequenceClassification.from_pretrained("abhiprd20/nlp-sentiment-model")

text   = "This is the best thing I have ever bought!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

probs      = torch.softmax(outputs.logits, dim=1)
label_id   = torch.argmax(probs).item()
id2label   = {0: "negative", 1: "neutral", 2: "positive"}

print(f"Label      : {id2label[label_id]}")
print(f"Confidence : {probs[0][label_id].item():.4f}")

πŸ—‚οΈ Training Details

Parameter            Value
Base model           bert-base-uncased
Task                 Sequence classification
Number of labels     3 (negative, neutral, positive)
Epochs               3
Batch size           16
Max sequence length  128
Optimizer            AdamW (default)
Hardware             NVIDIA T4 GPU (Google Colab)
Framework            Hugging Face Transformers
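
The hyperparameters above translate to a Transformers TrainingArguments configuration roughly as follows. This is a sketch reconstructed from the table; output_dir and learning_rate are assumptions, not values confirmed by the card.

```python
from transformers import TrainingArguments

# Reconstructed from the table above; output_dir and learning_rate are
# illustrative assumptions, not values confirmed by the model card.
training_args = TrainingArguments(
    output_dir="nlp-sentiment-model",
    num_train_epochs=3,               # Epochs: 3
    per_device_train_batch_size=16,   # Batch size: 16
    per_device_eval_batch_size=16,
    learning_rate=2e-5,               # assumed; a common BERT fine-tuning default
)
# Note: the max sequence length (128) is applied at tokenization time,
# via tokenizer(..., truncation=True, max_length=128), not here.
```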

πŸ“¦ Training Dataset

This model was trained on the NLP Benchmark Suite dataset, specifically the sentiment analysis subset.

The training data covers three real-world sources:

Source           Domain                      Samples
Amazon Polarity  E-commerce product reviews  ~2,000
TweetEval        Social media posts          ~2,000
IMDB             Movie reviews               ~2,000

Total training samples: ~4,796
Total test samples: ~1,199


🏷️ Label Mapping

Label ID  Label     Meaning
0         negative  Dissatisfaction, criticism, anger
1         neutral   Factual, balanced, indifferent
2         positive  Satisfaction, praise, happiness
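
The mapping above is all you need to turn raw logits into a label, even without torch. A minimal sketch using a plain-Python softmax; the logits below are made-up numbers for illustration:

```python
import math

# Label mapping from the table above
ID2LABEL = {0: "negative", 1: "neutral", 2: "positive"}

def logits_to_label(logits):
    """Softmax over raw scores, then return the top label and its probability."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    label_id = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[label_id], probs[label_id]

# Illustrative logits, not real model output
label, confidence = logits_to_label([-1.2, 0.3, 2.5])
print(label, round(confidence, 3))
```

In practice you can also read the mapping from the checkpoint itself via model.config.id2label rather than hard-coding it.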

⚠️ Limitations

  • Trained on English text only β€” may not perform well on other languages
  • Performance may vary on highly sarcastic or ironic text
  • Neutral class has slightly lower F1 than positive and negative β€” a known challenge in 3-class sentiment tasks
  • Not recommended for safety-critical applications without additional validation
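
Given these limitations, a common mitigation is to act only on high-confidence predictions and route the rest to human review. A minimal sketch; the 0.75 threshold and the needs_review label are assumptions to tune for your application:

```python
# Route low-confidence pipeline predictions to human review instead of
# acting on them; the 0.75 threshold is an assumed starting point.
def classify_with_fallback(prediction, threshold=0.75):
    """Return the label if confident enough, else flag for review.

    `prediction` is one dict from the pipeline output,
    e.g. {"label": "positive", "score": 0.97}.
    """
    if prediction["score"] >= threshold:
        return prediction["label"]
    return "needs_review"

print(classify_with_fallback({"label": "positive", "score": 0.97}))
print(classify_with_fallback({"label": "neutral", "score": 0.52}))
```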

βš–οΈ License

This model is released under the Apache License 2.0 β€” free for both research and commercial use.

Copyright 2025 Abhimanyu Prasad


πŸ“Ž Citation

If you use this model in your research or project, please cite:

@misc{prasad2025nlpsentiment,
  title        = {NLP Sentiment Model: BERT Fine-tuned for 3-Class Sentiment Analysis},
  author       = {Prasad, Abhimanyu},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/abhiprd20/nlp-sentiment-model}},
  note         = {Fine-tuned on NLP Benchmark Suite. Accuracy: 84.58\%, F1: 0.7928}
}

πŸ‘€ Author

Abhimanyu Prasad
πŸ€— Hugging Face: abhiprd20
πŸ“¦ Dataset: abhiprd20/nlp-benchmark-suite


If this model helped your project, consider giving it a ⭐ β€” it helps others find it too!
