distilbert-base-uncased-finetuned-rating-poem
This model is a fine-tuned version of distilbert-base-uncased on the poem_sentiment dataset. It achieves the following results on the evaluation set:
- Loss: 1.1902
- Accuracy: 0.8762
- F1: 0.8765
Model description
More information needed
Intended uses & limitations
More information needed
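Until this section is filled in, the sketch below shows one plausible way to run inference with the checkpoint. It assumes the model is available on the Hugging Face Hub under the repo id shown on this page, and the example verse is illustrative only.

```python
from transformers import pipeline

# Sketch: load the fine-tuned checkpoint from the Hub (repo id as shown on this page).
classifier = pipeline(
    "text-classification",
    model="VuaCoBac/distilbert-base-uncased-finetuned-rating-poem",
)

# Classify the sentiment of a single verse (illustrative input).
print(classifier("The stars were dim, and the night was still."))
```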
Training and evaluation data
More information needed
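In the absence of further details, a minimal sketch of loading the poem_sentiment dataset is shown below; the split and column names follow the dataset's own Hub card.

```python
from datasets import load_dataset

# Load the poem_sentiment dataset (train / validation / test splits).
dataset = load_dataset("poem_sentiment")

# Each example has a "verse_text" string and an integer "label"
# (0 = negative, 1 = positive, 2 = no_impact, 3 = mixed), per the dataset card.
print(dataset["train"][0])
```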
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a Trainer configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
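A minimal sketch of how these hyperparameters map onto `TrainingArguments` in the Transformers Trainer API. The `output_dir` and `eval_steps=50` (inferred from the 50-step spacing in the results table below) are assumptions; the listed Adam betas and epsilon match the Trainer's default optimizer settings, so they are not set explicitly here.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=4,  # poem_sentiment defines four sentiment classes
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-rating-poem",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="steps",
    eval_steps=50,  # assumption, inferred from the step spacing in the results table
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```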
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0599        | 0.45  | 50   | 1.0247          | 0.8571   | 0.8611 |
| 0.1257        | 0.89  | 100  | 1.1237          | 0.8571   | 0.8500 |
| 0.032         | 1.34  | 150  | 1.1346          | 0.8667   | 0.8567 |
| 0.0012        | 1.79  | 200  | 1.2181          | 0.8381   | 0.8373 |
| 0.0954        | 2.23  | 250  | 1.0423          | 0.8762   | 0.8667 |
| 0.0323        | 2.68  | 300  | 1.0560          | 0.8667   | 0.8715 |
| 0.0128        | 3.12  | 350  | 1.1156          | 0.8857   | 0.8809 |
| 0.0269        | 3.57  | 400  | 1.1702          | 0.8762   | 0.8681 |
| 0.0172        | 4.02  | 450  | 1.1968          | 0.8667   | 0.8678 |
| 0.0004        | 4.46  | 500  | 1.1906          | 0.8762   | 0.8765 |
| 0.0117        | 4.91  | 550  | 1.1902          | 0.8762   | 0.8765 |
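The Accuracy and F1 columns above would typically come from a `compute_metrics` callback like the sketch below; the "weighted" averaging mode for F1 is an assumption, since the card does not state how the multi-class F1 was aggregated.

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"],
        # "weighted" averaging is an assumption; the card does not say how F1 was aggregated.
        "f1": f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```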
Framework versions
- Transformers 4.32.1
- Pytorch 2.2.2
- Datasets 2.12.0
- Tokenizers 0.13.2