
# GUE_EMP_H3K4me3-seqsight_65536_512_47M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K4me3 dataset. It achieves the following results on the evaluation set:

- Loss: 0.6663
- F1 Score: 0.6771
- Accuracy: 0.6780

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
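
The card lists a linear scheduler over 10,000 training steps with a base learning rate of 0.0005. A minimal pure-Python sketch of that decay follows; whether the run used warmup is not stated in the card, so the `warmup_steps` parameter here is an assumption defaulting to 0:

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000, warmup_steps=0):
    """Linear schedule: optional warmup ramp, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * remaining

# Halfway through the run the rate has decayed to half the base value.
print(linear_lr(5_000))  # -> 0.00025
```

With no warmup, the rate starts at the full 0.0005 and reaches exactly 0 at the final step.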

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6592        | 0.87  | 200   | 0.6369          | 0.6421   | 0.6457   |
| 0.6337        | 1.74  | 400   | 0.6275          | 0.6587   | 0.6584   |
| 0.6252        | 2.61  | 600   | 0.6200          | 0.6594   | 0.6595   |
| 0.613         | 3.48  | 800   | 0.6134          | 0.6701   | 0.6698   |
| 0.6054        | 4.35  | 1000  | 0.6184          | 0.6636   | 0.6633   |
| 0.6001        | 5.22  | 1200  | 0.6265          | 0.6576   | 0.6609   |
| 0.5912        | 6.09  | 1400  | 0.6365          | 0.6454   | 0.6519   |
| 0.5848        | 6.96  | 1600  | 0.6207          | 0.6634   | 0.6660   |
| 0.581         | 7.83  | 1800  | 0.6178          | 0.6677   | 0.6674   |
| 0.5783        | 8.7   | 2000  | 0.6238          | 0.6669   | 0.6679   |
| 0.5679        | 9.57  | 2200  | 0.6105          | 0.6672   | 0.6671   |
| 0.5667        | 10.43 | 2400  | 0.6234          | 0.6613   | 0.6641   |
| 0.562         | 11.3  | 2600  | 0.6186          | 0.6578   | 0.6625   |
| 0.5596        | 12.17 | 2800  | 0.6107          | 0.6681   | 0.6687   |
| 0.5557        | 13.04 | 3000  | 0.6174          | 0.6617   | 0.6641   |
| 0.5504        | 13.91 | 3200  | 0.6233          | 0.6567   | 0.6598   |
| 0.5442        | 14.78 | 3400  | 0.6256          | 0.6585   | 0.6606   |
| 0.5444        | 15.65 | 3600  | 0.6267          | 0.6614   | 0.6644   |
| 0.5355        | 16.52 | 3800  | 0.6271          | 0.6639   | 0.6658   |
| 0.5342        | 17.39 | 4000  | 0.6412          | 0.6657   | 0.6677   |
| 0.5333        | 18.26 | 4200  | 0.6348          | 0.6611   | 0.6652   |
| 0.5293        | 19.13 | 4400  | 0.6347          | 0.6636   | 0.6660   |
| 0.523         | 20.0  | 4600  | 0.6234          | 0.6668   | 0.6685   |
| 0.522         | 20.87 | 4800  | 0.6389          | 0.6653   | 0.6677   |
| 0.5188        | 21.74 | 5000  | 0.6483          | 0.6667   | 0.6682   |
| 0.5179        | 22.61 | 5200  | 0.6582          | 0.6634   | 0.6660   |
| 0.5134        | 23.48 | 5400  | 0.6561          | 0.6658   | 0.6696   |
| 0.5145        | 24.35 | 5600  | 0.6523          | 0.6541   | 0.6587   |
| 0.5066        | 25.22 | 5800  | 0.6677          | 0.6527   | 0.6576   |
| 0.5006        | 26.09 | 6000  | 0.6763          | 0.6556   | 0.6603   |
| 0.5049        | 26.96 | 6200  | 0.6573          | 0.6608   | 0.6649   |
| 0.4982        | 27.83 | 6400  | 0.6839          | 0.6404   | 0.6486   |
| 0.4976        | 28.7  | 6600  | 0.6357          | 0.6634   | 0.6641   |
| 0.4945        | 29.57 | 6800  | 0.6575          | 0.6628   | 0.6658   |
| 0.4871        | 30.43 | 7000  | 0.6674          | 0.6618   | 0.6660   |
| 0.4923        | 31.3  | 7200  | 0.6584          | 0.6663   | 0.6687   |
| 0.4914        | 32.17 | 7400  | 0.6557          | 0.6683   | 0.6698   |
| 0.4865        | 33.04 | 7600  | 0.6558          | 0.6622   | 0.6641   |
| 0.4872        | 33.91 | 7800  | 0.6583          | 0.6704   | 0.6728   |
| 0.4847        | 34.78 | 8000  | 0.6667          | 0.6690   | 0.6707   |
| 0.4797        | 35.65 | 8200  | 0.6573          | 0.6662   | 0.6682   |
| 0.4807        | 36.52 | 8400  | 0.6602          | 0.6677   | 0.6701   |
| 0.483         | 37.39 | 8600  | 0.6677          | 0.6682   | 0.6704   |
| 0.4773        | 38.26 | 8800  | 0.6760          | 0.6689   | 0.6723   |
| 0.4812        | 39.13 | 9000  | 0.6683          | 0.6662   | 0.6685   |
| 0.4781        | 40.0  | 9200  | 0.6686          | 0.6655   | 0.6682   |
| 0.4759        | 40.87 | 9400  | 0.6669          | 0.6714   | 0.6728   |
| 0.4759        | 41.74 | 9600  | 0.6669          | 0.6660   | 0.6682   |
| 0.4774        | 42.61 | 9800  | 0.6704          | 0.6646   | 0.6671   |
| 0.4726        | 43.48 | 10000 | 0.6705          | 0.6655   | 0.6679   |
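
The final checkpoint (step 10000, validation loss 0.6705) is not the best one logged; selecting by lowest validation loss would pick step 2200. A minimal sketch of that selection, using a few `(step, validation_loss)` pairs copied from the table above:

```python
# A handful of (step, validation_loss) pairs taken from the training log above.
logged = [
    (2000, 0.6238),
    (2200, 0.6105),
    (2400, 0.6234),
    (2800, 0.6107),
    (10000, 0.6705),
]

# Pick the checkpoint with the lowest validation loss.
best_step, best_loss = min(logged, key=lambda row: row[1])
print(best_step, best_loss)  # -> 2200 0.6105
```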

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2