GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K79me3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4367
  • F1 Score: 0.8170
  • Accuracy: 0.8173
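
The card itself does not include usage code. The sketch below is an illustration only: it assumes the repository hosts a PEFT adapter on top of the named base model and that the task is binary sequence classification. The adapter repository id, the trust_remote_code flag, and the label count are assumptions, not taken from the card.

```python
# Illustrative sketch only: assumes a PEFT adapter over the named base model
# and a binary sequence-classification head; ids and flags below are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE_ID = "mahdibaghbanzadeh/seqsight_65536_512_47M"
ADAPTER_ID = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    BASE_ID,
    num_labels=2,            # assumed: binary histone-mark classification
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

sequence = "ACGTACGTACGT"  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```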

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
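
Beyond the dataset id given above, no description of the data is provided. For reference, that dataset is hosted on the Hugging Face Hub and can be loaded with the Datasets library; the split and column names in the sketch below are assumptions.

```python
# Minimal sketch: load the dataset referenced above; split/column names are assumptions.
from datasets import load_dataset

dataset = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K79me3")
print(dataset)               # inspect the splits the repository actually provides
print(dataset["train"][0])   # assumed "train" split; adjust if the repo differs
```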

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an approximate configuration sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
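
For orientation, the listed values map onto Hugging Face Transformers TrainingArguments roughly as shown below. This is a reconstruction, not the original training script; the output_dir and the evaluation cadence (every 200 steps, inferred from the results table) are assumptions.

```python
# Approximate reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,     # inferred from the 200-step cadence in the results table
    logging_steps=200,
)
```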

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|---------------|-------|-------|-----------------|----------|----------|
| 0.5342        | 1.1   | 200   | 0.4669          | 0.8027   | 0.8027   |
| 0.4727        | 2.21  | 400   | 0.4585          | 0.7966   | 0.7982   |
| 0.4678        | 3.31  | 600   | 0.4500          | 0.8050   | 0.8058   |
| 0.4585        | 4.42  | 800   | 0.4482          | 0.8051   | 0.8062   |
| 0.4586        | 5.52  | 1000  | 0.4469          | 0.8050   | 0.8062   |
| 0.4519        | 6.63  | 1200  | 0.4499          | 0.8032   | 0.8048   |
| 0.4567        | 7.73  | 1400  | 0.4412          | 0.8097   | 0.8103   |
| 0.4482        | 8.84  | 1600  | 0.4460          | 0.8039   | 0.8051   |
| 0.4492        | 9.94  | 1800  | 0.4426          | 0.8105   | 0.8103   |
| 0.4476        | 11.05 | 2000  | 0.4397          | 0.8074   | 0.8083   |
| 0.4472        | 12.15 | 2200  | 0.4359          | 0.8109   | 0.8114   |
| 0.4424        | 13.26 | 2400  | 0.4347          | 0.8093   | 0.8100   |
| 0.4412        | 14.36 | 2600  | 0.4350          | 0.8097   | 0.8100   |
| 0.4441        | 15.47 | 2800  | 0.4438          | 0.8012   | 0.8031   |
| 0.4389        | 16.57 | 3000  | 0.4347          | 0.8085   | 0.8089   |
| 0.4408        | 17.68 | 3200  | 0.4338          | 0.8093   | 0.8100   |
| 0.4352        | 18.78 | 3400  | 0.4318          | 0.8126   | 0.8128   |
| 0.4363        | 19.89 | 3600  | 0.4363          | 0.8085   | 0.8096   |
| 0.4377        | 20.99 | 3800  | 0.4340          | 0.8094   | 0.8100   |
| 0.4367        | 22.1  | 4000  | 0.4326          | 0.8103   | 0.8110   |
| 0.4356        | 23.2  | 4200  | 0.4325          | 0.8113   | 0.8121   |
| 0.436         | 24.31 | 4400  | 0.4342          | 0.8125   | 0.8131   |
| 0.4275        | 25.41 | 4600  | 0.4359          | 0.8140   | 0.8148   |
| 0.4331        | 26.52 | 4800  | 0.4318          | 0.8132   | 0.8135   |
| 0.4341        | 27.62 | 5000  | 0.4310          | 0.8130   | 0.8135   |
| 0.4297        | 28.73 | 5200  | 0.4298          | 0.8112   | 0.8117   |
| 0.428         | 29.83 | 5400  | 0.4309          | 0.8138   | 0.8141   |
| 0.4299        | 30.94 | 5600  | 0.4318          | 0.8105   | 0.8107   |
| 0.4299        | 32.04 | 5800  | 0.4303          | 0.8141   | 0.8141   |
| 0.4309        | 33.15 | 6000  | 0.4284          | 0.8149   | 0.8152   |
| 0.4284        | 34.25 | 6200  | 0.4307          | 0.8125   | 0.8128   |
| 0.4275        | 35.36 | 6400  | 0.4322          | 0.8123   | 0.8131   |
| 0.4272        | 36.46 | 6600  | 0.4292          | 0.8162   | 0.8162   |
| 0.4286        | 37.57 | 6800  | 0.4303          | 0.8141   | 0.8145   |
| 0.4263        | 38.67 | 7000  | 0.4320          | 0.8136   | 0.8141   |
| 0.4246        | 39.78 | 7200  | 0.4304          | 0.8165   | 0.8166   |
| 0.4268        | 40.88 | 7400  | 0.4290          | 0.8150   | 0.8152   |
| 0.4263        | 41.99 | 7600  | 0.4290          | 0.8153   | 0.8155   |
| 0.4243        | 43.09 | 7800  | 0.4303          | 0.8161   | 0.8166   |
| 0.4262        | 44.2  | 8000  | 0.4295          | 0.8141   | 0.8145   |
| 0.4233        | 45.3  | 8200  | 0.4301          | 0.8152   | 0.8155   |
| 0.4256        | 46.41 | 8400  | 0.4286          | 0.8148   | 0.8152   |
| 0.4238        | 47.51 | 8600  | 0.4293          | 0.8156   | 0.8159   |
| 0.4236        | 48.62 | 8800  | 0.4312          | 0.8136   | 0.8141   |
| 0.4221        | 49.72 | 9000  | 0.4301          | 0.8142   | 0.8145   |
| 0.4283        | 50.83 | 9200  | 0.4296          | 0.8131   | 0.8135   |
| 0.4232        | 51.93 | 9400  | 0.4299          | 0.8142   | 0.8145   |
| 0.4238        | 53.04 | 9600  | 0.4297          | 0.8142   | 0.8145   |
| 0.4218        | 54.14 | 9800  | 0.4295          | 0.8149   | 0.8152   |
| 0.424         | 55.25 | 10000 | 0.4300          | 0.8145   | 0.8148   |

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2