GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K9ac dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4848
  • F1 Score: 0.7827
  • Accuracy: 0.7823
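
For reference, the snippet below is a minimal sketch of how the reported F1 score and accuracy could be computed with scikit-learn. The averaging mode used by the original evaluation script is not documented in this card, so macro averaging is assumed, and the labels shown are placeholders.

```python
# Hypothetical illustration of the reported metrics; macro-averaged F1 is
# an assumption, and y_true / y_pred are placeholder values.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1]   # placeholder model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="macro"))
```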

Model description

More information needed

Intended uses & limitations

More information needed
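
Although usage details are not documented, the sketch below illustrates one way this PEFT adapter could be loaded on top of the base checkpoint for sequence classification. The adapter repository id, the use of trust_remote_code, and the two-label classification head are assumptions and may need to be adjusted.

```python
# Minimal loading sketch. Assumptions: the base checkpoint works with
# AutoModelForSequenceClassification (possibly via trust_remote_code), the
# task is binary classification, and the adapter repo id below is correct.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_f"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # num_labels assumed
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

sequence = "ACGTACGTACGT"  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```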

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
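
The original training script is not included in this card. The sketch below shows how the listed hyperparameters might map onto a transformers.TrainingArguments configuration; anything not listed above (the output directory, the evaluation interval, whether the batch size is per device) is an assumption. The Adam betas and epsilon listed above match the Transformers defaults, so they are not set explicitly.

```python
# Sketch of a TrainingArguments setup mirroring the listed hyperparameters.
# The evaluation interval of 200 steps matches the results table below;
# treating the batch size as per-device is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    logging_steps=200,
)
```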

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5754        | 1.15  | 200   | 0.5823          | 0.6886   | 0.6920   |
| 0.5264        | 2.3   | 400   | 0.5889          | 0.6721   | 0.6794   |
| 0.5056        | 3.45  | 600   | 0.5484          | 0.7223   | 0.7233   |
| 0.5017        | 4.6   | 800   | 0.5254          | 0.7368   | 0.7370   |
| 0.4952        | 5.75  | 1000  | 0.5239          | 0.7431   | 0.7427   |
| 0.4875        | 6.9   | 1200  | 0.5354          | 0.7330   | 0.7337   |
| 0.4836        | 8.05  | 1400  | 0.5274          | 0.7417   | 0.7413   |
| 0.48          | 9.2   | 1600  | 0.5288          | 0.7338   | 0.7348   |
| 0.4728        | 10.34 | 1800  | 0.5185          | 0.7485   | 0.7481   |
| 0.4714        | 11.49 | 2000  | 0.5194          | 0.7445   | 0.7442   |
| 0.4601        | 12.64 | 2200  | 0.5263          | 0.7398   | 0.7402   |
| 0.4644        | 13.79 | 2400  | 0.5212          | 0.7466   | 0.7467   |
| 0.4575        | 14.94 | 2600  | 0.5052          | 0.7561   | 0.7557   |
| 0.4554        | 16.09 | 2800  | 0.5246          | 0.7443   | 0.7445   |
| 0.4494        | 17.24 | 3000  | 0.5211          | 0.7554   | 0.7553   |
| 0.447         | 18.39 | 3200  | 0.5075          | 0.7587   | 0.7582   |
| 0.4438        | 19.54 | 3400  | 0.5049          | 0.7608   | 0.7603   |
| 0.4347        | 20.69 | 3600  | 0.5061          | 0.7649   | 0.7647   |
| 0.4358        | 21.84 | 3800  | 0.5165          | 0.7500   | 0.7499   |
| 0.4279        | 22.99 | 4000  | 0.5435          | 0.7384   | 0.7395   |
| 0.4285        | 24.14 | 4200  | 0.5099          | 0.7616   | 0.7614   |
| 0.4174        | 25.29 | 4400  | 0.5390          | 0.7531   | 0.7528   |
| 0.4258        | 26.44 | 4600  | 0.5235          | 0.7645   | 0.7643   |
| 0.4164        | 27.59 | 4800  | 0.5163          | 0.7594   | 0.7589   |
| 0.4106        | 28.74 | 5000  | 0.5193          | 0.7562   | 0.7557   |
| 0.4144        | 29.89 | 5200  | 0.5387          | 0.7511   | 0.7510   |
| 0.4051        | 31.03 | 5400  | 0.5326          | 0.7554   | 0.7549   |
| 0.4067        | 32.18 | 5600  | 0.5198          | 0.7593   | 0.7589   |
| 0.3991        | 33.33 | 5800  | 0.5407          | 0.7597   | 0.7593   |
| 0.4046        | 34.48 | 6000  | 0.5261          | 0.7636   | 0.7632   |
| 0.3921        | 35.63 | 6200  | 0.5381          | 0.7605   | 0.7600   |
| 0.3954        | 36.78 | 6400  | 0.5318          | 0.7561   | 0.7557   |
| 0.3898        | 37.93 | 6600  | 0.5434          | 0.7540   | 0.7535   |
| 0.3877        | 39.08 | 6800  | 0.5449          | 0.7572   | 0.7567   |
| 0.3862        | 40.23 | 7000  | 0.5500          | 0.7540   | 0.7535   |
| 0.3856        | 41.38 | 7200  | 0.5429          | 0.7565   | 0.7560   |
| 0.3831        | 42.53 | 7400  | 0.5371          | 0.7583   | 0.7578   |
| 0.3806        | 43.68 | 7600  | 0.5411          | 0.7568   | 0.7564   |
| 0.3743        | 44.83 | 7800  | 0.5551          | 0.7554   | 0.7549   |
| 0.3798        | 45.98 | 8000  | 0.5421          | 0.7567   | 0.7564   |
| 0.3773        | 47.13 | 8200  | 0.5566          | 0.7536   | 0.7531   |
| 0.373         | 48.28 | 8400  | 0.5591          | 0.7547   | 0.7542   |
| 0.3702        | 49.43 | 8600  | 0.5535          | 0.7519   | 0.7513   |
| 0.3712        | 50.57 | 8800  | 0.5583          | 0.7536   | 0.7531   |
| 0.3701        | 51.72 | 9000  | 0.5568          | 0.7540   | 0.7535   |
| 0.3664        | 52.87 | 9200  | 0.5637          | 0.7583   | 0.7578   |
| 0.3713        | 54.02 | 9400  | 0.5597          | 0.7537   | 0.7531   |
| 0.3679        | 55.17 | 9600  | 0.5612          | 0.7562   | 0.7557   |
| 0.3637        | 56.32 | 9800  | 0.5585          | 0.7569   | 0.7564   |
| 0.3676        | 57.47 | 10000 | 0.5579          | 0.7569   | 0.7564   |

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2