
GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight_65536_512_47M on the mahdibaghbanzadeh/GUE_EMP_H3K36me3 dataset (see the loading sketch after the results list). It achieves the following results on the evaluation set:

  • Loss: 0.5000
  • F1 Score: 0.7751
  • Accuracy: 0.7769
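
This checkpoint is a PEFT adapter trained on top of the seqsight base model, so both pieces are needed at inference time. The snippet below is a minimal loading sketch, not a verified recipe: the full adapter repository id, the two-label sequence-classification head, and raw DNA strings as tokenizer input are assumptions not confirmed by this card.

```python
# Minimal loading sketch; repository id, head type, and input format are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f"  # assumed hub id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # trust_remote_code may not be required
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

# Score one DNA sequence for the H3K36me3 mark (binary classification assumed).
inputs = tokenizer("ACGTACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```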

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch reproducing this configuration follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
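
The arguments below are a hedged reconstruction of this configuration: only the values listed above and the 200-step evaluation cadence visible in the results table come from this card; the output directory name and logging cadence are illustrative assumptions.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments
# (Transformers 4.38); only the values in the list above are taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f",  # assumed output name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,     # matches the 200-step cadence in the results table below
    logging_steps=200,  # assumed to match the evaluation cadence
)
# These arguments would be passed to a standard Trainer together with the
# PEFT-wrapped base model and the tokenized GUE_EMP_H3K36me3 splits.
```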

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5805        | 0.92  | 200   | 0.5475          | 0.7329   | 0.7351   |
| 0.533         | 1.83  | 400   | 0.5360          | 0.7384   | 0.7411   |
| 0.5243        | 2.75  | 600   | 0.5261          | 0.7465   | 0.7483   |
| 0.5216        | 3.67  | 800   | 0.5199          | 0.7555   | 0.7563   |
| 0.5103        | 4.59  | 1000  | 0.5198          | 0.7563   | 0.7583   |
| 0.5074        | 5.5   | 1200  | 0.5137          | 0.7593   | 0.7615   |
| 0.5047        | 6.42  | 1400  | 0.5086          | 0.7731   | 0.7738   |
| 0.5017        | 7.34  | 1600  | 0.5109          | 0.7695   | 0.7712   |
| 0.4951        | 8.26  | 1800  | 0.5114          | 0.7696   | 0.7718   |
| 0.499         | 9.17  | 2000  | 0.5101          | 0.7674   | 0.7701   |
| 0.4968        | 10.09 | 2200  | 0.5107          | 0.7670   | 0.7704   |
| 0.4928        | 11.01 | 2400  | 0.5085          | 0.7655   | 0.7689   |
| 0.4914        | 11.93 | 2600  | 0.5024          | 0.7741   | 0.7764   |
| 0.4898        | 12.84 | 2800  | 0.5021          | 0.7707   | 0.7732   |
| 0.4886        | 13.76 | 3000  | 0.5087          | 0.7676   | 0.7709   |
| 0.4853        | 14.68 | 3200  | 0.4988          | 0.7759   | 0.7775   |
| 0.489         | 15.6  | 3400  | 0.5080          | 0.7675   | 0.7712   |
| 0.4866        | 16.51 | 3600  | 0.5003          | 0.7750   | 0.7769   |
| 0.4851        | 17.43 | 3800  | 0.4924          | 0.7816   | 0.7830   |
| 0.4856        | 18.35 | 4000  | 0.4995          | 0.7763   | 0.7787   |
| 0.4816        | 19.27 | 4200  | 0.4990          | 0.7754   | 0.7775   |
| 0.4845        | 20.18 | 4400  | 0.5034          | 0.7717   | 0.7749   |
| 0.4832        | 21.1  | 4600  | 0.4975          | 0.7765   | 0.7787   |
| 0.4828        | 22.02 | 4800  | 0.5014          | 0.7756   | 0.7778   |
| 0.4829        | 22.94 | 5000  | 0.4969          | 0.7744   | 0.7769   |
| 0.4803        | 23.85 | 5200  | 0.4996          | 0.7732   | 0.7761   |
| 0.4788        | 24.77 | 5400  | 0.5065          | 0.7725   | 0.7758   |
| 0.4817        | 25.69 | 5600  | 0.5004          | 0.7760   | 0.7784   |
| 0.4796        | 26.61 | 5800  | 0.4973          | 0.7755   | 0.7778   |
| 0.4758        | 27.52 | 6000  | 0.5100          | 0.7729   | 0.7764   |
| 0.4787        | 28.44 | 6200  | 0.5018          | 0.7717   | 0.7747   |
| 0.4762        | 29.36 | 6400  | 0.5042          | 0.7713   | 0.7747   |
| 0.4794        | 30.28 | 6600  | 0.5040          | 0.7725   | 0.7758   |
| 0.4762        | 31.19 | 6800  | 0.4930          | 0.7812   | 0.7827   |
| 0.476         | 32.11 | 7000  | 0.4992          | 0.7733   | 0.7764   |
| 0.4767        | 33.03 | 7200  | 0.5005          | 0.7742   | 0.7769   |
| 0.4753        | 33.94 | 7400  | 0.5002          | 0.7756   | 0.7781   |
| 0.4756        | 34.86 | 7600  | 0.4983          | 0.7750   | 0.7778   |
| 0.4743        | 35.78 | 7800  | 0.4978          | 0.7738   | 0.7767   |
| 0.476         | 36.7  | 8000  | 0.4983          | 0.7744   | 0.7772   |
| 0.4736        | 37.61 | 8200  | 0.5032          | 0.7712   | 0.7747   |
| 0.4758        | 38.53 | 8400  | 0.4928          | 0.7799   | 0.7818   |
| 0.4734        | 39.45 | 8600  | 0.4986          | 0.7745   | 0.7772   |
| 0.4725        | 40.37 | 8800  | 0.5023          | 0.7729   | 0.7761   |
| 0.4773        | 41.28 | 9000  | 0.4986          | 0.7734   | 0.7764   |
| 0.4743        | 42.2  | 9200  | 0.4955          | 0.7774   | 0.7798   |
| 0.4721        | 43.12 | 9400  | 0.4984          | 0.7755   | 0.7781   |
| 0.4744        | 44.04 | 9600  | 0.4979          | 0.7750   | 0.7778   |
| 0.4732        | 44.95 | 9800  | 0.5005          | 0.7721   | 0.7752   |
| 0.4742        | 45.87 | 10000 | 0.4987          | 0.7755   | 0.7784   |
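
The F1 and accuracy columns are computed at each 200-step evaluation. Below is a minimal sketch of a metric function that would produce such columns with the Hugging Face Trainer; macro-averaged F1 is an assumption, since the card does not state the averaging mode.

```python
# Hedged metric function; macro-averaged F1 is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```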

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2