2023-10-23 19:46:30,132 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,133 Model: "SequenceTagger(
  (embeddings): TransformerWordEmbeddings(
    (model): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(64001, 768)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (1): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (2): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (3): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (4): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (5): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (6): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (7): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (8): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (9): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (10): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (11): BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
  )
  (locked_dropout): LockedDropout(p=0.5)
  (linear): Linear(in_features=768, out_features=25, bias=True)
  (loss_function): CrossEntropyLoss()
)"
2023-10-23 19:46:30,133 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,133 MultiCorpus: 966 train + 219 dev + 204 test sentences
- NER_HIPE_2022 Corpus: 966 train + 219 dev + 204 test sentences - /home/ubuntu/.flair/datasets/ner_hipe_2022/v2.1/ajmc/fr/with_doc_seperator
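The corpus above is the French AJMC split of HIPE-2022 as packaged by Flair. A minimal loading sketch in Python; NER_HIPE_2022 is the actual Flair dataset loader, but the exact argument values (including the document-separator flag, inferred from the cache path in the log line above) are assumptions:

from flair.datasets import NER_HIPE_2022

# Assumed arguments, inferred from the cache path
# .../ner_hipe_2022/v2.1/ajmc/fr/with_doc_seperator shown above.
corpus = NER_HIPE_2022(
    dataset_name="ajmc",
    language="fr",
    version="v2.1",
    add_document_separator=True,
)
print(corpus)  # expected: 966 train + 219 dev + 204 test sentences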
2023-10-23 19:46:30,133 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,133 Train: 966 sentences
2023-10-23 19:46:30,133 (train_with_dev=False, train_with_test=False)
2023-10-23 19:46:30,133 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,133 Training Params:
2023-10-23 19:46:30,133 - learning_rate: "3e-05"
2023-10-23 19:46:30,133 - mini_batch_size: "8"
2023-10-23 19:46:30,133 - max_epochs: "10"
2023-10-23 19:46:30,133 - shuffle: "True"
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 Plugins:
2023-10-23 19:46:30,134 - TensorboardLogger
2023-10-23 19:46:30,134 - LinearScheduler | warmup_fraction: '0.1'
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 Final evaluation on model from best epoch (best-model.pt)
2023-10-23 19:46:30,134 - metric: "('micro avg', 'f1-score')"
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 Computation:
2023-10-23 19:46:30,134 - compute on device: cuda:0
2023-10-23 19:46:30,134 - embedding storage: none
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 Model training base path: "hmbench-ajmc/fr-dbmdz/bert-base-historic-multilingual-64k-td-cased-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
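A reproduction sketch of this run's setup, assuming the dbmdz/bert-base-historic-multilingual-64k-td-cased checkpoint and the embedding options (last layer only, first-subtoken pooling, no CRF) read off the base path above; the LinearScheduler with warmup_fraction 0.1 listed under Plugins appears to come from the fine_tune defaults rather than explicit configuration:

from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = NER_HIPE_2022(dataset_name="ajmc", language="fr")  # see the loading sketch above

# Checkpoint name and embedding options are assumptions inferred from the base path.
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-64k-td-cased",
    layers="-1",               # "layers-1": last transformer layer only
    subtoken_pooling="first",  # "poolingfirst"
    fine_tune=True,
)

label_dict = corpus.make_label_dictionary(label_type="ner")

tagger = SequenceTagger(
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,             # "crfFalse" in the run name
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "hmbench-ajmc/fr-dbmdz/bert-base-historic-multilingual-64k-td-cased-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4",
    learning_rate=3e-5,        # Training Params above
    mini_batch_size=8,
    max_epochs=10,
)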
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:30,134 Logging anything other than scalars to TensorBoard is currently not supported.
2023-10-23 19:46:31,240 epoch 1 - iter 12/121 - loss 3.24807732 - time (sec): 1.11 - samples/sec: 2518.68 - lr: 0.000003 - momentum: 0.000000
2023-10-23 19:46:32,304 epoch 1 - iter 24/121 - loss 2.79258124 - time (sec): 2.17 - samples/sec: 2390.19 - lr: 0.000006 - momentum: 0.000000
2023-10-23 19:46:33,389 epoch 1 - iter 36/121 - loss 2.21094234 - time (sec): 3.25 - samples/sec: 2368.55 - lr: 0.000009 - momentum: 0.000000
2023-10-23 19:46:34,416 epoch 1 - iter 48/121 - loss 1.86670875 - time (sec): 4.28 - samples/sec: 2342.88 - lr: 0.000012 - momentum: 0.000000
2023-10-23 19:46:35,411 epoch 1 - iter 60/121 - loss 1.61260919 - time (sec): 5.28 - samples/sec: 2341.01 - lr: 0.000015 - momentum: 0.000000
2023-10-23 19:46:36,502 epoch 1 - iter 72/121 - loss 1.43205689 - time (sec): 6.37 - samples/sec: 2337.73 - lr: 0.000018 - momentum: 0.000000
2023-10-23 19:46:37,553 epoch 1 - iter 84/121 - loss 1.28991617 - time (sec): 7.42 - samples/sec: 2318.73 - lr: 0.000021 - momentum: 0.000000
2023-10-23 19:46:38,662 epoch 1 - iter 96/121 - loss 1.16740443 - time (sec): 8.53 - samples/sec: 2329.74 - lr: 0.000024 - momentum: 0.000000
2023-10-23 19:46:39,829 epoch 1 - iter 108/121 - loss 1.06907853 - time (sec): 9.69 - samples/sec: 2310.03 - lr: 0.000027 - momentum: 0.000000
2023-10-23 19:46:40,825 epoch 1 - iter 120/121 - loss 0.99607988 - time (sec): 10.69 - samples/sec: 2300.67 - lr: 0.000030 - momentum: 0.000000
2023-10-23 19:46:40,899 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:40,899 EPOCH 1 done: loss 0.9920 - lr: 0.000030
2023-10-23 19:46:41,725 DEV : loss 0.20645131170749664 - f1-score (micro avg) 0.5877
2023-10-23 19:46:41,729 saving best model
2023-10-23 19:46:42,204 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:43,176 epoch 2 - iter 12/121 - loss 0.18130347 - time (sec): 0.97 - samples/sec: 2422.45 - lr: 0.000030 - momentum: 0.000000
2023-10-23 19:46:44,212 epoch 2 - iter 24/121 - loss 0.20186068 - time (sec): 2.01 - samples/sec: 2388.01 - lr: 0.000029 - momentum: 0.000000
2023-10-23 19:46:45,352 epoch 2 - iter 36/121 - loss 0.20017264 - time (sec): 3.15 - samples/sec: 2331.36 - lr: 0.000029 - momentum: 0.000000
2023-10-23 19:46:46,458 epoch 2 - iter 48/121 - loss 0.19487808 - time (sec): 4.25 - samples/sec: 2308.55 - lr: 0.000029 - momentum: 0.000000
2023-10-23 19:46:47,543 epoch 2 - iter 60/121 - loss 0.19135954 - time (sec): 5.34 - samples/sec: 2344.92 - lr: 0.000028 - momentum: 0.000000
2023-10-23 19:46:48,677 epoch 2 - iter 72/121 - loss 0.19070268 - time (sec): 6.47 - samples/sec: 2318.58 - lr: 0.000028 - momentum: 0.000000
2023-10-23 19:46:49,724 epoch 2 - iter 84/121 - loss 0.18314887 - time (sec): 7.52 - samples/sec: 2300.35 - lr: 0.000028 - momentum: 0.000000
2023-10-23 19:46:50,702 epoch 2 - iter 96/121 - loss 0.17785951 - time (sec): 8.50 - samples/sec: 2297.77 - lr: 0.000027 - momentum: 0.000000
2023-10-23 19:46:51,792 epoch 2 - iter 108/121 - loss 0.17348365 - time (sec): 9.59 - samples/sec: 2305.39 - lr: 0.000027 - momentum: 0.000000
2023-10-23 19:46:52,878 epoch 2 - iter 120/121 - loss 0.17447395 - time (sec): 10.67 - samples/sec: 2300.45 - lr: 0.000027 - momentum: 0.000000
2023-10-23 19:46:52,950 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:52,950 EPOCH 2 done: loss 0.1737 - lr: 0.000027
2023-10-23 19:46:53,648 DEV : loss 0.10967682301998138 - f1-score (micro avg) 0.8192
2023-10-23 19:46:53,652 saving best model
2023-10-23 19:46:54,308 ----------------------------------------------------------------------------------------------------
2023-10-23 19:46:55,396 epoch 3 - iter 12/121 - loss 0.10424801 - time (sec): 1.09 - samples/sec: 2178.66 - lr: 0.000026 - momentum: 0.000000
2023-10-23 19:46:56,470 epoch 3 - iter 24/121 - loss 0.10985516 - time (sec): 2.16 - samples/sec: 2235.22 - lr: 0.000026 - momentum: 0.000000
2023-10-23 19:46:57,483 epoch 3 - iter 36/121 - loss 0.11434645 - time (sec): 3.17 - samples/sec: 2247.19 - lr: 0.000026 - momentum: 0.000000
2023-10-23 19:46:58,589 epoch 3 - iter 48/121 - loss 0.11240648 - time (sec): 4.28 - samples/sec: 2202.81 - lr: 0.000025 - momentum: 0.000000
2023-10-23 19:46:59,654 epoch 3 - iter 60/121 - loss 0.10353803 - time (sec): 5.35 - samples/sec: 2266.78 - lr: 0.000025 - momentum: 0.000000
2023-10-23 19:47:00,720 epoch 3 - iter 72/121 - loss 0.10533976 - time (sec): 6.41 - samples/sec: 2260.60 - lr: 0.000025 - momentum: 0.000000
2023-10-23 19:47:01,781 epoch 3 - iter 84/121 - loss 0.10380808 - time (sec): 7.47 - samples/sec: 2276.64 - lr: 0.000024 - momentum: 0.000000
2023-10-23 19:47:02,845 epoch 3 - iter 96/121 - loss 0.10370254 - time (sec): 8.54 - samples/sec: 2284.93 - lr: 0.000024 - momentum: 0.000000
2023-10-23 19:47:03,898 epoch 3 - iter 108/121 - loss 0.10181749 - time (sec): 9.59 - samples/sec: 2295.94 - lr: 0.000024 - momentum: 0.000000
2023-10-23 19:47:04,934 epoch 3 - iter 120/121 - loss 0.10319866 - time (sec): 10.63 - samples/sec: 2319.81 - lr: 0.000023 - momentum: 0.000000
2023-10-23 19:47:05,002 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:05,002 EPOCH 3 done: loss 0.1032 - lr: 0.000023
2023-10-23 19:47:05,700 DEV : loss 0.10450883954763412 - f1-score (micro avg) 0.8285
2023-10-23 19:47:05,703 saving best model
2023-10-23 19:47:06,347 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:07,440 epoch 4 - iter 12/121 - loss 0.08520871 - time (sec): 1.09 - samples/sec: 2337.55 - lr: 0.000023 - momentum: 0.000000
2023-10-23 19:47:08,522 epoch 4 - iter 24/121 - loss 0.06947042 - time (sec): 2.17 - samples/sec: 2446.57 - lr: 0.000023 - momentum: 0.000000
2023-10-23 19:47:09,552 epoch 4 - iter 36/121 - loss 0.07330761 - time (sec): 3.20 - samples/sec: 2374.91 - lr: 0.000022 - momentum: 0.000000
2023-10-23 19:47:10,641 epoch 4 - iter 48/121 - loss 0.07280030 - time (sec): 4.29 - samples/sec: 2414.32 - lr: 0.000022 - momentum: 0.000000
2023-10-23 19:47:11,688 epoch 4 - iter 60/121 - loss 0.07044866 - time (sec): 5.34 - samples/sec: 2378.98 - lr: 0.000022 - momentum: 0.000000
2023-10-23 19:47:12,772 epoch 4 - iter 72/121 - loss 0.06822825 - time (sec): 6.42 - samples/sec: 2345.72 - lr: 0.000021 - momentum: 0.000000
2023-10-23 19:47:13,826 epoch 4 - iter 84/121 - loss 0.06425512 - time (sec): 7.48 - samples/sec: 2332.32 - lr: 0.000021 - momentum: 0.000000
2023-10-23 19:47:14,919 epoch 4 - iter 96/121 - loss 0.06673287 - time (sec): 8.57 - samples/sec: 2352.62 - lr: 0.000021 - momentum: 0.000000
2023-10-23 19:47:15,858 epoch 4 - iter 108/121 - loss 0.06359809 - time (sec): 9.51 - samples/sec: 2326.85 - lr: 0.000020 - momentum: 0.000000
2023-10-23 19:47:17,027 epoch 4 - iter 120/121 - loss 0.06796400 - time (sec): 10.68 - samples/sec: 2306.60 - lr: 0.000020 - momentum: 0.000000
2023-10-23 19:47:17,091 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:17,091 EPOCH 4 done: loss 0.0677 - lr: 0.000020
2023-10-23 19:47:17,790 DEV : loss 0.12491005659103394 - f1-score (micro avg) 0.8311
2023-10-23 19:47:17,794 saving best model
2023-10-23 19:47:18,390 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:19,340 epoch 5 - iter 12/121 - loss 0.03233918 - time (sec): 0.95 - samples/sec: 2169.81 - lr: 0.000020 - momentum: 0.000000
2023-10-23 19:47:20,391 epoch 5 - iter 24/121 - loss 0.04422085 - time (sec): 2.00 - samples/sec: 2312.34 - lr: 0.000019 - momentum: 0.000000
2023-10-23 19:47:21,461 epoch 5 - iter 36/121 - loss 0.04790383 - time (sec): 3.07 - samples/sec: 2324.16 - lr: 0.000019 - momentum: 0.000000
2023-10-23 19:47:22,493 epoch 5 - iter 48/121 - loss 0.04847780 - time (sec): 4.10 - samples/sec: 2333.57 - lr: 0.000019 - momentum: 0.000000
2023-10-23 19:47:23,676 epoch 5 - iter 60/121 - loss 0.04926650 - time (sec): 5.29 - samples/sec: 2313.39 - lr: 0.000018 - momentum: 0.000000
2023-10-23 19:47:24,740 epoch 5 - iter 72/121 - loss 0.05103815 - time (sec): 6.35 - samples/sec: 2282.77 - lr: 0.000018 - momentum: 0.000000
2023-10-23 19:47:25,773 epoch 5 - iter 84/121 - loss 0.04777471 - time (sec): 7.38 - samples/sec: 2287.13 - lr: 0.000018 - momentum: 0.000000
2023-10-23 19:47:26,850 epoch 5 - iter 96/121 - loss 0.05116899 - time (sec): 8.46 - samples/sec: 2296.55 - lr: 0.000017 - momentum: 0.000000
2023-10-23 19:47:27,888 epoch 5 - iter 108/121 - loss 0.04950296 - time (sec): 9.50 - samples/sec: 2308.93 - lr: 0.000017 - momentum: 0.000000
2023-10-23 19:47:28,972 epoch 5 - iter 120/121 - loss 0.04870772 - time (sec): 10.58 - samples/sec: 2310.78 - lr: 0.000017 - momentum: 0.000000
2023-10-23 19:47:29,085 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:29,085 EPOCH 5 done: loss 0.0483 - lr: 0.000017
2023-10-23 19:47:29,788 DEV : loss 0.13088755309581757 - f1-score (micro avg) 0.8259
2023-10-23 19:47:29,791 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:30,900 epoch 6 - iter 12/121 - loss 0.03844681 - time (sec): 1.11 - samples/sec: 2252.08 - lr: 0.000016 - momentum: 0.000000
2023-10-23 19:47:31,982 epoch 6 - iter 24/121 - loss 0.04727965 - time (sec): 2.19 - samples/sec: 2271.72 - lr: 0.000016 - momentum: 0.000000
2023-10-23 19:47:32,972 epoch 6 - iter 36/121 - loss 0.04000663 - time (sec): 3.18 - samples/sec: 2256.81 - lr: 0.000016 - momentum: 0.000000
2023-10-23 19:47:33,992 epoch 6 - iter 48/121 - loss 0.04182624 - time (sec): 4.20 - samples/sec: 2295.21 - lr: 0.000015 - momentum: 0.000000
2023-10-23 19:47:35,127 epoch 6 - iter 60/121 - loss 0.03957851 - time (sec): 5.33 - samples/sec: 2285.61 - lr: 0.000015 - momentum: 0.000000
2023-10-23 19:47:36,196 epoch 6 - iter 72/121 - loss 0.03745399 - time (sec): 6.40 - samples/sec: 2261.32 - lr: 0.000015 - momentum: 0.000000
2023-10-23 19:47:37,194 epoch 6 - iter 84/121 - loss 0.03711640 - time (sec): 7.40 - samples/sec: 2265.94 - lr: 0.000014 - momentum: 0.000000
2023-10-23 19:47:38,275 epoch 6 - iter 96/121 - loss 0.03582164 - time (sec): 8.48 - samples/sec: 2262.15 - lr: 0.000014 - momentum: 0.000000
2023-10-23 19:47:39,449 epoch 6 - iter 108/121 - loss 0.03603381 - time (sec): 9.66 - samples/sec: 2276.28 - lr: 0.000014 - momentum: 0.000000
2023-10-23 19:47:40,533 epoch 6 - iter 120/121 - loss 0.03621539 - time (sec): 10.74 - samples/sec: 2289.15 - lr: 0.000013 - momentum: 0.000000
2023-10-23 19:47:40,601 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:40,602 EPOCH 6 done: loss 0.0367 - lr: 0.000013
2023-10-23 19:47:41,300 DEV : loss 0.13503359258174896 - f1-score (micro avg) 0.8512
2023-10-23 19:47:41,304 saving best model
2023-10-23 19:47:41,938 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:43,144 epoch 7 - iter 12/121 - loss 0.02274027 - time (sec): 1.21 - samples/sec: 2178.14 - lr: 0.000013 - momentum: 0.000000
2023-10-23 19:47:44,243 epoch 7 - iter 24/121 - loss 0.02447278 - time (sec): 2.30 - samples/sec: 2290.94 - lr: 0.000013 - momentum: 0.000000
2023-10-23 19:47:45,284 epoch 7 - iter 36/121 - loss 0.02336160 - time (sec): 3.35 - samples/sec: 2272.90 - lr: 0.000012 - momentum: 0.000000
2023-10-23 19:47:46,277 epoch 7 - iter 48/121 - loss 0.02326472 - time (sec): 4.34 - samples/sec: 2265.11 - lr: 0.000012 - momentum: 0.000000
2023-10-23 19:47:47,321 epoch 7 - iter 60/121 - loss 0.02119715 - time (sec): 5.38 - samples/sec: 2279.08 - lr: 0.000012 - momentum: 0.000000
2023-10-23 19:47:48,467 epoch 7 - iter 72/121 - loss 0.02295698 - time (sec): 6.53 - samples/sec: 2284.58 - lr: 0.000011 - momentum: 0.000000
2023-10-23 19:47:49,624 epoch 7 - iter 84/121 - loss 0.02294149 - time (sec): 7.69 - samples/sec: 2275.73 - lr: 0.000011 - momentum: 0.000000
2023-10-23 19:47:50,660 epoch 7 - iter 96/121 - loss 0.02335204 - time (sec): 8.72 - samples/sec: 2289.17 - lr: 0.000011 - momentum: 0.000000
2023-10-23 19:47:51,711 epoch 7 - iter 108/121 - loss 0.02340597 - time (sec): 9.77 - samples/sec: 2287.38 - lr: 0.000010 - momentum: 0.000000
2023-10-23 19:47:52,704 epoch 7 - iter 120/121 - loss 0.02406333 - time (sec): 10.77 - samples/sec: 2279.94 - lr: 0.000010 - momentum: 0.000000
2023-10-23 19:47:52,785 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:52,786 EPOCH 7 done: loss 0.0239 - lr: 0.000010
2023-10-23 19:47:53,482 DEV : loss 0.16384273767471313 - f1-score (micro avg) 0.8311
2023-10-23 19:47:53,486 ----------------------------------------------------------------------------------------------------
2023-10-23 19:47:54,589 epoch 8 - iter 12/121 - loss 0.00827368 - time (sec): 1.10 - samples/sec: 2277.74 - lr: 0.000010 - momentum: 0.000000
2023-10-23 19:47:55,636 epoch 8 - iter 24/121 - loss 0.01107853 - time (sec): 2.15 - samples/sec: 2336.51 - lr: 0.000009 - momentum: 0.000000
2023-10-23 19:47:56,643 epoch 8 - iter 36/121 - loss 0.01337099 - time (sec): 3.16 - samples/sec: 2328.93 - lr: 0.000009 - momentum: 0.000000
2023-10-23 19:47:57,776 epoch 8 - iter 48/121 - loss 0.01692936 - time (sec): 4.29 - samples/sec: 2320.24 - lr: 0.000009 - momentum: 0.000000
2023-10-23 19:47:58,847 epoch 8 - iter 60/121 - loss 0.02066655 - time (sec): 5.36 - samples/sec: 2289.65 - lr: 0.000008 - momentum: 0.000000
2023-10-23 19:47:59,934 epoch 8 - iter 72/121 - loss 0.02097689 - time (sec): 6.45 - samples/sec: 2276.04 - lr: 0.000008 - momentum: 0.000000
2023-10-23 19:48:00,917 epoch 8 - iter 84/121 - loss 0.01995040 - time (sec): 7.43 - samples/sec: 2274.83 - lr: 0.000008 - momentum: 0.000000
2023-10-23 19:48:01,980 epoch 8 - iter 96/121 - loss 0.01916411 - time (sec): 8.49 - samples/sec: 2273.30 - lr: 0.000008 - momentum: 0.000000
2023-10-23 19:48:03,071 epoch 8 - iter 108/121 - loss 0.01856749 - time (sec): 9.58 - samples/sec: 2298.33 - lr: 0.000007 - momentum: 0.000000
2023-10-23 19:48:04,144 epoch 8 - iter 120/121 - loss 0.01954548 - time (sec): 10.66 - samples/sec: 2297.31 - lr: 0.000007 - momentum: 0.000000
2023-10-23 19:48:04,255 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:04,255 EPOCH 8 done: loss 0.0196 - lr: 0.000007
2023-10-23 19:48:04,951 DEV : loss 0.1711445152759552 - f1-score (micro avg) 0.8476
2023-10-23 19:48:04,955 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:05,964 epoch 9 - iter 12/121 - loss 0.00580394 - time (sec): 1.01 - samples/sec: 2520.06 - lr: 0.000006 - momentum: 0.000000
2023-10-23 19:48:07,098 epoch 9 - iter 24/121 - loss 0.00913463 - time (sec): 2.14 - samples/sec: 2274.55 - lr: 0.000006 - momentum: 0.000000
2023-10-23 19:48:08,219 epoch 9 - iter 36/121 - loss 0.00919498 - time (sec): 3.26 - samples/sec: 2360.58 - lr: 0.000006 - momentum: 0.000000
2023-10-23 19:48:09,274 epoch 9 - iter 48/121 - loss 0.01086227 - time (sec): 4.32 - samples/sec: 2254.65 - lr: 0.000006 - momentum: 0.000000
2023-10-23 19:48:10,422 epoch 9 - iter 60/121 - loss 0.01061191 - time (sec): 5.47 - samples/sec: 2246.56 - lr: 0.000005 - momentum: 0.000000
2023-10-23 19:48:11,416 epoch 9 - iter 72/121 - loss 0.01162909 - time (sec): 6.46 - samples/sec: 2263.60 - lr: 0.000005 - momentum: 0.000000
2023-10-23 19:48:12,474 epoch 9 - iter 84/121 - loss 0.01515064 - time (sec): 7.52 - samples/sec: 2282.47 - lr: 0.000005 - momentum: 0.000000
2023-10-23 19:48:13,624 epoch 9 - iter 96/121 - loss 0.01530827 - time (sec): 8.67 - samples/sec: 2295.79 - lr: 0.000004 - momentum: 0.000000
2023-10-23 19:48:14,833 epoch 9 - iter 108/121 - loss 0.01611207 - time (sec): 9.88 - samples/sec: 2238.25 - lr: 0.000004 - momentum: 0.000000
2023-10-23 19:48:15,854 epoch 9 - iter 120/121 - loss 0.01547451 - time (sec): 10.90 - samples/sec: 2255.44 - lr: 0.000004 - momentum: 0.000000
2023-10-23 19:48:15,924 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:15,924 EPOCH 9 done: loss 0.0154 - lr: 0.000004
2023-10-23 19:48:16,621 DEV : loss 0.17786599695682526 - f1-score (micro avg) 0.8407
2023-10-23 19:48:16,625 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:17,612 epoch 10 - iter 12/121 - loss 0.01379435 - time (sec): 0.99 - samples/sec: 2234.08 - lr: 0.000003 - momentum: 0.000000
2023-10-23 19:48:18,712 epoch 10 - iter 24/121 - loss 0.01395998 - time (sec): 2.09 - samples/sec: 2286.63 - lr: 0.000003 - momentum: 0.000000
2023-10-23 19:48:19,850 epoch 10 - iter 36/121 - loss 0.01108721 - time (sec): 3.22 - samples/sec: 2255.02 - lr: 0.000003 - momentum: 0.000000
2023-10-23 19:48:20,946 epoch 10 - iter 48/121 - loss 0.01106129 - time (sec): 4.32 - samples/sec: 2288.32 - lr: 0.000002 - momentum: 0.000000
2023-10-23 19:48:21,985 epoch 10 - iter 60/121 - loss 0.01028171 - time (sec): 5.36 - samples/sec: 2316.26 - lr: 0.000002 - momentum: 0.000000
2023-10-23 19:48:23,073 epoch 10 - iter 72/121 - loss 0.01138977 - time (sec): 6.45 - samples/sec: 2292.04 - lr: 0.000002 - momentum: 0.000000
2023-10-23 19:48:24,157 epoch 10 - iter 84/121 - loss 0.01123254 - time (sec): 7.53 - samples/sec: 2281.10 - lr: 0.000001 - momentum: 0.000000
2023-10-23 19:48:25,181 epoch 10 - iter 96/121 - loss 0.01090350 - time (sec): 8.56 - samples/sec: 2278.21 - lr: 0.000001 - momentum: 0.000000
2023-10-23 19:48:26,409 epoch 10 - iter 108/121 - loss 0.00999080 - time (sec): 9.78 - samples/sec: 2290.97 - lr: 0.000001 - momentum: 0.000000
2023-10-23 19:48:27,404 epoch 10 - iter 120/121 - loss 0.01142545 - time (sec): 10.78 - samples/sec: 2282.97 - lr: 0.000000 - momentum: 0.000000
2023-10-23 19:48:27,470 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:27,471 EPOCH 10 done: loss 0.0114 - lr: 0.000000
2023-10-23 19:48:28,165 DEV : loss 0.17944633960723877 - f1-score (micro avg) 0.8438
2023-10-23 19:48:28,638 ----------------------------------------------------------------------------------------------------
2023-10-23 19:48:28,639 Loading model from best epoch ...
2023-10-23 19:48:30,184 SequenceTagger predicts: Dictionary with 25 tags: O, S-scope, B-scope, E-scope, I-scope, S-pers, B-pers, E-pers, I-pers, S-work, B-work, E-work, I-work, S-loc, B-loc, E-loc, I-loc, S-object, B-object, E-object, I-object, S-date, B-date, E-date, I-date
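Loading the saved best-model.pt and tagging text follows the standard Flair API; the example sentence below is purely illustrative:

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "hmbench-ajmc/fr-dbmdz/bert-base-historic-multilingual-64k-td-cased-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4/best-model.pt"
)

sentence = Sentence("Sophocle est né à Colone, près d'Athènes.")  # illustrative input
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)  # predicted span with its label and confidence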
2023-10-23 19:48:30,918
Results:
- F-score (micro) 0.8288
- F-score (macro) 0.5689
- Accuracy 0.7297
By class:
              precision    recall  f1-score   support

        pers     0.8472    0.8777    0.8622       139
       scope     0.8603    0.9070    0.8830       129
        work     0.6809    0.8000    0.7356        80
         loc     1.0000    0.2222    0.3636         9
        date     0.0000    0.0000    0.0000         3

   micro avg     0.8112    0.8472    0.8288       360
   macro avg     0.6777    0.5614    0.5689       360
weighted avg     0.8117    0.8472    0.8219       360
2023-10-23 19:48:30,919 ----------------------------------------------------------------------------------------------------
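For reference, a sketch of how a report like the one above can be regenerated from the saved model and the same test split; the path and mini-batch size are taken from this log, and the corpus arguments follow the assumptions noted earlier:

from flair.datasets import NER_HIPE_2022
from flair.models import SequenceTagger

corpus = NER_HIPE_2022(dataset_name="ajmc", language="fr")
tagger = SequenceTagger.load(
    "hmbench-ajmc/fr-dbmdz/bert-base-historic-multilingual-64k-td-cased-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4/best-model.pt"
)

result = tagger.evaluate(
    corpus.test,
    gold_label_type="ner",
    mini_batch_size=8,
)
print(result.detailed_results)  # per-class precision/recall/F1, as in the table above
print(result.main_score)        # micro-average F1 (0.8288 in this run)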