After training for a long time with no significant improvement in inference results, I had no choice but to adjust the model and discard part of the already-trained weights.
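The post doesn't show how the weights were discarded. A minimal sketch of one common approach in PyTorch is to keep only the checkpoint entries whose names and shapes still match the modified model; the function name and checkpoint path below are hypothetical:

```python
import torch

def load_matching_weights(model, ckpt_path):
    """Keep only checkpoint tensors whose name AND shape still match the
    (modified) model; everything else stays at its fresh initialization."""
    old_sd = torch.load(ckpt_path, map_location="cpu")
    new_sd = model.state_dict()
    kept = {k: v for k, v in old_sd.items()
            if k in new_sd and v.shape == new_sd[k].shape}
    new_sd.update(kept)
    model.load_state_dict(new_sd)
    dropped = sorted(set(old_sd) - set(kept))
    print(f"kept {len(kept)} tensors, dropped {len(dropped)}: {dropped}")
    return kept, dropped
```

Training then resumes from this partially restored model, which is why the losses in the later logs start high again.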

Log from the original run after it had been training for a while:

✅ Model 300 loaded successfully
unpickled total 6086 examples
/usr/local/lib/python3.11/dist-packages/torch/autograd/graph.py:823: UserWarning: Attempting to run cuBLAS, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at /pytorch/aten/src/ATen/cuda/CublasHandlePool.cpp:180.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Epoch: [ 0], [ 0/ 290] time: 7, d_loss: 1.38687, g_loss: 7.64037, cheat_loss: 0.69287, const_loss: 0.04172, l1_loss: 2.36162, fm_loss: 0.00452, vgg_loss: 0.45396
checkpoint: current step 0,save after step 200
Epoch: [ 0], [ 100/ 290] time: 480, d_loss: 1.38684, g_loss: 6.41946, cheat_loss: 0.69287, const_loss: 0.00898, l1_loss: 1.82352, fm_loss: 0.00371, vgg_loss: 0.38904
checkpoint: current step 100,save after step 200
Epoch: [ 0], [ 200/ 290] time: 955, d_loss: 1.38675, g_loss: 7.04897, cheat_loss: 0.69287, const_loss: 0.01052, l1_loss: 2.09149, fm_loss: 0.00443, vgg_loss: 0.42497
checkpoint: current step 200
💾 Checkpoint saved at epoch 200
Scheduler step executed, current step: 1
Updated learning rate: G = 0.000100, D = 0.000100
Epoch: [ 1], [ 0/ 290] time: 1383, d_loss: 1.38681, g_loss: 6.41988, cheat_loss: 0.69385, const_loss: 0.01146, l1_loss: 1.80699, fm_loss: 0.00392, vgg_loss: 0.39037
checkpoint: current step 300
💾 Checkpoint saved at epoch 300
Epoch: [ 1], [ 100/ 290] time: 1861, d_loss: 1.38688, g_loss: 6.18465, cheat_loss: 0.69336, const_loss: 0.01128, l1_loss: 1.73817, fm_loss: 0.00367, vgg_loss: 0.37382
checkpoint: current step 400
💾 Checkpoint saved at epoch 400
Epoch: [ 1], [ 200/ 290] time: 2337, d_loss: 1.38675, g_loss: 6.30030, cheat_loss: 0.69434, const_loss: 0.00968, l1_loss: 1.77206, fm_loss: 0.00380, vgg_loss: 0.38204
checkpoint: current step 500
💾 Checkpoint saved at epoch 500
Scheduler step executed, current step: 2
Updated learning rate: G = 0.000099, D = 0.000099
Epoch: [ 2], [ 0/ 290] time: 2770, d_loss: 1.38677, g_loss: 6.85758, cheat_loss: 0.69287, const_loss: 0.00999, l1_loss: 1.94488, fm_loss: 0.00434, vgg_loss: 0.42055
checkpoint: current step 600
💾 Checkpoint saved at epoch 600
Epoch: [ 2], [ 100/ 290] time: 3246, d_loss: 1.38697, g_loss: 6.68530, cheat_loss: 0.69287, const_loss: 0.00910, l1_loss: 1.89909, fm_loss: 0.00405, vgg_loss: 0.40802
checkpoint: current step 700
💾 Checkpoint saved at epoch 700
Epoch: [ 2], [ 200/ 290] time: 3722, d_loss: 1.38691, g_loss: 6.77741, cheat_loss: 0.69287, const_loss: 0.00918, l1_loss: 1.97713, fm_loss: 0.00396, vgg_loss: 0.40943
checkpoint: current step 800
💾 Checkpoint saved at epoch 800
Scheduler step executed, current step: 3
Updated learning rate: G = 0.000099, D = 0.000099
Epoch: [ 3], [ 0/ 290] time: 4150, d_loss: 1.38697, g_loss: 6.22611, cheat_loss: 0.69336, const_loss: 0.01158, l1_loss: 1.87910, fm_loss: 0.00371, vgg_loss: 0.36384
checkpoint: current step 900
💾 Checkpoint saved at epoch 900
Epoch: [ 3], [ 100/ 290] time: 4632, d_loss: 1.38681, g_loss: 7.09874, cheat_loss: 0.69385, const_loss: 0.01072, l1_loss: 2.08162, fm_loss: 0.00448, vgg_loss: 0.43081
checkpoint: current step 1000
💾 Checkpoint saved at epoch 1000
Epoch: [ 3], [ 200/ 290] time: 5113, d_loss: 1.38678, g_loss: 5.94950, cheat_loss: 0.69287, const_loss: 0.01015, l1_loss: 1.61783, fm_loss: 0.00366, vgg_loss: 0.36250
checkpoint: current step 1100
💾 Checkpoint saved at epoch 1100
Scheduler step executed, current step: 4
Updated learning rate: G = 0.000098, D = 0.000098
Epoch: [ 4], [ 0/ 290] time: 5541, d_loss: 1.38673, g_loss: 6.21286, cheat_loss: 0.69287, const_loss: 0.00690, l1_loss: 1.76996, fm_loss: 0.00369, vgg_loss: 0.37394
checkpoint: current step 1200
💾 Checkpoint saved at epoch 1200
Epoch: [ 4], [ 100/ 290] time: 6023, d_loss: 1.38682, g_loss: 6.42526, cheat_loss: 0.69336, const_loss: 0.00945, l1_loss: 1.78809, fm_loss: 0.00382, vgg_loss: 0.39305
checkpoint: current step 1300
💾 Checkpoint saved at epoch 1300
Epoch: [ 4], [ 200/ 290] time: 6498, d_loss: 1.38685, g_loss: 5.85957, cheat_loss: 0.69287, const_loss: 0.01086, l1_loss: 1.60170, fm_loss: 0.00338, vgg_loss: 0.35508
checkpoint: current step 1400
💾 Checkpoint saved at epoch 1400
Scheduler step executed, current step: 5
Updated learning rate: G = 0.000096, D = 0.000096
Epoch: [ 5], [ 0/ 290] time: 6926, d_loss: 1.38678, g_loss: 5.83244, cheat_loss: 0.69287, const_loss: 0.01172, l1_loss: 1.61825, fm_loss: 0.00349, vgg_loss: 0.35061
checkpoint: current step 1500
💾 Checkpoint saved at epoch 1500
Epoch: [ 5], [ 100/ 290] time: 7405, d_loss: 1.38693, g_loss: 6.25700, cheat_loss: 0.69287, const_loss: 0.00811, l1_loss: 1.81687, fm_loss: 0.00376, vgg_loss: 0.37354
checkpoint: current step 1600
💾 Checkpoint saved at epoch 1600
Epoch: [ 5], [ 200/ 290] time: 7885, d_loss: 1.38678, g_loss: 6.63865, cheat_loss: 0.69336, const_loss: 0.00983, l1_loss: 1.89885, fm_loss: 0.00375, vgg_loss: 0.40329
checkpoint: current step 1700
💾 Checkpoint saved at epoch 1700
Scheduler step executed, current step: 6
Updated learning rate: G = 0.000095, D = 0.000095
Epoch: [ 6], [ 0/ 290] time: 8314, d_loss: 1.38676, g_loss: 6.20826, cheat_loss: 0.69336, const_loss: 0.00977, l1_loss: 1.80779, fm_loss: 0.00369, vgg_loss: 0.36936
checkpoint: current step 1800
💾 Checkpoint saved at epoch 1800
Epoch: [ 6], [ 100/ 290] time: 8793, d_loss: 1.38681, g_loss: 5.91608, cheat_loss: 0.69336, const_loss: 0.00817, l1_loss: 1.57263, fm_loss: 0.00372, vgg_loss: 0.36382
checkpoint: current step 1900
💾 Checkpoint saved at epoch 1900
Epoch: [ 6], [ 200/ 290] time: 9269, d_loss: 1.38676, g_loss: 5.75103, cheat_loss: 0.69336, const_loss: 0.00700, l1_loss: 1.51213, fm_loss: 0.00338, vgg_loss: 0.35352
checkpoint: current step 2000
💾 Checkpoint saved at epoch 2000
Scheduler step executed, current step: 7
Updated learning rate: G = 0.000093, D = 0.000093
Epoch: [ 7], [ 0/ 290] time: 9697, d_loss: 1.38677, g_loss: 6.21543, cheat_loss: 0.69287, const_loss: 0.00668, l1_loss: 1.71372, fm_loss: 0.00368, vgg_loss: 0.37985
checkpoint: current step 2100
💾 Checkpoint saved at epoch 2100
Epoch: [ 7], [ 100/ 290] time: 10178, d_loss: 1.38700, g_loss: 6.07064, cheat_loss: 0.69336, const_loss: 0.01171, l1_loss: 1.71163, fm_loss: 0.00373, vgg_loss: 0.36502
checkpoint: current step 2200
💾 Checkpoint saved at epoch 2200
Epoch: [ 7], [ 200/ 290] time: 10660, d_loss: 1.38693, g_loss: 6.17428, cheat_loss: 0.69385, const_loss: 0.00641, l1_loss: 1.66477, fm_loss: 0.00374, vgg_loss: 0.38055
checkpoint: current step 2300
💾 Checkpoint saved at epoch 2300
Scheduler step executed, current step: 8
Updated learning rate: G = 0.000091, D = 0.000091
Epoch: [ 8], [ 0/ 290] time: 11094, d_loss: 1.38679, g_loss: 6.07635, cheat_loss: 0.69336, const_loss: 0.00572, l1_loss: 1.58983, fm_loss: 0.00352, vgg_loss: 0.37839
checkpoint: current step 2400
💾 Checkpoint saved at epoch 2400
Epoch: [ 8], [ 100/ 290] time: 11575, d_loss: 1.38674, g_loss: 6.29299, cheat_loss: 0.69336, const_loss: 0.00985, l1_loss: 1.73617, fm_loss: 0.00378, vgg_loss: 0.38498
checkpoint: current step 2500
💾 Checkpoint saved at epoch 2500
Epoch: [ 8], [ 200/ 290] time: 12057, d_loss: 1.38676, g_loss: 5.90569, cheat_loss: 0.69287, const_loss: 0.00690, l1_loss: 1.58214, fm_loss: 0.00366, vgg_loss: 0.36201
checkpoint: current step 2600
💾 Checkpoint saved at epoch 2600
Scheduler step executed, current step: 9
Updated learning rate: G = 0.000088, D = 0.000088
Epoch: [ 9], [ 0/ 290] time: 12491, d_loss: 1.38676, g_loss: 6.36429, cheat_loss: 0.69287, const_loss: 0.00939, l1_loss: 1.73701, fm_loss: 0.00401, vgg_loss: 0.39210
checkpoint: current step 2700
💾 Checkpoint saved at epoch 2700
Epoch: [ 9], [ 100/ 290] time: 12967, d_loss: 1.38682, g_loss: 5.77301, cheat_loss: 0.69336, const_loss: 0.00996, l1_loss: 1.58300, fm_loss: 0.00341, vgg_loss: 0.34833
checkpoint: current step 2800
💾 Checkpoint saved at epoch 2800
Epoch: [ 9], [ 200/ 290] time: 13443, d_loss: 1.38682, g_loss: 6.05634, cheat_loss: 0.69336, const_loss: 0.03403, l1_loss: 1.69261, fm_loss: 0.00364, vgg_loss: 0.36327
Scheduler step executed, current step: 10
Updated learning rate: G = 0.000086, D = 0.000086
Epoch: [10], [ 0/ 290] time: 13869, d_loss: 1.38697, g_loss: 5.74047, cheat_loss: 0.69336, const_loss: 0.00837, l1_loss: 1.54565, fm_loss: 0.00333, vgg_loss: 0.34898
checkpoint: current step 2900
💾 Checkpoint saved at epoch 2900
Epoch: [10], [ 100/ 290] time: 14345, d_loss: 1.38716, g_loss: 5.93448, cheat_loss: 0.69287, const_loss: 0.01006, l1_loss: 1.61740, fm_loss: 0.00352, vgg_loss: 0.36106
checkpoint: current step 3000
💾 Checkpoint saved at epoch 3000
Epoch: [10], [ 200/ 290] time: 14821, d_loss: 1.38674, g_loss: 6.42213, cheat_loss: 0.69336, const_loss: 0.00695, l1_loss: 1.77466, fm_loss: 0.00374, vgg_loss: 0.39434
checkpoint: current step 3100
💾 Checkpoint saved at epoch 3100
Scheduler step executed, current step: 11
Updated learning rate: G = 0.000083, D = 0.000083
Epoch: [11], [ 0/ 290] time: 15255, d_loss: 1.38681, g_loss: 5.97088, cheat_loss: 0.69336, const_loss: 0.00991, l1_loss: 1.65258, fm_loss: 0.00359, vgg_loss: 0.36114
checkpoint: current step 3200
💾 Checkpoint saved at epoch 3200

One model that has since been modified had, before the modification, already been training for a long time, with g_loss holding at 5.x ~ 6.x and l1_loss at 2.x ~ 3.x.


After part of the weights were discarded, the first hour of fresh training looked like this:

✅ Model 316 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/59...
Epoch: [ 0], Batch: [ 0/ 404] | Total Time: 2s
d_loss: 1867.5072, g_loss: 110.7468, const_loss: 0.0874, l1_loss: 86.2081, fm_loss: 3.8270, perc_loss: 19.2026
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 404] | Total Time: 3m 49s
d_loss: 1.4219, g_loss: 32.1815, const_loss: 0.0621, l1_loss: 18.2283, fm_loss: 0.2482, perc_loss: 12.9475
💾 Checkpoint saved at step 200
Epoch: [ 0], Batch: [ 200/ 404] | Total Time: 7m 38s
d_loss: 1.4051, g_loss: 28.4049, const_loss: 0.0660, l1_loss: 15.4496, fm_loss: 0.2385, perc_loss: 11.9550
💾 Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 404] | Total Time: 11m 27s
d_loss: 1.4068, g_loss: 28.3579, const_loss: 0.0578, l1_loss: 15.1906, fm_loss: 0.1915, perc_loss: 12.2247
💾 Checkpoint saved at step 400
Epoch: [ 0], Batch: [ 400/ 404] | Total Time: 15m 16s
d_loss: 1.4041, g_loss: 23.0935, const_loss: 0.0758, l1_loss: 11.8268, fm_loss: 0.1120, perc_loss: 10.3850
--- End of Epoch 0 --- Time: 921.4s ---
LR Scheduler stepped. Current LR G: 0.000300, LR D: 0.000300
Epoch: [ 1], Batch: [ 0/ 404] | Total Time: 15m 23s
d_loss: 1.4219, g_loss: 24.1651, const_loss: 0.0608, l1_loss: 12.0019, fm_loss: 0.1118, perc_loss: 11.2972
💾 Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 100/ 404] | Total Time: 19m 11s
d_loss: 1.4359, g_loss: 25.6365, const_loss: 0.0775, l1_loss: 13.3937, fm_loss: 0.1788, perc_loss: 11.2922
💾 Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 200/ 404] | Total Time: 23m 0s
d_loss: 1.3957, g_loss: 22.7205, const_loss: 0.0563, l1_loss: 11.5665, fm_loss: 0.1340, perc_loss: 10.2703
💾 Checkpoint saved at step 700
Epoch: [ 1], Batch: [ 300/ 404] | Total Time: 26m 48s
d_loss: 1.3953, g_loss: 20.9696, const_loss: 0.0426, l1_loss: 10.3583, fm_loss: 0.0961, perc_loss: 9.7792
💾 Checkpoint saved at step 800
Epoch: [ 1], Batch: [ 400/ 404] | Total Time: 30m 36s
d_loss: 1.4501, g_loss: 20.4242, const_loss: 0.0580, l1_loss: 9.8645, fm_loss: 0.0972, perc_loss: 9.7106
--- End of Epoch 1 --- Time: 920.2s ---
LR Scheduler stepped. Current LR G: 0.000299, LR D: 0.000299
Epoch: [ 2], Batch: [ 0/ 404] | Total Time: 30m 43s
d_loss: 1.3998, g_loss: 20.1641, const_loss: 0.0504, l1_loss: 9.5389, fm_loss: 0.1008, perc_loss: 9.7806
💾 Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 100/ 404] | Total Time: 34m 31s
d_loss: 1.3927, g_loss: 21.4867, const_loss: 0.0349, l1_loss: 10.5517, fm_loss: 0.1215, perc_loss: 10.0842
💾 Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 200/ 404] | Total Time: 38m 19s
d_loss: 1.4047, g_loss: 20.7624, const_loss: 0.0521, l1_loss: 10.2023, fm_loss: 0.1092, perc_loss: 9.7054
💾 Checkpoint saved at step 1100
Epoch: [ 2], Batch: [ 300/ 404] | Total Time: 42m 7s
d_loss: 1.3978, g_loss: 19.4745, const_loss: 0.0435, l1_loss: 9.2680, fm_loss: 0.0898, perc_loss: 9.3799
💾 Checkpoint saved at step 1200
Epoch: [ 2], Batch: [ 400/ 404] | Total Time: 45m 55s
d_loss: 1.3931, g_loss: 19.0814, const_loss: 0.0414, l1_loss: 8.6848, fm_loss: 0.0892, perc_loss: 9.5731
--- End of Epoch 2 --- Time: 919.2s ---
LR Scheduler stepped. Current LR G: 0.000298, LR D: 0.000298
Epoch: [ 3], Batch: [ 0/ 404] | Total Time: 46m 3s
d_loss: 1.4088, g_loss: 21.2052, const_loss: 0.0451, l1_loss: 10.2602, fm_loss: 0.0991, perc_loss: 10.1066
💾 Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 100/ 404] | Total Time: 49m 50s
d_loss: 1.3963, g_loss: 18.4739, const_loss: 0.0408, l1_loss: 8.2517, fm_loss: 0.0926, perc_loss: 9.3954
💾 Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 200/ 404] | Total Time: 53m 38s
d_loss: 1.4327, g_loss: 18.8475, const_loss: 0.0422, l1_loss: 8.7518, fm_loss: 0.0816, perc_loss: 9.2786
💾 Checkpoint saved at step 1500
Epoch: [ 3], Batch: [ 300/ 404] | Total Time: 57m 27s
d_loss: 1.3939, g_loss: 19.1620, const_loss: 0.0452, l1_loss: 8.6969, fm_loss: 0.0793, perc_loss: 9.6472
💾 Checkpoint saved at step 1600
Epoch: [ 3], Batch: [ 400/ 404] | Total Time: 1h 1m 15s
d_loss: 1.3985, g_loss: 18.8425, const_loss: 0.0351, l1_loss: 8.7562, fm_loss: 0.0810, perc_loss: 9.2764
--- End of Epoch 3 --- Time: 919.5s ---
LR Scheduler stepped. Current LR G: 0.000297, LR D: 0.000297
Epoch: [ 4], Batch: [ 0/ 404] | Total Time: 1h 1m 22s
d_loss: 1.4096, g_loss: 19.9162, const_loss: 0.0286, l1_loss: 9.2687, fm_loss: 0.0814, perc_loss: 9.8441
💾 Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 100/ 404] | Total Time: 1h 5m 10s
d_loss: 1.3930, g_loss: 20.4990, const_loss: 0.0436, l1_loss: 9.4853, fm_loss: 0.0782, perc_loss: 10.1984
💾 Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 200/ 404] | Total Time: 1h 8m 58s
d_loss: 1.4000, g_loss: 17.0301, const_loss: 0.0288, l1_loss: 7.4496, fm_loss: 0.0635, perc_loss: 8.7943
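Lines like "Checkpoint step 100 reached, but saving starts after step 200." in the log above suggest a simple guard around checkpoint saving. A minimal sketch, with the interval and threshold read off the log and the function itself hypothetical:

```python
def maybe_save_checkpoint(step, save_interval=100, save_after=200):
    """Checkpoint steps come up every `save_interval` global steps,
    but actual saving only begins once `step` reaches `save_after`."""
    if step == 0 or step % save_interval != 0:
        return False
    if step < save_after:
        print(f"Checkpoint step {step} reached, "
              f"but saving starts after step {save_after}.")
        return False
    print(f"💾 Checkpoint saved at step {step}")
    return True
```

This skips the earliest checkpoints, which are never worth keeping anyway.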

After the model adjustment (with part of the weights gone), l1_loss slowly came back down from a sky-high 18.x to 4.x, and g_loss likewise fell from 30.x to 11.x:

Starting training from epoch 0...
Epoch: [ 0], Batch: [ 0/ 429] | Time/Batch: 4.45s | Total Time: 5
d_loss: 1.3870, g_loss: 11.2197, const_loss: 0.0004, l1_loss: 3.8375, fm_loss: 0.0748, perc_loss: 6.6137
Epoch: [ 0], Batch: [ 100/ 429] | Time/Batch: 2.42s | Total Time: 248
d_loss: 1.3880, g_loss: 10.6320, const_loss: 0.0008, l1_loss: 3.5410, fm_loss: 0.0687, perc_loss: 6.3281

Checkpoint step 150 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 200/ 429] | Time/Batch: 2.44s | Total Time: 497
d_loss: 1.3868, g_loss: 10.7136, const_loss: 0.0018, l1_loss: 3.6025, fm_loss: 0.0736, perc_loss: 6.3422

Saving checkpoint at step 300...
💾 Checkpoint saved at epoch 300
Checkpoint saved.
Epoch: [ 0], Batch: [ 300/ 429] | Time/Batch: 2.50s | Total Time: 748
d_loss: 1.3868, g_loss: 12.0484, const_loss: 0.0006, l1_loss: 4.5537, fm_loss: 0.0933, perc_loss: 6.7073
Epoch: [ 0], Batch: [ 400/ 429] | Time/Batch: 2.43s | Total Time: 997
d_loss: 1.3871, g_loss: 10.7184, const_loss: 0.0017, l1_loss: 3.6693, fm_loss: 0.0735, perc_loss: 6.2805

--- End of Epoch 0 --- Time: 1067.17s ---
LR Scheduler stepped. Current LR G: 0.000160, LR D: 0.000160

--- Epoch 1/59 ---
Epoch: [ 1], Batch: [ 0/ 429] | Time/Batch: 2.44s | Total Time: 1070
d_loss: 1.3868, g_loss: 11.1438, const_loss: 0.0008, l1_loss: 3.7563, fm_loss: 0.0806, perc_loss: 6.6127

Saving checkpoint at step 450...
💾 Checkpoint saved at epoch 450
Checkpoint saved.
Epoch: [ 1], Batch: [ 100/ 429] | Time/Batch: 2.42s | Total Time: 1325
d_loss: 1.3880, g_loss: 11.2277, const_loss: 0.0012, l1_loss: 4.0324, fm_loss: 0.0844, perc_loss: 6.4169

Saving checkpoint at step 600...
💾 Checkpoint saved at epoch 600
Checkpoint saved.

Counting every 4 hours of training as one unit, we're now in the third unit. As for inference results that are actually worth looking at, it seems impossible in the short term.


Then I suddenly discovered that some of the training data was the wrong language version: roughly 1,000 examples were bad. @_@ When doing complicated operations in scattered spare moments, sometimes I copy the wrong folder without noticing, and by the time I caught it the model had already trained for many rounds. Oh no. So at this point, do I roll back to a checkpoint from before the bad data and retrain, or just ignore it and keep training on the corrected data?
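For the record, the roll-back option would amount to picking the newest checkpoint written before the bad data entered training. A sketch under the assumption that checkpoints are named `ckpt_step_<N>.pth`; both that naming scheme and `bad_step` are hypothetical:

```python
import glob
import os
import re

def latest_checkpoint_before(ckpt_dir, bad_step):
    """Return the newest checkpoint saved strictly before `bad_step`,
    the (estimated) global step at which the wrong data entered training."""
    best, best_step = None, -1
    for path in glob.glob(os.path.join(ckpt_dir, "ckpt_step_*.pth")):
        m = re.search(r"ckpt_step_(\d+)\.pth$", path)
        if m:
            step = int(m.group(1))
            if best_step < step < bad_step:
                best, best_step = path, step
    return best
```

The trade-off: rolling back throws away all the good updates made since `bad_step`, while continuing keeps them and lets the correct data gradually wash out the bad examples.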

I chose the latter. The training log that followed:

unpickled total 8065 examples
Starting training from epoch 0...

--- Epoch 0/59 ---
Epoch: [ 0], Batch: [ 0/ 385] | Time/Batch: 4.91s | Total Time: 5
d_loss: 1.3873, g_loss: 18.4826, const_loss: 0.0009, l1_loss: 8.5010, fm_loss: 0.1829, perc_loss: 9.1036
Epoch: [ 0], Batch: [ 100/ 385] | Time/Batch: 2.54s | Total Time: 259
d_loss: 1.3869, g_loss: 13.3189, const_loss: 0.0008, l1_loss: 5.1409, fm_loss: 0.1058, perc_loss: 7.3781

Checkpoint step 150 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 200/ 385] | Time/Batch: 2.56s | Total Time: 520
d_loss: 1.3870, g_loss: 11.7846, const_loss: 0.0024, l1_loss: 4.1571, fm_loss: 0.0832, perc_loss: 6.8485

Saving checkpoint at step 300...
💾 Checkpoint saved at epoch 300
Checkpoint saved.
Epoch: [ 0], Batch: [ 300/ 385] | Time/Batch: 2.55s | Total Time: 787
d_loss: 1.3873, g_loss: 13.0015, const_loss: 0.0009, l1_loss: 4.9049, fm_loss: 0.0958, perc_loss: 7.3066

--- End of Epoch 0 --- Time: 1004.38s ---
LR Scheduler stepped. Current LR G: 0.000150, LR D: 0.000150

--- Epoch 1/59 ---
Epoch: [ 1], Batch: [ 0/ 385] | Time/Batch: 2.58s | Total Time: 1007
d_loss: 1.3870, g_loss: 11.6078, const_loss: 0.0018, l1_loss: 4.1686, fm_loss: 0.0810, perc_loss: 6.6636

Saving checkpoint at step 450...
💾 Checkpoint saved at epoch 450
Checkpoint saved.
Epoch: [ 1], Batch: [ 100/ 385] | Time/Batch: 2.54s | Total Time: 1269
d_loss: 1.3870, g_loss: 12.3403, const_loss: 0.0013, l1_loss: 4.7776, fm_loss: 0.0901, perc_loss: 6.7784
Epoch: [ 1], Batch: [ 200/ 385] | Time/Batch: 2.56s | Total Time: 1529
d_loss: 1.3883, g_loss: 11.6794, const_loss: 0.0019, l1_loss: 4.1550, fm_loss: 0.0896, perc_loss: 6.7400

Saving checkpoint at step 600...
💾 Checkpoint saved at epoch 600
Checkpoint saved.
Epoch: [ 1], Batch: [ 300/ 385] | Time/Batch: 2.56s | Total Time: 1795
d_loss: 1.3868, g_loss: 11.6169, const_loss: 0.0010, l1_loss: 4.0936, fm_loss: 0.0837, perc_loss: 6.7453

Saving checkpoint at step 750...
💾 Checkpoint saved at epoch 750
Checkpoint saved.

--- End of Epoch 1 --- Time: 1008.75s ---
LR Scheduler stepped. Current LR G: 0.000150, LR D: 0.000150

--- Epoch 2/59 ---
Epoch: [ 2], Batch: [ 0/ 385] | Time/Batch: 2.57s | Total Time: 2016
d_loss: 1.3995, g_loss: 12.3267, const_loss: 0.0017, l1_loss: 4.7091, fm_loss: 0.0990, perc_loss: 6.8235
Epoch: [ 2], Batch: [ 100/ 385] | Time/Batch: 2.56s | Total Time: 2277
d_loss: 1.3872, g_loss: 11.0924, const_loss: 0.0007, l1_loss: 3.8605, fm_loss: 0.0781, perc_loss: 6.4597

Saving checkpoint at step 900...
💾 Checkpoint saved at epoch 900
Checkpoint saved.
Epoch: [ 2], Batch: [ 200/ 385] | Time/Batch: 2.54s | Total Time: 2544
d_loss: 1.3879, g_loss: 10.7140, const_loss: 0.0008, l1_loss: 3.6979, fm_loss: 0.0740, perc_loss: 6.2480

After another 4 hours of training, it came back down to g_loss: 9.x, l1_loss: 3.x:

✅ Model 336 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/59...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3870, g_loss: 9.4993, const_loss: 0.0007, l1_loss: 2.9194, fm_loss: 0.0515, perc_loss: 5.8342
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 4m 13s
d_loss: 1.3880, g_loss: 10.7985, const_loss: 0.0007, l1_loss: 3.4049, fm_loss: 0.0579, perc_loss: 6.6416
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 8m 30s
d_loss: 1.3869, g_loss: 10.2884, const_loss: 0.0007, l1_loss: 3.2624, fm_loss: 0.0540, perc_loss: 6.2780
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 12m 41s
d_loss: 1.3868, g_loss: 10.2150, const_loss: 0.0003, l1_loss: 3.1942, fm_loss: 0.0549, perc_loss: 6.2722
--- End of Epoch 0 --- Time: 968.7s ---
LR Scheduler stepped. Current LR G: 0.000120, LR D: 0.000120
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 16m 11s
d_loss: 1.3869, g_loss: 10.8728, const_loss: 0.0004, l1_loss: 3.4890, fm_loss: 0.0637, perc_loss: 6.6263
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 20m 22s
d_loss: 1.3871, g_loss: 9.0645, const_loss: 0.0007, l1_loss: 2.8201, fm_loss: 0.0517, perc_loss: 5.4986
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 24m 32s
d_loss: 1.3868, g_loss: 9.4077, const_loss: 0.0006, l1_loss: 2.8860, fm_loss: 0.0486, perc_loss: 5.7792
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 28m 43s
d_loss: 1.3870, g_loss: 9.6776, const_loss: 0.0008, l1_loss: 3.0043, fm_loss: 0.0533, perc_loss: 5.9258
--- End of Epoch 1 --- Time: 962.9s ---
LR Scheduler stepped. Current LR G: 0.000120, LR D: 0.000120
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 32m 14s
d_loss: 1.3908, g_loss: 9.8633, const_loss: 0.0006, l1_loss: 3.1322, fm_loss: 0.0579, perc_loss: 5.9792
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 36m 24s
d_loss: 1.3869, g_loss: 8.8684, const_loss: 0.0006, l1_loss: 2.7217, fm_loss: 0.0466, perc_loss: 5.4062
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 40m 35s
d_loss: 1.3868, g_loss: 9.2448, const_loss: 0.0005, l1_loss: 2.8618, fm_loss: 0.0493, perc_loss: 5.6399
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 44m 46s
d_loss: 1.3871, g_loss: 9.6806, const_loss: 0.0005, l1_loss: 2.9997, fm_loss: 0.0535, perc_loss: 5.9341
--- End of Epoch 2 --- Time: 962.9s ---
LR Scheduler stepped. Current LR G: 0.000119, LR D: 0.000119
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 48m 16s
d_loss: 1.3882, g_loss: 9.9630, const_loss: 0.0009, l1_loss: 3.4199, fm_loss: 0.0584, perc_loss: 5.7909
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 52m 27s
d_loss: 1.3868, g_loss: 9.6802, const_loss: 0.0005, l1_loss: 3.0333, fm_loss: 0.0522, perc_loss: 5.9008
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 56m 37s
d_loss: 1.3868, g_loss: 10.0665, const_loss: 0.0007, l1_loss: 3.1766, fm_loss: 0.0544, perc_loss: 6.1414
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 1h 49s
d_loss: 1.3873, g_loss: 10.4656, const_loss: 0.0009, l1_loss: 3.2320, fm_loss: 0.0562, perc_loss: 6.4832
--- End of Epoch 3 --- Time: 963.5s ---
LR Scheduler stepped. Current LR G: 0.000119, LR D: 0.000119
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 1h 4m 20s
d_loss: 1.3868, g_loss: 9.4314, const_loss: 0.0005, l1_loss: 2.9337, fm_loss: 0.0500, perc_loss: 5.7539
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 8m 30s
d_loss: 1.3868, g_loss: 9.7548, const_loss: 0.0004, l1_loss: 2.9702, fm_loss: 0.0507, perc_loss: 6.0401
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 12m 41s
d_loss: 1.3869, g_loss: 10.1868, const_loss: 0.0005, l1_loss: 3.2178, fm_loss: 0.0573, perc_loss: 6.2177
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 16m 52s
d_loss: 1.3870, g_loss: 10.0047, const_loss: 0.0007, l1_loss: 3.2231, fm_loss: 0.0558, perc_loss: 6.0318
--- End of Epoch 4 --- Time: 963.0s ---
LR Scheduler stepped. Current LR G: 0.000118, LR D: 0.000118
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 20m 23s
d_loss: 1.3870, g_loss: 8.6669, const_loss: 0.0005, l1_loss: 2.6592, fm_loss: 0.0466, perc_loss: 5.2672
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 24m 33s
d_loss: 1.3868, g_loss: 9.4733, const_loss: 0.0004, l1_loss: 2.9239, fm_loss: 0.0508, perc_loss: 5.8049
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 28m 50s
d_loss: 1.3868, g_loss: 9.2385, const_loss: 0.0006, l1_loss: 2.9031, fm_loss: 0.0496, perc_loss: 5.5918
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 33m 1s
d_loss: 1.3869, g_loss: 9.8365, const_loss: 0.0005, l1_loss: 3.1036, fm_loss: 0.0528, perc_loss: 5.9862
--- End of Epoch 5 --- Time: 968.9s ---
LR Scheduler stepped. Current LR G: 0.000117, LR D: 0.000117
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 36m 32s
d_loss: 1.3923, g_loss: 10.1013, const_loss: 0.0006, l1_loss: 3.2773, fm_loss: 0.0535, perc_loss: 6.0766
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 40m 43s
d_loss: 1.3869, g_loss: 9.3514, const_loss: 0.0005, l1_loss: 2.8766, fm_loss: 0.0486, perc_loss: 5.7323
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 44m 53s
d_loss: 1.3867, g_loss: 9.9055, const_loss: 0.0008, l1_loss: 3.1463, fm_loss: 0.0539, perc_loss: 6.0111
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 49m 3s
d_loss: 1.3868, g_loss: 10.1216, const_loss: 0.0002, l1_loss: 3.1538, fm_loss: 0.0521, perc_loss: 6.2222
--- End of Epoch 6 --- Time: 960.8s ---
LR Scheduler stepped. Current LR G: 0.000116, LR D: 0.000116
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 52m 33s
d_loss: 1.3878, g_loss: 9.5405, const_loss: 0.0004, l1_loss: 3.0016, fm_loss: 0.0487, perc_loss: 5.7965

The 3 days of training above were abandoned entirely, because the model was modified yet again.

After 3 hours of training, here is the result of switching to the next account and training for 1 more hour:

✅ Model 344 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/29...
Epoch: [ 0], Batch: [ 0/ 404] | Total Time: 4s
d_loss: 1.4084, g_loss: 12.7214, const_loss: 0.0175, l1_loss: 4.8502, fm_loss: 0.0229, perc_loss: 7.1374
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 404] | Total Time: 4m 9s
d_loss: 1.3889, g_loss: 12.6852, const_loss: 0.0170, l1_loss: 4.7793, fm_loss: 0.0214, perc_loss: 7.1747
💾 Checkpoint saved at step 200
Epoch: [ 0], Batch: [ 200/ 404] | Total Time: 8m 16s
d_loss: 1.3914, g_loss: 13.9636, const_loss: 0.0200, l1_loss: 5.7138, fm_loss: 0.0256, perc_loss: 7.5108
💾 Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 404] | Total Time: 12m 23s
d_loss: 1.3934, g_loss: 13.9670, const_loss: 0.0176, l1_loss: 5.4541, fm_loss: 0.0231, perc_loss: 7.7794
💾 Checkpoint saved at step 400
Epoch: [ 0], Batch: [ 400/ 404] | Total Time: 16m 30s
d_loss: 1.3880, g_loss: 12.6579, const_loss: 0.0201, l1_loss: 5.0259, fm_loss: 0.0261, perc_loss: 6.8929
--- End of Epoch 0 --- Time: 995.8s ---
LR Scheduler stepped. Current LR G: 0.000269, LR D: 0.000269
Epoch: [ 1], Batch: [ 0/ 404] | Total Time: 16m 38s
d_loss: 1.3912, g_loss: 13.5750, const_loss: 0.0143, l1_loss: 5.2560, fm_loss: 0.0254, perc_loss: 7.5859
💾 Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 100/ 404] | Total Time: 20m 44s
d_loss: 1.3900, g_loss: 14.3555, const_loss: 0.0235, l1_loss: 5.8783, fm_loss: 0.0295, perc_loss: 7.7309
💾 Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 200/ 404] | Total Time: 24m 51s
d_loss: 1.3882, g_loss: 12.3923, const_loss: 0.0208, l1_loss: 4.7079, fm_loss: 0.0238, perc_loss: 6.9470
💾 Checkpoint saved at step 700
Epoch: [ 1], Batch: [ 300/ 404] | Total Time: 28m 58s
d_loss: 1.3883, g_loss: 12.6920, const_loss: 0.0201, l1_loss: 4.9821, fm_loss: 0.0242, perc_loss: 6.9723
💾 Checkpoint saved at step 800
Epoch: [ 1], Batch: [ 400/ 404] | Total Time: 33m 4s
d_loss: 1.3878, g_loss: 12.3721, const_loss: 0.0225, l1_loss: 4.6908, fm_loss: 0.0232, perc_loss: 6.9422
--- End of Epoch 1 --- Time: 994.5s ---
LR Scheduler stepped. Current LR G: 0.000267, LR D: 0.000267
Epoch: [ 2], Batch: [ 0/ 404] | Total Time: 33m 12s
d_loss: 1.3887, g_loss: 12.7595, const_loss: 0.0187, l1_loss: 4.8679, fm_loss: 0.0234, perc_loss: 7.1562
💾 Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 100/ 404] | Total Time: 37m 19s
d_loss: 1.3884, g_loss: 13.2664, const_loss: 0.0167, l1_loss: 5.2133, fm_loss: 0.0247, perc_loss: 7.3184
💾 Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 200/ 404] | Total Time: 41m 25s
d_loss: 1.3962, g_loss: 13.5604, const_loss: 0.0167, l1_loss: 5.5944, fm_loss: 0.0287, perc_loss: 7.2273
💾 Checkpoint saved at step 1100
Epoch: [ 2], Batch: [ 300/ 404] | Total Time: 45m 32s
d_loss: 1.3932, g_loss: 12.3333, const_loss: 0.0212, l1_loss: 4.6871, fm_loss: 0.0213, perc_loss: 6.9104
💾 Checkpoint saved at step 1200
Epoch: [ 2], Batch: [ 400/ 404] | Total Time: 49m 39s
d_loss: 1.3880, g_loss: 12.4818, const_loss: 0.0173, l1_loss: 4.6203, fm_loss: 0.0211, perc_loss: 7.1298
--- End of Epoch 2 --- Time: 994.8s ---
LR Scheduler stepped. Current LR G: 0.000263, LR D: 0.000263
Epoch: [ 3], Batch: [ 0/ 404] | Total Time: 49m 47s
d_loss: 1.3917, g_loss: 13.3609, const_loss: 0.0213, l1_loss: 5.2098, fm_loss: 0.0231, perc_loss: 7.4134
💾 Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 100/ 404] | Total Time: 53m 53s
d_loss: 1.3879, g_loss: 12.6190, const_loss: 0.0148, l1_loss: 4.6890, fm_loss: 0.0215, perc_loss: 7.2004
💾 Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 200/ 404] | Total Time: 58m 1s
d_loss: 1.3925, g_loss: 12.6245, const_loss: 0.0171, l1_loss: 4.8745, fm_loss: 0.0226, perc_loss: 7.0170
💾 Checkpoint saved at step 1500
Epoch: [ 3], Batch: [ 300/ 404] | Total Time: 1h 2m 7s
d_loss: 1.3877, g_loss: 13.1929, const_loss: 0.0184, l1_loss: 5.0457, fm_loss: 0.0229, perc_loss: 7.4126
💾 Checkpoint saved at step 1600
Epoch: [ 3], Batch: [ 400/ 404] | Total Time: 1h 6m 14s
d_loss: 1.3943, g_loss: 12.6764, const_loss: 0.0174, l1_loss: 4.8366, fm_loss: 0.0217, perc_loss: 7.1072
--- End of Epoch 3 --- Time: 994.5s ---
LR Scheduler stepped. Current LR G: 0.000258, LR D: 0.000258
Epoch: [ 4], Batch: [ 0/ 404] | Total Time: 1h 6m 22s
d_loss: 1.3884, g_loss: 13.3801, const_loss: 0.0139, l1_loss: 5.1460, fm_loss: 0.0225, perc_loss: 7.5042
💾 Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 100/ 404] | Total Time: 1h 10m 28s
d_loss: 1.3876, g_loss: 14.5599, const_loss: 0.0202, l1_loss: 5.7778, fm_loss: 0.0265, perc_loss: 8.0421
💾 Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 200/ 404] | Total Time: 1h 14m 36s
d_loss: 1.3874, g_loss: 12.4637, const_loss: 0.0137, l1_loss: 4.6523, fm_loss: 0.0216, perc_loss: 7.0828
💾 Checkpoint saved at step 1900
Epoch: [ 4], Batch: [ 300/ 404] | Total Time: 1h 18m 42s
d_loss: 1.3879, g_loss: 11.6626, const_loss: 0.0159, l1_loss: 4.3420, fm_loss: 0.0188, perc_loss: 6.5924
💾 Checkpoint saved at step 2000
Epoch: [ 4], Batch: [ 400/ 404] | Total Time: 1h 22m 48s
d_loss: 1.3878, g_loss: 13.6904, const_loss: 0.0125, l1_loss: 5.3557, fm_loss: 0.0247, perc_loss: 7.6041
--- End of Epoch 4 --- Time: 994.5s ---
LR Scheduler stepped. Current LR G: 0.000252, LR D: 0.000252
Epoch: [ 5], Batch: [ 0/ 404] | Total Time: 1h 22m 56s
d_loss: 1.3888, g_loss: 13.5113, const_loss: 0.0180, l1_loss: 5.3720, fm_loss: 0.0221, perc_loss: 7.4059
💾 Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 100/ 404] | Total Time: 1h 27m 3s
d_loss: 1.3882, g_loss: 12.8890, const_loss: 0.0145, l1_loss: 4.7612, fm_loss: 0.0201, perc_loss: 7.3998
💾 Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 200/ 404] | Total Time: 1h 31m 9s
d_loss: 1.3878, g_loss: 11.9704, const_loss: 0.0172, l1_loss: 4.5764, fm_loss: 0.0196, perc_loss: 6.6638

I took the result above and ran inference. Sure enough, a model trained for only 3 hours is completely unusable; the strokes drift around like illegible scribbles.
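The next run's log adds a style_loss term that the earlier runs didn't have. The post doesn't show its definition; one common formulation (an assumption here, not confirmed by the source) is an L1 distance between Gram matrices of feature maps, which compares stroke/texture statistics independent of their position:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel-wise Gram matrix of an (N, C, H, W) feature map,
    normalized by the number of elements per channel."""
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feat_fake, feat_real):
    """L1 distance between Gram matrices of generated vs. target features
    (e.g. from a frozen VGG); position-independent texture comparison."""
    return F.l1_loss(gram_matrix(feat_fake), gram_matrix(feat_real))
```

In the log below, style_loss drops toward zero within the first epoch, much faster than the pixel and perceptual terms.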

✅ Model 348 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/29...
Epoch: [ 0], Batch: [ 0/ 404] | Total Time: 2s
d_loss: 1.3886, g_loss: 11.8738, const_loss: 0.0034, l1_loss: 4.1439, fm_loss: 0.0170, perc_loss: 6.6964, style_loss: 0.3198
Epoch: [ 0], Batch: [ 100/ 404] | Total Time: 3m 52s
d_loss: 1.3881, g_loss: 11.1433, const_loss: 0.0003, l1_loss: 3.9579, fm_loss: 0.0152, perc_loss: 6.4692, style_loss: 0.0073
Epoch: [ 0], Batch: [ 200/ 404] | Total Time: 7m 42s
d_loss: 1.3889, g_loss: 11.5714, const_loss: 0.0001, l1_loss: 4.1737, fm_loss: 0.0162, perc_loss: 6.6850, style_loss: 0.0031
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 404] | Total Time: 11m 31s
d_loss: 1.3871, g_loss: 12.4110, const_loss: 0.0000, l1_loss: 4.5963, fm_loss: 0.0173, perc_loss: 7.1021, style_loss: 0.0018
Epoch: [ 0], Batch: [ 400/ 404] | Total Time: 15m 20s
d_loss: 1.3875, g_loss: 12.7810, const_loss: 0.0000, l1_loss: 4.8154, fm_loss: 0.0184, perc_loss: 7.2524, style_loss: 0.0014
--- End of Epoch 0 --- Time: 925.5s ---
LR Scheduler stepped. Current LR G: 0.000249, LR D: 0.000249
Epoch: [ 1], Batch: [ 0/ 404] | Total Time: 15m 27s
d_loss: 1.3871, g_loss: 13.1985, const_loss: 0.0000, l1_loss: 5.1255, fm_loss: 0.0196, perc_loss: 7.3585, style_loss: 0.0015
Epoch: [ 1], Batch: [ 100/ 404] | Total Time: 19m 17s
d_loss: 1.3878, g_loss: 12.6454, const_loss: 0.0000, l1_loss: 4.9086, fm_loss: 0.0188, perc_loss: 7.0239, style_loss: 0.0007
Epoch: [ 1], Batch: [ 200/ 404] | Total Time: 23m 6s
d_loss: 1.3875, g_loss: 11.9712, const_loss: 0.0000, l1_loss: 4.5275, fm_loss: 0.0166, perc_loss: 6.7333, style_loss: 0.0005
Epoch: [ 1], Batch: [ 300/ 404] | Total Time: 26m 55s
d_loss: 1.3882, g_loss: 12.5271, const_loss: 0.0000, l1_loss: 4.7555, fm_loss: 0.0192, perc_loss: 7.0588, style_loss: 0.0003
Epoch: [ 1], Batch: [ 400/ 404] | Total Time: 30m 44s
d_loss: 1.3871, g_loss: 12.2691, const_loss: 0.0000, l1_loss: 4.4776, fm_loss: 0.0182, perc_loss: 7.0796, style_loss: 0.0003
--- End of Epoch 1 --- Time: 923.9s ---
LR Scheduler stepped. Current LR G: 0.000247, LR D: 0.000247
Epoch: [ 2], Batch: [ 0/ 404] | Total Time: 30m 51s
d_loss: 1.4053, g_loss: 12.5477, const_loss: 0.0000, l1_loss: 4.8259, fm_loss: 0.0205, perc_loss: 7.0077, style_loss: 0.0003
Epoch: [ 2], Batch: [ 100/ 404] | Total Time: 34m 41s
d_loss: 1.3881, g_loss: 12.3871, const_loss: 0.0000, l1_loss: 4.5392, fm_loss: 0.0182, perc_loss: 7.1362, style_loss: 0.0001
Epoch: [ 2], Batch: [ 200/ 404] | Total Time: 38m 30s
d_loss: 1.3870, g_loss: 13.3371, const_loss: 0.0000, l1_loss: 5.4046, fm_loss: 0.0224, perc_loss: 7.2165, style_loss: 0.0001
Epoch: [ 2], Batch: [ 300/ 404] | Total Time: 42m 18s
d_loss: 1.3875, g_loss: 12.0445, const_loss: 0.0000, l1_loss: 4.4736, fm_loss: 0.0174, perc_loss: 6.8601, style_loss: 0.0001
Epoch: [ 2], Batch: [ 400/ 404] | Total Time: 46m 8s
d_loss: 1.3894, g_loss: 12.6803, const_loss: 0.0000, l1_loss: 4.6214, fm_loss: 0.0191, perc_loss: 7.3463, style_loss: 0.0001
--- End of Epoch 2 --- Time: 924.3s ---
LR Scheduler stepped. Current LR G: 0.000244, LR D: 0.000244
Epoch: [ 3], Batch: [ 0/ 404] | Total Time: 46m 16s
d_loss: 1.3889, g_loss: 11.4092, const_loss: 0.0000, l1_loss: 4.2041, fm_loss: 0.0172, perc_loss: 6.4944, style_loss: 0.0001
Epoch: [ 3], Batch: [ 100/ 404] | Total Time: 50m 4s
d_loss: 1.3872, g_loss: 12.1717, const_loss: 0.0000, l1_loss: 4.5128, fm_loss: 0.0187, perc_loss: 6.9467, style_loss: 0.0001
Epoch: [ 3], Batch: [ 200/ 404] | Total Time: 53m 53s
d_loss: 1.3899, g_loss: 11.5191, const_loss: 0.0000, l1_loss: 4.3207, fm_loss: 0.0167, perc_loss: 6.4882, style_loss: 0.0001
Epoch: [ 3], Batch: [ 300/ 404] | Total Time: 57m 42s
d_loss: 1.3881, g_loss: 11.8545, const_loss: 0.0000, l1_loss: 4.3395, fm_loss: 0.0171, perc_loss: 6.8044, style_loss: 0.0001
Epoch: [ 3], Batch: [ 400/ 404] | Total Time: 1h 1m 31s
d_loss: 1.3887, g_loss: 12.7386, const_loss: 0.0000, l1_loss: 4.7375, fm_loss: 0.0188, perc_loss: 7.2889, style_loss: 0.0001
--- End of Epoch 3 --- Time: 923.0s ---
LR Scheduler stepped. Current LR G: 0.000239, LR D: 0.000239
Epoch: [ 4], Batch: [ 0/ 404] | Total Time: 1h 1m 39s
d_loss: 1.3887, g_loss: 12.7682, const_loss: 0.0000, l1_loss: 4.8319, fm_loss: 0.0202, perc_loss: 7.2226, style_loss: 0.0001
Epoch: [ 4], Batch: [ 100/ 404] | Total Time: 1h 5m 27s
d_loss: 1.3869, g_loss: 11.6864, const_loss: 0.0000, l1_loss: 4.3606, fm_loss: 0.0177, perc_loss: 6.6147, style_loss: 0.0001
Epoch: [ 4], Batch: [ 200/ 404] | Total Time: 1h 9m 17s
d_loss: 1.3914, g_loss: 11.6921, const_loss: 0.0000, l1_loss: 4.3158, fm_loss: 0.0166, perc_loss: 6.6663, style_loss: 0.0001
Epoch: [ 4], Batch: [ 300/ 404] | Total Time: 1h 13m 5s
d_loss: 1.3874, g_loss: 12.2075, const_loss: 0.0000, l1_loss: 4.5162, fm_loss: 0.0180, perc_loss: 6.9800, style_loss: 0.0000
Epoch: [ 4], Batch: [ 400/ 404] | Total Time: 1h 16m 54s
d_loss: 1.3881, g_loss: 11.7921, const_loss: 0.0000, l1_loss: 4.4581, fm_loss: 0.0171, perc_loss: 6.6234, style_loss: 0.0000
--- End of Epoch 4 --- Time: 922.8s ---
LR Scheduler stepped. Current LR G: 0.000233, LR D: 0.000233
Epoch: [ 5], Batch: [ 0/ 404] | Total Time: 1h 17m 1s
d_loss: 1.3869, g_loss: 12.2049, const_loss: 0.0000, l1_loss: 4.6526, fm_loss: 0.0171, perc_loss: 6.8418, style_loss: 0.0000
Epoch: [ 5], Batch: [ 100/ 404] | Total Time: 1h 20m 51s
d_loss: 1.3872, g_loss: 11.8082, const_loss: 0.0000, l1_loss: 4.3435, fm_loss: 0.0181, perc_loss: 6.7532, style_loss: 0.0000
Epoch: [ 5], Batch: [ 200/ 404] | Total Time: 1h 24m 40s
d_loss: 1.3873, g_loss: 11.9758, const_loss: 0.0000, l1_loss: 4.5030, fm_loss: 0.0186, perc_loss: 6.7608, style_loss: 0.0000
Epoch: [ 5], Batch: [ 300/ 404] | Total Time: 1h 28m 28s
d_loss: 1.3943, g_loss: 12.6699, const_loss: 0.0000, l1_loss: 4.7060, fm_loss: 0.0178, perc_loss: 7.2527, style_loss: 0.0000
Epoch: [ 5], Batch: [ 400/ 404] | Total Time: 1h 32m 18s
d_loss: 1.3874, g_loss: 12.9331, const_loss: 0.0000, l1_loss: 4.9394, fm_loss: 0.0197, perc_loss: 7.2806, style_loss: 0.0000
--- End of Epoch 5 --- Time: 923.7s ---
LR Scheduler stepped. Current LR G: 0.000226, LR D: 0.000226
Epoch: [ 6], Batch: [ 0/ 404] | Total Time: 1h 32m 25s
d_loss: 1.3870, g_loss: 11.7637, const_loss: 0.0000, l1_loss: 4.3526, fm_loss: 0.0167, perc_loss: 6.7009, style_loss: 0.0000
Epoch: [ 6], Batch: [ 100/ 404] | Total Time: 1h 36m 14s
d_loss: 1.3912, g_loss: 12.1670, const_loss: 0.0000, l1_loss: 4.5236, fm_loss: 0.0185, perc_loss: 6.9315, style_loss: 0.0000
Epoch: [ 6], Batch: [ 200/ 404] | Total Time: 1h 40m 3s
d_loss: 1.3876, g_loss: 11.3949, const_loss: 0.0000, l1_loss: 4.2232, fm_loss: 0.0171, perc_loss: 6.4612, style_loss: 0.0000
Epoch: [ 6], Batch: [ 300/ 404] | Total Time: 1h 43m 52s
d_loss: 1.3876, g_loss: 11.3910, const_loss: 0.0000, l1_loss: 4.1972, fm_loss: 0.0159, perc_loss: 6.4846, style_loss: 0.0000
Epoch: [ 6], Batch: [ 400/ 404] | Total Time: 1h 47m 41s
d_loss: 1.3873, g_loss: 12.7016, const_loss: 0.0000, l1_loss: 4.8644, fm_loss: 0.0195, perc_loss: 7.1244, style_loss: 0.0000
--- End of Epoch 6 --- Time: 923.2s ---
LR Scheduler stepped. Current LR G: 0.000218, LR D: 0.000218
Epoch: [ 7], Batch: [ 0/ 404] | Total Time: 1h 47m 48s
d_loss: 1.3874, g_loss: 12.3552, const_loss: 0.0000, l1_loss: 4.4357, fm_loss: 0.0183, perc_loss: 7.2077, style_loss: 0.0000
Epoch: [ 7], Batch: [ 100/ 404] | Total Time: 1h 51m 37s
d_loss: 1.3884, g_loss: 11.5298, const_loss: 0.0000, l1_loss: 4.1099, fm_loss: 0.0169, perc_loss: 6.7101, style_loss: 0.0000
Epoch: [ 7], Batch: [ 200/ 404] | Total Time: 1h 55m 27s
d_loss: 1.3875, g_loss: 10.3801, const_loss: 0.0000, l1_loss: 3.6289, fm_loss: 0.0145, perc_loss: 6.0433, style_loss: 0.0000
Epoch: [ 7], Batch: [ 300/ 404] | Total Time: 1h 59m 16s
d_loss: 1.3873, g_loss: 11.1202, const_loss: 0.0000, l1_loss: 3.8972, fm_loss: 0.0158, perc_loss: 6.5138, style_loss: 0.0000
Epoch: [ 7], Batch: [ 400/ 404] | Total Time: 2h 3m 5s
d_loss: 1.3870, g_loss: 12.1631, const_loss: 0.0000, l1_loss: 4.3819, fm_loss: 0.0174, perc_loss: 7.0704, style_loss: 0.0000
--- End of Epoch 7 --- Time: 924.3s ---
LR Scheduler stepped. Current LR G: 0.000209, LR D: 0.000209
Epoch: [ 8], Batch: [ 0/ 404] | Total Time: 2h 3m 13s
d_loss: 1.3871, g_loss: 11.3369, const_loss: 0.0000, l1_loss: 4.0779, fm_loss: 0.0165, perc_loss: 6.5491, style_loss: 0.0000
Epoch: [ 8], Batch: [ 100/ 404] | Total Time: 2h 7m 2s
d_loss: 1.3888, g_loss: 10.9540, const_loss: 0.0000, l1_loss: 3.8408, fm_loss: 0.0157, perc_loss: 6.4042, style_loss: 0.0000
Epoch: [ 8], Batch: [ 200/ 404] | Total Time: 2h 10m 51s
d_loss: 1.3872, g_loss: 11.4119, const_loss: 0.0000, l1_loss: 4.0356, fm_loss: 0.0162, perc_loss: 6.6667, style_loss: 0.0000
Epoch: [ 8], Batch: [ 300/ 404] | Total Time: 2h 14m 40s
d_loss: 1.3878, g_loss: 10.8304, const_loss: 0.0000, l1_loss: 3.8120, fm_loss: 0.0156, perc_loss: 6.3094, style_loss: 0.0000
Epoch: [ 8], Batch: [ 400/ 404] | Total Time: 2h 18m 29s
d_loss: 1.3869, g_loss: 11.0935, const_loss: 0.0000, l1_loss: 4.0116, fm_loss: 0.0169, perc_loss: 6.3717, style_loss: 0.0000
--- End of Epoch 8 --- Time: 924.1s ---
LR Scheduler stepped. Current LR G: 0.000199, LR D: 0.000199
Epoch: [ 9], Batch: [ 0/ 404] | Total Time: 2h 18m 37s
d_loss: 1.3879, g_loss: 12.3386, const_loss: 0.0000, l1_loss: 4.5836, fm_loss: 0.0190, perc_loss: 7.0426, style_loss: 0.0000
Epoch: [ 9], Batch: [ 100/ 404] | Total Time: 2h 22m 26s
d_loss: 1.3870, g_loss: 10.8039, const_loss: 0.0000, l1_loss: 3.7465, fm_loss: 0.0149, perc_loss: 6.3491, style_loss: 0.0000
Epoch: [ 9], Batch: [ 200/ 404] | Total Time: 2h 26m 14s
d_loss: 1.3869, g_loss: 10.4330, const_loss: 0.0000, l1_loss: 3.5963, fm_loss: 0.0150, perc_loss: 6.1284, style_loss: 0.0000
Epoch: [ 9], Batch: [ 300/ 404] | Total Time: 2h 30m 4s
d_loss: 1.3870, g_loss: 12.2949, const_loss: 0.0000, l1_loss: 4.3350, fm_loss: 0.0183, perc_loss: 7.2483, style_loss: 0.0000
Epoch: [ 9], Batch: [ 400/ 404] | Total Time: 2h 33m 52s
d_loss: 1.3870, g_loss: 11.8616, const_loss: 0.0000, l1_loss: 4.3251, fm_loss: 0.0180, perc_loss: 6.8252, style_loss: 0.0000
--- End of Epoch 9 --- Time: 923.2s ---
LR Scheduler stepped. Current LR G: 0.000188, LR D: 0.000188
Epoch: [ 10], Batch: [ 0/ 404] | Total Time: 2h 34m 0s
d_loss: 1.3872, g_loss: 11.3410, const_loss: 0.0000, l1_loss: 3.9757, fm_loss: 0.0160, perc_loss: 6.6559, style_loss: 0.0000
Epoch: [ 10], Batch: [ 100/ 404] | Total Time: 2h 37m 49s
d_loss: 1.3881, g_loss: 11.3016, const_loss: 0.0000, l1_loss: 4.0055, fm_loss: 0.0170, perc_loss: 6.5857, style_loss: 0.0000
Epoch: [ 10], Batch: [ 200/ 404] | Total Time: 2h 41m 38s
d_loss: 1.3873, g_loss: 10.9171, const_loss: 0.0000, l1_loss: 3.8862, fm_loss: 0.0154, perc_loss: 6.3223, style_loss: 0.0000
Epoch: [ 10], Batch: [ 300/ 404] | Total Time: 2h 45m 27s
d_loss: 1.3872, g_loss: 11.8619, const_loss: 0.0000, l1_loss: 4.2575, fm_loss: 0.0177, perc_loss: 6.8933, style_loss: 0.0000
Epoch: [ 10], Batch: [ 400/ 404] | Total Time: 2h 49m 16s
d_loss: 1.3877, g_loss: 10.6684, const_loss: 0.0000, l1_loss: 3.7846, fm_loss: 0.0145, perc_loss: 6.1759, style_loss: 0.0000
--- End of Epoch 10 --- Time: 923.2s ---
LR Scheduler stepped. Current LR G: 0.000176, LR D: 0.000176
Epoch: [ 11], Batch: [ 0/ 404] | Total Time: 2h 49m 23s
d_loss: 1.3877, g_loss: 12.1687, const_loss: 0.0000, l1_loss: 4.3426, fm_loss: 0.0176, perc_loss: 7.1152, style_loss: 0.0000
Epoch: [ 11], Batch: [ 100/ 404] | Total Time: 2h 53m 13s
d_loss: 1.3873, g_loss: 12.0329, const_loss: 0.0000, l1_loss: 4.5131, fm_loss: 0.0181, perc_loss: 6.8083, style_loss: 0.0000
Epoch: [ 11], Batch: [ 200/ 404] | Total Time: 2h 57m 2s
d_loss: 1.3869, g_loss: 11.2102, const_loss: 0.0000, l1_loss: 3.9687, fm_loss: 0.0151, perc_loss: 6.5331, style_loss: 0.0000
Epoch: [ 11], Batch: [ 300/ 404] | Total Time: 3h 51s
d_loss: 1.3868, g_loss: 10.7452, const_loss: 0.0000, l1_loss: 3.7625, fm_loss: 0.0146, perc_loss: 6.2747, style_loss: 0.0000
Epoch: [ 11], Batch: [ 400/ 404] | Total Time: 3h 4m 40s
d_loss: 1.3870, g_loss: 10.6899, const_loss: 0.0000, l1_loss: 3.7340, fm_loss: 0.0144, perc_loss: 6.2481, style_loss: 0.0000
--- End of Epoch 11 --- Time: 924.6s ---
LR Scheduler stepped. Current LR G: 0.000164, LR D: 0.000164
Epoch: [ 12], Batch: [ 0/ 404] | Total Time: 3h 4m 48s
d_loss: 1.3901, g_loss: 11.1900, const_loss: 0.0000, l1_loss: 3.9232, fm_loss: 0.0149, perc_loss: 6.5585, style_loss: 0.0000
Epoch: [ 12], Batch: [ 100/ 404] | Total Time: 3h 8m 37s
d_loss: 1.3881, g_loss: 10.6850, const_loss: 0.0000, l1_loss: 3.6748, fm_loss: 0.0141, perc_loss: 6.3027, style_loss: 0.0000
Epoch: [ 12], Batch: [ 200/ 404] | Total Time: 3h 12m 25s
d_loss: 1.3869, g_loss: 10.5360, const_loss: 0.0000, l1_loss: 3.6313, fm_loss: 0.0143, perc_loss: 6.1970, style_loss: 0.0000

After the 3 hours of training above, the inference results are still dismal. There is some improvement, but it remains far from an acceptable target: a few common components are close to finished, while most of the output is still a smeared mess.


I adjusted the model again and recomputed the weights from scratch once more:

✅ Model 350 loaded successfully
Epoch: [ 0], Batch: [ 0/ 425] | Total Time: 2s
d_loss: 1.4004, g_loss: 88.5646, const_loss: 0.0026, l1_loss: 63.3505, fm_loss: 0.2808, perc_loss: 24.2342, style_loss: 0.0001
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 425] | Total Time: 3m 50s
d_loss: 1.3877, g_loss: 15.4946, const_loss: 0.0000, l1_loss: 6.5833, fm_loss: 0.0210, perc_loss: 8.1968, style_loss: 0.0000
Epoch: [ 0], Batch: [ 200/ 425] | Total Time: 7m 37s
d_loss: 1.3872, g_loss: 13.1387, const_loss: 0.0000, l1_loss: 5.2182, fm_loss: 0.0184, perc_loss: 7.2087, style_loss: 0.0000
Epoch: [ 0], Batch: [ 300/ 425] | Total Time: 11m 25s
d_loss: 1.3910, g_loss: 13.1349, const_loss: 0.0000, l1_loss: 5.1652, fm_loss: 0.0167, perc_loss: 7.2596, style_loss: 0.0000
Epoch: [ 0], Batch: [ 400/ 425] | Total Time: 15m 12s
d_loss: 1.3896, g_loss: 12.4832, const_loss: 0.0000, l1_loss: 4.7529, fm_loss: 0.0168, perc_loss: 7.0202, style_loss: 0.0000
--- End of Epoch 0 --- Time: 966.3s ---
LR Scheduler stepped. Current LR G: 0.000349, LR D: 0.000349
Epoch: [ 1], Batch: [ 0/ 425] | Total Time: 16m 8s
d_loss: 1.3871, g_loss: 13.4808, const_loss: 0.0000, l1_loss: 5.3911, fm_loss: 0.0194, perc_loss: 7.3770, style_loss: 0.0000
Epoch: [ 1], Batch: [ 100/ 425] | Total Time: 19m 55s
d_loss: 1.3881, g_loss: 13.4881, const_loss: 0.0000, l1_loss: 5.2277, fm_loss: 0.0190, perc_loss: 7.5481, style_loss: 0.0000
Epoch: [ 1], Batch: [ 200/ 425] | Total Time: 23m 42s
d_loss: 1.3872, g_loss: 14.3140, const_loss: 0.0000, l1_loss: 5.8874, fm_loss: 0.0213, perc_loss: 7.7120, style_loss: 0.0000
Epoch: [ 1], Batch: [ 300/ 425] | Total Time: 27m 29s
d_loss: 1.3873, g_loss: 13.4419, const_loss: 0.0000, l1_loss: 5.2734, fm_loss: 0.0202, perc_loss: 7.4550, style_loss: 0.0000
Epoch: [ 1], Batch: [ 400/ 425] | Total Time: 31m 17s
d_loss: 1.3887, g_loss: 14.0173, const_loss: 0.0000, l1_loss: 5.5731, fm_loss: 0.0233, perc_loss: 7.7275, style_loss: 0.0000
--- End of Epoch 1 --- Time: 964.1s ---
LR Scheduler stepped. Current LR G: 0.000348, LR D: 0.000348
Epoch: [ 2], Batch: [ 0/ 425] | Total Time: 32m 12s
d_loss: 1.3875, g_loss: 13.5282, const_loss: 0.0000, l1_loss: 5.4471, fm_loss: 0.0201, perc_loss: 7.3676, style_loss: 0.0000
Epoch: [ 2], Batch: [ 100/ 425] | Total Time: 35m 59s
d_loss: 1.3877, g_loss: 13.8500, const_loss: 0.0000, l1_loss: 5.4772, fm_loss: 0.0197, perc_loss: 7.6597, style_loss: 0.0000
Epoch: [ 2], Batch: [ 200/ 425] | Total Time: 39m 47s
d_loss: 1.3954, g_loss: 13.0282, const_loss: 0.0000, l1_loss: 5.1440, fm_loss: 0.0190, perc_loss: 7.1718, style_loss: 0.0000
Epoch: [ 2], Batch: [ 300/ 425] | Total Time: 43m 35s
d_loss: 1.3875, g_loss: 12.4612, const_loss: 0.0000, l1_loss: 4.7620, fm_loss: 0.0171, perc_loss: 6.9888, style_loss: 0.0000
Epoch: [ 2], Batch: [ 400/ 425] | Total Time: 47m 22s
d_loss: 1.3888, g_loss: 14.9167, const_loss: 0.0000, l1_loss: 6.3523, fm_loss: 0.0234, perc_loss: 7.8476, style_loss: 0.0000
--- End of Epoch 2 --- Time: 965.8s ---
LR Scheduler stepped. Current LR G: 0.000345, LR D: 0.000345
Epoch: [ 3], Batch: [ 0/ 425] | Total Time: 48m 18s
d_loss: 1.3912, g_loss: 12.4869, const_loss: 0.0000, l1_loss: 4.8934, fm_loss: 0.0169, perc_loss: 6.8832, style_loss: 0.0000
Epoch: [ 3], Batch: [ 100/ 425] | Total Time: 52m 6s
d_loss: 1.3884, g_loss: 12.9828, const_loss: 0.0000, l1_loss: 5.1468, fm_loss: 0.0176, perc_loss: 7.1251, style_loss: 0.0000
Epoch: [ 3], Batch: [ 200/ 425] | Total Time: 55m 53s
d_loss: 1.3887, g_loss: 12.1547, const_loss: 0.0000, l1_loss: 4.6419, fm_loss: 0.0164, perc_loss: 6.8030, style_loss: 0.0000
Epoch: [ 3], Batch: [ 300/ 425] | Total Time: 59m 41s
d_loss: 1.3877, g_loss: 13.1747, const_loss: 0.0000, l1_loss: 4.9729, fm_loss: 0.0196, perc_loss: 7.4889, style_loss: 0.0000
Epoch: [ 3], Batch: [ 400/ 425] | Total Time: 1h 3m 29s
d_loss: 1.3876, g_loss: 13.0804, const_loss: 0.0000, l1_loss: 5.0224, fm_loss: 0.0189, perc_loss: 7.3457, style_loss: 0.0000
--- End of Epoch 3 --- Time: 967.5s ---
LR Scheduler stepped. Current LR G: 0.000341, LR D: 0.000341
Epoch: [ 4], Batch: [ 0/ 425] | Total Time: 1h 4m 25s
d_loss: 1.3889, g_loss: 13.4231, const_loss: 0.0000, l1_loss: 5.1789, fm_loss: 0.0192, perc_loss: 7.5322, style_loss: 0.0000
Epoch: [ 4], Batch: [ 100/ 425] | Total Time: 1h 8m 13s
d_loss: 1.3869, g_loss: 12.3676, const_loss: 0.0000, l1_loss: 4.5819, fm_loss: 0.0172, perc_loss: 7.0751, style_loss: 0.0000
Epoch: [ 4], Batch: [ 200/ 425] | Total Time: 1h 12m 0s
d_loss: 1.3871, g_loss: 12.5961, const_loss: 0.0000, l1_loss: 4.7914, fm_loss: 0.0195, perc_loss: 7.0919, style_loss: 0.0000
Epoch: [ 4], Batch: [ 300/ 425] | Total Time: 1h 15m 47s
d_loss: 1.3894, g_loss: 12.5353, const_loss: 0.0000, l1_loss: 4.7822, fm_loss: 0.0187, perc_loss: 7.0411, style_loss: 0.0000
Epoch: [ 4], Batch: [ 400/ 425] | Total Time: 1h 19m 35s
d_loss: 1.3885, g_loss: 14.4965, const_loss: 0.0000, l1_loss: 6.2915, fm_loss: 0.0285, perc_loss: 7.4832, style_loss: 0.0000
--- End of Epoch 4 --- Time: 964.5s ---
LR Scheduler stepped. Current LR G: 0.000337, LR D: 0.000337
Epoch: [ 5], Batch: [ 0/ 425] | Total Time: 1h 20m 30s
d_loss: 1.3897, g_loss: 13.1391, const_loss: 0.0000, l1_loss: 5.1639, fm_loss: 0.0207, perc_loss: 7.2611, style_loss: 0.0000

It feels like an infinite loop of image generation.

The run above resumed from 350, which in the end was also discarded in favor of checkpoint 316. There should have been better-suited versions in between, but because I kept modifying the architecture, they were never saved. In an actual test, I trained 400 steps (roughly one epoch over 8,000 training examples) from both 316 and 350, and the inference results of both versions were frightening. 316, even though its strokes vanish, is still a bit better than 350, because 350 had resumed training on a later model with its parameters re-initialized (Re-initializing param), and its output looks like a rubber stamp: for most characters, all four edges are black. In other words, close to a week of training was thrown away, down the drain. Now I just hope that once the new model has trained for a while, quality or stability will be at least slightly better. In the worst case I simply keep training and can recover the previous best result, since all the code is still there.
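When resuming from an older checkpoint after an architecture change, one way to avoid the "rubber stamp" failure is to load only the checkpoint tensors that still match the new architecture and leave everything else freshly initialized, while logging exactly how much survived. A minimal sketch of that idea in PyTorch (`load_compatible_weights` is a hypothetical helper, not this project's actual code):

```python
import torch
import torch.nn as nn

def load_compatible_weights(model: nn.Module, ckpt_state: dict):
    """Copy into `model` only the checkpoint tensors whose name and shape
    still match; layers changed by the architecture edit keep their fresh
    initialization. Returns (num_loaded, num_skipped)."""
    model_state = model.state_dict()
    loaded, skipped = 0, 0
    for name, tensor in ckpt_state.items():
        if name in model_state and model_state[name].shape == tensor.shape:
            model_state[name] = tensor
            loaded += 1
        else:
            skipped += 1
    model.load_state_dict(model_state)  # merged dict is complete, strict load is safe
    return loaded, skipped
```

Logging `num_skipped` makes it obvious before training starts how much of the old run actually survives a resume, which is exactly the information that was missing when 350 was resumed with re-initialized parameters.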

Looking back at this lost week to understand why it took so much time: I skipped the validation step and went straight to training with the code ChatGPT gave me, trusting it as-is. The problematic items were:

With the code provided above, memory usage did indeed drop. Although ChatGPT claimed the style features were preserved, in practice, with a small amount of data, some characters cannot be inferred at all; the old (memory-hungry) architecture performs better when the feature volume is huge. I am not sure whether it is worth spending time on this experiment.

ChatGPT wrote that applying it to other non-innermost layers might improve style consistency. In an actual test with a very small amount of data, calling forward on a non-innermost layer and then calling self.film produced inference output that was roughly 98% black.
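For context, FiLM conditioning scales and shifts each channel of a feature map with parameters predicted from a style embedding; if the predicted scale collapses toward zero, the feature map degenerates to a near-constant image, which is consistent with the mostly-black output described above. A minimal sketch of such a layer (a hypothetical reconstruction of what a `self.film` call might do, not the post's actual module):

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift every channel of a
    feature map with parameters predicted from a style embedding.
    Hypothetical reconstruction of what a `self.film` call might do."""
    def __init__(self, style_dim: int, num_channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(style_dim, num_channels * 2)

    def forward(self, feat: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_gamma_beta(style).chunk(2, dim=1)
        # Broadcast (B, C) over the spatial dimensions of (B, C, H, W).
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        # If gamma collapses toward 0, the output degenerates to the
        # constant beta: a blank (e.g. all-black) image.
        return gamma * feat + beta
```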

Any architecture adjustment must first be tested on a small dataset to confirm its impact is limited, before it is applied to a model that has already trained for a long time. Otherwise you can burn a huge amount of training time and end up with a result that is completely unusable.
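As a cheap guard for exactly this situation, the inference output of a short trial run can be checked automatically for the two failure modes seen here: strokes vanishing into a blank image, or the "rubber stamp" all-black output. A rough heuristic, assuming generated images are normalized to [-1, 1] and using hand-picked thresholds:

```python
import torch

@torch.no_grad()
def output_collapsed(fake: torch.Tensor, dark_thresh: float = -0.9,
                     max_dark_ratio: float = 0.5) -> bool:
    """Heuristic check for a batch of generated glyphs in [-1, 1]:
    flag it as collapsed when most pixels are near-black (the 'rubber
    stamp' failure) or it has almost no variance (strokes vanished)."""
    dark_ratio = (fake < dark_thresh).float().mean().item()
    return dark_ratio > max_dark_ratio or fake.std().item() < 1e-3
```

Running this after a ~500-step trial resume gives a pass/fail signal in minutes instead of discovering the failure after hours of training. The thresholds are assumptions and would need tuning to the actual data.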

First resume from an old checkpoint, 316, train 500 steps, and run inference to check whether the output collapses. After confirming it did not, here is the log from a further 2 hours of training:

✅ Model 500 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 2s
d_loss: 1.3998, g_loss: 20.7584, const_loss: 0.0135, l1_loss: 9.8463, fm_loss: 0.1478, perc_loss: 10.0575
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 54s
d_loss: 1.4019, g_loss: 23.3850, const_loss: 0.0102, l1_loss: 11.5756, fm_loss: 0.1493, perc_loss: 10.9561
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 45s
d_loss: 1.4044, g_loss: 20.8684, const_loss: 0.0106, l1_loss: 9.8532, fm_loss: 0.1002, perc_loss: 10.2106
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 36s
d_loss: 1.4228, g_loss: 17.3219, const_loss: 0.0091, l1_loss: 7.7482, fm_loss: 0.0701, perc_loss: 8.8007
--- End of Epoch 0 --- Time: 889.1s ---
LR Scheduler stepped. Current LR G: 0.000389, LR D: 0.000389
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 51s
d_loss: 1.3925, g_loss: 22.0666, const_loss: 0.0088, l1_loss: 10.6540, fm_loss: 0.0949, perc_loss: 10.6155
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 44s
d_loss: 1.3972, g_loss: 17.8669, const_loss: 0.0133, l1_loss: 8.0888, fm_loss: 0.0689, perc_loss: 9.0019
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 22m 36s
d_loss: 1.4055, g_loss: 19.3239, const_loss: 0.0125, l1_loss: 8.6965, fm_loss: 0.0858, perc_loss: 9.8359
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 26m 29s
d_loss: 1.4033, g_loss: 18.9616, const_loss: 0.0106, l1_loss: 8.3514, fm_loss: 0.0731, perc_loss: 9.8328
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 893.7s ---
LR Scheduler stepped. Current LR G: 0.000388, LR D: 0.000388
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 29m 45s
d_loss: 1.3994, g_loss: 18.6681, const_loss: 0.0049, l1_loss: 8.4758, fm_loss: 0.0664, perc_loss: 9.4272
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 33m 37s
d_loss: 1.3935, g_loss: 19.0090, const_loss: 0.0109, l1_loss: 8.3501, fm_loss: 0.0628, perc_loss: 9.8924
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 37m 29s
d_loss: 1.3966, g_loss: 16.8602, const_loss: 0.0099, l1_loss: 7.3193, fm_loss: 0.0402, perc_loss: 8.7969
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 41m 23s
d_loss: 1.3901, g_loss: 19.0856, const_loss: 0.0134, l1_loss: 8.6061, fm_loss: 0.0562, perc_loss: 9.7166
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 895.3s ---
LR Scheduler stepped. Current LR G: 0.000385, LR D: 0.000385
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 44m 40s
d_loss: 1.4065, g_loss: 17.5045, const_loss: 0.0084, l1_loss: 7.7277, fm_loss: 0.0459, perc_loss: 9.0297
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 48m 32s
d_loss: 1.4109, g_loss: 16.6247, const_loss: 0.0101, l1_loss: 7.2674, fm_loss: 0.0451, perc_loss: 8.6086
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 52m 25s
d_loss: 1.3968, g_loss: 16.9427, const_loss: 0.0094, l1_loss: 7.4295, fm_loss: 0.0426, perc_loss: 8.7683
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 56m 17s
d_loss: 1.3899, g_loss: 17.8410, const_loss: 0.0065, l1_loss: 8.0423, fm_loss: 0.0467, perc_loss: 9.0522
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 893.3s ---
LR Scheduler stepped. Current LR G: 0.000380, LR D: 0.000380
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 59m 33s
d_loss: 1.3927, g_loss: 17.9981, const_loss: 0.0114, l1_loss: 7.8743, fm_loss: 0.0444, perc_loss: 9.3752
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 3m 26s
d_loss: 1.4030, g_loss: 16.3263, const_loss: 0.0075, l1_loss: 6.8570, fm_loss: 0.0411, perc_loss: 8.7274
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 7m 18s
d_loss: 1.3969, g_loss: 15.9240, const_loss: 0.0087, l1_loss: 6.6304, fm_loss: 0.0410, perc_loss: 8.5505
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 11m 10s
d_loss: 1.3939, g_loss: 18.1640, const_loss: 0.0050, l1_loss: 7.9909, fm_loss: 0.0486, perc_loss: 9.4260
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 892.6s ---
LR Scheduler stepped. Current LR G: 0.000375, LR D: 0.000375
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 14m 26s
d_loss: 1.3906, g_loss: 15.9471, const_loss: 0.0073, l1_loss: 6.6500, fm_loss: 0.0391, perc_loss: 8.5573
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 18m 18s
d_loss: 1.3976, g_loss: 14.7550, const_loss: 0.0108, l1_loss: 6.3309, fm_loss: 0.0344, perc_loss: 7.6851
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 22m 10s
d_loss: 1.4028, g_loss: 15.2442, const_loss: 0.0043, l1_loss: 6.5220, fm_loss: 0.0353, perc_loss: 7.9893
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 26m 3s
d_loss: 1.3909, g_loss: 15.7260, const_loss: 0.0063, l1_loss: 6.3842, fm_loss: 0.0341, perc_loss: 8.6080
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 893.3s ---
LR Scheduler stepped. Current LR G: 0.000369, LR D: 0.000369
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 29m 19s
d_loss: 1.3980, g_loss: 18.9280, const_loss: 0.0097, l1_loss: 8.5582, fm_loss: 0.0449, perc_loss: 9.6224
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 33m 11s
d_loss: 1.3907, g_loss: 15.5946, const_loss: 0.0069, l1_loss: 6.7471, fm_loss: 0.0364, perc_loss: 8.1103
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 37m 5s
d_loss: 1.3910, g_loss: 14.7716, const_loss: 0.0091, l1_loss: 5.9959, fm_loss: 0.0313, perc_loss: 8.0420
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 40m 58s
d_loss: 1.3949, g_loss: 15.8706, const_loss: 0.0058, l1_loss: 6.6760, fm_loss: 0.0357, perc_loss: 8.4598
--- End of Epoch 6 --- Time: 893.1s ---
LR Scheduler stepped. Current LR G: 0.000361, LR D: 0.000361
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 44m 12s
d_loss: 1.3922, g_loss: 17.0295, const_loss: 0.0047, l1_loss: 7.2477, fm_loss: 0.0424, perc_loss: 9.0414
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 48m 5s
d_loss: 1.3943, g_loss: 15.3428, const_loss: 0.0049, l1_loss: 6.1924, fm_loss: 0.0377, perc_loss: 8.4144
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 51m 57s
d_loss: 1.3952, g_loss: 15.4020, const_loss: 0.0063, l1_loss: 6.2819, fm_loss: 0.0335, perc_loss: 8.3869
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 55m 50s
d_loss: 1.3908, g_loss: 13.5728, const_loss: 0.0051, l1_loss: 5.4902, fm_loss: 0.0278, perc_loss: 7.3563
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 893.2s ---
LR Scheduler stepped. Current LR G: 0.000353, LR D: 0.000353
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 59m 5s
d_loss: 1.3881, g_loss: 15.6916, const_loss: 0.0081, l1_loss: 6.3827, fm_loss: 0.0322, perc_loss: 8.5753
Checkpoint saved at step 3100

Because this run resumed from 316 and involved an architecture adjustment, the version number gets +100 for the architecture change and +2 for completing one account's training quota, so the next run starts from model 418:

✅ Model 418 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3892, g_loss: 14.0079, const_loss: 0.0065, l1_loss: 5.5386, fm_loss: 0.0274, perc_loss: 7.7420
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 52s
d_loss: 1.3941, g_loss: 14.9618, const_loss: 0.0042, l1_loss: 6.0998, fm_loss: 0.0294, perc_loss: 8.1350
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 42s
d_loss: 1.3898, g_loss: 14.4653, const_loss: 0.0066, l1_loss: 5.7251, fm_loss: 0.0292, perc_loss: 8.0111
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 31s
d_loss: 1.3896, g_loss: 12.5628, const_loss: 0.0048, l1_loss: 4.7962, fm_loss: 0.0254, perc_loss: 7.0436
--- End of Epoch 0 --- Time: 882.9s ---
LR Scheduler stepped. Current LR G: 0.000359, LR D: 0.000359
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 45s
d_loss: 1.3893, g_loss: 16.0789, const_loss: 0.0039, l1_loss: 6.7092, fm_loss: 0.0328, perc_loss: 8.6398
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 36s
d_loss: 1.3889, g_loss: 13.1530, const_loss: 0.0051, l1_loss: 5.1791, fm_loss: 0.0259, perc_loss: 7.2501
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 22m 26s
d_loss: 1.3882, g_loss: 14.3335, const_loss: 0.0060, l1_loss: 5.6213, fm_loss: 0.0321, perc_loss: 7.9807
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 26m 17s
d_loss: 1.3922, g_loss: 14.0111, const_loss: 0.0051, l1_loss: 5.2747, fm_loss: 0.0269, perc_loss: 8.0110
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 886.6s ---
LR Scheduler stepped. Current LR G: 0.000358, LR D: 0.000358
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 29m 31s
d_loss: 1.3888, g_loss: 13.5328, const_loss: 0.0054, l1_loss: 5.2374, fm_loss: 0.0239, perc_loss: 7.5733
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 33m 23s
d_loss: 1.3944, g_loss: 14.3032, const_loss: 0.0051, l1_loss: 5.4884, fm_loss: 0.0266, perc_loss: 8.0903
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 37m 13s
d_loss: 1.3880, g_loss: 12.7111, const_loss: 0.0046, l1_loss: 4.7652, fm_loss: 0.0224, perc_loss: 7.2255
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 41m 4s
d_loss: 1.3884, g_loss: 14.8223, const_loss: 0.0093, l1_loss: 5.9114, fm_loss: 0.0250, perc_loss: 8.1837
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 886.7s ---
LR Scheduler stepped. Current LR G: 0.000355, LR D: 0.000355
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 44m 18s
d_loss: 1.3951, g_loss: 14.5972, const_loss: 0.0051, l1_loss: 5.8924, fm_loss: 0.0278, perc_loss: 7.9786
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 48m 9s
d_loss: 1.4010, g_loss: 12.8680, const_loss: 0.0050, l1_loss: 4.9541, fm_loss: 0.0238, perc_loss: 7.1919
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 51m 59s
d_loss: 1.3905, g_loss: 14.1147, const_loss: 0.0053, l1_loss: 5.6602, fm_loss: 0.0266, perc_loss: 7.7293
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 55m 50s
d_loss: 1.3930, g_loss: 14.4016, const_loss: 0.0040, l1_loss: 5.8406, fm_loss: 0.0305, perc_loss: 7.8330
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 886.6s ---
LR Scheduler stepped. Current LR G: 0.000351, LR D: 0.000351
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 59m 5s
d_loss: 1.3951, g_loss: 14.8139, const_loss: 0.0057, l1_loss: 5.9245, fm_loss: 0.0259, perc_loss: 8.1649
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 2m 56s
d_loss: 1.3876, g_loss: 13.9686, const_loss: 0.0055, l1_loss: 5.4379, fm_loss: 0.0267, perc_loss: 7.8056
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 6m 46s
d_loss: 1.3888, g_loss: 13.3968, const_loss: 0.0061, l1_loss: 5.0861, fm_loss: 0.0265, perc_loss: 7.5853
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 10m 37s
d_loss: 1.3889, g_loss: 13.8343, const_loss: 0.0052, l1_loss: 5.2250, fm_loss: 0.0265, perc_loss: 7.8843
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 886.8s ---
LR Scheduler stepped. Current LR G: 0.000346, LR D: 0.000346
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 13m 51s
d_loss: 1.3881, g_loss: 13.1283, const_loss: 0.0046, l1_loss: 4.9394, fm_loss: 0.0227, perc_loss: 7.4682
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 17m 43s
d_loss: 1.3927, g_loss: 12.7704, const_loss: 0.0064, l1_loss: 5.0640, fm_loss: 0.0229, perc_loss: 6.9837
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 21m 33s
d_loss: 1.3878, g_loss: 13.2428, const_loss: 0.0043, l1_loss: 5.2534, fm_loss: 0.0236, perc_loss: 7.2682
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 25m 24s
d_loss: 1.3901, g_loss: 13.4320, const_loss: 0.0050, l1_loss: 5.0283, fm_loss: 0.0240, perc_loss: 7.6813
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 886.7s ---
LR Scheduler stepped. Current LR G: 0.000340, LR D: 0.000340
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 28m 38s
d_loss: 1.3954, g_loss: 15.4715, const_loss: 0.0052, l1_loss: 6.3522, fm_loss: 0.0271, perc_loss: 8.3941
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 32m 29s
d_loss: 1.3942, g_loss: 13.2564, const_loss: 0.0062, l1_loss: 5.2963, fm_loss: 0.0227, perc_loss: 7.2379
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 36m 20s
d_loss: 1.3876, g_loss: 12.7917, const_loss: 0.0049, l1_loss: 4.8329, fm_loss: 0.0222, perc_loss: 7.2384
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 40m 11s
d_loss: 1.3879, g_loss: 13.7794, const_loss: 0.0034, l1_loss: 5.4345, fm_loss: 0.0253, perc_loss: 7.6232
--- End of Epoch 6 --- Time: 885.6s ---
LR Scheduler stepped. Current LR G: 0.000334, LR D: 0.000334
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 43m 24s
d_loss: 1.3916, g_loss: 15.0642, const_loss: 0.0035, l1_loss: 6.0745, fm_loss: 0.0302, perc_loss: 8.2627
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 47m 15s
d_loss: 1.3890, g_loss: 13.6863, const_loss: 0.0047, l1_loss: 5.1957, fm_loss: 0.0244, perc_loss: 7.7682
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 51m 5s
d_loss: 1.3888, g_loss: 13.0039, const_loss: 0.0042, l1_loss: 4.8315, fm_loss: 0.0243, perc_loss: 7.4506
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 54m 56s
d_loss: 1.3897, g_loss: 11.7173, const_loss: 0.0037, l1_loss: 4.3563, fm_loss: 0.0215, perc_loss: 6.6424
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 886.9s ---
LR Scheduler stepped. Current LR G: 0.000326, LR D: 0.000326
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 58m 11s
d_loss: 1.3883, g_loss: 14.3739, const_loss: 0.0067, l1_loss: 5.5746, fm_loss: 0.0278, perc_loss: 8.0714
Checkpoint saved at step 3100

The training above was saved as model 420; continuing training from it:

✅ Model 420 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3897, g_loss: 12.4941, const_loss: 0.0039, l1_loss: 4.6625, fm_loss: 0.0191, perc_loss: 7.1154
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 50s
d_loss: 1.3874, g_loss: 13.2108, const_loss: 0.0030, l1_loss: 4.9992, fm_loss: 0.0205, perc_loss: 7.4948
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 37s
d_loss: 1.3875, g_loss: 12.5924, const_loss: 0.0044, l1_loss: 4.6015, fm_loss: 0.0203, perc_loss: 7.2729
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 24s
d_loss: 1.3932, g_loss: 11.0397, const_loss: 0.0026, l1_loss: 3.8979, fm_loss: 0.0163, perc_loss: 6.4295
--- End of Epoch 0 --- Time: 873.3s ---
LR Scheduler stepped. Current LR G: 0.000339, LR D: 0.000339
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 35s
d_loss: 1.3881, g_loss: 14.4474, const_loss: 0.0029, l1_loss: 5.7265, fm_loss: 0.0242, perc_loss: 8.0003
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 24s
d_loss: 1.3886, g_loss: 11.7574, const_loss: 0.0039, l1_loss: 4.3385, fm_loss: 0.0192, perc_loss: 6.7025
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 22m 12s
d_loss: 1.3874, g_loss: 12.9105, const_loss: 0.0054, l1_loss: 4.7736, fm_loss: 0.0226, perc_loss: 7.4156
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 26m 1s
d_loss: 1.3902, g_loss: 12.6999, const_loss: 0.0035, l1_loss: 4.5197, fm_loss: 0.0211, perc_loss: 7.4623
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 877.2s ---
LR Scheduler stepped. Current LR G: 0.000338, LR D: 0.000338
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 29m 12s
d_loss: 1.3873, g_loss: 12.4865, const_loss: 0.0029, l1_loss: 4.6703, fm_loss: 0.0198, perc_loss: 7.1007
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 33m 1s
d_loss: 1.3892, g_loss: 13.5776, const_loss: 0.0036, l1_loss: 5.0938, fm_loss: 0.0223, perc_loss: 7.7650
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 36m 49s
d_loss: 1.3879, g_loss: 11.9674, const_loss: 0.0037, l1_loss: 4.3614, fm_loss: 0.0186, perc_loss: 6.8909
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 40m 37s
d_loss: 1.3878, g_loss: 13.2199, const_loss: 0.0063, l1_loss: 4.9765, fm_loss: 0.0206, perc_loss: 7.5236
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 876.6s ---
LR Scheduler stepped. Current LR G: 0.000335, LR D: 0.000335
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 43m 49s
d_loss: 1.3920, g_loss: 12.7741, const_loss: 0.0045, l1_loss: 4.7909, fm_loss: 0.0214, perc_loss: 7.2645
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 47m 37s
d_loss: 1.3984, g_loss: 11.9943, const_loss: 0.0046, l1_loss: 4.4490, fm_loss: 0.0175, perc_loss: 6.8304
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 51m 25s
d_loss: 1.3879, g_loss: 12.6613, const_loss: 0.0039, l1_loss: 4.8152, fm_loss: 0.0210, perc_loss: 7.1278
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 55m 14s
d_loss: 1.3956, g_loss: 13.4695, const_loss: 0.0028, l1_loss: 5.2344, fm_loss: 0.0239, perc_loss: 7.5150
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 878.5s ---
LR Scheduler stepped. Current LR G: 0.000332, LR D: 0.000332
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 58m 27s
d_loss: 1.4032, g_loss: 13.2710, const_loss: 0.0047, l1_loss: 4.9739, fm_loss: 0.0204, perc_loss: 7.5785
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 2m 15s
d_loss: 1.3879, g_loss: 12.7929, const_loss: 0.0038, l1_loss: 4.7317, fm_loss: 0.0199, perc_loss: 7.3442
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 6m 3s
d_loss: 1.3872, g_loss: 12.5891, const_loss: 0.0039, l1_loss: 4.6193, fm_loss: 0.0201, perc_loss: 7.2525
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 9m 52s
d_loss: 1.3904, g_loss: 13.6109, const_loss: 0.0031, l1_loss: 5.1387, fm_loss: 0.0233, perc_loss: 7.7525
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 876.4s ---
LR Scheduler stepped. Current LR G: 0.000327, LR D: 0.000327
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 13m 4s
d_loss: 1.3897, g_loss: 12.8192, const_loss: 0.0033, l1_loss: 4.7725, fm_loss: 0.0214, perc_loss: 7.3292
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 16m 52s
d_loss: 1.3899, g_loss: 11.0066, const_loss: 0.0052, l1_loss: 4.0139, fm_loss: 0.0169, perc_loss: 6.2773
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 20m 41s
d_loss: 1.3878, g_loss: 11.7744, const_loss: 0.0032, l1_loss: 4.3509, fm_loss: 0.0181, perc_loss: 6.7088
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 24m 29s
d_loss: 1.3889, g_loss: 13.0652, const_loss: 0.0030, l1_loss: 4.8363, fm_loss: 0.0229, perc_loss: 7.5097
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 876.8s ---
LR Scheduler stepped. Current LR G: 0.000322, LR D: 0.000322
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 27m 41s
d_loss: 1.4114, g_loss: 14.1218, const_loss: 0.0057, l1_loss: 5.5612, fm_loss: 0.0219, perc_loss: 7.8395
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 31m 29s
d_loss: 1.3913, g_loss: 11.9787, const_loss: 0.0046, l1_loss: 4.4777, fm_loss: 0.0167, perc_loss: 6.7864
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 35m 18s
d_loss: 1.3871, g_loss: 11.8051, const_loss: 0.0043, l1_loss: 4.2425, fm_loss: 0.0168, perc_loss: 6.8482
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 39m 6s
d_loss: 1.3872, g_loss: 12.6618, const_loss: 0.0024, l1_loss: 4.7864, fm_loss: 0.0196, perc_loss: 7.1601
--- End of Epoch 6 --- Time: 876.0s ---
LR Scheduler stepped. Current LR G: 0.000315, LR D: 0.000315
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 42m 17s
d_loss: 1.3899, g_loss: 14.6426, const_loss: 0.0020, l1_loss: 5.9035, fm_loss: 0.0248, perc_loss: 8.0194
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 46m 5s
d_loss: 1.3899, g_loss: 12.1779, const_loss: 0.0041, l1_loss: 4.2849, fm_loss: 0.0178, perc_loss: 7.1778
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 49m 53s
d_loss: 1.3875, g_loss: 12.2259, const_loss: 0.0024, l1_loss: 4.4070, fm_loss: 0.0179, perc_loss: 7.1058
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 53m 41s
d_loss: 1.3885, g_loss: 11.1480, const_loss: 0.0028, l1_loss: 4.0251, fm_loss: 0.0163, perc_loss: 6.4103
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 877.4s ---
LR Scheduler stepped. Current LR G: 0.000308, LR D: 0.000308
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 56m 54s
d_loss: 1.3886, g_loss: 12.3217, const_loss: 0.0039, l1_loss: 4.3302, fm_loss: 0.0175, perc_loss: 7.2767
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 2h 43s
d_loss: 1.3879, g_loss: 11.6215, const_loss: 0.0035, l1_loss: 4.0814, fm_loss: 0.0170, perc_loss: 6.8262
Checkpoint saved at step 3200
Epoch: [ 8], Batch: [ 200/ 385] | Total Time: 2h 4m 31s
d_loss: 1.3875, g_loss: 12.0946, const_loss: 0.0026, l1_loss: 4.4823, fm_loss: 0.0180, perc_loss: 6.8983
Checkpoint saved at step 3300
Epoch: [ 8], Batch: [ 300/ 385] | Total Time: 2h 8m 19s
d_loss: 1.3881, g_loss: 12.1378, const_loss: 0.0034, l1_loss: 4.5043, fm_loss: 0.0176, perc_loss: 6.9196
Checkpoint saved at step 3400
--- End of Epoch 8 --- Time: 877.0s ---
LR Scheduler stepped. Current LR G: 0.000299, LR D: 0.000299
Epoch: [ 9], Batch: [ 0/ 385] | Total Time: 2h 11m 31s
d_loss: 1.3959, g_loss: 12.6532, const_loss: 0.0029, l1_loss: 4.7154, fm_loss: 0.0176, perc_loss: 7.2240
Checkpoint saved at step 3500
Epoch: [ 9], Batch: [ 100/ 385] | Total Time: 2h 15m 19s
d_loss: 1.3874, g_loss: 11.4990, const_loss: 0.0034, l1_loss: 3.9875, fm_loss: 0.0148, perc_loss: 6.8004
Checkpoint saved at step 3600
Epoch: [ 9], Batch: [ 200/ 385] | Total Time: 2h 19m 8s
d_loss: 1.3874, g_loss: 11.8805, const_loss: 0.0030, l1_loss: 4.2328, fm_loss: 0.0162, perc_loss: 6.9351
Checkpoint saved at step 3700
Epoch: [ 9], Batch: [ 300/ 385] | Total Time: 2h 22m 57s
d_loss: 1.3885, g_loss: 11.9505, const_loss: 0.0033, l1_loss: 4.2465, fm_loss: 0.0166, perc_loss: 6.9908
Checkpoint saved at step 3800
--- End of Epoch 9 --- Time: 877.4s ---
LR Scheduler stepped. Current LR G: 0.000290, LR D: 0.000290
Epoch: [ 10], Batch: [ 0/ 385] | Total Time: 2h 26m 8s
d_loss: 1.3919, g_loss: 12.5682, const_loss: 0.0033, l1_loss: 4.7182, fm_loss: 0.0182, perc_loss: 7.1351
Checkpoint saved at step 3900
Epoch: [ 10], Batch: [ 100/ 385] | Total Time: 2h 29m 56s
d_loss: 1.3878, g_loss: 11.7507, const_loss: 0.0047, l1_loss: 4.2679, fm_loss: 0.0163, perc_loss: 6.7684
Checkpoint saved at step 4000
Epoch: [ 10], Batch: [ 200/ 385] | Total Time: 2h 33m 44s
d_loss: 1.3871, g_loss: 11.5851, const_loss: 0.0025, l1_loss: 4.1346, fm_loss: 0.0157, perc_loss: 6.7389
Checkpoint saved at step 4100
Epoch: [ 10], Batch: [ 300/ 385] | Total Time: 2h 37m 32s
d_loss: 1.3885, g_loss: 11.8330, const_loss: 0.0037, l1_loss: 4.2708, fm_loss: 0.0169, perc_loss: 6.8482
Checkpoint saved at step 4200
--- End of Epoch 10 --- Time: 876.0s ---
LR Scheduler stepped. Current LR G: 0.000281, LR D: 0.000281
Epoch: [ 11], Batch: [ 0/ 385] | Total Time: 2h 40m 44s
d_loss: 1.3890, g_loss: 12.3697, const_loss: 0.0034, l1_loss: 4.4931, fm_loss: 0.0180, perc_loss: 7.1618
Checkpoint saved at step 4300
Epoch: [ 11], Batch: [ 100/ 385] | Total Time: 2h 44m 37s
d_loss: 1.3870, g_loss: 12.8741, const_loss: 0.0039, l1_loss: 4.8471, fm_loss: 0.0184, perc_loss: 7.3114
Checkpoint saved at step 4400
Epoch: [ 11], Batch: [ 200/ 385] | Total Time: 2h 48m 30s
d_loss: 1.3877, g_loss: 12.6256, const_loss: 0.0036, l1_loss: 4.6444, fm_loss: 0.0181, perc_loss: 7.2662
Checkpoint saved at step 4500
Epoch: [ 11], Batch: [ 300/ 385] | Total Time: 2h 52m 24s
d_loss: 1.3873, g_loss: 12.0384, const_loss: 0.0032, l1_loss: 4.3994, fm_loss: 0.0179, perc_loss: 6.9245
Checkpoint saved at step 4600
--- End of Epoch 11 --- Time: 893.5s ---
LR Scheduler stepped. Current LR G: 0.000270, LR D: 0.000270
Epoch: [ 12], Batch: [ 0/ 385] | Total Time: 2h 55m 38s
d_loss: 1.4079, g_loss: 12.6169, const_loss: 0.0033, l1_loss: 4.6421, fm_loss: 0.0188, perc_loss: 7.2593
Checkpoint saved at step 4700
Epoch: [ 12], Batch: [ 100/ 385] | Total Time: 2h 59m 26s
d_loss: 1.3875, g_loss: 12.2708, const_loss: 0.0045, l1_loss: 4.5726, fm_loss: 0.0181, perc_loss: 6.9823
Checkpoint saved at step 4800
Epoch: [ 12], Batch: [ 200/ 385] | Total Time: 3h 3m 14s
d_loss: 1.3872, g_loss: 11.6662, const_loss: 0.0029, l1_loss: 4.0271, fm_loss: 0.0163, perc_loss: 6.9266
Checkpoint saved at step 4900
Epoch: [ 12], Batch: [ 300/ 385] | Total Time: 3h 7m 2s
d_loss: 1.3889, g_loss: 11.7306, const_loss: 0.0037, l1_loss: 4.1607, fm_loss: 0.0165, perc_loss: 6.8563
Checkpoint saved at step 5000
--- End of Epoch 12 --- Time: 882.6s ---
LR Scheduler stepped. Current LR G: 0.000259, LR D: 0.000259
Epoch: [ 13], Batch: [ 0/ 385] | Total Time: 3h 10m 21s
d_loss: 1.3912, g_loss: 12.0083, const_loss: 0.0027, l1_loss: 4.2642, fm_loss: 0.0172, perc_loss: 7.0309
Checkpoint saved at step 5100
Epoch: [ 13], Batch: [ 100/ 385] | Total Time: 3h 14m 9s
d_loss: 1.3872, g_loss: 11.3617, const_loss: 0.0029, l1_loss: 3.9903, fm_loss: 0.0151, perc_loss: 6.6600
Checkpoint saved at step 5200
Epoch: [ 13], Batch: [ 200/ 385] | Total Time: 3h 18m 3s
d_loss: 1.3891, g_loss: 12.1281, const_loss: 0.0030, l1_loss: 4.3393, fm_loss: 0.0173, perc_loss: 7.0751
Checkpoint saved at step 5300
Epoch: [ 13], Batch: [ 300/ 385] | Total Time: 3h 21m 57s
d_loss: 1.3872, g_loss: 11.2731, const_loss: 0.0031, l1_loss: 4.0361, fm_loss: 0.0159, perc_loss: 6.5247
--- End of Epoch 13 --- Time: 887.7s ---
LR Scheduler stepped. Current LR G: 0.000247, LR D: 0.000247
Epoch: [ 14], Batch: [ 0/ 385] | Total Time: 3h 25m 8s
d_loss: 1.3872, g_loss: 11.3911, const_loss: 0.0028, l1_loss: 4.0469, fm_loss: 0.0159, perc_loss: 6.6321
Checkpoint saved at step 5400
Epoch: [ 14], Batch: [ 100/ 385] | Total Time: 3h 29m 2s
d_loss: 1.3912, g_loss: 10.3045, const_loss: 0.0032, l1_loss: 3.5497, fm_loss: 0.0130, perc_loss: 6.0452
Checkpoint saved at step 5500
Epoch: [ 14], Batch: [ 200/ 385] | Total Time: 3h 32m 56s
d_loss: 1.3876, g_loss: 11.1149, const_loss: 0.0026, l1_loss: 3.9021, fm_loss: 0.0146, perc_loss: 6.5023
Checkpoint saved at step 5600
Epoch: [ 14], Batch: [ 300/ 385] | Total Time: 3h 36m 44s
d_loss: 1.3871, g_loss: 11.5549, const_loss: 0.0021, l1_loss: 3.8950, fm_loss: 0.0139, perc_loss: 6.9505
Checkpoint saved at step 5700
--- End of Epoch 14 --- Time: 892.6s ---
LR Scheduler stepped. Current LR G: 0.000235, LR D: 0.000235
Epoch: [ 15], Batch: [ 0/ 385] | Total Time: 3h 40m 1s
d_loss: 1.4083, g_loss: 12.7839, const_loss: 0.0031, l1_loss: 4.6461, fm_loss: 0.0176, perc_loss: 7.4238
Checkpoint saved at step 5800
Epoch: [ 15], Batch: [ 100/ 385] | Total Time: 3h 43m 53s
d_loss: 1.3874, g_loss: 12.1312, const_loss: 0.0034, l1_loss: 4.4607, fm_loss: 0.0157, perc_loss: 6.9580
Checkpoint saved at step 5900
Epoch: [ 15], Batch: [ 200/ 385] | Total Time: 3h 47m 42s
d_loss: 1.3870, g_loss: 11.5886, const_loss: 0.0039, l1_loss: 4.3153, fm_loss: 0.0153, perc_loss: 6.5607
Checkpoint saved at step 6000
Epoch: [ 15], Batch: [ 300/ 385] | Total Time: 3h 51m 31s
d_loss: 1.3873, g_loss: 11.5191, const_loss: 0.0025, l1_loss: 4.1201, fm_loss: 0.0153, perc_loss: 6.6879
Checkpoint saved at step 6100
--- End of Epoch 15 --- Time: 882.3s ---
LR Scheduler stepped. Current LR G: 0.000223, LR D: 0.000223
Epoch: [ 16], Batch: [ 0/ 385] | Total Time: 3h 54m 43s
d_loss: 1.4196, g_loss: 11.0409, const_loss: 0.0028, l1_loss: 3.7915, fm_loss: 0.0130, perc_loss: 6.5402
Checkpoint saved at step 6200
Epoch: [ 16], Batch: [ 100/ 385] | Total Time: 3h 58m 32s
d_loss: 1.3886, g_loss: 11.7193, const_loss: 0.0032, l1_loss: 4.1293, fm_loss: 0.0157, perc_loss: 6.8777
Checkpoint saved at step 6300
Epoch: [ 16], Batch: [ 200/ 385] | Total Time: 4h 2m 20s
d_loss: 1.3881, g_loss: 10.0399, const_loss: 0.0029, l1_loss: 3.3933, fm_loss: 0.0120, perc_loss: 5.9383
Checkpoint saved at step 6400
Epoch: [ 16], Batch: [ 300/ 385] | Total Time: 4h 6m 8s
d_loss: 1.3876, g_loss: 12.5949, const_loss: 0.0032, l1_loss: 4.5435, fm_loss: 0.0172, perc_loss: 7.3377
Checkpoint saved at step 6500
--- End of Epoch 16 --- Time: 877.0s ---
LR Scheduler stepped. Current LR G: 0.000210, LR D: 0.000210
Epoch: [ 17], Batch: [ 0/ 385] | Total Time: 4h 9m 20s
d_loss: 1.3897, g_loss: 11.3926, const_loss: 0.0036, l1_loss: 3.9409, fm_loss: 0.0148, perc_loss: 6.7400
Checkpoint saved at step 6600
Epoch: [ 17], Batch: [ 100/ 385] | Total Time: 4h 13m 13s
d_loss: 1.3873, g_loss: 12.2228, const_loss: 0.0025, l1_loss: 4.2209, fm_loss: 0.0176, perc_loss: 7.2885
Checkpoint saved at step 6700
Epoch: [ 17], Batch: [ 200/ 385] | Total Time: 4h 17m 1s
d_loss: 1.3870, g_loss: 10.8722, const_loss: 0.0029, l1_loss: 3.6927, fm_loss: 0.0139, perc_loss: 6.4694
Checkpoint saved at step 6800
Epoch: [ 17], Batch: [ 300/ 385] | Total Time: 4h 20m 49s
d_loss: 1.3877, g_loss: 11.9301, const_loss: 0.0025, l1_loss: 4.2050, fm_loss: 0.0152, perc_loss: 7.0141
Checkpoint saved at step 6900
--- End of Epoch 17 --- Time: 880.6s ---
LR Scheduler stepped. Current LR G: 0.000197, LR D: 0.000197
Epoch: [ 18], Batch: [ 0/ 385] | Total Time: 4h 24m 1s
d_loss: 1.3880, g_loss: 11.1598, const_loss: 0.0024, l1_loss: 3.6991, fm_loss: 0.0142, perc_loss: 6.7508
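The logged learning-rate curve (0.000339, 0.000338, 0.000335, 0.000332, ...) is consistent with cosine annealing from a base LR of 3.4e-4 over the 40 scheduled epochs, stepped once per epoch. A minimal sketch under that assumption; the optimizer and parameter here are placeholders, not the original generator:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder parameter/optimizer standing in for the generator.
params = [torch.nn.Parameter(torch.zeros(1))]
opt_g = torch.optim.Adam(params, lr=3.4e-4)
sched_g = CosineAnnealingLR(opt_g, T_max=40)  # assumed: anneal over 40 epochs

lrs = []
for epoch in range(3):
    # ... one full epoch of training would run here ...
    opt_g.step()    # optimizer must step before the scheduler
    sched_g.step()  # stepped once per epoch, matching the log
    lrs.append(opt_g.param_groups[0]["lr"])

print([f"{lr:.6f}" for lr in lrs])  # ['0.000339', '0.000338', '0.000335']
```

The first few values reproduce the log exactly, which is why resumed runs that rebuild the scheduler from epoch 0 restart at the top of the cosine curve instead of continuing the decay.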

Continuing training from model 422 to 424:

 Model 422 loaded successfully
unpickled total 8065 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3869, g_loss: 10.6882, const_loss: 0.0025, l1_loss: 3.6014, fm_loss: 0.0133, perc_loss: 6.3777
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 48s
d_loss: 1.3876, g_loss: 11.7382, const_loss: 0.0022, l1_loss: 4.1578, fm_loss: 0.0155, perc_loss: 6.8693
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 32s
d_loss: 1.3873, g_loss: 11.4647, const_loss: 0.0025, l1_loss: 3.9732, fm_loss: 0.0151, perc_loss: 6.7806
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 16s
d_loss: 1.3885, g_loss: 10.1188, const_loss: 0.0018, l1_loss: 3.4045, fm_loss: 0.0129, perc_loss: 6.0062
--- End of Epoch 0 --- Time: 862.7s ---
LR Scheduler stepped. Current LR G: 0.000300, LR D: 0.000300
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 24s
d_loss: 1.3955, g_loss: 12.8302, const_loss: 0.0023, l1_loss: 4.7246, fm_loss: 0.0161, perc_loss: 7.3939
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 10s
d_loss: 1.3873, g_loss: 10.3187, const_loss: 0.0028, l1_loss: 3.5126, fm_loss: 0.0129, perc_loss: 6.0971
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 21m 55s
d_loss: 1.3871, g_loss: 11.6288, const_loss: 0.0028, l1_loss: 4.0511, fm_loss: 0.0168, perc_loss: 6.8647
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 25m 41s
d_loss: 1.3874, g_loss: 11.6835, const_loss: 0.0027, l1_loss: 3.9790, fm_loss: 0.0150, perc_loss: 6.9935
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 865.5s ---
LR Scheduler stepped. Current LR G: 0.000298, LR D: 0.000298
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 28m 50s
d_loss: 1.3872, g_loss: 11.6026, const_loss: 0.0023, l1_loss: 4.1564, fm_loss: 0.0161, perc_loss: 6.7343
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 32m 36s
d_loss: 1.3870, g_loss: 11.8463, const_loss: 0.0027, l1_loss: 4.0758, fm_loss: 0.0152, perc_loss: 7.0597
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 36m 21s
d_loss: 1.3870, g_loss: 10.5007, const_loss: 0.0023, l1_loss: 3.5328, fm_loss: 0.0131, perc_loss: 6.2592
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 40m 6s
d_loss: 1.3870, g_loss: 12.1566, const_loss: 0.0033, l1_loss: 4.3880, fm_loss: 0.0158, perc_loss: 7.0566
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 865.5s ---
LR Scheduler stepped. Current LR G: 0.000296, LR D: 0.000296
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 43m 16s
d_loss: 1.3914, g_loss: 12.0814, const_loss: 0.0033, l1_loss: 4.3910, fm_loss: 0.0171, perc_loss: 6.9771
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 47m 1s
d_loss: 1.3928, g_loss: 10.8416, const_loss: 0.0026, l1_loss: 3.7865, fm_loss: 0.0134, perc_loss: 6.3458
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 50m 46s
d_loss: 1.3874, g_loss: 11.3161, const_loss: 0.0024, l1_loss: 4.0380, fm_loss: 0.0150, perc_loss: 6.5673
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 54m 31s
d_loss: 1.3888, g_loss: 11.6360, const_loss: 0.0024, l1_loss: 4.1333, fm_loss: 0.0156, perc_loss: 6.7912
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 865.8s ---
LR Scheduler stepped. Current LR G: 0.000293, LR D: 0.000293
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 57m 41s
d_loss: 1.4247, g_loss: 11.7694, const_loss: 0.0029, l1_loss: 4.1314, fm_loss: 0.0152, perc_loss: 6.9265
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 1m 26s
d_loss: 1.3874, g_loss: 11.3656, const_loss: 0.0027, l1_loss: 3.9197, fm_loss: 0.0137, perc_loss: 6.7362
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 5m 11s
d_loss: 1.3870, g_loss: 11.6239, const_loss: 0.0027, l1_loss: 4.0959, fm_loss: 0.0156, perc_loss: 6.8162
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 8m 56s
d_loss: 1.3876, g_loss: 11.5856, const_loss: 0.0021, l1_loss: 3.9742, fm_loss: 0.0152, perc_loss: 6.9012
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 864.5s ---
LR Scheduler stepped. Current LR G: 0.000289, LR D: 0.000289
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 12m 6s
d_loss: 1.3990, g_loss: 11.1742, const_loss: 0.0018, l1_loss: 3.8493, fm_loss: 0.0135, perc_loss: 6.6162
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 15m 50s
d_loss: 1.3891, g_loss: 10.1772, const_loss: 0.0036, l1_loss: 3.5309, fm_loss: 0.0122, perc_loss: 5.9372
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 19m 35s
d_loss: 1.3877, g_loss: 10.9283, const_loss: 0.0021, l1_loss: 3.8586, fm_loss: 0.0137, perc_loss: 6.3610
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 23m 21s
d_loss: 1.3882, g_loss: 11.8513, const_loss: 0.0020, l1_loss: 4.1735, fm_loss: 0.0159, perc_loss: 6.9667
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 864.4s ---
LR Scheduler stepped. Current LR G: 0.000284, LR D: 0.000284
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 26m 30s
d_loss: 1.4169, g_loss: 12.4187, const_loss: 0.0025, l1_loss: 4.4913, fm_loss: 0.0150, perc_loss: 7.2165
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 30m 15s
d_loss: 1.3879, g_loss: 11.0540, const_loss: 0.0029, l1_loss: 3.9522, fm_loss: 0.0131, perc_loss: 6.3925
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 34m 1s
d_loss: 1.3870, g_loss: 10.9445, const_loss: 0.0034, l1_loss: 3.7584, fm_loss: 0.0125, perc_loss: 6.4769
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 37m 46s
d_loss: 1.3873, g_loss: 11.7232, const_loss: 0.0022, l1_loss: 4.2156, fm_loss: 0.0143, perc_loss: 6.7977
--- End of Epoch 6 --- Time: 864.1s ---
LR Scheduler stepped. Current LR G: 0.000278, LR D: 0.000278
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 40m 54s
d_loss: 1.3928, g_loss: 13.3422, const_loss: 0.0015, l1_loss: 5.0900, fm_loss: 0.0180, perc_loss: 7.5393
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 44m 39s
d_loss: 1.3879, g_loss: 11.6346, const_loss: 0.0026, l1_loss: 3.9980, fm_loss: 0.0147, perc_loss: 6.9259
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 48m 25s
d_loss: 1.3869, g_loss: 11.1973, const_loss: 0.0020, l1_loss: 3.7981, fm_loss: 0.0126, perc_loss: 6.6912
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 52m 10s
d_loss: 1.3874, g_loss: 9.7923, const_loss: 0.0023, l1_loss: 3.2496, fm_loss: 0.0109, perc_loss: 5.8361
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 865.3s ---
LR Scheduler stepped. Current LR G: 0.000271, LR D: 0.000271
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 55m 20s
d_loss: 1.3873, g_loss: 11.9574, const_loss: 0.0028, l1_loss: 4.1691, fm_loss: 0.0146, perc_loss: 7.0781
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 1h 59m 5s
d_loss: 1.3877, g_loss: 10.8141, const_loss: 0.0022, l1_loss: 3.6529, fm_loss: 0.0136, perc_loss: 6.4520
Checkpoint saved at step 3200
Epoch: [ 8], Batch: [ 200/ 385] | Total Time: 2h 2m 50s
d_loss: 1.3870, g_loss: 11.2245, const_loss: 0.0021, l1_loss: 3.9961, fm_loss: 0.0139, perc_loss: 6.5190
Checkpoint saved at step 3300
Epoch: [ 8], Batch: [ 300/ 385] | Total Time: 2h 6m 35s
d_loss: 1.3876, g_loss: 11.7305, const_loss: 0.0025, l1_loss: 4.2608, fm_loss: 0.0146, perc_loss: 6.7593
Checkpoint saved at step 3400
--- End of Epoch 8 --- Time: 865.2s ---
LR Scheduler stepped. Current LR G: 0.000264, LR D: 0.000264
Epoch: [ 9], Batch: [ 0/ 385] | Total Time: 2h 9m 45s
d_loss: 1.3979, g_loss: 11.2179, const_loss: 0.0023, l1_loss: 3.8876, fm_loss: 0.0140, perc_loss: 6.6207
Checkpoint saved at step 3500
Epoch: [ 9], Batch: [ 100/ 385] | Total Time: 2h 13m 30s
d_loss: 1.3868, g_loss: 10.6663, const_loss: 0.0031, l1_loss: 3.4788, fm_loss: 0.0119, perc_loss: 6.4792
Checkpoint saved at step 3600
Epoch: [ 9], Batch: [ 200/ 385] | Total Time: 2h 17m 15s
d_loss: 1.3870, g_loss: 11.2393, const_loss: 0.0024, l1_loss: 3.8905, fm_loss: 0.0136, perc_loss: 6.6394
Checkpoint saved at step 3700
Epoch: [ 9], Batch: [ 300/ 385] | Total Time: 2h 21m 1s
d_loss: 1.3870, g_loss: 11.1378, const_loss: 0.0023, l1_loss: 3.8142, fm_loss: 0.0138, perc_loss: 6.6146
Checkpoint saved at step 3800
--- End of Epoch 9 --- Time: 865.5s ---
LR Scheduler stepped. Current LR G: 0.000256, LR D: 0.000256
Epoch: [ 10], Batch: [ 0/ 385] | Total Time: 2h 24m 10s
d_loss: 1.3877, g_loss: 10.8242, const_loss: 0.0027, l1_loss: 3.6790, fm_loss: 0.0125, perc_loss: 6.4366
Checkpoint saved at step 3900
Epoch: [ 10], Batch: [ 100/ 385] | Total Time: 2h 27m 56s
d_loss: 1.3872, g_loss: 10.4427, const_loss: 0.0028, l1_loss: 3.5054, fm_loss: 0.0124, perc_loss: 6.2288
Checkpoint saved at step 4000
Epoch: [ 10], Batch: [ 200/ 385] | Total Time: 2h 31m 41s
d_loss: 1.3868, g_loss: 10.6732, const_loss: 0.0020, l1_loss: 3.6064, fm_loss: 0.0125, perc_loss: 6.3588
Checkpoint saved at step 4100
Epoch: [ 10], Batch: [ 300/ 385] | Total Time: 2h 35m 27s
d_loss: 1.3874, g_loss: 10.9172, const_loss: 0.0028, l1_loss: 3.7691, fm_loss: 0.0134, perc_loss: 6.4385
Checkpoint saved at step 4200
--- End of Epoch 10 --- Time: 865.9s ---
LR Scheduler stepped. Current LR G: 0.000248, LR D: 0.000248
Epoch: [ 11], Batch: [ 0/ 385] | Total Time: 2h 38m 36s
d_loss: 1.3878, g_loss: 11.2865, const_loss: 0.0025, l1_loss: 3.8930, fm_loss: 0.0135, perc_loss: 6.6841
Checkpoint saved at step 4300
Epoch: [ 11], Batch: [ 100/ 385] | Total Time: 2h 42m 22s
d_loss: 1.3869, g_loss: 12.0209, const_loss: 0.0026, l1_loss: 4.3382, fm_loss: 0.0139, perc_loss: 6.9728
Checkpoint saved at step 4400
Epoch: [ 11], Batch: [ 200/ 385] | Total Time: 2h 46m 7s
d_loss: 1.3871, g_loss: 11.5778, const_loss: 0.0025, l1_loss: 4.0459, fm_loss: 0.0138, perc_loss: 6.8221
Checkpoint saved at step 4500
Epoch: [ 11], Batch: [ 300/ 385] | Total Time: 2h 49m 52s
d_loss: 1.3874, g_loss: 11.2383, const_loss: 0.0028, l1_loss: 3.9577, fm_loss: 0.0138, perc_loss: 6.5707
Checkpoint saved at step 4600
--- End of Epoch 11 --- Time: 865.4s ---
LR Scheduler stepped. Current LR G: 0.000238, LR D: 0.000238
Epoch: [ 12], Batch: [ 0/ 385] | Total Time: 2h 53m 2s
d_loss: 1.3980, g_loss: 11.5365, const_loss: 0.0020, l1_loss: 4.0228, fm_loss: 0.0141, perc_loss: 6.8043
Checkpoint saved at step 4700
Epoch: [ 12], Batch: [ 100/ 385] | Total Time: 2h 56m 49s
d_loss: 1.3872, g_loss: 11.3440, const_loss: 0.0025, l1_loss: 3.9968, fm_loss: 0.0140, perc_loss: 6.6372
Checkpoint saved at step 4800
Epoch: [ 12], Batch: [ 200/ 385] | Total Time: 3h 34s
d_loss: 1.3870, g_loss: 10.8594, const_loss: 0.0020, l1_loss: 3.5827, fm_loss: 0.0124, perc_loss: 6.5690
Checkpoint saved at step 4900
Epoch: [ 12], Batch: [ 300/ 385] | Total Time: 3h 4m 19s
d_loss: 1.3880, g_loss: 10.7671, const_loss: 0.0029, l1_loss: 3.6128, fm_loss: 0.0126, perc_loss: 6.4456
Checkpoint saved at step 5000
--- End of Epoch 12 --- Time: 866.7s ---
LR Scheduler stepped. Current LR G: 0.000229, LR D: 0.000229
Epoch: [ 13], Batch: [ 0/ 385] | Total Time: 3h 7m 28s
d_loss: 1.3892, g_loss: 11.4156, const_loss: 0.0020, l1_loss: 3.9644, fm_loss: 0.0137, perc_loss: 6.7422
Checkpoint saved at step 5100
Epoch: [ 13], Batch: [ 100/ 385] | Total Time: 3h 11m 13s
d_loss: 1.3869, g_loss: 10.3465, const_loss: 0.0022, l1_loss: 3.3955, fm_loss: 0.0115, perc_loss: 6.2439
Checkpoint saved at step 5200
Epoch: [ 13], Batch: [ 200/ 385] | Total Time: 3h 14m 59s
d_loss: 1.3890, g_loss: 10.8157, const_loss: 0.0022, l1_loss: 3.5725, fm_loss: 0.0128, perc_loss: 6.5348
Checkpoint saved at step 5300
Epoch: [ 13], Batch: [ 300/ 385] | Total Time: 3h 18m 44s
d_loss: 1.3872, g_loss: 10.4401, const_loss: 0.0020, l1_loss: 3.5740, fm_loss: 0.0119, perc_loss: 6.1588
--- End of Epoch 13 --- Time: 864.3s ---
LR Scheduler stepped. Current LR G: 0.000218, LR D: 0.000218
Epoch: [ 14], Batch: [ 0/ 385] | Total Time: 3h 21m 53s
d_loss: 1.3885, g_loss: 10.0669, const_loss: 0.0024, l1_loss: 3.2850, fm_loss: 0.0111, perc_loss: 6.0750
Checkpoint saved at step 5400
Epoch: [ 14], Batch: [ 100/ 385] | Total Time: 3h 25m 38s
d_loss: 1.3896, g_loss: 9.8025, const_loss: 0.0024, l1_loss: 3.2776, fm_loss: 0.0108, perc_loss: 5.8183
Checkpoint saved at step 5500
Epoch: [ 14], Batch: [ 200/ 385] | Total Time: 3h 29m 23s
d_loss: 1.3871, g_loss: 10.1233, const_loss: 0.0020, l1_loss: 3.3219, fm_loss: 0.0107, perc_loss: 6.0954
Checkpoint saved at step 5600
Epoch: [ 14], Batch: [ 300/ 385] | Total Time: 3h 33m 8s
d_loss: 1.3869, g_loss: 10.7770, const_loss: 0.0017, l1_loss: 3.4698, fm_loss: 0.0111, perc_loss: 6.6011
Checkpoint saved at step 5700
--- End of Epoch 14 --- Time: 864.5s ---
LR Scheduler stepped. Current LR G: 0.000208, LR D: 0.000208
Epoch: [ 15], Batch: [ 0/ 385] | Total Time: 3h 36m 17s
d_loss: 1.4072, g_loss: 11.5166, const_loss: 0.0022, l1_loss: 3.9210, fm_loss: 0.0133, perc_loss: 6.8868
Checkpoint saved at step 5800
Epoch: [ 15], Batch: [ 100/ 385] | Total Time: 3h 40m 2s
d_loss: 1.3873, g_loss: 11.2537, const_loss: 0.0025, l1_loss: 3.9461, fm_loss: 0.0119, perc_loss: 6.5999
Checkpoint saved at step 5900
Epoch: [ 15], Batch: [ 200/ 385] | Total Time: 3h 43m 46s
d_loss: 1.3869, g_loss: 10.7816, const_loss: 0.0023, l1_loss: 3.8340, fm_loss: 0.0122, perc_loss: 6.2398
Checkpoint saved at step 6000
Epoch: [ 15], Batch: [ 300/ 385] | Total Time: 3h 47m 30s
d_loss: 1.3870, g_loss: 10.5254, const_loss: 0.0017, l1_loss: 3.5524, fm_loss: 0.0115, perc_loss: 6.2666
Checkpoint saved at step 6100
--- End of Epoch 15 --- Time: 861.6s ---
LR Scheduler stepped. Current LR G: 0.000197, LR D: 0.000197
Epoch: [ 16], Batch: [ 0/ 385] | Total Time: 3h 50m 39s
d_loss: 1.4082, g_loss: 10.2091, const_loss: 0.0023, l1_loss: 3.3245, fm_loss: 0.0100, perc_loss: 6.1789
Checkpoint saved at step 6200
Epoch: [ 16], Batch: [ 100/ 385] | Total Time: 3h 54m 24s
d_loss: 1.3874, g_loss: 10.8598, const_loss: 0.0023, l1_loss: 3.6566, fm_loss: 0.0126, perc_loss: 6.4950
Checkpoint saved at step 6300
Epoch: [ 16], Batch: [ 200/ 385] | Total Time: 3h 58m 8s
d_loss: 1.3872, g_loss: 9.3479, const_loss: 0.0023, l1_loss: 3.0062, fm_loss: 0.0094, perc_loss: 5.6371
Checkpoint saved at step 6400
Epoch: [ 16], Batch: [ 300/ 385] | Total Time: 4h 1m 53s
d_loss: 1.3873, g_loss: 11.6435, const_loss: 0.0023, l1_loss: 3.9812, fm_loss: 0.0128, perc_loss: 6.9538
Checkpoint saved at step 6500
--- End of Epoch 16 --- Time: 863.0s ---
LR Scheduler stepped. Current LR G: 0.000185, LR D: 0.000185
Epoch: [ 17], Batch: [ 0/ 385] | Total Time: 4h 5m 2s
d_loss: 1.3874, g_loss: 10.6313, const_loss: 0.0025, l1_loss: 3.5116, fm_loss: 0.0115, perc_loss: 6.4124
Checkpoint saved at step 6600
Epoch: [ 17], Batch: [ 100/ 385] | Total Time: 4h 8m 47s
d_loss: 1.3870, g_loss: 11.2854, const_loss: 0.0021, l1_loss: 3.6982, fm_loss: 0.0135, perc_loss: 6.8782
Checkpoint saved at step 6700
Epoch: [ 17], Batch: [ 200/ 385] | Total Time: 4h 12m 33s
d_loss: 1.3871, g_loss: 10.1511, const_loss: 0.0026, l1_loss: 3.2835, fm_loss: 0.0108, perc_loss: 6.1608
Checkpoint saved at step 6800
Epoch: [ 17], Batch: [ 300/ 385] | Total Time: 4h 16m 18s
d_loss: 1.3877, g_loss: 11.1076, const_loss: 0.0021, l1_loss: 3.7406, fm_loss: 0.0117, perc_loss: 6.6598
Checkpoint saved at step 6900
--- End of Epoch 17 --- Time: 866.8s ---
LR Scheduler stepped. Current LR G: 0.000174, LR D: 0.000174
Epoch: [ 18], Batch: [ 0/ 385] | Total Time: 4h 19m 29s
d_loss: 1.3887, g_loss: 10.6942, const_loss: 0.0023, l1_loss: 3.4796, fm_loss: 0.0119, perc_loss: 6.5070
Checkpoint saved at step 7000
Epoch: [ 18], Batch: [ 100/ 385] | Total Time: 4h 23m 14s
d_loss: 1.3873, g_loss: 10.7058, const_loss: 0.0022, l1_loss: 3.5444, fm_loss: 0.0121, perc_loss: 6.4538
Checkpoint saved at step 7100
Epoch: [ 18], Batch: [ 200/ 385] | Total Time: 4h 26m 59s
d_loss: 1.3868, g_loss: 10.8086, const_loss: 0.0022, l1_loss: 3.5955, fm_loss: 0.0126, perc_loss: 6.5055
Checkpoint saved at step 7200
Epoch: [ 18], Batch: [ 300/ 385] | Total Time: 4h 30m 44s
d_loss: 1.3873, g_loss: 10.9888, const_loss: 0.0016, l1_loss: 3.6578, fm_loss: 0.0120, perc_loss: 6.6240
Checkpoint saved at step 7300
--- End of Epoch 18 --- Time: 864.9s ---
LR Scheduler stepped. Current LR G: 0.000162, LR D: 0.000162
Epoch: [ 19], Batch: [ 0/ 385] | Total Time: 4h 33m 54s
d_loss: 1.3998, g_loss: 10.4117, const_loss: 0.0018, l1_loss: 3.4526, fm_loss: 0.0127, perc_loss: 6.2512
Checkpoint saved at step 7400
Epoch: [ 19], Batch: [ 100/ 385] | Total Time: 4h 37m 39s
d_loss: 1.3869, g_loss: 9.9888, const_loss: 0.0018, l1_loss: 3.2551, fm_loss: 0.0108, perc_loss: 6.0278
Checkpoint saved at step 7500
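The per-epoch "LR Scheduler stepped" lines are consistent with cosine annealing over the full 40-epoch run: for the 424→426 run, a base LR of 2.8e-4 with T_max=40 reproduces the printed sequence 0.000280 → 0.000278 → 0.000276 → 0.000273. A minimal sketch (the base LR and schedule type are inferred from the logged values, not confirmed from the training script):

```python
import math

def cosine_lr(base_lr: float, epoch: int, total_epochs: int, min_lr: float = 0.0) -> float:
    """LR after `epoch` scheduler steps, one step per epoch (cosine annealing)."""
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * epoch / total_epochs))

# Per-epoch stepping, as in the "LR Scheduler stepped" log lines:
for epoch in range(1, 4):
    lr = cosine_lr(2.8e-4, epoch, 40)
    print(f"LR Scheduler stepped. Current LR G: {lr:.6f}, LR D: {lr:.6f}")
```

This matches PyTorch's `CosineAnnealingLR` called once per epoch; G and D evidently share the same schedule, since their printed LRs are always equal.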

Resumed training: from model 424 to 426

✅ Model 424 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3869, g_loss: 11.1806, const_loss: 0.0018, l1_loss: 3.6198, fm_loss: 0.0121, perc_loss: 6.8535
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 46s
d_loss: 1.3870, g_loss: 10.8430, const_loss: 0.0020, l1_loss: 3.4994, fm_loss: 0.0118, perc_loss: 6.6365
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 31s
d_loss: 1.3897, g_loss: 10.8677, const_loss: 0.0016, l1_loss: 3.5963, fm_loss: 0.0138, perc_loss: 6.5626
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 15s
d_loss: 1.3873, g_loss: 11.0088, const_loss: 0.0015, l1_loss: 3.6902, fm_loss: 0.0124, perc_loss: 6.6114
--- End of Epoch 0 --- Time: 862.8s ---
LR Scheduler stepped. Current LR G: 0.000280, LR D: 0.000280
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 25s
d_loss: 1.3879, g_loss: 12.1291, const_loss: 0.0024, l1_loss: 4.2935, fm_loss: 0.0136, perc_loss: 7.1262
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 10s
d_loss: 1.3869, g_loss: 10.2484, const_loss: 0.0023, l1_loss: 3.5430, fm_loss: 0.0107, perc_loss: 5.9990
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 21m 56s
d_loss: 1.3871, g_loss: 11.1570, const_loss: 0.0020, l1_loss: 3.7555, fm_loss: 0.0120, perc_loss: 6.6941
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 25m 41s
d_loss: 1.3870, g_loss: 11.6904, const_loss: 0.0023, l1_loss: 4.1361, fm_loss: 0.0130, perc_loss: 6.8457
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 867.3s ---
LR Scheduler stepped. Current LR G: 0.000278, LR D: 0.000278
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 28m 52s
d_loss: 1.3937, g_loss: 11.0253, const_loss: 0.0025, l1_loss: 3.8701, fm_loss: 0.0130, perc_loss: 6.4463
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 32m 37s
d_loss: 1.3883, g_loss: 11.0309, const_loss: 0.0017, l1_loss: 3.8300, fm_loss: 0.0125, perc_loss: 6.4932
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 36m 23s
d_loss: 1.3878, g_loss: 11.3592, const_loss: 0.0018, l1_loss: 3.9249, fm_loss: 0.0127, perc_loss: 6.7264
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 40m 9s
d_loss: 1.3868, g_loss: 11.0829, const_loss: 0.0025, l1_loss: 3.8689, fm_loss: 0.0123, perc_loss: 6.5058
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 867.8s ---
LR Scheduler stepped. Current LR G: 0.000276, LR D: 0.000276
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 43m 20s
d_loss: 1.3870, g_loss: 11.2629, const_loss: 0.0015, l1_loss: 3.8693, fm_loss: 0.0122, perc_loss: 6.6864
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 47m 5s
d_loss: 1.3874, g_loss: 11.3415, const_loss: 0.0021, l1_loss: 4.0008, fm_loss: 0.0129, perc_loss: 6.6324
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 50m 51s
d_loss: 1.3878, g_loss: 11.2414, const_loss: 0.0020, l1_loss: 3.8600, fm_loss: 0.0123, perc_loss: 6.6737
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 54m 37s
d_loss: 1.3882, g_loss: 11.1252, const_loss: 0.0019, l1_loss: 3.7201, fm_loss: 0.0121, perc_loss: 6.6977
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 868.5s ---
LR Scheduler stepped. Current LR G: 0.000273, LR D: 0.000273
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 57m 48s
d_loss: 1.3870, g_loss: 10.6897, const_loss: 0.0022, l1_loss: 3.5911, fm_loss: 0.0109, perc_loss: 6.3922
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 1m 33s
d_loss: 1.3869, g_loss: 10.1890, const_loss: 0.0019, l1_loss: 3.3677, fm_loss: 0.0104, perc_loss: 6.1156
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 5m 19s
d_loss: 1.3870, g_loss: 11.2101, const_loss: 0.0022, l1_loss: 3.7896, fm_loss: 0.0119, perc_loss: 6.7130
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 9m 5s
d_loss: 1.3880, g_loss: 11.2669, const_loss: 0.0022, l1_loss: 3.8100, fm_loss: 0.0126, perc_loss: 6.7487
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 868.1s ---
LR Scheduler stepped. Current LR G: 0.000269, LR D: 0.000269
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 12m 16s
d_loss: 1.3914, g_loss: 10.6814, const_loss: 0.0020, l1_loss: 3.6363, fm_loss: 0.0111, perc_loss: 6.3387
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 16m 2s
d_loss: 1.3870, g_loss: 12.0488, const_loss: 0.0023, l1_loss: 4.4681, fm_loss: 0.0142, perc_loss: 6.8709
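The "Checkpoint step N reached, but saving starts after step M" lines show a warm-up gate on periodic checkpointing: save every 100 steps, but only once a minimum step count is reached (400 here, lowered to 300 in the later runs). A sketch of that gating logic; `save_every` and `save_after` are assumed parameter names, and the actual save call is elided:

```python
def maybe_save_checkpoint(step: int, save_every: int = 100, save_after: int = 400) -> bool:
    """Periodic checkpointing with a warm-up gate, mirroring the log messages."""
    if step == 0 or step % save_every != 0:
        return False  # not a checkpoint step
    if step < save_after:
        print(f"Checkpoint step {step} reached, but saving starts after step {save_after}.")
        return False  # checkpoint step, but still inside the warm-up window
    # the real script would torch.save(...) the G/D state dicts here
    print(f"Checkpoint saved at step {step}")
    return True
```

The gate avoids overwriting a known-good resumed checkpoint with weights from the first few hundred steps after a restart, before losses have settled.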

Resumed training: from model 426 to 428

✅ Model 426 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3871, g_loss: 11.4864, const_loss: 0.0019, l1_loss: 3.8128, fm_loss: 0.0123, perc_loss: 6.9661
Checkpoint step 100 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 38s
d_loss: 1.3871, g_loss: 11.1537, const_loss: 0.0019, l1_loss: 3.7063, fm_loss: 0.0119, perc_loss: 6.7402
Checkpoint step 200 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 19s
d_loss: 1.3901, g_loss: 11.0716, const_loss: 0.0014, l1_loss: 3.7562, fm_loss: 0.0139, perc_loss: 6.6067
Checkpoint step 300 reached, but saving starts after step 400.
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 3s
d_loss: 1.3874, g_loss: 10.6064, const_loss: 0.0016, l1_loss: 3.4642, fm_loss: 0.0114, perc_loss: 6.4359
--- End of Epoch 0 --- Time: 849.9s ---
LR Scheduler stepped. Current LR G: 0.000270, LR D: 0.000270
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 12s
d_loss: 1.3869, g_loss: 11.8119, const_loss: 0.0025, l1_loss: 4.1143, fm_loss: 0.0126, perc_loss: 6.9892
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 17m 56s
d_loss: 1.3869, g_loss: 10.0392, const_loss: 0.0023, l1_loss: 3.3881, fm_loss: 0.0103, perc_loss: 5.9452
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 21m 41s
d_loss: 1.3870, g_loss: 10.8769, const_loss: 0.0020, l1_loss: 3.6014, fm_loss: 0.0111, perc_loss: 6.5690
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 25m 26s
d_loss: 1.3870, g_loss: 11.5829, const_loss: 0.0022, l1_loss: 4.0642, fm_loss: 0.0132, perc_loss: 6.8100
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 863.9s ---
LR Scheduler stepped. Current LR G: 0.000268, LR D: 0.000268
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 28m 36s
d_loss: 1.3912, g_loss: 10.7868, const_loss: 0.0025, l1_loss: 3.6996, fm_loss: 0.0120, perc_loss: 6.3792
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 32m 20s
d_loss: 1.3879, g_loss: 10.9078, const_loss: 0.0020, l1_loss: 3.7633, fm_loss: 0.0121, perc_loss: 6.4370
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 36m 5s
d_loss: 1.3880, g_loss: 10.8988, const_loss: 0.0018, l1_loss: 3.6619, fm_loss: 0.0115, perc_loss: 6.5302
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 39m 49s
d_loss: 1.3868, g_loss: 10.6262, const_loss: 0.0026, l1_loss: 3.6160, fm_loss: 0.0115, perc_loss: 6.3028
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 863.4s ---
LR Scheduler stepped. Current LR G: 0.000266, LR D: 0.000266
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 42m 59s
d_loss: 1.3869, g_loss: 10.9025, const_loss: 0.0013, l1_loss: 3.6624, fm_loss: 0.0110, perc_loss: 6.5345
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 46m 44s
d_loss: 1.3875, g_loss: 10.8758, const_loss: 0.0017, l1_loss: 3.7337, fm_loss: 0.0113, perc_loss: 6.4357
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 50m 28s
d_loss: 1.3876, g_loss: 11.0729, const_loss: 0.0018, l1_loss: 3.7805, fm_loss: 0.0117, perc_loss: 6.5855
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 54m 13s
d_loss: 1.3880, g_loss: 11.0305, const_loss: 0.0022, l1_loss: 3.7032, fm_loss: 0.0114, perc_loss: 6.6204
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 863.8s ---
LR Scheduler stepped. Current LR G: 0.000263, LR D: 0.000263
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 57m 23s
d_loss: 1.3886, g_loss: 10.4444, const_loss: 0.0021, l1_loss: 3.4685, fm_loss: 0.0114, perc_loss: 6.2690
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 1m 7s
d_loss: 1.3869, g_loss: 9.8886, const_loss: 0.0020, l1_loss: 3.1997, fm_loss: 0.0095, perc_loss: 5.9840
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 4m 52s
d_loss: 1.3870, g_loss: 10.8011, const_loss: 0.0019, l1_loss: 3.5361, fm_loss: 0.0107, perc_loss: 6.5589
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 8m 37s
d_loss: 1.3881, g_loss: 10.9537, const_loss: 0.0020, l1_loss: 3.6551, fm_loss: 0.0116, perc_loss: 6.5915
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 863.4s ---
LR Scheduler stepped. Current LR G: 0.000260, LR D: 0.000260
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 11m 46s
d_loss: 1.3883, g_loss: 10.5398, const_loss: 0.0020, l1_loss: 3.5638, fm_loss: 0.0102, perc_loss: 6.2704
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 15m 30s
d_loss: 1.3870, g_loss: 11.3324, const_loss: 0.0021, l1_loss: 4.0488, fm_loss: 0.0121, perc_loss: 6.5761
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 19m 15s
d_loss: 1.3869, g_loss: 10.0735, const_loss: 0.0016, l1_loss: 3.3610, fm_loss: 0.0098, perc_loss: 6.0078
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 23m 0s
d_loss: 1.3868, g_loss: 10.7395, const_loss: 0.0027, l1_loss: 3.5800, fm_loss: 0.0110, perc_loss: 6.4525
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 863.2s ---
LR Scheduler stepped. Current LR G: 0.000255, LR D: 0.000255
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 26m 9s
d_loss: 1.3878, g_loss: 11.4837, const_loss: 0.0018, l1_loss: 3.9791, fm_loss: 0.0123, perc_loss: 6.7972
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 29m 54s
d_loss: 1.3870, g_loss: 10.3969, const_loss: 0.0020, l1_loss: 3.4390, fm_loss: 0.0101, perc_loss: 6.2525
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 33m 39s
d_loss: 1.3869, g_loss: 11.2981, const_loss: 0.0020, l1_loss: 3.8564, fm_loss: 0.0118, perc_loss: 6.7344
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 37m 23s
d_loss: 1.3875, g_loss: 10.0059, const_loss: 0.0019, l1_loss: 3.3280, fm_loss: 0.0093, perc_loss: 5.9734
--- End of Epoch 6 --- Time: 862.8s ---
LR Scheduler stepped. Current LR G: 0.000250, LR D: 0.000250
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 40m 32s
d_loss: 1.3880, g_loss: 10.1194, const_loss: 0.0017, l1_loss: 3.2770, fm_loss: 0.0106, perc_loss: 6.1368
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 44m 17s
d_loss: 1.3872, g_loss: 11.5068, const_loss: 0.0022, l1_loss: 3.9063, fm_loss: 0.0114, perc_loss: 6.8934
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 48m 2s
d_loss: 1.3871, g_loss: 9.8736, const_loss: 0.0020, l1_loss: 3.2647, fm_loss: 0.0097, perc_loss: 5.9038
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 51m 46s
d_loss: 1.3869, g_loss: 10.6296, const_loss: 0.0019, l1_loss: 3.5746, fm_loss: 0.0110, perc_loss: 6.3487
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 863.9s ---
LR Scheduler stepped. Current LR G: 0.000244, LR D: 0.000244
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 54m 56s
d_loss: 1.3917, g_loss: 11.1088, const_loss: 0.0020, l1_loss: 3.7675, fm_loss: 0.0126, perc_loss: 6.6333
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 1h 58m 41s
d_loss: 1.3868, g_loss: 10.0420, const_loss: 0.0016, l1_loss: 3.2655, fm_loss: 0.0100, perc_loss: 6.0715
Checkpoint saved at step 3200
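The newer log format no longer prints `cheat_loss` (the adversarial term), but it is still inside the total: in the first row of the 424→426 run, g_loss minus the printed components leaves 11.1806 − 10.4872 ≈ 0.6934, essentially ln 2, just like the `cheat_loss` column in the older log. A sketch of the composition, assuming unit weights (any loss weighting is already folded into the printed values):

```python
def generator_loss(cheat: float, const: float, l1: float, fm: float, perc: float) -> float:
    """Sum of the generator's loss terms, matching the log's columns."""
    return cheat + const + l1 + fm + perc

# First row of the 424->426 run: the unprinted cheat term is the residual.
residual = 11.1806 - (0.0018 + 3.6198 + 0.0121 + 6.8535)
print(round(residual, 4))  # ~0.6934, close to ln 2 = 0.6931
```

A cheat term stuck at ln 2 means the discriminator's verdict is 0.5 on generated samples, so the adversarial signal contributes almost nothing and the generator is effectively driven by the L1 and perceptual terms alone.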

Resumed training: from model 428 to 430

✅ Model 428 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 5s
d_loss: 1.3868, g_loss: 11.2296, const_loss: 0.0015, l1_loss: 3.7013, fm_loss: 0.0109, perc_loss: 6.8225
Checkpoint step 100 reached, but saving starts after step 300.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 48s
d_loss: 1.3869, g_loss: 10.1865, const_loss: 0.0019, l1_loss: 3.3454, fm_loss: 0.0097, perc_loss: 6.1361
Checkpoint step 200 reached, but saving starts after step 300.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 31s
d_loss: 1.3875, g_loss: 10.2233, const_loss: 0.0016, l1_loss: 3.3088, fm_loss: 0.0100, perc_loss: 6.2095
Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 16s
d_loss: 1.3869, g_loss: 10.3245, const_loss: 0.0016, l1_loss: 3.3609, fm_loss: 0.0102, perc_loss: 6.2584
--- End of Epoch 0 --- Time: 862.5s ---
LR Scheduler stepped. Current LR G: 0.000250, LR D: 0.000250
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 24s
d_loss: 1.3869, g_loss: 10.0011, const_loss: 0.0025, l1_loss: 3.3578, fm_loss: 0.0092, perc_loss: 5.9383
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 9s
d_loss: 1.3868, g_loss: 10.8995, const_loss: 0.0018, l1_loss: 3.6673, fm_loss: 0.0109, perc_loss: 6.5262
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 21m 53s
d_loss: 1.3870, g_loss: 10.7245, const_loss: 0.0020, l1_loss: 3.4968, fm_loss: 0.0106, perc_loss: 6.5219
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 25m 37s
d_loss: 1.3872, g_loss: 9.8427, const_loss: 0.0025, l1_loss: 3.1975, fm_loss: 0.0092, perc_loss: 5.9401
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 862.8s ---
LR Scheduler stepped. Current LR G: 0.000248, LR D: 0.000248
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 28m 47s
d_loss: 1.3883, g_loss: 10.6449, const_loss: 0.0023, l1_loss: 3.4340, fm_loss: 0.0096, perc_loss: 6.5056
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 32m 31s
d_loss: 1.3869, g_loss: 10.1288, const_loss: 0.0019, l1_loss: 3.3222, fm_loss: 0.0098, perc_loss: 6.1015
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 36m 15s
d_loss: 1.3872, g_loss: 10.5533, const_loss: 0.0015, l1_loss: 3.4965, fm_loss: 0.0096, perc_loss: 6.3524
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 39m 59s
d_loss: 1.3868, g_loss: 10.9119, const_loss: 0.0021, l1_loss: 3.6552, fm_loss: 0.0114, perc_loss: 6.5498
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 862.0s ---
LR Scheduler stepped. Current LR G: 0.000247, LR D: 0.000247
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 43m 9s
d_loss: 1.3872, g_loss: 11.3372, const_loss: 0.0030, l1_loss: 3.8342, fm_loss: 0.0115, perc_loss: 6.7952
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 46m 53s
d_loss: 1.3876, g_loss: 10.8799, const_loss: 0.0018, l1_loss: 3.6631, fm_loss: 0.0114, perc_loss: 6.5102
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 50m 38s
d_loss: 1.3879, g_loss: 9.8645, const_loss: 0.0024, l1_loss: 3.1982, fm_loss: 0.0092, perc_loss: 5.9613
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 54m 22s
d_loss: 1.3872, g_loss: 11.0359, const_loss: 0.0014, l1_loss: 3.7239, fm_loss: 0.0111, perc_loss: 6.6062
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 863.4s ---
LR Scheduler stepped. Current LR G: 0.000244, LR D: 0.000244
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 57m 32s
d_loss: 1.3868, g_loss: 11.4111, const_loss: 0.0013, l1_loss: 3.8360, fm_loss: 0.0116, perc_loss: 6.8688
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 1m 16s
d_loss: 1.3871, g_loss: 10.0045, const_loss: 0.0017, l1_loss: 3.1827, fm_loss: 0.0097, perc_loss: 6.1171
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 5m 0s
d_loss: 1.3868, g_loss: 10.7962, const_loss: 0.0023, l1_loss: 3.5501, fm_loss: 0.0104, perc_loss: 6.5401
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 8m 45s
d_loss: 1.3871, g_loss: 10.3815, const_loss: 0.0019, l1_loss: 3.3719, fm_loss: 0.0103, perc_loss: 6.3040
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 862.3s ---
LR Scheduler stepped. Current LR G: 0.000241, LR D: 0.000241
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 11m 55s
d_loss: 1.3875, g_loss: 10.4030, const_loss: 0.0014, l1_loss: 3.4780, fm_loss: 0.0100, perc_loss: 6.2203
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 15m 39s
d_loss: 1.3875, g_loss: 10.8816, const_loss: 0.0020, l1_loss: 3.5981, fm_loss: 0.0109, perc_loss: 6.5772
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 19m 23s
d_loss: 1.3878, g_loss: 9.5150, const_loss: 0.0013, l1_loss: 3.1637, fm_loss: 0.0094, perc_loss: 5.6472
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 23m 7s
d_loss: 1.3869, g_loss: 10.1003, const_loss: 0.0013, l1_loss: 3.3268, fm_loss: 0.0096, perc_loss: 6.0692
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 861.9s ---
LR Scheduler stepped. Current LR G: 0.000236, LR D: 0.000236
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 26m 17s
d_loss: 1.3869, g_loss: 10.7127, const_loss: 0.0016, l1_loss: 3.6247, fm_loss: 0.0109, perc_loss: 6.3820
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 30m 1s
d_loss: 1.3874, g_loss: 11.0585, const_loss: 0.0019, l1_loss: 3.5987, fm_loss: 0.0111, perc_loss: 6.7534
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 33m 46s
d_loss: 1.3873, g_loss: 10.7547, const_loss: 0.0017, l1_loss: 3.5085, fm_loss: 0.0106, perc_loss: 6.5406
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 37m 30s
d_loss: 1.3868, g_loss: 10.9939, const_loss: 0.0024, l1_loss: 3.6789, fm_loss: 0.0115, perc_loss: 6.6082
--- End of Epoch 6 --- Time: 861.6s ---
LR Scheduler stepped. Current LR G: 0.000232, LR D: 0.000232
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 40m 38s
d_loss: 1.3868, g_loss: 10.6472, const_loss: 0.0015, l1_loss: 3.4565, fm_loss: 0.0106, perc_loss: 6.4853
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 44m 23s
d_loss: 1.3875, g_loss: 9.5786, const_loss: 0.0021, l1_loss: 3.0766, fm_loss: 0.0087, perc_loss: 5.7979
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 48m 7s
d_loss: 1.3869, g_loss: 10.3006, const_loss: 0.0019, l1_loss: 3.3742, fm_loss: 0.0100, perc_loss: 6.2211
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 51m 51s
d_loss: 1.3868, g_loss: 10.6911, const_loss: 0.0019, l1_loss: 3.5443, fm_loss: 0.0112, perc_loss: 6.4403
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 863.1s ---
LR Scheduler stepped. Current LR G: 0.000226, LR D: 0.000226
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 55m 1s
d_loss: 1.3868, g_loss: 10.0220, const_loss: 0.0018, l1_loss: 3.3381, fm_loss: 0.0106, perc_loss: 5.9782
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 1h 58m 46s
d_loss: 1.3869, g_loss: 10.0420, const_loss: 0.0018, l1_loss: 3.3007, fm_loss: 0.0099, perc_loss: 6.0363
Checkpoint saved at step 3200
Epoch: [ 8], Batch: [ 200/ 385] | Total Time: 2h 2m 31s
d_loss: 1.3868, g_loss: 10.5709, const_loss: 0.0019, l1_loss: 3.5092, fm_loss: 0.0101, perc_loss: 6.3564
Checkpoint saved at step 3300
Epoch: [ 8], Batch: [ 300/ 385] | Total Time: 2h 6m 15s
d_loss: 1.3868, g_loss: 10.5896, const_loss: 0.0020, l1_loss: 3.5933, fm_loss: 0.0109, perc_loss: 6.2900
Checkpoint saved at step 3400
--- End of Epoch 8 --- Time: 863.0s ---
LR Scheduler stepped. Current LR G: 0.000220, LR D: 0.000220
Epoch: [ 9], Batch: [ 0/ 385] | Total Time: 2h 9m 24s
d_loss: 1.3871, g_loss: 10.3087, const_loss: 0.0019, l1_loss: 3.3802, fm_loss: 0.0104, perc_loss: 6.2229
Checkpoint saved at step 3500
Epoch: [ 9], Batch: [ 100/ 385] | Total Time: 2h 13m 9s
d_loss: 1.3869, g_loss: 10.4074, const_loss: 0.0019, l1_loss: 3.3921, fm_loss: 0.0102, perc_loss: 6.3099
Checkpoint saved at step 3600
Epoch: [ 9], Batch: [ 200/ 385] | Total Time: 2h 16m 53s
d_loss: 1.3868, g_loss: 10.5573, const_loss: 0.0021, l1_loss: 3.4076, fm_loss: 0.0107, perc_loss: 6.4435
Checkpoint saved at step 3700
Epoch: [ 9], Batch: [ 300/ 385] | Total Time: 2h 20m 37s
d_loss: 1.3872, g_loss: 10.8584, const_loss: 0.0017, l1_loss: 3.5096, fm_loss: 0.0107, perc_loss: 6.6431
Checkpoint saved at step 3800
--- End of Epoch 9 --- Time: 862.1s ---
LR Scheduler stepped. Current LR G: 0.000214, LR D: 0.000214
Epoch: [ 10], Batch: [ 0/ 385] | Total Time: 2h 23m 46s
d_loss: 1.3868, g_loss: 10.0126, const_loss: 0.0017, l1_loss: 3.2560, fm_loss: 0.0097, perc_loss: 6.0518
Checkpoint saved at step 3900
Epoch: [ 10], Batch: [ 100/ 385] | Total Time: 2h 27m 31s
d_loss: 1.3869, g_loss: 10.3164, const_loss: 0.0016, l1_loss: 3.4187, fm_loss: 0.0101, perc_loss: 6.1926
Checkpoint saved at step 4000
Epoch: [ 10], Batch: [ 200/ 385] | Total Time: 2h 31m 15s
d_loss: 1.3876, g_loss: 10.3727, const_loss: 0.0015, l1_loss: 3.4013, fm_loss: 0.0096, perc_loss: 6.2669
Checkpoint saved at step 4100
Epoch: [ 10], Batch: [ 300/ 385] | Total Time: 2h 34m 59s
d_loss: 1.3868, g_loss: 10.1708, const_loss: 0.0018, l1_loss: 3.2933, fm_loss: 0.0096, perc_loss: 6.1728
Checkpoint saved at step 4200
--- End of Epoch 10 --- Time: 862.1s ---
LR Scheduler stepped. Current LR G: 0.000206, LR D: 0.000206
Epoch: [ 11], Batch: [ 0/ 385] | Total Time: 2h 38m 9s
d_loss: 1.3869, g_loss: 10.0989, const_loss: 0.0018, l1_loss: 3.2614, fm_loss: 0.0099, perc_loss: 6.1325
Checkpoint saved at step 4300
Epoch: [ 11], Batch: [ 100/ 385] | Total Time: 2h 41m 53s
d_loss: 1.3869, g_loss: 9.8511, const_loss: 0.0019, l1_loss: 3.1120, fm_loss: 0.0090, perc_loss: 6.0349
Checkpoint saved at step 4400
Epoch: [ 11], Batch: [ 200/ 385] | Total Time: 2h 45m 37s
d_loss: 1.3872, g_loss: 10.5917, const_loss: 0.0020, l1_loss: 3.5117, fm_loss: 0.0100, perc_loss: 6.3746
Checkpoint saved at step 4500
Epoch: [ 11], Batch: [ 300/ 385] | Total Time: 2h 49m 22s
d_loss: 1.3873, g_loss: 10.3277, const_loss: 0.0026, l1_loss: 3.3826, fm_loss: 0.0098, perc_loss: 6.2393
Checkpoint saved at step 4600
--- End of Epoch 11 --- Time: 863.2s ---
LR Scheduler stepped. Current LR G: 0.000199, LR D: 0.000199
Epoch: [ 12], Batch: [ 0/ 385] | Total Time: 2h 52m 32s
d_loss: 1.3868, g_loss: 10.0473, const_loss: 0.0016, l1_loss: 3.3031, fm_loss: 0.0095, perc_loss: 6.0398
Checkpoint saved at step 4700
Epoch: [ 12], Batch: [ 100/ 385] | Total Time: 2h 56m 16s
d_loss: 1.3868, g_loss: 9.4926, const_loss: 0.0018, l1_loss: 3.0096, fm_loss: 0.0085, perc_loss: 5.7794
Checkpoint saved at step 4800
Epoch: [ 12], Batch: [ 200/ 385] | Total Time: 3h 1s
d_loss: 1.3870, g_loss: 9.6633, const_loss: 0.0017, l1_loss: 3.0964, fm_loss: 0.0090, perc_loss: 5.8629
Checkpoint saved at step 4900
Epoch: [ 12], Batch: [ 300/ 385] | Total Time: 3h 3m 45s
d_loss: 1.3870, g_loss: 9.7093, const_loss: 0.0016, l1_loss: 3.0761, fm_loss: 0.0091, perc_loss: 5.9291
Checkpoint saved at step 5000
--- End of Epoch 12 --- Time: 862.3s ---
LR Scheduler stepped. Current LR G: 0.000191, LR D: 0.000191
Epoch: [ 13], Batch: [ 0/ 385] | Total Time: 3h 6m 54s
d_loss: 1.3868, g_loss: 10.1602, const_loss: 0.0024, l1_loss: 3.2997, fm_loss: 0.0097, perc_loss: 6.1551
Checkpoint saved at step 5100
Epoch: [ 13], Batch: [ 100/ 385] | Total Time: 3h 10m 39s
d_loss: 1.3868, g_loss: 10.0067, const_loss: 0.0017, l1_loss: 3.1714, fm_loss: 0.0091, perc_loss: 6.1311
Checkpoint saved at step 5200
Epoch: [ 13], Batch: [ 200/ 385] | Total Time: 3h 14m 23s
d_loss: 1.3871, g_loss: 10.6271, const_loss: 0.0018, l1_loss: 3.4840, fm_loss: 0.0105, perc_loss: 6.4375
Checkpoint saved at step 5300
Epoch: [ 13], Batch: [ 300/ 385] | Total Time: 3h 18m 7s
d_loss: 1.3868, g_loss: 9.7444, const_loss: 0.0022, l1_loss: 3.0687, fm_loss: 0.0088, perc_loss: 5.9713
--- End of Epoch 13 --- Time: 861.4s ---
LR Scheduler stepped. Current LR G: 0.000182, LR D: 0.000182
Epoch: [ 14], Batch: [ 0/ 385] | Total Time: 3h 21m 15s
d_loss: 1.3876, g_loss: 10.2527, const_loss: 0.0021, l1_loss: 3.3517, fm_loss: 0.0093, perc_loss: 6.1963
Checkpoint saved at step 5400
Epoch: [ 14], Batch: [ 100/ 385] | Total Time: 3h 25m 0s
d_loss: 1.3868, g_loss: 10.3330, const_loss: 0.0017, l1_loss: 3.3245, fm_loss: 0.0095, perc_loss: 6.3039
Checkpoint saved at step 5500
Epoch: [ 14], Batch: [ 200/ 385] | Total Time: 3h 28m 44s
d_loss: 1.3868, g_loss: 9.5740, const_loss: 0.0015, l1_loss: 3.0325, fm_loss: 0.0084, perc_loss: 5.8383
Checkpoint saved at step 5600
Epoch: [ 14], Batch: [ 300/ 385] | Total Time: 3h 32m 29s
d_loss: 1.3871, g_loss: 11.1124, const_loss: 0.0016, l1_loss: 3.6757, fm_loss: 0.0111, perc_loss: 6.7306
Checkpoint saved at step 5700
--- End of Epoch 14 --- Time: 862.4s ---
LR Scheduler stepped. Current LR G: 0.000173, LR D: 0.000173
Epoch: [ 15], Batch: [ 0/ 385] | Total Time: 3h 35m 38s
d_loss: 1.3879, g_loss: 10.2067, const_loss: 0.0016, l1_loss: 3.1863, fm_loss: 0.0088, perc_loss: 6.3167
Checkpoint saved at step 5800
Epoch: [ 15], Batch: [ 100/ 385] | Total Time: 3h 39m 22s
d_loss: 1.3870, g_loss: 10.3934, const_loss: 0.0014, l1_loss: 3.4332, fm_loss: 0.0096, perc_loss: 6.2558
Checkpoint saved at step 5900
Epoch: [ 15], Batch: [ 200/ 385] | Total Time: 3h 43m 6s
d_loss: 1.3869, g_loss: 9.7289, const_loss: 0.0018, l1_loss: 3.0882, fm_loss: 0.0081, perc_loss: 5.9375
Checkpoint saved at step 6000
Epoch: [ 15], Batch: [ 300/ 385] | Total Time: 3h 46m 50s
d_loss: 1.3871, g_loss: 9.5620, const_loss: 0.0017, l1_loss: 3.0251, fm_loss: 0.0080, perc_loss: 5.8339
Checkpoint saved at step 6100
--- End of Epoch 15 --- Time: 861.8s ---
LR Scheduler stepped. Current LR G: 0.000164, LR D: 0.000164
Epoch: [ 16], Batch: [ 0/ 385] | Total Time: 3h 50m 0s
d_loss: 1.3875, g_loss: 10.1631, const_loss: 0.0012, l1_loss: 3.2287, fm_loss: 0.0093, perc_loss: 6.2304
Checkpoint saved at step 6200
Epoch: [ 16], Batch: [ 100/ 385] | Total Time: 3h 53m 44s
d_loss: 1.3868, g_loss: 10.2364, const_loss: 0.0015, l1_loss: 3.3527, fm_loss: 0.0095, perc_loss: 6.1792
Checkpoint saved at step 6300
Epoch: [ 16], Batch: [ 200/ 385] | Total Time: 3h 57m 29s
d_loss: 1.3868, g_loss: 10.0058, const_loss: 0.0019, l1_loss: 3.1442, fm_loss: 0.0091, perc_loss: 6.1572
Checkpoint saved at step 6400
Epoch: [ 16], Batch: [ 300/ 385] | Total Time: 4h 1m 13s
d_loss: 1.3868, g_loss: 9.0048, const_loss: 0.0019, l1_loss: 2.8011, fm_loss: 0.0075, perc_loss: 5.5009
Checkpoint saved at step 6500
--- End of Epoch 16 --- Time: 868.7s ---
LR Scheduler stepped. Current LR G: 0.000155, LR D: 0.000155
Epoch: [ 17], Batch: [ 0/ 385] | Total Time: 4h 4m 28s
d_loss: 1.3869, g_loss: 9.6440, const_loss: 0.0016, l1_loss: 3.0451, fm_loss: 0.0083, perc_loss: 5.8957
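The "Total Time" column uses a compact elapsed-time format: zero hours and zero minutes are dropped ("4s", "3h 1s") but seconds are always shown ("1h 23m 0s"). A small helper reproducing that convention; the function name is my own, not taken from the training script:

```python
def fmt_elapsed(seconds: float) -> str:
    """Render elapsed time the way the log's 'Total Time' column does."""
    total = int(seconds)
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    parts = []
    if h:
        parts.append(f"{h}h")
    if m:
        parts.append(f"{m}m")
    parts.append(f"{s}s")  # seconds are always printed, even when zero
    return " ".join(parts)
```

For example `fmt_elapsed(10801)` gives "3h 1s", matching the "Total Time: 3h 1s" line in the epoch-12 log above.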

Resumed training: from model 430 to 432

✅ Model 430 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 5s
d_loss: 1.3870, g_loss: 10.6330, const_loss: 0.0014, l1_loss: 3.3976, fm_loss: 0.0090, perc_loss: 6.5316
Checkpoint step 100 reached, but saving starts after step 300.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 51s
d_loss: 1.3868, g_loss: 9.9911, const_loss: 0.0016, l1_loss: 3.1922, fm_loss: 0.0087, perc_loss: 6.0951
Checkpoint step 200 reached, but saving starts after step 300.
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 41s
d_loss: 1.3869, g_loss: 9.7300, const_loss: 0.0012, l1_loss: 3.0829, fm_loss: 0.0087, perc_loss: 5.9438
Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 33s
d_loss: 1.3869, g_loss: 9.6958, const_loss: 0.0015, l1_loss: 3.0490, fm_loss: 0.0084, perc_loss: 5.9435
--- End of Epoch 0 --- Time: 885.1s ---
LR Scheduler stepped. Current LR G: 0.000223, LR D: 0.000223
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 47s
d_loss: 1.3867, g_loss: 9.8427, const_loss: 0.0014, l1_loss: 3.1460, fm_loss: 0.0081, perc_loss: 5.9939
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 18m 38s
d_loss: 1.3874, g_loss: 10.2123, const_loss: 0.0012, l1_loss: 3.2710, fm_loss: 0.0085, perc_loss: 6.2382
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 22m 29s
d_loss: 1.3868, g_loss: 9.9802, const_loss: 0.0019, l1_loss: 3.1683, fm_loss: 0.0082, perc_loss: 6.1084
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 26m 21s
d_loss: 1.3868, g_loss: 11.2853, const_loss: 0.0019, l1_loss: 3.6876, fm_loss: 0.0106, perc_loss: 6.8918
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 889.7s ---
LR Scheduler stepped. Current LR G: 0.000222, LR D: 0.000222
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 29m 37s
d_loss: 1.3868, g_loss: 10.6471, const_loss: 0.0017, l1_loss: 3.4989, fm_loss: 0.0096, perc_loss: 6.4436
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 33m 26s
d_loss: 1.3868, g_loss: 9.6885, const_loss: 0.0017, l1_loss: 3.0809, fm_loss: 0.0088, perc_loss: 5.9038
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 37m 17s
d_loss: 1.3873, g_loss: 10.9808, const_loss: 0.0017, l1_loss: 4.0770, fm_loss: 0.0102, perc_loss: 6.1985
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 41m 8s
d_loss: 1.3871, g_loss: 10.4142, const_loss: 0.0014, l1_loss: 3.3112, fm_loss: 0.0090, perc_loss: 6.3993
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 887.1s ---
LR Scheduler stepped. Current LR G: 0.000220, LR D: 0.000220
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 44m 24s
d_loss: 1.3872, g_loss: 10.6495, const_loss: 0.0015, l1_loss: 3.3381, fm_loss: 0.0095, perc_loss: 6.6071
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 48m 16s
d_loss: 1.3868, g_loss: 10.5704, const_loss: 0.0015, l1_loss: 3.5060, fm_loss: 0.0099, perc_loss: 6.3596
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 52m 4s
d_loss: 1.3873, g_loss: 10.1275, const_loss: 0.0018, l1_loss: 3.2437, fm_loss: 0.0088, perc_loss: 6.1798
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 55m 55s
d_loss: 1.3872, g_loss: 9.8626, const_loss: 0.0014, l1_loss: 3.1193, fm_loss: 0.0081, perc_loss: 6.0404
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 887.9s ---
LR Scheduler stepped. Current LR G: 0.000218, LR D: 0.000218
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 59m 12s
d_loss: 1.3872, g_loss: 10.6761, const_loss: 0.0015, l1_loss: 3.5504, fm_loss: 0.0097, perc_loss: 6.4211
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 3m 1s
d_loss: 1.3878, g_loss: 9.0355, const_loss: 0.0017, l1_loss: 2.8644, fm_loss: 0.0074, perc_loss: 5.4686
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 6m 48s
d_loss: 1.3871, g_loss: 11.1970, const_loss: 0.0013, l1_loss: 3.6594, fm_loss: 0.0100, perc_loss: 6.8329
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 10m 34s
d_loss: 1.3868, g_loss: 9.9450, const_loss: 0.0014, l1_loss: 3.1882, fm_loss: 0.0082, perc_loss: 6.0538
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 877.8s ---
LR Scheduler stepped. Current LR G: 0.000215, LR D: 0.000215
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 13m 49s
d_loss: 1.3870, g_loss: 10.0924, const_loss: 0.0014, l1_loss: 3.3102, fm_loss: 0.0082, perc_loss: 6.0792
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 17m 40s
d_loss: 1.3868, g_loss: 10.6460, const_loss: 0.0022, l1_loss: 3.5120, fm_loss: 0.0096, perc_loss: 6.4288
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 21m 32s
d_loss: 1.3870, g_loss: 10.4716, const_loss: 0.0016, l1_loss: 3.4670, fm_loss: 0.0090, perc_loss: 6.3006
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 25m 23s
d_loss: 1.3868, g_loss: 9.6084, const_loss: 0.0016, l1_loss: 3.0927, fm_loss: 0.0083, perc_loss: 5.8125
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 889.7s ---
LR Scheduler stepped. Current LR G: 0.000211, LR D: 0.000211
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 28m 39s
d_loss: 1.3868, g_loss: 10.4266, const_loss: 0.0020, l1_loss: 3.3728, fm_loss: 0.0095, perc_loss: 6.3489
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 32m 30s
d_loss: 1.3868, g_loss: 10.0187, const_loss: 0.0018, l1_loss: 3.1791, fm_loss: 0.0082, perc_loss: 6.1362
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 36m 22s
d_loss: 1.3871, g_loss: 10.6701, const_loss: 0.0017, l1_loss: 3.5373, fm_loss: 0.0096, perc_loss: 6.4281
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 40m 14s
d_loss: 1.3867, g_loss: 10.4381, const_loss: 0.0014, l1_loss: 3.3973, fm_loss: 0.0094, perc_loss: 6.3366
--- End of Epoch 6 --- Time: 888.6s ---
LR Scheduler stepped. Current LR G: 0.000207, LR D: 0.000207
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 43m 28s
d_loss: 1.3875, g_loss: 10.3419, const_loss: 0.0014, l1_loss: 3.2977, fm_loss: 0.0091, perc_loss: 6.3404
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 47m 19s
d_loss: 1.3868, g_loss: 9.5218, const_loss: 0.0018, l1_loss: 3.0657, fm_loss: 0.0080, perc_loss: 5.7530
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 51m 11s
d_loss: 1.3870, g_loss: 10.7111, const_loss: 0.0018, l1_loss: 3.6666, fm_loss: 0.0098, perc_loss: 6.3395
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 55m 2s
d_loss: 1.3869, g_loss: 10.8129, const_loss: 0.0018, l1_loss: 3.4905, fm_loss: 0.0093, perc_loss: 6.6180
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 891.5s ---
LR Scheduler stepped. Current LR G: 0.000202, LR D: 0.000202
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 58m 19s
d_loss: 1.3868, g_loss: 9.4400, const_loss: 0.0014, l1_loss: 3.0056, fm_loss: 0.0076, perc_loss: 5.7320
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 2h 2m 11s
d_loss: 1.3869, g_loss: 10.2767, const_loss: 0.0020, l1_loss: 3.2796, fm_loss: 0.0088, perc_loss: 6.2929
Checkpoint saved at step 3200
Epoch: [ 8], Batch: [ 200/ 385] | Total Time: 2h 6m 2s
d_loss: 1.3868, g_loss: 10.4518, const_loss: 0.0016, l1_loss: 3.4020, fm_loss: 0.0091, perc_loss: 6.3457
Checkpoint saved at step 3300
Epoch: [ 8], Batch: [ 300/ 385] | Total Time: 2h 9m 53s
d_loss: 1.3870, g_loss: 10.2984, const_loss: 0.0020, l1_loss: 3.3236, fm_loss: 0.0093, perc_loss: 6.2702
Checkpoint saved at step 3400
--- End of Epoch 8 --- Time: 889.6s ---
LR Scheduler stepped. Current LR G: 0.000196, LR D: 0.000196
Epoch: [ 9], Batch: [ 0/ 385] | Total Time: 2h 13m 9s
d_loss: 1.3868, g_loss: 10.4211, const_loss: 0.0019, l1_loss: 3.3250, fm_loss: 0.0085, perc_loss: 6.3924
Checkpoint saved at step 3500

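One detail visible in the resumed log below: checkpoints are written every 100 steps, but right after a restart the first window is skipped ("Checkpoint step 100 reached, but saving starts after step 200."), presumably so a fresh resume doesn't overwrite the loaded checkpoint with barely-retrained weights. The helper below is only a guess at that gating logic; the variable names `save_every` and `start_after` are assumptions, not the actual training script:

```python
def checkpoint_message(step, save_every=100, start_after=200):
    """Return the checkpoint log line this step would produce, or None.

    `save_every` / `start_after` mirror the messages in the log;
    the real script's names and logic may differ.
    """
    if step == 0 or step % save_every != 0:
        return None                      # not a checkpoint step
    if step < start_after:
        # too early after a resume: announce the skip instead of saving
        return (f"Checkpoint step {step} reached, "
                f"but saving starts after step {start_after}.")
    return f"Checkpoint saved at step {step}"
```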
Resumed training, from model 432 to 434:

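The premise of this run is that the model was adjusted and part of the previously trained weights were discarded. A common way to do that in PyTorch is to load only the checkpoint entries whose name and shape still match the new architecture, letting everything else keep its fresh initialization. The filter below is a framework-agnostic sketch (anything with a `.shape`, such as a torch tensor, works); it illustrates the idea and is not the author's actual code:

```python
def filter_matching(loaded, current):
    """Keep only checkpoint entries that still fit the model:
    same parameter name AND same tensor shape.
    Renamed or resized layers are dropped, so they keep
    their fresh initialization in the new model."""
    return {
        name: tensor
        for name, tensor in loaded.items()
        if name in current and tuple(tensor.shape) == tuple(current[name].shape)
    }
```

With PyTorch this would plug in roughly as `own = model.state_dict(); own.update(filter_matching(torch.load(path), own)); model.load_state_dict(own)` — similar in spirit to `load_state_dict(..., strict=False)`, plus a shape check so resized layers are also skipped.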
✅ Model 432 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 385] | Total Time: 4s
d_loss: 1.3867, g_loss: 9.8590, const_loss: 0.0015, l1_loss: 3.0977, fm_loss: 0.0074, perc_loss: 6.0590
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 385] | Total Time: 3m 41s
d_loss: 1.3868, g_loss: 9.4904, const_loss: 0.0011, l1_loss: 2.9342, fm_loss: 0.0072, perc_loss: 5.8545
Checkpoint saved at step 200
Epoch: [ 0], Batch: [ 200/ 385] | Total Time: 7m 23s
d_loss: 1.3867, g_loss: 9.1810, const_loss: 0.0010, l1_loss: 2.8742, fm_loss: 0.0068, perc_loss: 5.6056
Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 385] | Total Time: 11m 4s
d_loss: 1.3868, g_loss: 9.3278, const_loss: 0.0015, l1_loss: 2.8550, fm_loss: 0.0063, perc_loss: 5.7717
--- End of Epoch 0 --- Time: 847.2s ---
LR Scheduler stepped. Current LR G: 0.000205, LR D: 0.000205
Epoch: [ 1], Batch: [ 0/ 385] | Total Time: 14m 9s
d_loss: 1.3867, g_loss: 10.1734, const_loss: 0.0014, l1_loss: 3.3679, fm_loss: 0.0079, perc_loss: 6.1028
Checkpoint saved at step 400
Epoch: [ 1], Batch: [ 100/ 385] | Total Time: 17m 50s
d_loss: 1.3868, g_loss: 9.3136, const_loss: 0.0012, l1_loss: 2.9390, fm_loss: 0.0075, perc_loss: 5.6725
Checkpoint saved at step 500
Epoch: [ 1], Batch: [ 200/ 385] | Total Time: 21m 30s
d_loss: 1.3868, g_loss: 9.9407, const_loss: 0.0014, l1_loss: 3.0986, fm_loss: 0.0079, perc_loss: 6.1394
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 300/ 385] | Total Time: 25m 11s
d_loss: 1.3868, g_loss: 9.9876, const_loss: 0.0014, l1_loss: 3.2503, fm_loss: 0.0078, perc_loss: 6.0347
Checkpoint saved at step 700
--- End of Epoch 1 --- Time: 848.1s ---
LR Scheduler stepped. Current LR G: 0.000204, LR D: 0.000204
Epoch: [ 2], Batch: [ 0/ 385] | Total Time: 28m 17s
d_loss: 1.3869, g_loss: 9.9302, const_loss: 0.0015, l1_loss: 3.1552, fm_loss: 0.0078, perc_loss: 6.0723
Checkpoint saved at step 800
Epoch: [ 2], Batch: [ 100/ 385] | Total Time: 31m 58s
d_loss: 1.3869, g_loss: 9.8794, const_loss: 0.0015, l1_loss: 3.1528, fm_loss: 0.0082, perc_loss: 6.0235
Checkpoint saved at step 900
Epoch: [ 2], Batch: [ 200/ 385] | Total Time: 35m 39s
d_loss: 1.3869, g_loss: 9.8318, const_loss: 0.0013, l1_loss: 3.0752, fm_loss: 0.0077, perc_loss: 6.0543
Checkpoint saved at step 1000
Epoch: [ 2], Batch: [ 300/ 385] | Total Time: 39m 20s
d_loss: 1.3868, g_loss: 9.6339, const_loss: 0.0013, l1_loss: 3.0133, fm_loss: 0.0072, perc_loss: 5.9188
Checkpoint saved at step 1100
--- End of Epoch 2 --- Time: 849.4s ---
LR Scheduler stepped. Current LR G: 0.000202, LR D: 0.000202
Epoch: [ 3], Batch: [ 0/ 385] | Total Time: 42m 26s
d_loss: 1.3867, g_loss: 9.7536, const_loss: 0.0014, l1_loss: 3.0074, fm_loss: 0.0073, perc_loss: 6.0442
Checkpoint saved at step 1200
Epoch: [ 3], Batch: [ 100/ 385] | Total Time: 46m 7s
d_loss: 1.3878, g_loss: 9.5165, const_loss: 0.0017, l1_loss: 3.0445, fm_loss: 0.0072, perc_loss: 5.7697
Checkpoint saved at step 1300
Epoch: [ 3], Batch: [ 200/ 385] | Total Time: 49m 47s
d_loss: 1.3897, g_loss: 9.4583, const_loss: 0.0013, l1_loss: 2.9167, fm_loss: 0.0068, perc_loss: 5.8401
Checkpoint saved at step 1400
Epoch: [ 3], Batch: [ 300/ 385] | Total Time: 53m 28s
d_loss: 1.3870, g_loss: 9.6047, const_loss: 0.0013, l1_loss: 2.9807, fm_loss: 0.0074, perc_loss: 5.9220
Checkpoint saved at step 1500
--- End of Epoch 3 --- Time: 849.8s ---
LR Scheduler stepped. Current LR G: 0.000200, LR D: 0.000200
Epoch: [ 4], Batch: [ 0/ 385] | Total Time: 56m 36s
d_loss: 1.3867, g_loss: 9.6306, const_loss: 0.0012, l1_loss: 3.0187, fm_loss: 0.0075, perc_loss: 5.9098
Checkpoint saved at step 1600
Epoch: [ 4], Batch: [ 100/ 385] | Total Time: 1h 16s
d_loss: 1.3869, g_loss: 9.2052, const_loss: 0.0013, l1_loss: 2.8032, fm_loss: 0.0069, perc_loss: 5.7004
Checkpoint saved at step 1700
Epoch: [ 4], Batch: [ 200/ 385] | Total Time: 1h 3m 57s
d_loss: 1.3868, g_loss: 10.1062, const_loss: 0.0013, l1_loss: 3.1800, fm_loss: 0.0086, perc_loss: 6.2229
Checkpoint saved at step 1800
Epoch: [ 4], Batch: [ 300/ 385] | Total Time: 1h 7m 37s
d_loss: 1.3868, g_loss: 10.2734, const_loss: 0.0011, l1_loss: 3.2805, fm_loss: 0.0088, perc_loss: 6.2896
Checkpoint saved at step 1900
--- End of Epoch 4 --- Time: 847.3s ---
LR Scheduler stepped. Current LR G: 0.000197, LR D: 0.000197
Epoch: [ 5], Batch: [ 0/ 385] | Total Time: 1h 10m 43s
d_loss: 1.3868, g_loss: 9.8831, const_loss: 0.0013, l1_loss: 3.0593, fm_loss: 0.0078, perc_loss: 6.1214
Checkpoint saved at step 2000
Epoch: [ 5], Batch: [ 100/ 385] | Total Time: 1h 14m 24s
d_loss: 1.3868, g_loss: 10.2434, const_loss: 0.0016, l1_loss: 3.2768, fm_loss: 0.0087, perc_loss: 6.2629
Checkpoint saved at step 2100
Epoch: [ 5], Batch: [ 200/ 385] | Total Time: 1h 18m 5s
d_loss: 1.3867, g_loss: 9.5913, const_loss: 0.0012, l1_loss: 3.0177, fm_loss: 0.0075, perc_loss: 5.8715
Checkpoint saved at step 2200
Epoch: [ 5], Batch: [ 300/ 385] | Total Time: 1h 21m 45s
d_loss: 1.3868, g_loss: 9.5619, const_loss: 0.0016, l1_loss: 3.0135, fm_loss: 0.0073, perc_loss: 5.8462
Checkpoint saved at step 2300
--- End of Epoch 5 --- Time: 847.6s ---
LR Scheduler stepped. Current LR G: 0.000194, LR D: 0.000194
Epoch: [ 6], Batch: [ 0/ 385] | Total Time: 1h 24m 51s
d_loss: 1.3868, g_loss: 10.0363, const_loss: 0.0009, l1_loss: 3.1111, fm_loss: 0.0077, perc_loss: 6.2231
Checkpoint saved at step 2400
Epoch: [ 6], Batch: [ 100/ 385] | Total Time: 1h 28m 32s
d_loss: 1.3868, g_loss: 9.8891, const_loss: 0.0016, l1_loss: 3.1358, fm_loss: 0.0076, perc_loss: 6.0508
Checkpoint saved at step 2500
Epoch: [ 6], Batch: [ 200/ 385] | Total Time: 1h 32m 12s
d_loss: 1.3871, g_loss: 9.7750, const_loss: 0.0010, l1_loss: 3.0323, fm_loss: 0.0073, perc_loss: 6.0411
Checkpoint saved at step 2600
Epoch: [ 6], Batch: [ 300/ 385] | Total Time: 1h 35m 53s
d_loss: 1.3870, g_loss: 9.2562, const_loss: 0.0015, l1_loss: 2.8947, fm_loss: 0.0071, perc_loss: 5.6595
--- End of Epoch 6 --- Time: 846.9s ---
LR Scheduler stepped. Current LR G: 0.000190, LR D: 0.000190
Epoch: [ 7], Batch: [ 0/ 385] | Total Time: 1h 38m 58s
d_loss: 1.3868, g_loss: 9.5897, const_loss: 0.0012, l1_loss: 2.9252, fm_loss: 0.0072, perc_loss: 5.9628
Checkpoint saved at step 2700
Epoch: [ 7], Batch: [ 100/ 385] | Total Time: 1h 42m 38s
d_loss: 1.3868, g_loss: 8.9969, const_loss: 0.0014, l1_loss: 2.7996, fm_loss: 0.0068, perc_loss: 5.4957
Checkpoint saved at step 2800
Epoch: [ 7], Batch: [ 200/ 385] | Total Time: 1h 46m 19s
d_loss: 1.3868, g_loss: 10.4045, const_loss: 0.0013, l1_loss: 3.3352, fm_loss: 0.0081, perc_loss: 6.3666
Checkpoint saved at step 2900
Epoch: [ 7], Batch: [ 300/ 385] | Total Time: 1h 49m 59s
d_loss: 1.3870, g_loss: 10.3706, const_loss: 0.0011, l1_loss: 3.3612, fm_loss: 0.0081, perc_loss: 6.3068
Checkpoint saved at step 3000
--- End of Epoch 7 --- Time: 847.1s ---
LR Scheduler stepped. Current LR G: 0.000186, LR D: 0.000186
Epoch: [ 8], Batch: [ 0/ 385] | Total Time: 1h 53m 5s
d_loss: 1.3868, g_loss: 9.2529, const_loss: 0.0018, l1_loss: 2.8736, fm_loss: 0.0068, perc_loss: 5.6773
Checkpoint saved at step 3100
Epoch: [ 8], Batch: [ 100/ 385] | Total Time: 1h 56m 48s
d_loss: 1.3868, g_loss: 9.4293, const_loss: 0.0016, l1_loss: 2.9409, fm_loss: 0.0071, perc_loss: 5.7863
Checkpoint saved at step 3200
Epoch: [ 8], Batch: [ 200/ 385] | Total Time: 2h 28s
d_loss: 1.3868, g_loss: 10.3314, const_loss: 0.0015, l1_loss: 3.2890, fm_loss: 0.0082, perc_loss: 6.3394
Checkpoint saved at step 3300
Epoch: [ 8], Batch: [ 300/ 385] | Total Time: 2h 4m 9s
d_loss: 1.3873, g_loss: 9.3449, const_loss: 0.0014, l1_loss: 2.9054, fm_loss: 0.0067, perc_loss: 5.7380
Checkpoint saved at step 3400
--- End of Epoch 8 --- Time: 849.8s ---
LR Scheduler stepped. Current LR G: 0.000181, LR D: 0.000181
Epoch: [ 9], Batch: [ 0/ 385] | Total Time: 2h 7m 15s
d_loss: 1.3879, g_loss: 9.4634, const_loss: 0.0015, l1_loss: 2.9875, fm_loss: 0.0073, perc_loss: 5.7738
Checkpoint saved at step 3500
Epoch: [ 9], Batch: [ 100/ 385] | Total Time: 2h 10m 55s
d_loss: 1.3868, g_loss: 10.3298, const_loss: 0.0014, l1_loss: 3.2726, fm_loss: 0.0081, perc_loss: 6.3544
Checkpoint saved at step 3600
Epoch: [ 9], Batch: [ 200/ 385] | Total Time: 2h 14m 36s
d_loss: 1.3869, g_loss: 9.3511, const_loss: 0.0014, l1_loss: 2.8571, fm_loss: 0.0072, perc_loss: 5.7921
Checkpoint saved at step 3700
Epoch: [ 9], Batch: [ 300/ 385] | Total Time: 2h 18m 20s
d_loss: 1.3867, g_loss: 9.3757, const_loss: 0.0013, l1_loss: 2.9166, fm_loss: 0.0071, perc_loss: 5.7573
Checkpoint saved at step 3800
--- End of Epoch 9 --- Time: 857.5s ---
LR Scheduler stepped. Current LR G: 0.000175, LR D: 0.000175
Epoch: [ 10], Batch: [ 0/ 385] | Total Time: 2h 21m 32s
d_loss: 1.3868, g_loss: 8.8145, const_loss: 0.0015, l1_loss: 2.7121, fm_loss: 0.0061, perc_loss: 5.4014
Checkpoint saved at step 3900
Epoch: [ 10], Batch: [ 100/ 385] | Total Time: 2h 25m 18s
d_loss: 1.3871, g_loss: 10.8135, const_loss: 0.0018, l1_loss: 3.5252, fm_loss: 0.0087, perc_loss: 6.5844
Checkpoint saved at step 4000
Epoch: [ 10], Batch: [ 200/ 385] | Total Time: 2h 29m 3s
d_loss: 1.3870, g_loss: 9.6735, const_loss: 0.0016, l1_loss: 3.0885, fm_loss: 0.0069, perc_loss: 5.8832
Checkpoint saved at step 4100
Epoch: [ 10], Batch: [ 300/ 385] | Total Time: 2h 32m 49s
d_loss: 1.3868, g_loss: 9.8745, const_loss: 0.0016, l1_loss: 3.0679, fm_loss: 0.0075, perc_loss: 6.1042
Checkpoint saved at step 4200
--- End of Epoch 10 --- Time: 867.9s ---
LR Scheduler stepped. Current LR G: 0.000169, LR D: 0.000169
Epoch: [ 11], Batch: [ 0/ 385] | Total Time: 2h 36m 0s
d_loss: 1.3879, g_loss: 9.5449, const_loss: 0.0016, l1_loss: 3.0693, fm_loss: 0.0074, perc_loss: 5.7733
Checkpoint saved at step 4300
Epoch: [ 11], Batch: [ 100/ 385] | Total Time: 2h 39m 47s
d_loss: 1.3868, g_loss: 9.1393, const_loss: 0.0016, l1_loss: 2.7804, fm_loss: 0.0066, perc_loss: 5.6573
Checkpoint saved at step 4400
Epoch: [ 11], Batch: [ 200/ 385] | Total Time: 2h 43m 32s
d_loss: 1.3869, g_loss: 9.4408, const_loss: 0.0012, l1_loss: 2.9018, fm_loss: 0.0068, perc_loss: 5.8377
Checkpoint saved at step 4500
Epoch: [ 11], Batch: [ 300/ 385] | Total Time: 2h 47m 19s
d_loss: 1.3868, g_loss: 9.4162, const_loss: 0.0016, l1_loss: 2.8965, fm_loss: 0.0064, perc_loss: 5.8184
Checkpoint saved at step 4600
--- End of Epoch 11 --- Time: 871.2s ---
LR Scheduler stepped. Current LR G: 0.000163, LR D: 0.000163
Epoch: [ 12], Batch: [ 0/ 385] | Total Time: 2h 50m 31s
d_loss: 1.3870, g_loss: 10.2738, const_loss: 0.0011, l1_loss: 3.3422, fm_loss: 0.0080, perc_loss: 6.2292
Checkpoint saved at step 4700
Epoch: [ 12], Batch: [ 100/ 385] | Total Time: 2h 54m 14s
d_loss: 1.3870, g_loss: 9.5246, const_loss: 0.0016, l1_loss: 3.0449, fm_loss: 0.0070, perc_loss: 5.7778
Checkpoint saved at step 4800
Epoch: [ 12], Batch: [ 200/ 385] | Total Time: 2h 58m 0s
d_loss: 1.3867, g_loss: 9.6968, const_loss: 0.0015, l1_loss: 3.0424, fm_loss: 0.0072, perc_loss: 5.9524
Checkpoint saved at step 4900
Epoch: [ 12], Batch: [ 300/ 385] | Total Time: 3h 1m 46s
d_loss: 1.3868, g_loss: 8.6468, const_loss: 0.0016, l1_loss: 2.6345, fm_loss: 0.0059, perc_loss: 5.3115
Checkpoint saved at step 5000
--- End of Epoch 12 --- Time: 860.4s ---
LR Scheduler stepped. Current LR G: 0.000156, LR D: 0.000156
Epoch: [ 13], Batch: [ 0/ 385] | Total Time: 3h 4m 52s
d_loss: 1.3871, g_loss: 9.1874, const_loss: 0.0013, l1_loss: 2.8849, fm_loss: 0.0068, perc_loss: 5.6010
Checkpoint saved at step 5100
Epoch: [ 13], Batch: [ 100/ 385] | Total Time: 3h 8m 32s
d_loss: 1.3868, g_loss: 9.4256, const_loss: 0.0016, l1_loss: 2.9694, fm_loss: 0.0069, perc_loss: 5.7543
Checkpoint saved at step 5200
Epoch: [ 13], Batch: [ 200/ 385] | Total Time: 3h 12m 13s
d_loss: 1.3868, g_loss: 10.0006, const_loss: 0.0015, l1_loss: 3.1309, fm_loss: 0.0073, perc_loss: 6.1675
Checkpoint saved at step 5300
Epoch: [ 13], Batch: [ 300/ 385] | Total Time: 3h 15m 54s
d_loss: 1.3867, g_loss: 9.3285, const_loss: 0.0015, l1_loss: 2.9032, fm_loss: 0.0070, perc_loss: 5.7234
--- End of Epoch 13 --- Time: 847.1s ---
LR Scheduler stepped. Current LR G: 0.000149, LR D: 0.000149
Epoch: [ 14], Batch: [ 0/ 385] | Total Time: 3h 18m 59s
d_loss: 1.3869, g_loss: 9.7788, const_loss: 0.0012, l1_loss: 3.0766, fm_loss: 0.0072, perc_loss: 6.0005
Checkpoint saved at step 5400
