Switched models again; jotting down the training logs as I go.
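Since these notes are mostly raw loss printouts, a tiny parser makes later comparisons easier. A minimal sketch, assuming only the log-line format pasted below (the regex mirrors those lines; nothing here comes from the repo itself):

```python
import re

# Named groups mirror the loss fields printed every 100 batches in the logs below.
LOG_LINE = re.compile(
    r"d_loss: (?P<d>[\d.]+), g_loss: (?P<g>[\d.]+), const_loss: (?P<const>[\d.]+), "
    r"l1_loss: (?P<l1>[\d.]+), fm_loss: (?P<fm>[\d.]+), perc_loss: (?P<perc>[\d.]+), "
    r"edge: (?P<edge>[\d.]+)"
)

def parse_losses(text):
    """Extract each loss record from a pasted training log as a dict of floats."""
    return [
        {k: float(v) for k, v in m.groupdict().items()}
        for m in LOG_LINE.finditer(text)
    ]

# One line copied verbatim from the first-round log:
sample = ("d_loss: 1.3916, g_loss: 128.3826, const_loss: 0.0035, "
          "l1_loss: 106.7848, fm_loss: 0.1838, perc_loss: 20.1327, edge: 0.5826")
records = parse_losses(sample)
```

Feeding a whole pasted block through `parse_losses` gives a list ready for plotting g_loss over steps.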
First-round figure:

First-round log:
Epoch: [ 0], Batch: [ 0/ 539] | Total Time: 2s
d_loss: 1.3916, g_loss: 128.3826, const_loss: 0.0035, l1_loss: 106.7848, fm_loss: 0.1838, perc_loss: 20.1327, edge: 0.5826
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 539] | Total Time: 3m 20s
d_loss: 1.3882, g_loss: 68.7826, const_loss: 0.0007, l1_loss: 49.9586, fm_loss: 0.0577, perc_loss: 17.7509, edge: 0.3214
Checkpoint saved at step 200
Epoch: [ 0], Batch: [ 200/ 539] | Total Time: 6m 47s
d_loss: 1.3869, g_loss: 63.3113, const_loss: 0.0015, l1_loss: 47.0153, fm_loss: 0.0423, perc_loss: 15.2588, edge: 0.3005
Checkpoint saved at step 300
Epoch: [ 0], Batch: [ 300/ 539] | Total Time: 10m 10s
d_loss: 1.3869, g_loss: 60.5318, const_loss: 0.0026, l1_loss: 43.4512, fm_loss: 0.0418, perc_loss: 16.0189, edge: 0.3239
Checkpoint saved at step 400
Epoch: [ 0], Batch: [ 400/ 539] | Total Time: 13m 33s
d_loss: 1.3868, g_loss: 48.3982, const_loss: 0.0046, l1_loss: 34.7729, fm_loss: 0.0286, perc_loss: 12.6314, edge: 0.2678
Checkpoint saved at step 500
Epoch: [ 0], Batch: [ 500/ 539] | Total Time: 16m 55s
d_loss: 1.3868, g_loss: 44.5594, const_loss: 0.0026, l1_loss: 32.0873, fm_loss: 0.0266, perc_loss: 11.4887, edge: 0.2609
--- End of Epoch 0 --- Time: 1090.0s ---
LR Scheduler stepped. Current LR G: 0.000255, LR D: 0.000255
Epoch: [ 1], Batch: [ 0/ 539] | Total Time: 18m 12s
d_loss: 1.3878, g_loss: 44.0807, const_loss: 0.0030, l1_loss: 31.3945, fm_loss: 0.0220, perc_loss: 11.7074, edge: 0.2609
Checkpoint saved at step 600
Epoch: [ 1], Batch: [ 100/ 539] | Total Time: 21m 33s
d_loss: 1.3875, g_loss: 41.6850, const_loss: 0.0034, l1_loss: 30.0054, fm_loss: 0.0252, perc_loss: 10.7220, edge: 0.2356
Checkpoint saved at step 700
Epoch: [ 1], Batch: [ 200/ 539] | Total Time: 24m 55s
d_loss: 1.3870, g_loss: 41.6490, const_loss: 0.0044, l1_loss: 29.5475, fm_loss: 0.0221, perc_loss: 11.1428, edge: 0.2387
Checkpoint saved at step 800
Epoch: [ 1], Batch: [ 300/ 539] | Total Time: 28m 17s
d_loss: 1.3868, g_loss: 38.7753, const_loss: 0.0031, l1_loss: 27.8523, fm_loss: 0.0197, perc_loss: 9.9859, edge: 0.2211
Checkpoint saved at step 900
Epoch: [ 1], Batch: [ 400/ 539] | Total Time: 31m 40s
d_loss: 1.3869, g_loss: 38.9029, const_loss: 0.0040, l1_loss: 27.4882, fm_loss: 0.0200, perc_loss: 10.4687, edge: 0.2286
Checkpoint saved at step 1000
Epoch: [ 1], Batch: [ 500/ 539] | Total Time: 35m 2s
d_loss: 1.3885, g_loss: 38.2001, const_loss: 0.0035, l1_loss: 26.6255, fm_loss: 0.0192, perc_loss: 10.6098, edge: 0.2488
--- End of Epoch 1 --- Time: 1086.4s ---
LR Scheduler stepped. Current LR G: 0.000253, LR D: 0.000253
Epoch: [ 2], Batch: [ 0/ 539] | Total Time: 36m 18s
d_loss: 1.3923, g_loss: 37.2597, const_loss: 0.0040, l1_loss: 26.2508, fm_loss: 0.0195, perc_loss: 10.0646, edge: 0.2280
Checkpoint saved at step 1100
Epoch: [ 2], Batch: [ 100/ 539] | Total Time: 39m 40s
d_loss: 1.3868, g_loss: 34.7166, const_loss: 0.0037, l1_loss: 24.5339, fm_loss: 0.0163, perc_loss: 9.2540, edge: 0.2158
Checkpoint saved at step 1200
Epoch: [ 2], Batch: [ 200/ 539] | Total Time: 43m 1s
d_loss: 1.3868, g_loss: 35.6725, const_loss: 0.0041, l1_loss: 24.8662, fm_loss: 0.0180, perc_loss: 9.8595, edge: 0.2318
Checkpoint saved at step 1300
Epoch: [ 2], Batch: [ 300/ 539] | Total Time: 46m 25s
d_loss: 1.3868, g_loss: 32.5751, const_loss: 0.0035, l1_loss: 23.2021, fm_loss: 0.0147, perc_loss: 8.4634, edge: 0.1981
Checkpoint saved at step 1400
Epoch: [ 2], Batch: [ 400/ 539] | Total Time: 49m 47s
d_loss: 1.3868, g_loss: 31.8499, const_loss: 0.0030, l1_loss: 22.5475, fm_loss: 0.0135, perc_loss: 8.3929, edge: 0.2002
Checkpoint saved at step 1500
Epoch: [ 2], Batch: [ 500/ 539] | Total Time: 53m 9s
d_loss: 1.3870, g_loss: 33.2113, const_loss: 0.0026, l1_loss: 23.2373, fm_loss: 0.0155, perc_loss: 9.0461, edge: 0.2169
Checkpoint saved at step 1600
--- End of Epoch 2 --- Time: 1089.9s ---
LR Scheduler stepped. Current LR G: 0.000251, LR D: 0.000251
Epoch: [ 3], Batch: [ 0/ 539] | Total Time: 54m 28s
d_loss: 1.3934, g_loss: 34.9374, const_loss: 0.0022, l1_loss: 24.0809, fm_loss: 0.0160, perc_loss: 9.9140, edge: 0.2315
Checkpoint saved at step 1700
Epoch: [ 3], Batch: [ 100/ 539] | Total Time: 57m 49s
d_loss: 1.3869, g_loss: 31.1795, const_loss: 0.0029, l1_loss: 21.9099, fm_loss: 0.0130, perc_loss: 8.3583, edge: 0.2026
Checkpoint saved at step 1800
Epoch: [ 3], Batch: [ 200/ 539] | Total Time: 1h 1m 11s
d_loss: 1.3878, g_loss: 31.3780, const_loss: 0.0029, l1_loss: 22.1058, fm_loss: 0.0133, perc_loss: 8.3594, edge: 0.2037
Checkpoint saved at step 1900
Epoch: [ 3], Batch: [ 300/ 539] | Total Time: 1h 4m 33s
d_loss: 1.3869, g_loss: 32.9926, const_loss: 0.0026, l1_loss: 22.6001, fm_loss: 0.0147, perc_loss: 9.4651, edge: 0.2172
Checkpoint saved at step 2000
Epoch: [ 3], Batch: [ 400/ 539] | Total Time: 1h 7m 55s
d_loss: 1.3868, g_loss: 30.4508, const_loss: 0.0029, l1_loss: 21.3316, fm_loss: 0.0130, perc_loss: 8.2086, edge: 0.2019
Checkpoint saved at step 2100
Epoch: [ 3], Batch: [ 500/ 539] | Total Time: 1h 11m 17s
d_loss: 1.3872, g_loss: 30.6283, const_loss: 0.0020, l1_loss: 21.2892, fm_loss: 0.0119, perc_loss: 8.4331, edge: 0.1992
--- End of Epoch 3 --- Time: 1085.6s ---
LR Scheduler stepped. Current LR G: 0.000249, LR D: 0.000249
Epoch: [ 4], Batch: [ 0/ 539] | Total Time: 1h 12m 33s
d_loss: 1.3870, g_loss: 31.8877, const_loss: 0.0021, l1_loss: 22.0329, fm_loss: 0.0151, perc_loss: 8.9388, edge: 0.2059
Checkpoint saved at step 2200
Epoch: [ 4], Batch: [ 100/ 539] | Total Time: 1h 15m 54s
d_loss: 1.3875, g_loss: 28.9344, const_loss: 0.0026, l1_loss: 20.3851, fm_loss: 0.0124, perc_loss: 7.6579, edge: 0.1835
Checkpoint saved at step 2300
Epoch: [ 4], Batch: [ 200/ 539] | Total Time: 1h 19m 15s
d_loss: 1.3871, g_loss: 31.7046, const_loss: 0.0031, l1_loss: 21.8693, fm_loss: 0.0127, perc_loss: 8.9188, edge: 0.2078
Checkpoint saved at step 2400
Epoch: [ 4], Batch: [ 300/ 539] | Total Time: 1h 22m 35s
d_loss: 1.3868, g_loss: 28.3571, const_loss: 0.0022, l1_loss: 19.8277, fm_loss: 0.0114, perc_loss: 7.6308, edge: 0.1922
Checkpoint saved at step 2500
Epoch: [ 4], Batch: [ 400/ 539] | Total Time: 1h 25m 57s
d_loss: 1.3872, g_loss: 28.2590, const_loss: 0.0020, l1_loss: 19.7076, fm_loss: 0.0107, perc_loss: 7.6521, edge: 0.1938
Checkpoint saved at step 2600
Epoch: [ 4], Batch: [ 500/ 539] | Total Time: 1h 29m 18s
d_loss: 1.3867, g_loss: 29.8049, const_loss: 0.0031, l1_loss: 20.6670, fm_loss: 0.0122, perc_loss: 8.2178, edge: 0.2121
--- End of Epoch 4 --- Time: 1080.9s ---
LR Scheduler stepped. Current LR G: 0.000245, LR D: 0.000245
Epoch: [ 5], Batch: [ 0/ 539] | Total Time: 1h 30m 34s
d_loss: 1.3937, g_loss: 27.9783, const_loss: 0.0023, l1_loss: 19.5465, fm_loss: 0.0106, perc_loss: 7.5408, edge: 0.1853
Checkpoint saved at step 2700
From checkpoint 140 to 142.
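The runs from here on resume from earlier checkpoints (the log later prints "Model 144 loaded successfully" and restarts the epoch counter at 0). A minimal round-trip sketch of the save/load flow this implies; the dict keys and helper names are assumptions for illustration, not zi2zi-pytorch's actual checkpoint format:

```python
import tempfile
import torch
from torch import nn, optim

# Tiny stand-in networks; the real models are the UNet generator/discriminator.
netG, netD = nn.Linear(4, 4), nn.Linear(4, 1)
opt_G = optim.Adam(netG.parameters(), lr=2.56e-4)
opt_D = optim.Adam(netD.parameters(), lr=2.56e-4)

def save_checkpoint(path, step):
    # Bundle weights and optimizer state; "step" lets a resumed run know
    # where the previous one stopped, even though the epoch counter restarts.
    torch.save({
        "step": step,
        "netG": netG.state_dict(), "netD": netD.state_dict(),
        "opt_G": opt_G.state_dict(), "opt_D": opt_D.state_dict(),
    }, path)

def load_checkpoint(path):
    ckpt = torch.load(path, map_location="cpu")
    netG.load_state_dict(ckpt["netG"])
    netD.load_state_dict(ckpt["netD"])
    opt_G.load_state_dict(ckpt["opt_G"])
    opt_D.load_state_dict(ckpt["opt_D"])
    return ckpt["step"]

with tempfile.NamedTemporaryFile(suffix=".pth") as f:
    save_checkpoint(f.name, step=2700)
    step = load_checkpoint(f.name)
```

Note how this matches the log: only weights and optimizer state carry over; the epoch loop itself starts again at 0, which is why each resumed run prints "Epoch: [ 0]".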

log:
Epoch: [ 0], Batch: [ 0/ 539] | Total Time: 4s
d_loss: 1.3868, g_loss: 27.6352, const_loss: 0.0018, l1_loss: 19.2288, fm_loss: 0.0109, perc_loss: 7.5069, edge: 0.1939
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 539] | Total Time: 3m 27s
d_loss: 1.3875, g_loss: 27.4069, const_loss: 0.0019, l1_loss: 19.1377, fm_loss: 0.0106, perc_loss: 7.3862, edge: 0.1776
Epoch: [ 0], Batch: [ 200/ 539] | Total Time: 6m 57s
d_loss: 1.3868, g_loss: 25.8526, const_loss: 0.0022, l1_loss: 18.0339, fm_loss: 0.0097, perc_loss: 6.9374, edge: 0.1765
Epoch: [ 0], Batch: [ 300/ 539] | Total Time: 10m 28s
d_loss: 1.3868, g_loss: 29.6243, const_loss: 0.0026, l1_loss: 20.2513, fm_loss: 0.0116, perc_loss: 8.4587, edge: 0.2073
Epoch: [ 0], Batch: [ 400/ 539] | Total Time: 13m 59s
d_loss: 1.3871, g_loss: 27.9067, const_loss: 0.0023, l1_loss: 19.3014, fm_loss: 0.0115, perc_loss: 7.6972, edge: 0.2015
Epoch: [ 0], Batch: [ 500/ 539] | Total Time: 17m 30s
d_loss: 1.3868, g_loss: 26.1496, const_loss: 0.0017, l1_loss: 18.1285, fm_loss: 0.0101, perc_loss: 7.1305, edge: 0.1860
--- End of Epoch 0 --- Time: 1127.9s ---
LR Scheduler stepped. Current LR G: 0.000249, LR D: 0.000249
Epoch: [ 1], Batch: [ 0/ 539] | Total Time: 18m 49s
d_loss: 1.3878, g_loss: 29.4611, const_loss: 0.0019, l1_loss: 20.1881, fm_loss: 0.0128, perc_loss: 8.3535, edge: 0.2120
Epoch: [ 1], Batch: [ 100/ 539] | Total Time: 22m 20s
d_loss: 1.3874, g_loss: 26.7745, const_loss: 0.0020, l1_loss: 18.5706, fm_loss: 0.0109, perc_loss: 7.3224, edge: 0.1758
Epoch: [ 1], Batch: [ 200/ 539] | Total Time: 25m 52s
d_loss: 1.3871, g_loss: 28.0098, const_loss: 0.0018, l1_loss: 19.4016, fm_loss: 0.0117, perc_loss: 7.7032, edge: 0.1987
Epoch: [ 1], Batch: [ 300/ 539] | Total Time: 29m 23s
d_loss: 1.3869, g_loss: 26.4153, const_loss: 0.0017, l1_loss: 18.3735, fm_loss: 0.0102, perc_loss: 7.1510, edge: 0.1860
Epoch: [ 1], Batch: [ 400/ 539] | Total Time: 32m 53s
d_loss: 1.3869, g_loss: 28.0377, const_loss: 0.0019, l1_loss: 19.4240, fm_loss: 0.0120, perc_loss: 7.7203, edge: 0.1866
Epoch: [ 1], Batch: [ 500/ 539] | Total Time: 36m 23s
d_loss: 1.3878, g_loss: 28.4380, const_loss: 0.0020, l1_loss: 19.6095, fm_loss: 0.0108, perc_loss: 7.9258, edge: 0.1971
--- End of Epoch 1 --- Time: 1133.1s ---
LR Scheduler stepped. Current LR G: 0.000247, LR D: 0.000247
Epoch: [ 2], Batch: [ 0/ 539] | Total Time: 37m 43s
d_loss: 1.3915, g_loss: 27.0562, const_loss: 0.0023, l1_loss: 18.7806, fm_loss: 0.0116, perc_loss: 7.3788, edge: 0.1901
Epoch: [ 2], Batch: [ 100/ 539] | Total Time: 41m 15s
d_loss: 1.3868, g_loss: 25.2828, const_loss: 0.0023, l1_loss: 17.6563, fm_loss: 0.0103, perc_loss: 6.7453, edge: 0.1758
Epoch: [ 2], Batch: [ 200/ 539] | Total Time: 44m 45s
d_loss: 1.3868, g_loss: 27.2939, const_loss: 0.0021, l1_loss: 18.8207, fm_loss: 0.0110, perc_loss: 7.5753, edge: 0.1918
Epoch: [ 2], Batch: [ 300/ 539] | Total Time: 48m 15s
d_loss: 1.3868, g_loss: 25.2481, const_loss: 0.0020, l1_loss: 17.7048, fm_loss: 0.0098, perc_loss: 6.6672, edge: 0.1714
Epoch: [ 2], Batch: [ 400/ 539] | Total Time: 51m 45s
d_loss: 1.3868, g_loss: 28.1506, const_loss: 0.0020, l1_loss: 19.3448, fm_loss: 0.0107, perc_loss: 7.9120, edge: 0.1882
Epoch: [ 2], Batch: [ 500/ 539] | Total Time: 55m 18s
d_loss: 1.3867, g_loss: 27.3616, const_loss: 0.0018, l1_loss: 18.9626, fm_loss: 0.0113, perc_loss: 7.5080, edge: 0.1850
--- End of Epoch 2 --- Time: 1137.7s ---
LR Scheduler stepped. Current LR G: 0.000246, LR D: 0.000246
Epoch: [ 3], Batch: [ 0/ 539] | Total Time: 56m 40s
d_loss: 1.3913, g_loss: 27.2776, const_loss: 0.0017, l1_loss: 18.7670, fm_loss: 0.0109, perc_loss: 7.6206, edge: 0.1845
Epoch: [ 3], Batch: [ 100/ 539] | Total Time: 1h 10s
d_loss: 1.3868, g_loss: 25.2849, const_loss: 0.0019, l1_loss: 17.6726, fm_loss: 0.0096, perc_loss: 6.7462, edge: 0.1617
Epoch: [ 3], Batch: [ 200/ 539] | Total Time: 1h 3m 42s
d_loss: 1.3872, g_loss: 25.7254, const_loss: 0.0015, l1_loss: 18.1201, fm_loss: 0.0107, perc_loss: 6.7239, edge: 0.1763
Epoch: [ 3], Batch: [ 300/ 539] | Total Time: 1h 7m 14s
d_loss: 1.3869, g_loss: 27.2495, const_loss: 0.0019, l1_loss: 18.8104, fm_loss: 0.0109, perc_loss: 7.5405, edge: 0.1929
Epoch: [ 3], Batch: [ 400/ 539] | Total Time: 1h 10m 44s
d_loss: 1.3870, g_loss: 25.7719, const_loss: 0.0018, l1_loss: 18.0369, fm_loss: 0.0100, perc_loss: 6.8667, edge: 0.1636
From checkpoint 144 to 146.
Model 144 loaded successfully
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 539] | Total Time: 2s
d_loss: 1.3873, g_loss: 24.7857, const_loss: 0.0011, l1_loss: 17.3995, fm_loss: 0.0086, perc_loss: 6.5231, edge: 0.1606
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 539] | Total Time: 3m 19s
d_loss: 1.3881, g_loss: 23.7409, const_loss: 0.0011, l1_loss: 16.7564, fm_loss: 0.0082, perc_loss: 6.1343, edge: 0.1480
Epoch: [ 0], Batch: [ 200/ 539] | Total Time: 6m 39s
d_loss: 1.3871, g_loss: 22.6453, const_loss: 0.0015, l1_loss: 15.8455, fm_loss: 0.0079, perc_loss: 5.9467, edge: 0.1508
Epoch: [ 0], Batch: [ 300/ 539] | Total Time: 9m 58s
d_loss: 1.3868, g_loss: 26.4910, const_loss: 0.0015, l1_loss: 18.3494, fm_loss: 0.0093, perc_loss: 7.2675, edge: 0.1705
Epoch: [ 0], Batch: [ 400/ 539] | Total Time: 13m 17s
d_loss: 1.3868, g_loss: 24.1711, const_loss: 0.0015, l1_loss: 16.9603, fm_loss: 0.0088, perc_loss: 6.3485, edge: 0.1593
Epoch: [ 0], Batch: [ 500/ 539] | Total Time: 16m 36s
d_loss: 1.3888, g_loss: 23.3878, const_loss: 0.0011, l1_loss: 16.3495, fm_loss: 0.0078, perc_loss: 6.1892, edge: 0.1473
--- End of Epoch 0 --- Time: 1069.7s ---
LR Scheduler stepped. Current LR G: 0.000241, LR D: 0.000241
Epoch: [ 1], Batch: [ 0/ 539] | Total Time: 17m 51s
d_loss: 1.3872, g_loss: 25.4127, const_loss: 0.0012, l1_loss: 17.7206, fm_loss: 0.0092, perc_loss: 6.8171, edge: 0.1717
Epoch: [ 1], Batch: [ 100/ 539] | Total Time: 21m 13s
d_loss: 1.3878, g_loss: 23.3626, const_loss: 0.0011, l1_loss: 16.4223, fm_loss: 0.0085, perc_loss: 6.0866, edge: 0.1512
Epoch: [ 1], Batch: [ 200/ 539] | Total Time: 24m 31s
d_loss: 1.3868, g_loss: 26.2142, const_loss: 0.0012, l1_loss: 18.3020, fm_loss: 0.0100, perc_loss: 7.0280, edge: 0.1801
Epoch: [ 1], Batch: [ 300/ 539] | Total Time: 27m 50s
d_loss: 1.3868, g_loss: 23.6794, const_loss: 0.0011, l1_loss: 16.6226, fm_loss: 0.0085, perc_loss: 6.2099, edge: 0.1444
Epoch: [ 1], Batch: [ 400/ 539] | Total Time: 31m 10s
d_loss: 1.3869, g_loss: 25.1345, const_loss: 0.0013, l1_loss: 17.6895, fm_loss: 0.0094, perc_loss: 6.5859, edge: 0.1555
Epoch: [ 1], Batch: [ 500/ 539] | Total Time: 34m 30s
d_loss: 1.3868, g_loss: 25.9830, const_loss: 0.0012, l1_loss: 18.2100, fm_loss: 0.0090, perc_loss: 6.9028, edge: 0.1671
--- End of Epoch 1 --- Time: 1073.8s ---
LR Scheduler stepped. Current LR G: 0.000240, LR D: 0.000240
Epoch: [ 2], Batch: [ 0/ 539] | Total Time: 35m 45s
d_loss: 1.3915, g_loss: 24.6864, const_loss: 0.0015, l1_loss: 17.3875, fm_loss: 0.0098, perc_loss: 6.4362, edge: 0.1585
Epoch: [ 2], Batch: [ 100/ 539] | Total Time: 39m 6s
d_loss: 1.3868, g_loss: 23.0979, const_loss: 0.0014, l1_loss: 16.3586, fm_loss: 0.0080, perc_loss: 5.8940, edge: 0.1430
Epoch: [ 2], Batch: [ 200/ 539] | Total Time: 42m 27s
d_loss: 1.3868, g_loss: 25.1246, const_loss: 0.0014, l1_loss: 17.4886, fm_loss: 0.0089, perc_loss: 6.7718, edge: 0.1609
Epoch: [ 2], Batch: [ 300/ 539] | Total Time: 45m 45s
d_loss: 1.3867, g_loss: 23.3252, const_loss: 0.0012, l1_loss: 16.5605, fm_loss: 0.0085, perc_loss: 5.9190, edge: 0.1431
Epoch: [ 2], Batch: [ 400/ 539] | Total Time: 49m 6s
d_loss: 1.3869, g_loss: 24.1600, const_loss: 0.0013, l1_loss: 17.0914, fm_loss: 0.0080, perc_loss: 6.2258, edge: 0.1406
Epoch: [ 2], Batch: [ 500/ 539] | Total Time: 52m 25s
d_loss: 1.3868, g_loss: 24.7493, const_loss: 0.0013, l1_loss: 17.3924, fm_loss: 0.0089, perc_loss: 6.4994, edge: 0.1544
--- End of Epoch 2 --- Time: 1078.2s ---
LR Scheduler stepped. Current LR G: 0.000238, LR D: 0.000238
Epoch: [ 3], Batch: [ 0/ 539] | Total Time: 53m 43s
d_loss: 1.3889, g_loss: 25.6862, const_loss: 0.0011, l1_loss: 17.8268, fm_loss: 0.0102, perc_loss: 6.9767, edge: 0.1784
Epoch: [ 3], Batch: [ 100/ 539] | Total Time: 57m 2s
d_loss: 1.3868, g_loss: 23.5199, const_loss: 0.0014, l1_loss: 16.6209, fm_loss: 0.0082, perc_loss: 6.0571, edge: 0.1393
Epoch: [ 3], Batch: [ 200/ 539] | Total Time: 1h 21s
d_loss: 1.3872, g_loss: 23.9307, const_loss: 0.0013, l1_loss: 17.0866, fm_loss: 0.0087, perc_loss: 5.9871, edge: 0.1542
Epoch: [ 3], Batch: [ 300/ 539] | Total Time: 1h 3m 41s
d_loss: 1.3868, g_loss: 25.2010, const_loss: 0.0012, l1_loss: 17.6444, fm_loss: 0.0089, perc_loss: 6.6888, edge: 0.1649
Epoch: [ 3], Batch: [ 400/ 539] | Total Time: 1h 6m 59s
d_loss: 1.3869, g_loss: 23.4013, const_loss: 0.0013, l1_loss: 16.6958, fm_loss: 0.0076, perc_loss: 5.8695, edge: 0.1343
Epoch: [ 3], Batch: [ 500/ 539] | Total Time: 1h 10m 18s
d_loss: 1.3877, g_loss: 24.3205, const_loss: 0.0009, l1_loss: 17.2084, fm_loss: 0.0079, perc_loss: 6.2661, edge: 0.1443
--- End of Epoch 3 --- Time: 1070.2s ---
LR Scheduler stepped. Current LR G: 0.000235, LR D: 0.000235
Epoch: [ 4], Batch: [ 0/ 539] | Total Time: 1h 11m 33s
d_loss: 1.3872, g_loss: 24.8026, const_loss: 0.0010, l1_loss: 17.1952, fm_loss: 0.0092, perc_loss: 6.7349, edge: 0.1694
Epoch: [ 4], Batch: [ 100/ 539] | Total Time: 1h 14m 53s
d_loss: 1.3903, g_loss: 22.8814, const_loss: 0.0011, l1_loss: 16.2688, fm_loss: 0.0083, perc_loss: 5.7715, edge: 0.1390
Epoch: [ 4], Batch: [ 200/ 539] | Total Time: 1h 18m 14s
d_loss: 1.3869, g_loss: 25.8655, const_loss: 0.0012, l1_loss: 18.1456, fm_loss: 0.0094, perc_loss: 6.8477, edge: 0.1688
Epoch: [ 4], Batch: [ 300/ 539] | Total Time: 1h 21m 34s
d_loss: 1.3868, g_loss: 22.5698, const_loss: 0.0010, l1_loss: 15.9690, fm_loss: 0.0071, perc_loss: 5.7672, edge: 0.1325
Epoch: [ 4], Batch: [ 400/ 539] | Total Time: 1h 24m 55s
d_loss: 1.3868, g_loss: 23.9370, const_loss: 0.0012, l1_loss: 16.8962, fm_loss: 0.0080, perc_loss: 6.1794, edge: 0.1593
Epoch: [ 4], Batch: [ 500/ 539] | Total Time: 1h 28m 14s
d_loss: 1.3870, g_loss: 24.8211, const_loss: 0.0013, l1_loss: 17.4893, fm_loss: 0.0088, perc_loss: 6.4653, edge: 0.1636
--- End of Epoch 4 --- Time: 1076.1s ---
LR Scheduler stepped. Current LR G: 0.000232, LR D: 0.000232
Epoch: [ 5], Batch: [ 0/ 539] | Total Time: 1h 29m 30s
d_loss: 1.3917, g_loss: 23.5068, const_loss: 0.0011, l1_loss: 16.5765, fm_loss: 0.0074, perc_loss: 6.0857, edge: 0.1432
Epoch: [ 5], Batch: [ 100/ 539] | Total Time: 1h 32m 48s
d_loss: 1.3868, g_loss: 24.6320, const_loss: 0.0009, l1_loss: 17.2147, fm_loss: 0.0083, perc_loss: 6.5547, edge: 0.1605
Epoch: [ 5], Batch: [ 200/ 539] | Total Time: 1h 36m 12s
d_loss: 1.3868, g_loss: 24.0861, const_loss: 0.0011, l1_loss: 16.9531, fm_loss: 0.0082, perc_loss: 6.2785, edge: 0.1524
Epoch: [ 5], Batch: [ 300/ 539] | Total Time: 1h 39m 31s
d_loss: 1.3873, g_loss: 23.1934, const_loss: 0.0009, l1_loss: 16.2319, fm_loss: 0.0077, perc_loss: 6.1021, edge: 0.1580
Epoch: [ 5], Batch: [ 400/ 539] | Total Time: 1h 42m 50s
d_loss: 1.3868, g_loss: 24.8955, const_loss: 0.0011, l1_loss: 17.4066, fm_loss: 0.0090, perc_loss: 6.6264, edge: 0.1596
Epoch: [ 5], Batch: [ 500/ 539] | Total Time: 1h 46m 9s
d_loss: 1.3873, g_loss: 23.3460, const_loss: 0.0012, l1_loss: 16.4712, fm_loss: 0.0081, perc_loss: 6.0240, edge: 0.1486
--- End of Epoch 5 --- Time: 1076.7s ---
LR Scheduler stepped. Current LR G: 0.000228, LR D: 0.000228
Epoch: [ 6], Batch: [ 0/ 539] | Total Time: 1h 47m 26s
d_loss: 1.3876, g_loss: 23.3415, const_loss: 0.0011, l1_loss: 16.4865, fm_loss: 0.0077, perc_loss: 6.0027, edge: 0.1507
Epoch: [ 6], Batch: [ 100/ 539] | Total Time: 1h 50m 46s
d_loss: 1.3867, g_loss: 23.2923, const_loss: 0.0008, l1_loss: 16.3127, fm_loss: 0.0076, perc_loss: 6.1280, edge: 0.1504
Epoch: [ 6], Batch: [ 200/ 539] | Total Time: 1h 54m 5s
d_loss: 1.3874, g_loss: 21.7876, const_loss: 0.0012, l1_loss: 15.4801, fm_loss: 0.0070, perc_loss: 5.4713, edge: 0.1351
Epoch: [ 6], Batch: [ 300/ 539] | Total Time: 1h 57m 25s
d_loss: 1.3868, g_loss: 23.2062, const_loss: 0.0007, l1_loss: 16.4768, fm_loss: 0.0077, perc_loss: 5.8807, edge: 0.1475
Epoch: [ 6], Batch: [ 400/ 539] | Total Time: 2h 49s
d_loss: 1.3869, g_loss: 25.0717, const_loss: 0.0010, l1_loss: 17.7367, fm_loss: 0.0087, perc_loss: 6.4713, edge: 0.1610
Epoch: [ 6], Batch: [ 500/ 539] | Total Time: 2h 4m 13s
d_loss: 1.3868, g_loss: 23.9635, const_loss: 0.0010, l1_loss: 16.7648, fm_loss: 0.0079, perc_loss: 6.3329, edge: 0.1640
--- End of Epoch 6 --- Time: 1082.2s ---
LR Scheduler stepped. Current LR G: 0.000223, LR D: 0.000223
Epoch: [ 7], Batch: [ 0/ 539] | Total Time: 2h 5m 28s
d_loss: 1.3879, g_loss: 24.5318, const_loss: 0.0009, l1_loss: 17.1944, fm_loss: 0.0083, perc_loss: 6.4723, edge: 0.1630
Epoch: [ 7], Batch: [ 100/ 539] | Total Time: 2h 8m 47s
d_loss: 1.3868, g_loss: 23.9418, const_loss: 0.0009, l1_loss: 16.9154, fm_loss: 0.0075, perc_loss: 6.1817, edge: 0.1434
Epoch: [ 7], Batch: [ 200/ 539] | Total Time: 2h 12m 11s
d_loss: 1.3872, g_loss: 22.4428, const_loss: 0.0009, l1_loss: 15.7667, fm_loss: 0.0075, perc_loss: 5.8394, edge: 0.1356
Epoch: [ 7], Batch: [ 300/ 539] | Total Time: 2h 15m 34s
d_loss: 1.3867, g_loss: 23.5539, const_loss: 0.0011, l1_loss: 16.5690, fm_loss: 0.0077, perc_loss: 6.1358, edge: 0.1475
Epoch: [ 7], Batch: [ 400/ 539] | Total Time: 2h 18m 53s
d_loss: 1.3868, g_loss: 22.3578, const_loss: 0.0012, l1_loss: 15.9982, fm_loss: 0.0073, perc_loss: 5.5201, edge: 0.1382
Epoch: [ 7], Batch: [ 500/ 539] | Total Time: 2h 22m 19s
d_loss: 1.3869, g_loss: 21.2837, const_loss: 0.0009, l1_loss: 15.1159, fm_loss: 0.0068, perc_loss: 5.3344, edge: 0.1329
--- End of Epoch 7 --- Time: 1087.3s ---
LR Scheduler stepped. Current LR G: 0.000218, LR D: 0.000218
Epoch: [ 8], Batch: [ 0/ 539] | Total Time: 2h 23m 36s
d_loss: 1.3909, g_loss: 22.8791, const_loss: 0.0009, l1_loss: 16.3431, fm_loss: 0.0079, perc_loss: 5.6952, edge: 0.1390
Epoch: [ 8], Batch: [ 100/ 539] | Total Time: 2h 26m 55s
d_loss: 1.3868, g_loss: 23.3262, const_loss: 0.0008, l1_loss: 16.6342, fm_loss: 0.0076, perc_loss: 5.8488, edge: 0.1419
Epoch: [ 8], Batch: [ 200/ 539] | Total Time: 2h 30m 18s
d_loss: 1.3868, g_loss: 23.7120, const_loss: 0.0007, l1_loss: 16.8614, fm_loss: 0.0082, perc_loss: 6.0020, edge: 0.1467
Epoch: [ 8], Batch: [ 300/ 539] | Total Time: 2h 33m 43s
d_loss: 1.3868, g_loss: 22.3260, const_loss: 0.0009, l1_loss: 15.9172, fm_loss: 0.0073, perc_loss: 5.5772, edge: 0.1305
Epoch: [ 8], Batch: [ 400/ 539] | Total Time: 2h 37m 2s
d_loss: 1.3868, g_loss: 22.3371, const_loss: 0.0009, l1_loss: 15.9032, fm_loss: 0.0073, perc_loss: 5.5926, edge: 0.1402
Epoch: [ 8], Batch: [ 500/ 539] | Total Time: 2h 40m 26s
d_loss: 1.3882, g_loss: 24.0996, const_loss: 0.0013, l1_loss: 17.0594, fm_loss: 0.0078, perc_loss: 6.1956, edge: 0.1427
--- End of Epoch 8 --- Time: 1084.6s ---
LR Scheduler stepped. Current LR G: 0.000212, LR D: 0.000212
Epoch: [ 9], Batch: [ 0/ 539] | Total Time: 2h 41m 40s
d_loss: 1.3884, g_loss: 23.5997, const_loss: 0.0010, l1_loss: 16.6671, fm_loss: 0.0079, perc_loss: 6.0883, edge: 0.1425
Epoch: [ 9], Batch: [ 100/ 539] | Total Time: 2h 45m 2s
d_loss: 1.3871, g_loss: 22.9047, const_loss: 0.0009, l1_loss: 16.4100, fm_loss: 0.0074, perc_loss: 5.6546, edge: 0.1389
Epoch: [ 9], Batch: [ 200/ 539] | Total Time: 2h 48m 19s
d_loss: 1.3869, g_loss: 23.8843, const_loss: 0.0010, l1_loss: 16.9978, fm_loss: 0.0079, perc_loss: 6.0404, edge: 0.1443
Epoch: [ 9], Batch: [ 300/ 539] | Total Time: 2h 51m 37s
d_loss: 1.3870, g_loss: 24.8205, const_loss: 0.0010, l1_loss: 17.4750, fm_loss: 0.0082, perc_loss: 6.4827, edge: 0.1606
Epoch: [ 9], Batch: [ 400/ 539] | Total Time: 2h 54m 58s
d_loss: 1.3867, g_loss: 25.2759, const_loss: 0.0009, l1_loss: 18.0040, fm_loss: 0.0091, perc_loss: 6.4145, edge: 0.1546
Epoch: [ 9], Batch: [ 500/ 539] | Total Time: 2h 58m 15s
d_loss: 1.3868, g_loss: 23.7570, const_loss: 0.0010, l1_loss: 16.8689, fm_loss: 0.0080, perc_loss: 6.0347, edge: 0.1516
--- End of Epoch 9 --- Time: 1069.7s ---
LR Scheduler stepped. Current LR G: 0.000206, LR D: 0.000206
Epoch: [ 10], Batch: [ 0/ 539] | Total Time: 2h 59m 30s
d_loss: 1.3873, g_loss: 24.8151, const_loss: 0.0009, l1_loss: 17.4759, fm_loss: 0.0083, perc_loss: 6.4788, edge: 0.1584
Epoch: [ 10], Batch: [ 100/ 539] | Total Time: 3h 2m 47s
d_loss: 1.3869, g_loss: 22.7026, const_loss: 0.0009, l1_loss: 16.1079, fm_loss: 0.0070, perc_loss: 5.7610, edge: 0.1328
Epoch: [ 10], Batch: [ 200/ 539] | Total Time: 3h 6m 5s
d_loss: 1.3869, g_loss: 23.2754, const_loss: 0.0008, l1_loss: 16.5910, fm_loss: 0.0073, perc_loss: 5.8490, edge: 0.1344
Epoch: [ 10], Batch: [ 300/ 539] | Total Time: 3h 9m 23s
d_loss: 1.3867, g_loss: 23.9133, const_loss: 0.0007, l1_loss: 16.8494, fm_loss: 0.0078, perc_loss: 6.2175, edge: 0.1451
Epoch: [ 10], Batch: [ 400/ 539] | Total Time: 3h 12m 43s
d_loss: 1.3868, g_loss: 21.6337, const_loss: 0.0012, l1_loss: 15.5556, fm_loss: 0.0073, perc_loss: 5.2498, edge: 0.1269
Epoch: [ 10], Batch: [ 500/ 539] | Total Time: 3h 16m 0s
d_loss: 1.3867, g_loss: 22.4980, const_loss: 0.0006, l1_loss: 16.0534, fm_loss: 0.0067, perc_loss: 5.6144, edge: 0.1299
--- End of Epoch 10 --- Time: 1066.9s ---
LR Scheduler stepped. Current LR G: 0.000199, LR D: 0.000199
Epoch: [ 11], Batch: [ 0/ 539] | Total Time: 3h 17m 17s
d_loss: 1.3879, g_loss: 24.3287, const_loss: 0.0007, l1_loss: 16.9949, fm_loss: 0.0078, perc_loss: 6.4732, edge: 0.1592
Epoch: [ 11], Batch: [ 100/ 539] | Total Time: 3h 20m 39s
d_loss: 1.3868, g_loss: 23.2878, const_loss: 0.0010, l1_loss: 16.6520, fm_loss: 0.0072, perc_loss: 5.8007, edge: 0.1341
Epoch: [ 11], Batch: [ 200/ 539] | Total Time: 3h 23m 59s
d_loss: 1.3869, g_loss: 20.6968, const_loss: 0.0005, l1_loss: 14.9046, fm_loss: 0.0060, perc_loss: 4.9707, edge: 0.1220
Epoch: [ 11], Batch: [ 300/ 539] | Total Time: 3h 27m 17s
d_loss: 1.3867, g_loss: 22.6680, const_loss: 0.0009, l1_loss: 16.0656, fm_loss: 0.0068, perc_loss: 5.7612, edge: 0.1407
Epoch: [ 11], Batch: [ 400/ 539] | Total Time: 3h 30m 39s
d_loss: 1.3867, g_loss: 22.9263, const_loss: 0.0009, l1_loss: 16.2856, fm_loss: 0.0072, perc_loss: 5.8085, edge: 0.1311
Epoch: [ 11], Batch: [ 500/ 539] | Total Time: 3h 34m 3s
d_loss: 1.3870, g_loss: 24.0474, const_loss: 0.0009, l1_loss: 17.0039, fm_loss: 0.0079, perc_loss: 6.1856, edge: 0.1562
--- End of Epoch 11 --- Time: 1080.4s ---
LR Scheduler stepped. Current LR G: 0.000192, LR D: 0.000192
Epoch: [ 12], Batch: [ 0/ 539] | Total Time: 3h 35m 17s
d_loss: 1.3891, g_loss: 23.6108, const_loss: 0.0007, l1_loss: 16.6363, fm_loss: 0.0071, perc_loss: 6.1293, edge: 0.1446
Epoch: [ 12], Batch: [ 100/ 539] | Total Time: 3h 38m 35s
d_loss: 1.3868, g_loss: 22.1143, const_loss: 0.0008, l1_loss: 15.8174, fm_loss: 0.0068, perc_loss: 5.4762, edge: 0.1203
Epoch: [ 12], Batch: [ 200/ 539] | Total Time: 3h 41m 53s
d_loss: 1.3868, g_loss: 22.4958, const_loss: 0.0010, l1_loss: 16.1022, fm_loss: 0.0072, perc_loss: 5.5542, edge: 0.1384
Epoch: [ 12], Batch: [ 300/ 539] | Total Time: 3h 45m 10s
d_loss: 1.3868, g_loss: 21.5193, const_loss: 0.0007, l1_loss: 15.3706, fm_loss: 0.0066, perc_loss: 5.3259, edge: 0.1226
Epoch: [ 12], Batch: [ 400/ 539] | Total Time: 3h 48m 28s
d_loss: 1.3868, g_loss: 23.3455, const_loss: 0.0009, l1_loss: 16.4680, fm_loss: 0.0075, perc_loss: 6.0356, edge: 0.1407
From checkpoint 146 to 148. Figure:

log:
Epoch: [ 0], Batch: [ 0/ 577] | Total Time: 2s
d_loss: 1.3867, g_loss: 23.8926, const_loss: 0.0005, l1_loss: 16.9147, fm_loss: 0.0076, perc_loss: 6.1113, edge: 0.1656
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 577] | Total Time: 3m 27s
d_loss: 1.3871, g_loss: 24.6563, const_loss: 0.0008, l1_loss: 17.3607, fm_loss: 0.0085, perc_loss: 6.4368, edge: 0.1567
Epoch: [ 0], Batch: [ 200/ 577] | Total Time: 7m 0s
d_loss: 1.3868, g_loss: 24.4695, const_loss: 0.0006, l1_loss: 17.3493, fm_loss: 0.0073, perc_loss: 6.2618, edge: 0.1576
Epoch: [ 0], Batch: [ 300/ 577] | Total Time: 10m 28s
d_loss: 1.3868, g_loss: 21.8947, const_loss: 0.0008, l1_loss: 15.8135, fm_loss: 0.0065, perc_loss: 5.2509, edge: 0.1302
Epoch: [ 0], Batch: [ 400/ 577] | Total Time: 13m 55s
d_loss: 1.3869, g_loss: 20.8563, const_loss: 0.0006, l1_loss: 14.8864, fm_loss: 0.0066, perc_loss: 5.1494, edge: 0.1204
Epoch: [ 0], Batch: [ 500/ 577] | Total Time: 17m 25s
d_loss: 1.3868, g_loss: 23.7847, const_loss: 0.0008, l1_loss: 16.9137, fm_loss: 0.0075, perc_loss: 6.0187, edge: 0.1512
--- End of Epoch 0 --- Time: 1200.7s ---
LR Scheduler stepped. Current LR G: 0.000219, LR D: 0.000219
Epoch: [ 1], Batch: [ 0/ 577] | Total Time: 20m 2s
d_loss: 1.3868, g_loss: 24.0359, const_loss: 0.0006, l1_loss: 17.0231, fm_loss: 0.0077, perc_loss: 6.1639, edge: 0.1479
Epoch: [ 1], Batch: [ 100/ 577] | Total Time: 23m 29s
d_loss: 1.3874, g_loss: 24.1934, const_loss: 0.0006, l1_loss: 16.8978, fm_loss: 0.0077, perc_loss: 6.4318, edge: 0.1626
Epoch: [ 1], Batch: [ 200/ 577] | Total Time: 26m 56s
d_loss: 1.3868, g_loss: 23.0206, const_loss: 0.0006, l1_loss: 16.3228, fm_loss: 0.0071, perc_loss: 5.8475, edge: 0.1496
Epoch: [ 1], Batch: [ 300/ 577] | Total Time: 30m 23s
d_loss: 1.3871, g_loss: 24.6371, const_loss: 0.0006, l1_loss: 17.4794, fm_loss: 0.0081, perc_loss: 6.3019, edge: 0.1543
Epoch: [ 1], Batch: [ 400/ 577] | Total Time: 33m 50s
d_loss: 1.3869, g_loss: 25.1392, const_loss: 0.0007, l1_loss: 17.6653, fm_loss: 0.0079, perc_loss: 6.6046, edge: 0.1678
Epoch: [ 1], Batch: [ 500/ 577] | Total Time: 37m 17s
d_loss: 1.3868, g_loss: 23.7119, const_loss: 0.0008, l1_loss: 16.8718, fm_loss: 0.0074, perc_loss: 5.9925, edge: 0.1465
--- End of Epoch 1 --- Time: 1193.6s ---
LR Scheduler stepped. Current LR G: 0.000218, LR D: 0.000218
Epoch: [ 2], Batch: [ 0/ 577] | Total Time: 39m 56s
d_loss: 1.3878, g_loss: 24.2973, const_loss: 0.0007, l1_loss: 17.2008, fm_loss: 0.0077, perc_loss: 6.2333, edge: 0.1620
Epoch: [ 2], Batch: [ 100/ 577] | Total Time: 43m 23s
d_loss: 1.3869, g_loss: 22.9387, const_loss: 0.0005, l1_loss: 16.3614, fm_loss: 0.0072, perc_loss: 5.7381, edge: 0.1385
Epoch: [ 2], Batch: [ 200/ 577] | Total Time: 46m 52s
d_loss: 1.3868, g_loss: 22.8259, const_loss: 0.0006, l1_loss: 16.1648, fm_loss: 0.0070, perc_loss: 5.8123, edge: 0.1483
Epoch: [ 2], Batch: [ 300/ 577] | Total Time: 50m 21s
d_loss: 1.3868, g_loss: 24.0368, const_loss: 0.0007, l1_loss: 17.0265, fm_loss: 0.0077, perc_loss: 6.1591, edge: 0.1499
Epoch: [ 2], Batch: [ 400/ 577] | Total Time: 53m 48s
d_loss: 1.3868, g_loss: 24.0249, const_loss: 0.0006, l1_loss: 17.0640, fm_loss: 0.0075, perc_loss: 6.1141, edge: 0.1458
Epoch: [ 2], Batch: [ 500/ 577] | Total Time: 57m 14s
d_loss: 1.3868, g_loss: 25.4215, const_loss: 0.0005, l1_loss: 17.8108, fm_loss: 0.0082, perc_loss: 6.7346, edge: 0.1745
--- End of Epoch 2 --- Time: 1198.1s ---
LR Scheduler stepped. Current LR G: 0.000216, LR D: 0.000216
Epoch: [ 3], Batch: [ 0/ 577] | Total Time: 59m 54s
d_loss: 1.3869, g_loss: 24.4774, const_loss: 0.0005, l1_loss: 17.2918, fm_loss: 0.0079, perc_loss: 6.3219, edge: 0.1625
Epoch: [ 3], Batch: [ 100/ 577] | Total Time: 1h 3m 21s
d_loss: 1.3868, g_loss: 24.4484, const_loss: 0.0007, l1_loss: 17.2877, fm_loss: 0.0086, perc_loss: 6.3086, edge: 0.1500
Epoch: [ 3], Batch: [ 200/ 577] | Total Time: 1h 6m 47s
d_loss: 1.3869, g_loss: 24.1055, const_loss: 0.0005, l1_loss: 17.0208, fm_loss: 0.0076, perc_loss: 6.2334, edge: 0.1503
Epoch: [ 3], Batch: [ 300/ 577] | Total Time: 1h 10m 19s
d_loss: 1.3867, g_loss: 22.6854, const_loss: 0.0006, l1_loss: 16.1635, fm_loss: 0.0066, perc_loss: 5.6796, edge: 0.1422
Epoch: [ 3], Batch: [ 400/ 577] | Total Time: 1h 13m 46s
d_loss: 1.3868, g_loss: 21.7298, const_loss: 0.0004, l1_loss: 15.5020, fm_loss: 0.0064, perc_loss: 5.3933, edge: 0.1349
Epoch: [ 3], Batch: [ 500/ 577] | Total Time: 1h 17m 14s
d_loss: 1.3870, g_loss: 24.0253, const_loss: 0.0005, l1_loss: 17.1230, fm_loss: 0.0072, perc_loss: 6.0452, edge: 0.1565
--- End of Epoch 3 --- Time: 1199.2s ---
LR Scheduler stepped. Current LR G: 0.000214, LR D: 0.000214
Epoch: [ 4], Batch: [ 0/ 577] | Total Time: 1h 19m 53s
d_loss: 1.3867, g_loss: 24.6351, const_loss: 0.0005, l1_loss: 17.3537, fm_loss: 0.0078, perc_loss: 6.4213, edge: 0.1589
Epoch: [ 4], Batch: [ 100/ 577] | Total Time: 1h 23m 21s
d_loss: 1.3868, g_loss: 23.6512, const_loss: 0.0005, l1_loss: 16.6363, fm_loss: 0.0070, perc_loss: 6.1551, edge: 0.1595
Epoch: [ 4], Batch: [ 200/ 577] | Total Time: 1h 26m 48s
d_loss: 1.3867, g_loss: 23.3488, const_loss: 0.0006, l1_loss: 16.5379, fm_loss: 0.0070, perc_loss: 5.9640, edge: 0.1463
Epoch: [ 4], Batch: [ 300/ 577] | Total Time: 1h 30m 15s
d_loss: 1.3870, g_loss: 23.3648, const_loss: 0.0007, l1_loss: 16.3824, fm_loss: 0.0073, perc_loss: 6.1304, edge: 0.1511
Epoch: [ 4], Batch: [ 400/ 577] | Total Time: 1h 33m 46s
d_loss: 1.3868, g_loss: 22.8586, const_loss: 0.0004, l1_loss: 16.0976, fm_loss: 0.0071, perc_loss: 5.9168, edge: 0.1438
Epoch: [ 4], Batch: [ 500/ 577] | Total Time: 1h 37m 17s
d_loss: 1.3872, g_loss: 23.2653, const_loss: 0.0006, l1_loss: 16.5778, fm_loss: 0.0071, perc_loss: 5.8390, edge: 0.1480
--- End of Epoch 4 --- Time: 1200.5s ---
LR Scheduler stepped. Current LR G: 0.000211, LR D: 0.000211
Epoch: [ 5], Batch: [ 0/ 577] | Total Time: 1h 39m 54s
d_loss: 1.3868, g_loss: 22.6721, const_loss: 0.0005, l1_loss: 16.1458, fm_loss: 0.0071, perc_loss: 5.6867, edge: 0.1390
Epoch: [ 5], Batch: [ 100/ 577] | Total Time: 1h 43m 21s
d_loss: 1.3867, g_loss: 21.9167, const_loss: 0.0006, l1_loss: 15.6595, fm_loss: 0.0065, perc_loss: 5.4184, edge: 0.1388
Epoch: [ 5], Batch: [ 200/ 577] | Total Time: 1h 46m 48s
d_loss: 1.3868, g_loss: 25.1715, const_loss: 0.0005, l1_loss: 17.5510, fm_loss: 0.0080, perc_loss: 6.7535, edge: 0.1656
Epoch: [ 5], Batch: [ 300/ 577] | Total Time: 1h 50m 22s
d_loss: 1.3868, g_loss: 22.8280, const_loss: 0.0006, l1_loss: 16.1982, fm_loss: 0.0071, perc_loss: 5.7898, edge: 0.1394
Epoch: [ 5], Batch: [ 400/ 577] | Total Time: 1h 53m 54s
d_loss: 1.3868, g_loss: 24.3217, const_loss: 0.0006, l1_loss: 17.1401, fm_loss: 0.0079, perc_loss: 6.3298, edge: 0.1505
Epoch: [ 5], Batch: [ 500/ 577] | Total Time: 1h 57m 20s
d_loss: 1.3868, g_loss: 24.2688, const_loss: 0.0005, l1_loss: 17.1618, fm_loss: 0.0080, perc_loss: 6.2570, edge: 0.1485
from 148 to 150. The loss values feel like they can hardly drop any further. figure:

/gdrive/My Drive/Colab Notebooks/zi2zi-pytorch
Data directory: experiments/data-maruko-regular-384-tc
Checkpoint directory: experiments/checkpoint
initialize network with normal
initialize network with normal
Model set to TRAIN mode.
---------- Networks initialized -------------
UNetGenerator(
(model): UnetSkipConnectionBlock(
(down): Sequential(
(0): Conv2d(1, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(128, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(1, 1, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(1, 1, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(1, 1, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): Tanh()
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 64, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 64, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 64, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 128, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 128, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 128, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 256, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 256, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 256, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(1024, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 512, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 512, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 512, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(dropout): Dropout(p=0.3, inplace=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(1024, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 512, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 512, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 512, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(dropout): Dropout(p=0.3, inplace=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(1024, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 512, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 512, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 512, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(dropout): Dropout(p=0.3, inplace=False)
)
(submodule): UnetSkipConnectionBlock(
(down): Sequential(
(0): SiLU(inplace=True)
(1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(up): Sequential(
(0): SiLU(inplace=True)
(1): PixelShuffleUpBlock(
(conv): Conv2d(512, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(post_conv): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): GroupNorm(8, 512, eps=1e-05, affine=True)
(2): SiLU()
(3): ResSkip(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 512, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 512, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
(transformer_block): TransformerBlock(
(attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True)
)
(ffn): Sequential(
(0): Linear(in_features=512, out_features=2048, bias=True)
(1): SiLU()
(2): Linear(in_features=2048, out_features=512, bias=True)
)
(norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
)
(film): FiLMModulation(
(film): Linear(in_features=64, out_features=1024, bias=True)
)
)
)
)
(attn_block): SelfAttention(
(query): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
(key): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
(value): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
)
(res_skip): ResSkip(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 512, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 512, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
(res_skip): ResSkip(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 256, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 256, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
(res_skip): ResSkip(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 128, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 128, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
(res_skip): ResSkip(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm1): GroupNorm(8, 64, eps=1e-05, affine=True)
(act1): SiLU()
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(norm2): GroupNorm(8, 64, eps=1e-05, affine=True)
(act2): SiLU()
(skip): Identity()
)
)
)
(embedder): Embedding(2, 64)
)
[Network G] Total number of parameters : 136.617 M
Discriminator(
(model): Sequential(
(0): Conv2d(2, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): SiLU(inplace=True)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(4): SiLU(inplace=True)
(5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): SiLU(inplace=True)
(8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): SiLU(inplace=True)
)
(output_conv): Sequential(
(0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1))
(1): Tanh()
)
(category_pool): AdaptiveAvgPool2d(output_size=(4, 4))
(category_fc): Linear(in_features=8192, out_features=2, bias=True)
)
[Network D] Total number of parameters : 2.781 M
-----------------------------------------------
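The PixelShuffleUpBlock that repeats throughout the printout above can be sketched roughly as below. This is a reconstruction inferred from the printed module names and shapes, not the project's actual source; the channel math follows the repr (a 3x3 conv to 4x the output channels, PixelShuffle(2) to double the spatial size, then a conv/GroupNorm/SiLU stack ending in a ResSkip):

```python
import torch
import torch.nn as nn

class ResSkip(nn.Module):
    """Two conv -> GroupNorm -> SiLU stages plus an identity skip,
    matching the ResSkip entries in the printout (a sketch)."""
    def __init__(self, ch, groups=8):
        super().__init__()
        g = min(groups, ch)  # the 1-channel block in the repr uses GroupNorm(1, 1)
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1, bias=False), nn.GroupNorm(g, ch), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, 1, 1, bias=False), nn.GroupNorm(g, ch), nn.SiLU(),
        )
    def forward(self, x):
        return x + self.body(x)

class PixelShuffleUpBlock(nn.Module):
    """Conv to out_ch*4 channels, PixelShuffle(2) to trade those channels
    for 2x spatial resolution, then a post-processing stack."""
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        g = min(groups, out_ch)
        self.conv = nn.Conv2d(in_ch, out_ch * 4, 3, 1, 1, bias=False)
        self.pixel_shuffle = nn.PixelShuffle(2)
        self.post_conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False),
            nn.GroupNorm(g, out_ch), nn.SiLU(), ResSkip(out_ch, groups),
        )
    def forward(self, x):
        return self.post_conv(self.pixel_shuffle(self.conv(x)))

# Innermost decoder stage from the repr: 1024 -> 2048 -> shuffle -> 512.
y = PixelShuffleUpBlock(1024, 512)(torch.randn(1, 1024, 6, 6))
print(tuple(y.shape))  # (1, 512, 12, 12)
```

Compared with ConvTranspose2d upsampling, this shuffle-then-conv layout avoids checkerboard artifacts, which matters for thin glyph strokes.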
Resumed model from step/epoch: 148
Model 148 loaded successfully
unpickled total 8072 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 539] | Total Time: 2s
d_loss: 1.3868, g_loss: 23.6869, const_loss: 0.0005, l1_loss: 16.8700, fm_loss: 0.0077, perc_loss: 5.9676, edge: 0.1482
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 539] | Total Time: 3m 15s
d_loss: 1.3881, g_loss: 22.7134, const_loss: 0.0006, l1_loss: 16.2709, fm_loss: 0.0071, perc_loss: 5.6062, edge: 0.1357
Epoch: [ 0], Batch: [ 200/ 539] | Total Time: 6m 36s
d_loss: 1.3869, g_loss: 21.2265, const_loss: 0.0006, l1_loss: 15.1397, fm_loss: 0.0065, perc_loss: 5.2585, edge: 0.1283
Epoch: [ 0], Batch: [ 300/ 539] | Total Time: 9m 52s
d_loss: 1.3868, g_loss: 24.7209, const_loss: 0.0006, l1_loss: 17.4894, fm_loss: 0.0076, perc_loss: 6.3732, edge: 0.1573
Epoch: [ 0], Batch: [ 400/ 539] | Total Time: 13m 10s
d_loss: 1.3867, g_loss: 22.2969, const_loss: 0.0005, l1_loss: 16.0220, fm_loss: 0.0070, perc_loss: 5.4431, edge: 0.1315
Epoch: [ 0], Batch: [ 500/ 539] | Total Time: 16m 26s
d_loss: 1.3867, g_loss: 21.9203, const_loss: 0.0004, l1_loss: 15.6139, fm_loss: 0.0064, perc_loss: 5.4729, edge: 0.1337
--- End of Epoch 0 --- Time: 1059.1s ---
LR Scheduler stepped. Current LR G: 0.000210, LR D: 0.000210
Epoch: [ 1], Batch: [ 0/ 539] | Total Time: 17m 41s
d_loss: 1.3870, g_loss: 22.9106, const_loss: 0.0004, l1_loss: 16.4878, fm_loss: 0.0066, perc_loss: 5.5885, edge: 0.1344
Epoch: [ 1], Batch: [ 100/ 539] | Total Time: 20m 57s
d_loss: 1.3876, g_loss: 22.2591, const_loss: 0.0005, l1_loss: 15.9098, fm_loss: 0.0067, perc_loss: 5.5124, edge: 0.1368
Epoch: [ 1], Batch: [ 200/ 539] | Total Time: 24m 13s
d_loss: 1.3868, g_loss: 24.0608, const_loss: 0.0006, l1_loss: 17.2254, fm_loss: 0.0074, perc_loss: 5.9887, edge: 0.1459
Epoch: [ 1], Batch: [ 300/ 539] | Total Time: 27m 29s
d_loss: 1.3869, g_loss: 22.2813, const_loss: 0.0005, l1_loss: 15.9373, fm_loss: 0.0070, perc_loss: 5.5045, edge: 0.1391
Epoch: [ 1], Batch: [ 400/ 539] | Total Time: 30m 45s
d_loss: 1.3868, g_loss: 23.9840, const_loss: 0.0006, l1_loss: 17.1064, fm_loss: 0.0078, perc_loss: 6.0311, edge: 0.1452
Epoch: [ 1], Batch: [ 500/ 539] | Total Time: 34m 1s
d_loss: 1.3869, g_loss: 24.3225, const_loss: 0.0006, l1_loss: 17.4268, fm_loss: 0.0073, perc_loss: 6.0392, edge: 0.1557
--- End of Epoch 1 --- Time: 1054.9s ---
LR Scheduler stepped. Current LR G: 0.000209, LR D: 0.000209
Epoch: [ 2], Batch: [ 0/ 539] | Total Time: 35m 15s
d_loss: 1.3919, g_loss: 22.6266, const_loss: 0.0007, l1_loss: 16.3726, fm_loss: 0.0073, perc_loss: 5.4200, edge: 0.1331
Epoch: [ 2], Batch: [ 100/ 539] | Total Time: 38m 32s
d_loss: 1.3868, g_loss: 21.7272, const_loss: 0.0006, l1_loss: 15.7010, fm_loss: 0.0064, perc_loss: 5.2013, edge: 0.1250
Epoch: [ 2], Batch: [ 200/ 539] | Total Time: 41m 48s
d_loss: 1.3868, g_loss: 23.1676, const_loss: 0.0006, l1_loss: 16.5028, fm_loss: 0.0074, perc_loss: 5.8227, edge: 0.1413
Epoch: [ 2], Batch: [ 300/ 539] | Total Time: 45m 5s
d_loss: 1.3868, g_loss: 21.5845, const_loss: 0.0006, l1_loss: 15.6991, fm_loss: 0.0064, perc_loss: 5.0633, edge: 0.1223
Epoch: [ 2], Batch: [ 400/ 539] | Total Time: 48m 24s
d_loss: 1.3869, g_loss: 22.2207, const_loss: 0.0005, l1_loss: 16.1378, fm_loss: 0.0065, perc_loss: 5.2577, edge: 0.1254
Epoch: [ 2], Batch: [ 500/ 539] | Total Time: 51m 40s
d_loss: 1.3868, g_loss: 23.6552, const_loss: 0.0005, l1_loss: 16.8736, fm_loss: 0.0076, perc_loss: 5.9315, edge: 0.1492
--- End of Epoch 2 --- Time: 1065.2s ---
LR Scheduler stepped. Current LR G: 0.000207, LR D: 0.000207
Epoch: [ 3], Batch: [ 0/ 539] | Total Time: 53m 1s
d_loss: 1.3872, g_loss: 23.6268, const_loss: 0.0006, l1_loss: 16.8254, fm_loss: 0.0077, perc_loss: 5.9609, edge: 0.1394
Epoch: [ 3], Batch: [ 100/ 539] | Total Time: 56m 17s
d_loss: 1.3868, g_loss: 22.3956, const_loss: 0.0006, l1_loss: 16.0948, fm_loss: 0.0066, perc_loss: 5.4660, edge: 0.1348
Epoch: [ 3], Batch: [ 200/ 539] | Total Time: 59m 34s
d_loss: 1.3869, g_loss: 22.2962, const_loss: 0.0005, l1_loss: 16.2927, fm_loss: 0.0065, perc_loss: 5.1813, edge: 0.1223
Epoch: [ 3], Batch: [ 300/ 539] | Total Time: 1h 2m 50s
d_loss: 1.3867, g_loss: 23.8200, const_loss: 0.0006, l1_loss: 17.0087, fm_loss: 0.0068, perc_loss: 5.9757, edge: 0.1354
Epoch: [ 3], Batch: [ 400/ 539] | Total Time: 1h 6m 6s
d_loss: 1.3868, g_loss: 22.2856, const_loss: 0.0006, l1_loss: 16.1853, fm_loss: 0.0064, perc_loss: 5.2767, edge: 0.1238
Epoch: [ 3], Batch: [ 500/ 539] | Total Time: 1h 9m 22s
d_loss: 1.3870, g_loss: 22.8542, const_loss: 0.0005, l1_loss: 16.5015, fm_loss: 0.0062, perc_loss: 5.5272, edge: 0.1260
--- End of Epoch 3 --- Time: 1055.3s ---
LR Scheduler stepped. Current LR G: 0.000205, LR D: 0.000205
Epoch: [ 4], Batch: [ 0/ 539] | Total Time: 1h 10m 36s
d_loss: 1.3869, g_loss: 22.6774, const_loss: 0.0005, l1_loss: 16.1751, fm_loss: 0.0066, perc_loss: 5.6633, edge: 0.1390
Epoch: [ 4], Batch: [ 100/ 539] | Total Time: 1h 13m 52s
d_loss: 1.3868, g_loss: 22.3223, const_loss: 0.0004, l1_loss: 16.0107, fm_loss: 0.0069, perc_loss: 5.4752, edge: 0.1364
Epoch: [ 4], Batch: [ 200/ 539] | Total Time: 1h 17m 10s
d_loss: 1.3873, g_loss: 24.2922, const_loss: 0.0006, l1_loss: 17.3754, fm_loss: 0.0074, perc_loss: 6.0698, edge: 0.1460
Epoch: [ 4], Batch: [ 300/ 539] | Total Time: 1h 20m 28s
d_loss: 1.3867, g_loss: 21.7493, const_loss: 0.0004, l1_loss: 15.5913, fm_loss: 0.0063, perc_loss: 5.3296, edge: 0.1288
Epoch: [ 4], Batch: [ 400/ 539] | Total Time: 1h 23m 44s
d_loss: 1.3868, g_loss: 22.1036, const_loss: 0.0005, l1_loss: 15.9839, fm_loss: 0.0065, perc_loss: 5.2925, edge: 0.1273
Epoch: [ 4], Batch: [ 500/ 539] | Total Time: 1h 27m 1s
d_loss: 1.3867, g_loss: 23.9381, const_loss: 0.0005, l1_loss: 17.1201, fm_loss: 0.0073, perc_loss: 5.9678, edge: 0.1496
--- End of Epoch 4 --- Time: 1058.7s ---
LR Scheduler stepped. Current LR G: 0.000202, LR D: 0.000202
Epoch: [ 5], Batch: [ 0/ 539] | Total Time: 1h 28m 15s
d_loss: 1.3879, g_loss: 22.3072, const_loss: 0.0005, l1_loss: 16.0373, fm_loss: 0.0065, perc_loss: 5.4373, edge: 0.1328
Epoch: [ 5], Batch: [ 100/ 539] | Total Time: 1h 31m 32s
d_loss: 1.3868, g_loss: 22.6129, const_loss: 0.0005, l1_loss: 16.2681, fm_loss: 0.0064, perc_loss: 5.5181, edge: 0.1269
Epoch: [ 5], Batch: [ 200/ 539] | Total Time: 1h 34m 48s
d_loss: 1.3868, g_loss: 22.1280, const_loss: 0.0005, l1_loss: 16.0016, fm_loss: 0.0062, perc_loss: 5.3008, edge: 0.1260
Epoch: [ 5], Batch: [ 300/ 539] | Total Time: 1h 38m 5s
d_loss: 1.3874, g_loss: 21.4571, const_loss: 0.0004, l1_loss: 15.3778, fm_loss: 0.0059, perc_loss: 5.2611, edge: 0.1190
Epoch: [ 5], Batch: [ 400/ 539] | Total Time: 1h 41m 21s
d_loss: 1.3868, g_loss: 22.6484, const_loss: 0.0005, l1_loss: 16.2628, fm_loss: 0.0073, perc_loss: 5.5522, edge: 0.1327
Epoch: [ 5], Batch: [ 500/ 539] | Total Time: 1h 44m 38s
d_loss: 1.3872, g_loss: 21.4822, const_loss: 0.0006, l1_loss: 15.5656, fm_loss: 0.0068, perc_loss: 5.0919, edge: 0.1245
--- End of Epoch 5 --- Time: 1060.3s ---
LR Scheduler stepped. Current LR G: 0.000199, LR D: 0.000199
Epoch: [ 6], Batch: [ 0/ 539] | Total Time: 1h 45m 55s
d_loss: 1.3877, g_loss: 23.0036, const_loss: 0.0005, l1_loss: 16.3570, fm_loss: 0.0074, perc_loss: 5.7865, edge: 0.1594
Epoch: [ 6], Batch: [ 100/ 539] | Total Time: 1h 49m 11s
d_loss: 1.3867, g_loss: 21.5929, const_loss: 0.0004, l1_loss: 15.4656, fm_loss: 0.0063, perc_loss: 5.3008, edge: 0.1271
Epoch: [ 6], Batch: [ 200/ 539] | Total Time: 1h 52m 29s
d_loss: 1.3871, g_loss: 20.5678, const_loss: 0.0006, l1_loss: 14.9081, fm_loss: 0.0061, perc_loss: 4.8402, edge: 0.1199
Epoch: [ 6], Batch: [ 300/ 539] | Total Time: 1h 55m 46s
d_loss: 1.3868, g_loss: 21.4418, const_loss: 0.0004, l1_loss: 15.6355, fm_loss: 0.0063, perc_loss: 4.9896, edge: 0.1170
Epoch: [ 6], Batch: [ 400/ 539] | Total Time: 1h 59m 3s
d_loss: 1.3868, g_loss: 23.1725, const_loss: 0.0005, l1_loss: 16.7455, fm_loss: 0.0071, perc_loss: 5.5912, edge: 0.1355
Epoch: [ 6], Batch: [ 500/ 539] | Total Time: 2h 2m 19s
d_loss: 1.3868, g_loss: 22.3511, const_loss: 0.0005, l1_loss: 15.9466, fm_loss: 0.0069, perc_loss: 5.5582, edge: 0.1461
--- End of Epoch 6 --- Time: 1058.5s ---
LR Scheduler stepped. Current LR G: 0.000195, LR D: 0.000195
Epoch: [ 7], Batch: [ 0/ 539] | Total Time: 2h 3m 33s
d_loss: 1.3875, g_loss: 23.1503, const_loss: 0.0006, l1_loss: 16.5541, fm_loss: 0.0072, perc_loss: 5.7569, edge: 0.1387
Epoch: [ 7], Batch: [ 100/ 539] | Total Time: 2h 6m 50s
d_loss: 1.3869, g_loss: 22.8574, const_loss: 0.0004, l1_loss: 16.4060, fm_loss: 0.0063, perc_loss: 5.6172, edge: 0.1346
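One pattern worth flagging in all of these logs: d_loss sits pinned near 1.3867 for the whole run. Assuming the printed g_loss is the plain sum of the printed components plus a standard BCE adversarial term (an inference from the numbers, not confirmed from the code), the arithmetic suggests the discriminator has collapsed to outputting 0.5 everywhere:

```python
import math

# Components copied from the last log line above (Epoch 7, batch 100).
parts = dict(const=0.0004, l1=16.4060, fm=0.0063, perc=5.6172, edge=0.1346)
g_loss = 22.8574
residual = g_loss - sum(parts.values())

# The leftover term is ~ln 2 = 0.6931, which is what a BCE adversarial
# loss reports when the discriminator outputs 0.5 for generated samples.
print(round(residual, 4))

# Likewise d_loss is pinned at ~2*ln 2 (0.5 on both real and fake), so
# the discriminator contributes almost no gradient and further progress
# has to come from the L1/perceptual/edge terms alone.
print(round(2 * math.log(2), 4))  # 1.3863
```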
from 150 to 152

log:
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 577] | Total Time: 4s
d_loss: 1.3867, g_loss: 22.1635, const_loss: 0.0003, l1_loss: 16.0731, fm_loss: 0.0066, perc_loss: 5.2622, edge: 0.1285
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 577] | Total Time: 3m 26s
d_loss: 1.3868, g_loss: 23.2854, const_loss: 0.0006, l1_loss: 16.6572, fm_loss: 0.0071, perc_loss: 5.7780, edge: 0.1495
Epoch: [ 0], Batch: [ 200/ 577] | Total Time: 7m 2s
d_loss: 1.3867, g_loss: 23.6561, const_loss: 0.0004, l1_loss: 16.9669, fm_loss: 0.0069, perc_loss: 5.8430, edge: 0.1460
Epoch: [ 0], Batch: [ 300/ 577] | Total Time: 10m 33s
d_loss: 1.3868, g_loss: 21.5246, const_loss: 0.0004, l1_loss: 15.6695, fm_loss: 0.0067, perc_loss: 5.0189, edge: 0.1362
Epoch: [ 0], Batch: [ 400/ 577] | Total Time: 14m 5s
d_loss: 1.3868, g_loss: 20.3987, const_loss: 0.0004, l1_loss: 14.6535, fm_loss: 0.0064, perc_loss: 4.9219, edge: 0.1236
Epoch: [ 0], Batch: [ 500/ 577] | Total Time: 17m 37s
d_loss: 1.3868, g_loss: 22.6517, const_loss: 0.0006, l1_loss: 16.3629, fm_loss: 0.0069, perc_loss: 5.4536, edge: 0.1349
--- End of Epoch 0 --- Time: 1215.4s ---
LR Scheduler stepped. Current LR G: 0.000205, LR D: 0.000205
Epoch: [ 1], Batch: [ 0/ 577] | Total Time: 20m 17s
d_loss: 1.3870, g_loss: 23.0250, const_loss: 0.0004, l1_loss: 16.5241, fm_loss: 0.0068, perc_loss: 5.6602, edge: 0.1406
Epoch: [ 1], Batch: [ 100/ 577] | Total Time: 23m 48s
d_loss: 1.3872, g_loss: 22.6120, const_loss: 0.0004, l1_loss: 16.1213, fm_loss: 0.0063, perc_loss: 5.6586, edge: 0.1326
Epoch: [ 1], Batch: [ 200/ 577] | Total Time: 27m 19s
d_loss: 1.3868, g_loss: 22.5684, const_loss: 0.0004, l1_loss: 16.0960, fm_loss: 0.0068, perc_loss: 5.6257, edge: 0.1467
Epoch: [ 1], Batch: [ 300/ 577] | Total Time: 30m 50s
d_loss: 1.3868, g_loss: 23.9069, const_loss: 0.0004, l1_loss: 17.0992, fm_loss: 0.0073, perc_loss: 5.9628, edge: 0.1443
Epoch: [ 1], Batch: [ 400/ 577] | Total Time: 34m 22s
d_loss: 1.3870, g_loss: 24.3117, const_loss: 0.0004, l1_loss: 17.2895, fm_loss: 0.0071, perc_loss: 6.1658, edge: 0.1560
Epoch: [ 1], Batch: [ 500/ 577] | Total Time: 37m 55s
d_loss: 1.3868, g_loss: 22.7799, const_loss: 0.0005, l1_loss: 16.3905, fm_loss: 0.0062, perc_loss: 5.5560, edge: 0.1338
--- End of Epoch 1 --- Time: 1220.2s ---
LR Scheduler stepped. Current LR G: 0.000204, LR D: 0.000204
Epoch: [ 2], Batch: [ 0/ 577] | Total Time: 40m 37s
d_loss: 1.3870, g_loss: 23.7699, const_loss: 0.0005, l1_loss: 16.9359, fm_loss: 0.0072, perc_loss: 5.9785, edge: 0.1549
Epoch: [ 2], Batch: [ 100/ 577] | Total Time: 44m 8s
d_loss: 1.3869, g_loss: 22.4249, const_loss: 0.0004, l1_loss: 16.1000, fm_loss: 0.0063, perc_loss: 5.4921, edge: 0.1331
Epoch: [ 2], Batch: [ 200/ 577] | Total Time: 47m 40s
d_loss: 1.3869, g_loss: 22.2351, const_loss: 0.0006, l1_loss: 15.8852, fm_loss: 0.0066, perc_loss: 5.5055, edge: 0.1442
Epoch: [ 2], Batch: [ 300/ 577] | Total Time: 51m 11s
d_loss: 1.3868, g_loss: 23.0597, const_loss: 0.0005, l1_loss: 16.5132, fm_loss: 0.0064, perc_loss: 5.7043, edge: 0.1425
Epoch: [ 2], Batch: [ 400/ 577] | Total Time: 54m 42s
d_loss: 1.3867, g_loss: 23.2849, const_loss: 0.0004, l1_loss: 16.6989, fm_loss: 0.0072, perc_loss: 5.7380, edge: 0.1475
Epoch: [ 2], Batch: [ 500/ 577] | Total Time: 58m 14s
d_loss: 1.3868, g_loss: 24.5472, const_loss: 0.0003, l1_loss: 17.3882, fm_loss: 0.0071, perc_loss: 6.3063, edge: 0.1524
--- End of Epoch 2 --- Time: 1218.6s ---
LR Scheduler stepped. Current LR G: 0.000202, LR D: 0.000202
Epoch: [ 3], Batch: [ 0/ 577] | Total Time: 1h 56s
d_loss: 1.3867, g_loss: 24.0224, const_loss: 0.0004, l1_loss: 17.0663, fm_loss: 0.0071, perc_loss: 6.1019, edge: 0.1538
Epoch: [ 3], Batch: [ 100/ 577] | Total Time: 1h 4m 27s
d_loss: 1.3868, g_loss: 23.5032, const_loss: 0.0006, l1_loss: 16.8101, fm_loss: 0.0074, perc_loss: 5.8549, edge: 0.1374
Epoch: [ 3], Batch: [ 200/ 577] | Total Time: 1h 7m 58s
d_loss: 1.3869, g_loss: 23.4160, const_loss: 0.0005, l1_loss: 16.6922, fm_loss: 0.0067, perc_loss: 5.8770, edge: 0.1467
Epoch: [ 3], Batch: [ 300/ 577] | Total Time: 1h 11m 29s
d_loss: 1.3867, g_loss: 22.5027, const_loss: 0.0004, l1_loss: 16.0714, fm_loss: 0.0063, perc_loss: 5.5927, edge: 0.1391
Epoch: [ 3], Batch: [ 400/ 577] | Total Time: 1h 15m 2s
d_loss: 1.3867, g_loss: 22.0247, const_loss: 0.0003, l1_loss: 15.6488, fm_loss: 0.0063, perc_loss: 5.5256, edge: 0.1508
Epoch: [ 3], Batch: [ 500/ 577] | Total Time: 1h 18m 33s
d_loss: 1.3869, g_loss: 23.2218, const_loss: 0.0004, l1_loss: 16.7031, fm_loss: 0.0068, perc_loss: 5.6720, edge: 0.1466
--- End of Epoch 3 --- Time: 1219.4s ---
LR Scheduler stepped. Current LR G: 0.000200, LR D: 0.000200
Epoch: [ 4], Batch: [ 0/ 577] | Total Time: 1h 21m 15s
d_loss: 1.3867, g_loss: 23.9131, const_loss: 0.0005, l1_loss: 17.0090, fm_loss: 0.0069, perc_loss: 6.0508, edge: 0.1531
Epoch: [ 4], Batch: [ 100/ 577] | Total Time: 1h 24m 46s
d_loss: 1.3868, g_loss: 22.1629, const_loss: 0.0004, l1_loss: 15.8624, fm_loss: 0.0062, perc_loss: 5.4510, edge: 0.1500
Epoch: [ 4], Batch: [ 200/ 577] | Total Time: 1h 28m 18s
d_loss: 1.3867, g_loss: 22.9434, const_loss: 0.0004, l1_loss: 16.3507, fm_loss: 0.0066, perc_loss: 5.7459, edge: 0.1470
Epoch: [ 4], Batch: [ 300/ 577] | Total Time: 1h 31m 49s
d_loss: 1.3867, g_loss: 21.8768, const_loss: 0.0004, l1_loss: 15.6528, fm_loss: 0.0061, perc_loss: 5.3912, edge: 0.1334
Epoch: [ 4], Batch: [ 400/ 577] | Total Time: 1h 35m 20s
d_loss: 1.3868, g_loss: 21.7751, const_loss: 0.0004, l1_loss: 15.5465, fm_loss: 0.0061, perc_loss: 5.3964, edge: 0.1329
Epoch: [ 4], Batch: [ 500/ 577] | Total Time: 1h 38m 51s
d_loss: 1.3871, g_loss: 22.7875, const_loss: 0.0004, l1_loss: 16.3517, fm_loss: 0.0061, perc_loss: 5.5973, edge: 0.1391
--- End of Epoch 4 --- Time: 1216.2s ---
LR Scheduler stepped. Current LR G: 0.000197, LR D: 0.000197
Epoch: [ 5], Batch: [ 0/ 577] | Total Time: 1h 41m 31s
d_loss: 1.3867, g_loss: 22.0169, const_loss: 0.0004, l1_loss: 15.8157, fm_loss: 0.0061, perc_loss: 5.3706, edge: 0.1311
Epoch: [ 5], Batch: [ 100/ 577] | Total Time: 1h 45m 2s
d_loss: 1.3867, g_loss: 21.4678, const_loss: 0.0005, l1_loss: 15.4316, fm_loss: 0.0057, perc_loss: 5.2028, edge: 0.1343
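The "LR Scheduler stepped" lines above show a smooth once-per-epoch decay. A minimal sketch of wiring up such a schedule, assuming CosineAnnealingLR over the 40-epoch run (the project's actual scheduler class and hyperparameters are not shown in the log):

```python
import torch

g = torch.nn.Linear(8, 8)  # stand-in for the generator
opt_g = torch.optim.Adam(g.parameters(), lr=2.55e-4)
# Hypothetical: a cosine anneal would produce the accelerating per-epoch
# decrements seen in the log (210 -> 209 -> 207 -> 205 -> ...).
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt_g, T_max=40)

for epoch in range(3):
    opt_g.step()   # training batches would run here
    sched.step()   # stepped once per epoch, as in the log
    print(f"LR G: {opt_g.param_groups[0]['lr']:.6f}")
```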
from 152 to 154. Fine-tuned the training content: earlier runs all used cjktc, switched to training on the cjkjp glyph shapes:

Bottom line: the loss values are all slightly higher, because roughly 1/20 of the characters have conflicting shapes between the two sets.
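The ~1/20 conflict rate could be estimated by rendering each shared codepoint from both source fonts and comparing the binarized bitmaps; a sketch of the comparison step, using IoU on ink masks (the threshold and the tiny synthetic bitmaps are illustrative stand-ins, not the project's actual tooling):

```python
import numpy as np

def glyph_iou(a, b):
    """IoU of two binarized glyph bitmaps (True = ink)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Hypothetical: the same codepoint rendered from the TC and JP source
# fonts, binarized to boolean ink masks (tiny 4x4 stand-ins here; in
# practice you would render with PIL at the training resolution).
tc = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,1,1,0]], dtype=bool)
jp = np.array([[0,1,1,0],[1,1,1,1],[0,1,1,0],[0,1,1,0]], dtype=bool)

# Flag a codepoint as "conflicting" when the shapes overlap poorly.
score = glyph_iou(tc, jp)
print(round(score, 3), score < 0.9)  # 0.8 True
```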
log:
unpickled total 6644 examples
Starting training from epoch 0/39...
Epoch: [ 0], Batch: [ 0/ 475] | Total Time: 4s
d_loss: 1.3868, g_loss: 24.6630, const_loss: 0.0004, l1_loss: 17.7577, fm_loss: 0.0077, perc_loss: 6.0461, edge: 0.1583
Checkpoint step 100 reached, but saving starts after step 200.
Epoch: [ 0], Batch: [ 100/ 475] | Total Time: 3m 25s
d_loss: 1.3875, g_loss: 23.5263, const_loss: 0.0004, l1_loss: 16.4816, fm_loss: 0.0078, perc_loss: 6.1961, edge: 0.1475
Epoch: [ 0], Batch: [ 200/ 475] | Total Time: 6m 55s
d_loss: 1.3867, g_loss: 23.2463, const_loss: 0.0004, l1_loss: 16.3148, fm_loss: 0.0069, perc_loss: 6.0717, edge: 0.1596
Epoch: [ 0], Batch: [ 300/ 475] | Total Time: 10m 19s
d_loss: 1.3881, g_loss: 22.9070, const_loss: 0.0005, l1_loss: 16.2283, fm_loss: 0.0074, perc_loss: 5.8389, edge: 0.1390
Epoch: [ 0], Batch: [ 400/ 475] | Total Time: 13m 48s
d_loss: 1.3868, g_loss: 22.4051, const_loss: 0.0005, l1_loss: 16.1276, fm_loss: 0.0064, perc_loss: 5.4451, edge: 0.1327
--- End of Epoch 0 --- Time: 977.5s ---
LR Scheduler stepped. Current LR G: 0.000195, LR D: 0.000195
Epoch: [ 1], Batch: [ 0/ 475] | Total Time: 16m 19s
d_loss: 1.3867, g_loss: 25.1150, const_loss: 0.0004, l1_loss: 17.5614, fm_loss: 0.0082, perc_loss: 6.6848, edge: 0.1673
Epoch: [ 1], Batch: [ 100/ 475] | Total Time: 19m 45s
d_loss: 1.3868, g_loss: 23.3561, const_loss: 0.0004, l1_loss: 16.6856, fm_loss: 0.0069, perc_loss: 5.8200, edge: 0.1503
Epoch: [ 1], Batch: [ 200/ 475] | Total Time: 23m 14s
d_loss: 1.3868, g_loss: 22.1533, const_loss: 0.0004, l1_loss: 15.9537, fm_loss: 0.0063, perc_loss: 5.3610, edge: 0.1389
Epoch: [ 1], Batch: [ 300/ 475] | Total Time: 26m 41s
d_loss: 1.3867, g_loss: 21.8895, const_loss: 0.0005, l1_loss: 15.6579, fm_loss: 0.0063, perc_loss: 5.3943, edge: 0.1376
Epoch: [ 1], Batch: [ 400/ 475] | Total Time: 30m 7s
d_loss: 1.3868, g_loss: 24.9580, const_loss: 0.0004, l1_loss: 17.7289, fm_loss: 0.0079, perc_loss: 6.3652, edge: 0.1628
--- End of Epoch 1 --- Time: 981.4s ---
LR Scheduler stepped. Current LR G: 0.000194, LR D: 0.000194
Epoch: [ 2], Batch: [ 0/ 475] | Total Time: 32m 40s
d_loss: 1.3872, g_loss: 21.2696, const_loss: 0.0005, l1_loss: 15.3222, fm_loss: 0.0058, perc_loss: 5.1153, edge: 0.1328
Epoch: [ 2], Batch: [ 100/ 475] | Total Time: 36m 9s
d_loss: 1.3868, g_loss: 23.8383, const_loss: 0.0004, l1_loss: 16.9578, fm_loss: 0.0069, perc_loss: 6.0245, edge: 0.1557
Epoch: [ 2], Batch: [ 200/ 475] | Total Time: 39m 34s
d_loss: 1.3868, g_loss: 24.3363, const_loss: 0.0003, l1_loss: 17.4194, fm_loss: 0.0074, perc_loss: 6.0639, edge: 0.1524
Epoch: [ 2], Batch: [ 300/ 475] | Total Time: 43m 0s
d_loss: 1.3869, g_loss: 22.5384, const_loss: 0.0004, l1_loss: 15.9934, fm_loss: 0.0063, perc_loss: 5.7064, edge: 0.1391
Epoch: [ 2], Batch: [ 400/ 475] | Total Time: 46m 25s
d_loss: 1.3869, g_loss: 26.0111, const_loss: 0.0004, l1_loss: 18.5338, fm_loss: 0.0089, perc_loss: 6.6115, edge: 0.1637
--- End of Epoch 2 --- Time: 977.9s ---
LR Scheduler stepped. Current LR G: 0.000192, LR D: 0.000192
Epoch: [ 3], Batch: [ 0/ 475] | Total Time: 48m 58s
d_loss: 1.3870, g_loss: 22.9527, const_loss: 0.0005, l1_loss: 16.4387, fm_loss: 0.0069, perc_loss: 5.6678, edge: 0.1459
Epoch: [ 3], Batch: [ 100/ 475] | Total Time: 52m 27s
d_loss: 1.3867, g_loss: 23.9656, const_loss: 0.0006, l1_loss: 17.1773, fm_loss: 0.0074, perc_loss: 5.9341, edge: 0.1534
Epoch: [ 3], Batch: [ 200/ 475] | Total Time: 55m 51s
d_loss: 1.3867, g_loss: 22.7531, const_loss: 0.0004, l1_loss: 16.2796, fm_loss: 0.0073, perc_loss: 5.6123, edge: 0.1606
Epoch: [ 3], Batch: [ 300/ 475] | Total Time: 59m 16s
d_loss: 1.3871, g_loss: 22.9224, const_loss: 0.0005, l1_loss: 16.3843, fm_loss: 0.0068, perc_loss: 5.6950, edge: 0.1428
Epoch: [ 3], Batch: [ 400/ 475] | Total Time: 1h 2m 41s
d_loss: 1.3867, g_loss: 24.0115, const_loss: 0.0003, l1_loss: 17.1144, fm_loss: 0.0066, perc_loss: 6.0495, edge: 0.1479
--- End of Epoch 3 --- Time: 980.4s ---
LR Scheduler stepped. Current LR G: 0.000190, LR D: 0.000190
Epoch: [ 4], Batch: [ 0/ 475] | Total Time: 1h 5m 19s
d_loss: 1.3869, g_loss: 25.6486, const_loss: 0.0005, l1_loss: 18.3618, fm_loss: 0.0081, perc_loss: 6.4109, edge: 0.1745
Epoch: [ 4], Batch: [ 100/ 475] | Total Time: 1h 8m 47s
d_loss: 1.3867, g_loss: 22.1625, const_loss: 0.0004, l1_loss: 15.9222, fm_loss: 0.0063, perc_loss: 5.3983, edge: 0.1424
Epoch: [ 4], Batch: [ 200/ 475] | Total Time: 1h 12m 12s
d_loss: 1.3869, g_loss: 23.4055, const_loss: 0.0004, l1_loss: 17.0520, fm_loss: 0.0075, perc_loss: 5.5126, edge: 0.1401
Epoch: [ 4], Batch: [ 300/ 475] | Total Time: 1h 15m 36s
d_loss: 1.3874, g_loss: 23.8039, const_loss: 0.0004, l1_loss: 16.8310, fm_loss: 0.0071, perc_loss: 6.1229, edge: 0.1497
Epoch: [ 4], Batch: [ 400/ 475] | Total Time: 1h 19m 1s
d_loss: 1.3867, g_loss: 20.5552, const_loss: 0.0005, l1_loss: 14.8597, fm_loss: 0.0055, perc_loss: 4.8778, edge: 0.1189
--- End of Epoch 4 --- Time: 973.6s ---
LR Scheduler stepped. Current LR G: 0.000188, LR D: 0.000188
Epoch: [ 5], Batch: [ 0/ 475] | Total Time: 1h 21m 32s
d_loss: 1.3868, g_loss: 21.9120, const_loss: 0.0006, l1_loss: 15.6591, fm_loss: 0.0067, perc_loss: 5.4177, edge: 0.1348
Epoch: [ 5], Batch: [ 100/ 475] | Total Time: 1h 24m 59s
d_loss: 1.3868, g_loss: 22.8019, const_loss: 0.0003, l1_loss: 16.4309, fm_loss: 0.0067, perc_loss: 5.5348, edge: 0.1364
Epoch: [ 5], Batch: [ 200/ 475] | Total Time: 1h 28m 24s
d_loss: 1.3869, g_loss: 23.5692, const_loss: 0.0005, l1_loss: 16.9082, fm_loss: 0.0066, perc_loss: 5.8124, edge: 0.1485
Epoch: [ 5], Batch: [ 300/ 475] | Total Time: 1h 31m 51s
d_loss: 1.3868, g_loss: 21.7953, const_loss: 0.0003, l1_loss: 15.6526, fm_loss: 0.0061, perc_loss: 5.3119, edge: 0.1315
Epoch: [ 5], Batch: [ 400/ 475] | Total Time: 1h 35m 18s
d_loss: 1.3873, g_loss: 22.1625, const_loss: 0.0005, l1_loss: 15.9327, fm_loss: 0.0063, perc_loss: 5.3913, edge: 0.1388
--- End of Epoch 5 --- Time: 979.4s ---
LR Scheduler stepped. Current LR G: 0.000184, LR D: 0.000184
Epoch: [ 6], Batch: [ 0/ 475] | Total Time: 1h 37m 52s
d_loss: 1.3868, g_loss: 22.6143, const_loss: 0.0003, l1_loss: 16.1026, fm_loss: 0.0068, perc_loss: 5.6695, edge: 0.1421
Epoch: [ 6], Batch: [ 100/ 475] | Total Time: 1h 41m 22s
d_loss: 1.3869, g_loss: 23.4992, const_loss: 0.0004, l1_loss: 16.9205, fm_loss: 0.0072, perc_loss: 5.7299, edge: 0.1484
Epoch: [ 6], Batch: [ 200/ 475] | Total Time: 1h 44m 47s
d_loss: 1.3867, g_loss: 23.4079, const_loss: 0.0004, l1_loss: 16.8711, fm_loss: 0.0069, perc_loss: 5.6895, edge: 0.1471
Epoch: [ 6], Batch: [ 300/ 475] | Total Time: 1h 48m 13s
d_loss: 1.3870, g_loss: 21.4839, const_loss: 0.0005, l1_loss: 15.3340, fm_loss: 0.0061, perc_loss: 5.3143, edge: 0.1362
Epoch: [ 6], Batch: [ 400/ 475] | Total Time: 1h 51m 38s
d_loss: 1.3867, g_loss: 22.6833, const_loss: 0.0003, l1_loss: 16.2367, fm_loss: 0.0064, perc_loss: 5.5974, edge: 0.1497
--- End of Epoch 6 --- Time: 980.5s ---
LR Scheduler stepped. Current LR G: 0.000181, LR D: 0.000181
Epoch: [ 7], Batch: [ 0/ 475] | Total Time: 1h 54m 12s
d_loss: 1.3867, g_loss: 23.7668, const_loss: 0.0004, l1_loss: 16.9549, fm_loss: 0.0072, perc_loss: 5.9685, edge: 0.1429
Epoch: [ 7], Batch: [ 100/ 475] | Total Time: 1h 57m 41s
d_loss: 1.3867, g_loss: 21.7087, const_loss: 0.0005, l1_loss: 15.7596, fm_loss: 0.0065, perc_loss: 5.1215, edge: 0.1277
Epoch: [ 7], Batch: [ 200/ 475] | Total Time: 2h 1m 6s
d_loss: 1.3868, g_loss: 21.6203, const_loss: 0.0004, l1_loss: 15.7124, fm_loss: 0.0061, perc_loss: 5.0799, edge: 0.1286
Epoch: [ 7], Batch: [ 300/ 475] | Total Time: 2h 4m 31s
d_loss: 1.3868, g_loss: 23.7126, const_loss: 0.0004, l1_loss: 16.9072, fm_loss: 0.0074, perc_loss: 5.9538, edge: 0.1509
Epoch: [ 7], Batch: [ 400/ 475] | Total Time: 2h 7m 57s
d_loss: 1.3871, g_loss: 23.0371, const_loss: 0.0004, l1_loss: 16.6908, fm_loss: 0.0064, perc_loss: 5.5059, edge: 0.1406
--- End of Epoch 7 --- Time: 980.5s ---
LR Scheduler stepped. Current LR G: 0.000176, LR D: 0.000176
Epoch: [ 8], Batch: [ 0/ 475] | Total Time: 2h 10m 33s
d_loss: 1.3868, g_loss: 21.6412, const_loss: 0.0003, l1_loss: 15.5636, fm_loss: 0.0060, perc_loss: 5.2445, edge: 0.1339
Epoch: [ 8], Batch: [ 100/ 475] | Total Time: 2h 13m 59s
d_loss: 1.3868, g_loss: 23.1313, const_loss: 0.0003, l1_loss: 16.5654, fm_loss: 0.0065, perc_loss: 5.7201, edge: 0.1462
Epoch: [ 8], Batch: [ 200/ 475] | Total Time: 2h 17m 24s
d_loss: 1.3868, g_loss: 23.5202, const_loss: 0.0002, l1_loss: 16.8394, fm_loss: 0.0072, perc_loss: 5.8264, edge: 0.1541
Epoch: [ 8], Batch: [ 300/ 475] | Total Time: 2h 20m 49s
d_loss: 1.3867, g_loss: 23.1064, const_loss: 0.0004, l1_loss: 16.4616, fm_loss: 0.0069, perc_loss: 5.7959, edge: 0.1488
Epoch: [ 8], Batch: [ 400/ 475] | Total Time: 2h 24m 14s
d_loss: 1.3868, g_loss: 22.1781, const_loss: 0.0004, l1_loss: 16.0474, fm_loss: 0.0063, perc_loss: 5.2971, edge: 0.1341
--- End of Epoch 8 --- Time: 972.3s ---
LR Scheduler stepped. Current LR G: 0.000172, LR D: 0.000172
Epoch: [ 9], Batch: [ 0/ 475] | Total Time: 2h 26m 45s
d_loss: 1.3869, g_loss: 22.4954, const_loss: 0.0004, l1_loss: 15.9695, fm_loss: 0.0064, perc_loss: 5.6910, edge: 0.1352
Epoch: [ 9], Batch: [ 100/ 475] | Total Time: 2h 30m 12s
d_loss: 1.3868, g_loss: 23.7449, const_loss: 0.0003, l1_loss: 16.9108, fm_loss: 0.0065, perc_loss: 5.9756, edge: 0.1588
Epoch: [ 9], Batch: [ 200/ 475] | Total Time: 2h 33m 37s
d_loss: 1.3867, g_loss: 20.5476, const_loss: 0.0003, l1_loss: 14.9320, fm_loss: 0.0053, perc_loss: 4.7936, edge: 0.1236
Epoch: [ 9], Batch: [ 300/ 475] | Total Time: 2h 37m 2s
d_loss: 1.3867, g_loss: 24.2586, const_loss: 0.0004, l1_loss: 17.5439, fm_loss: 0.0067, perc_loss: 5.8621, edge: 0.1526
Epoch: [ 9], Batch: [ 400/ 475] | Total Time: 2h 40m 27s
d_loss: 1.3868, g_loss: 20.5910, const_loss: 0.0004, l1_loss: 14.7844, fm_loss: 0.0068, perc_loss: 4.9724, edge: 0.1342
--- End of Epoch 9 --- Time: 975.5s ---
LR Scheduler stepped. Current LR G: 0.000167, LR D: 0.000167
Epoch: [ 10], Batch: [ 0/ 475] | Total Time: 2h 43m 0s
d_loss: 1.3868, g_loss: 24.6132, const_loss: 0.0005, l1_loss: 17.4697, fm_loss: 0.0070, perc_loss: 6.2851, edge: 0.1580
Epoch: [ 10], Batch: [ 100/ 475] | Total Time: 2h 46m 31s
d_loss: 1.3867, g_loss: 21.0514, const_loss: 0.0005, l1_loss: 15.1781, fm_loss: 0.0062, perc_loss: 5.0468, edge: 0.1270
Epoch: [ 10], Batch: [ 200/ 475] | Total Time: 2h 49m 56s
d_loss: 1.3868, g_loss: 21.5565, const_loss: 0.0004, l1_loss: 15.5218, fm_loss: 0.0061, perc_loss: 5.1963, edge: 0.1390
Epoch: [ 10], Batch: [ 300/ 475] | Total Time: 2h 53m 23s
d_loss: 1.3867, g_loss: 23.0324, const_loss: 0.0004, l1_loss: 16.6124, fm_loss: 0.0069, perc_loss: 5.5751, edge: 0.1447
Epoch: [ 10], Batch: [ 400/ 475] | Total Time: 2h 56m 51s
d_loss: 1.3869, g_loss: 22.5445, const_loss: 0.0003, l1_loss: 16.0341, fm_loss: 0.0065, perc_loss: 5.6718, edge: 0.1389
--- End of Epoch 10 --- Time: 988.2s ---
LR Scheduler stepped. Current LR G: 0.000161, LR D: 0.000161
Epoch: [ 11], Batch: [ 0/ 475] | Total Time: 2h 59m 29s
d_loss: 1.3867, g_loss: 21.5679, const_loss: 0.0004, l1_loss: 15.3572, fm_loss: 0.0061, perc_loss: 5.3730, edge: 0.1383
Epoch: [ 11], Batch: [ 100/ 475] | Total Time: 3h 2m 54s
d_loss: 1.3868, g_loss: 22.2422, const_loss: 0.0003, l1_loss: 16.1831, fm_loss: 0.0059, perc_loss: 5.2377, edge: 0.1222
Epoch: [ 11], Batch: [ 200/ 475] | Total Time: 3h 6m 19s
d_loss: 1.3868, g_loss: 22.6786, const_loss: 0.0005, l1_loss: 16.4309, fm_loss: 0.0066, perc_loss: 5.4115, edge: 0.1362
Epoch: [ 11], Batch: [ 300/ 475] | Total Time: 3h 9m 44s
d_loss: 1.3868, g_loss: 23.5292, const_loss: 0.0004, l1_loss: 17.0231, fm_loss: 0.0069, perc_loss: 5.6643, edge: 0.1417
Epoch: [ 11], Batch: [ 400/ 475] | Total Time: 3h 13m 9s
d_loss: 1.3868, g_loss: 22.9211, const_loss: 0.0003, l1_loss: 16.6543, fm_loss: 0.0066, perc_loss: 5.4233, edge: 0.1437
--- End of Epoch 11 --- Time: 973.5s ---
LR Scheduler stepped. Current LR G: 0.000155, LR D: 0.000155
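Since every loss line in this log shares one fixed `name: value` format, it is easy to pull the curves out for plotting instead of eyeballing them. A minimal sketch (the regex and helper name are my own, not part of the training script):

```python
import re

# Matches "d_loss: 1.3868", "g_loss: 22.0271", "edge: 0.1310", etc.
LOSS_RE = re.compile(r"(\w+):\s*([0-9]+\.[0-9]+)")

def parse_loss_line(line):
    """Turn one loss line of the log into a dict of floats,
    e.g. {'d_loss': 1.3868, 'g_loss': 22.0271, ...}."""
    return {name: float(value) for name, value in LOSS_RE.findall(line)}

sample = ("d_loss: 1.3868, g_loss: 22.0271, const_loss: 0.0006, "
          "l1_loss: 15.9112, fm_loss: 0.0069, perc_loss: 5.2845, edge: 0.1310")
print(parse_loss_line(sample)["g_loss"])  # → 22.0271
```

Filtering the raw log through this (skipping the `Epoch:`/`Checkpoint`/`LR Scheduler` lines) gives a list of dicts that drops straight into a DataFrame or a matplotlib plot.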
Epoch: [ 12], Batch: [ 0/ 475] | Total Time: 3h 15m 42s
d_loss: 1.3868, g_loss: 22.0271, const_loss: 0.0006, l1_loss: 15.9112, fm_loss: 0.0069, perc_loss: 5.2845, edge: 0.1310
Epoch: [ 12], Batch: [ 100/ 475] | Total Time: 3h 19m 10s
d_loss: 1.3868, g_loss: 23.2075, const_loss: 0.0003, l1_loss: 16.7580, fm_loss: 0.0067, perc_loss: 5.6047, edge: 0.1449
Epoch: [ 12], Batch: [ 200/ 475] | Total Time: 3h 22m 40s
d_loss: 1.3869, g_loss: 22.5616, const_loss: 0.0003, l1_loss: 16.1698, fm_loss: 0.0066, perc_loss: 5.5527, edge: 0.1393
Epoch: [ 12], Batch: [ 300/ 475] | Total Time: 3h 26m 10s
d_loss: 1.3867, g_loss: 23.3800, const_loss: 0.0004, l1_loss: 16.7198, fm_loss: 0.0064, perc_loss: 5.8147, edge: 0.1459