zi2zi-pytorch: the effect of training the same images at different resolutions

The main impact is training time: one 512×512 epoch takes roughly 3× as long as a 256×256 epoch (from the logs below, about 1300 s versus about 412 s). Batch size is also a factor: the 512×512 run uses a smaller batch (215 steps per epoch versus 45), so its gradient estimates are noisier and the losses deviate a bit more.

Besides adjusting the resolution, you can also try increasing the generator's final_channels; the image quality improves as well.
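For illustration, here is a minimal sketch (my own toy example, not the actual zi2zi-pytorch generator code) of where a final_channels-style parameter plugs in: it sets the width of the last decoder stage before the output convolution, so raising it from 256 to 512 gives that stage roughly twice the parameters for rendering fine stroke detail.

import torch
import torch.nn as nn

class DecoderTail(nn.Module):
    """Last stage of a hypothetical U-Net-style generator decoder."""
    def __init__(self, in_channels=512, final_channels=256, out_channels=1):
        super().__init__()
        self.block = nn.Sequential(
            # upsample 2x with final_channels feature maps
            nn.ConvTranspose2d(in_channels, final_channels, 4, stride=2, padding=1),
            nn.BatchNorm2d(final_channels),
            nn.ReLU(inplace=True),
            # project down to a single-channel glyph image
            nn.Conv2d(final_channels, out_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.block(x)

small = DecoderTail(final_channels=256)
large = DecoderTail(final_channels=512)
print(sum(p.numel() for p in small.parameters()))  # ~2.1M
print(sum(p.numel() for p in large.parameters()))  # ~4.2M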


256×256 regular style console log:

Epoch: [ 0], [   0/  45] time: 26.14, d_loss: 1.40957, g_loss: 7.95551, category_loss: 0.00000, cheat_loss: 0.77449, const_loss: 0.00980, l1_loss: 7.17123
Epoch: [ 1], [ 0/ 45] time: 431.05, d_loss: 4.29400, g_loss: 8.10423, category_loss: 0.00001, cheat_loss: 1.28892, const_loss: 0.00732, l1_loss: 6.80797
Epoch: [ 2], [ 0/ 45] time: 843.68, d_loss: 1.88517, g_loss: 7.31769, category_loss: 0.00158, cheat_loss: 1.19841, const_loss: 0.00523, l1_loss: 6.11405
Epoch: [ 3], [ 0/ 45] time: 1255.93, d_loss: 2.35987, g_loss: 7.44909, category_loss: 0.00002, cheat_loss: 1.31800, const_loss: 0.00856, l1_loss: 6.12253
Epoch: [ 4], [ 0/ 45] time: 1668.46, d_loss: 2.54748, g_loss: 7.11578, category_loss: 0.00015, cheat_loss: 0.84949, const_loss: 0.00585, l1_loss: 6.26042
Epoch: [ 5], [ 0/ 45] time: 2080.87, d_loss: 2.13288, g_loss: 6.81926, category_loss: 0.00000, cheat_loss: 0.63917, const_loss: 0.00627, l1_loss: 6.17382
Epoch: [ 6], [ 0/ 45] time: 2493.44, d_loss: 1.47338, g_loss: 6.43606, category_loss: 0.00000, cheat_loss: 0.54430, const_loss: 0.00460, l1_loss: 5.88716
Epoch: [ 7], [ 0/ 45] time: 2906.28, d_loss: 1.56289, g_loss: 6.46395, category_loss: 0.00000, cheat_loss: 0.80159, const_loss: 0.00400, l1_loss: 5.65836
Epoch: [ 8], [ 0/ 45] time: 3318.75, d_loss: 1.54652, g_loss: 6.90791, category_loss: 0.00000, cheat_loss: 1.14259, const_loss: 0.00441, l1_loss: 5.76091
Epoch: [ 9], [ 0/ 45] time: 3732.71, d_loss: 1.72232, g_loss: 5.73225, category_loss: 0.00000, cheat_loss: 0.40109, const_loss: 0.00444, l1_loss: 5.32672
Epoch: [10], [ 0/ 45] time: 4145.09, d_loss: 1.63648, g_loss: 6.76048, category_loss: 0.00000, cheat_loss: 0.69323, const_loss: 0.00576, l1_loss: 6.06149
Epoch: [11], [ 0/ 45] time: 4557.65, d_loss: 1.39904, g_loss: 6.41119, category_loss: 0.00000, cheat_loss: 0.73149, const_loss: 0.00420, l1_loss: 5.67549
Epoch: [12], [ 0/ 45] time: 4970.41, d_loss: 1.35578, g_loss: 6.41434, category_loss: 0.00000, cheat_loss: 0.79072, const_loss: 0.00421, l1_loss: 5.61941
Epoch: [13], [ 0/ 45] time: 5383.24, d_loss: 1.46898, g_loss: 6.39344, category_loss: 0.00000, cheat_loss: 0.77842, const_loss: 0.00322, l1_loss: 5.61179
Epoch: [14], [ 0/ 45] time: 5801.82, d_loss: 1.42688, g_loss: 6.04748, category_loss: 0.00000, cheat_loss: 0.64107, const_loss: 0.00423, l1_loss: 5.40218
Epoch: [15], [ 0/ 45] time: 6214.77, d_loss: 1.52659, g_loss: 5.95127, category_loss: 0.00000, cheat_loss: 0.64333, const_loss: 0.00312, l1_loss: 5.30482
Epoch: [16], [ 0/ 45] time: 6627.24, d_loss: 1.40160, g_loss: 6.65553, category_loss: 0.00000, cheat_loss: 0.78474, const_loss: 0.00339, l1_loss: 5.86741
Epoch: [17], [ 0/ 45] time: 7039.85, d_loss: 1.37407, g_loss: 6.09667, category_loss: 0.00000, cheat_loss: 0.73831, const_loss: 0.00325, l1_loss: 5.35511
Epoch: [18], [ 0/ 45] time: 7453.68, d_loss: 1.43629, g_loss: 6.14127, category_loss: 0.00000, cheat_loss: 0.78559, const_loss: 0.00333, l1_loss: 5.35235
Epoch: [19], [ 0/ 45] time: 7865.81, d_loss: 1.46511, g_loss: 6.16847, category_loss: 0.00001, cheat_loss: 0.75095, const_loss: 0.00318, l1_loss: 5.41435
Epoch: [20], [ 0/ 45] time: 8278.32, d_loss: 1.40473, g_loss: 5.65997, category_loss: 0.00000, cheat_loss: 0.61385, const_loss: 0.00304, l1_loss: 5.04308
Epoch: [21], [ 0/ 45] time: 8690.71, d_loss: 1.34687, g_loss: 5.76431, category_loss: 0.00000, cheat_loss: 0.77681, const_loss: 0.00308, l1_loss: 4.98442
Epoch: [22], [ 0/ 45] time: 9103.41, d_loss: 1.33141, g_loss: 6.14065, category_loss: 0.00000, cheat_loss: 0.87175, const_loss: 0.00385, l1_loss: 5.26505
Epoch: [23], [ 0/ 45] time: 9517.54, d_loss: 1.42019, g_loss: 5.76008, category_loss: 0.00000, cheat_loss: 0.65872, const_loss: 0.00289, l1_loss: 5.09846
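Incidentally, these console lines are regular enough to parse programmatically. A small sketch (assuming the log format stays exactly as printed above) for pulling out the cumulative time and d_loss per step, which is how the roughly 3× per-epoch ratio above (≈412 s at 256×256 versus ≈1300 s at 512×512) can be checked:

import re

LINE = re.compile(
    r"Epoch: \[\s*(\d+)\], \[\s*\d+/\s*\d+\] time:\s*([\d.]+), d_loss:\s*([\d.]+)"
)

def parse_log(text):
    """Yield (epoch, elapsed_seconds, d_loss) for each logged step."""
    for m in LINE.finditer(text):
        yield int(m.group(1)), float(m.group(2)), float(m.group(3))

sample = "Epoch: [ 1], [ 0/ 45] time: 431.05, d_loss: 4.29400, g_loss: 8.10423"
print(list(parse_log(sample)))  # [(1, 431.05, 4.294)]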

512×512 regular style:

Epoch: [ 0], [ 100/ 215] time: 606.98, d_loss: 0.31174, g_loss: 13.93359, category_loss: 0.00002, cheat_loss: 3.24874, const_loss: 0.23946, l1_loss: 10.44538
Epoch: [ 0], [ 200/ 215] time: 1212.25, d_loss: 0.45079, g_loss: 14.34433, category_loss: 0.00001, cheat_loss: 2.75728, const_loss: 0.25732, l1_loss: 11.32973
Epoch: [ 1], [ 0/ 215] time: 1305.91, d_loss: 0.18158, g_loss: 15.42831, category_loss: 0.00002, cheat_loss: 4.07555, const_loss: 0.25275, l1_loss: 11.10001
Epoch: [ 1], [ 100/ 215] time: 1912.50, d_loss: 0.25792, g_loss: 12.77551, category_loss: 0.00002, cheat_loss: 1.18476, const_loss: 0.26111, l1_loss: 11.32963
Epoch: [ 1], [ 200/ 215] time: 2519.25, d_loss: 0.08402, g_loss: 17.10276, category_loss: 0.00002, cheat_loss: 4.76643, const_loss: 0.26600, l1_loss: 12.07032
Epoch: [ 2], [ 0/ 215] time: 2606.57, d_loss: 0.16091, g_loss: 16.14471, category_loss: 0.00003, cheat_loss: 4.35494, const_loss: 0.28869, l1_loss: 11.50108
Epoch: [ 2], [ 100/ 215] time: 3216.68, d_loss: 0.17906, g_loss: 14.70552, category_loss: 0.00002, cheat_loss: 3.53643, const_loss: 0.28399, l1_loss: 10.88509
Epoch: [ 2], [ 200/ 215] time: 3829.41, d_loss: 0.93324, g_loss: 14.33587, category_loss: 0.00001, cheat_loss: 2.01551, const_loss: 0.27378, l1_loss: 12.04657
Epoch: [ 3], [ 0/ 215] time: 3917.06, d_loss: 1.41822, g_loss: 11.95745, category_loss: 0.00001, cheat_loss: 1.14869, const_loss: 0.25951, l1_loss: 10.54925
Epoch: [ 3], [ 100/ 215] time: 4528.22, d_loss: 0.16337, g_loss: 15.73500, category_loss: 0.00002, cheat_loss: 3.10425, const_loss: 0.30736, l1_loss: 12.32339
Epoch: [ 3], [ 200/ 215] time: 5133.97, d_loss: 0.24355, g_loss: 16.40785, category_loss: 0.00003, cheat_loss: 3.99776, const_loss: 0.27812, l1_loss: 12.13196
Epoch: [ 4], [ 0/ 215] time: 5220.26, d_loss: 0.12751, g_loss: 16.04850, category_loss: 0.00002, cheat_loss: 4.45992, const_loss: 0.29583, l1_loss: 11.29275
Epoch: [ 4], [ 100/ 215] time: 5820.05, d_loss: 0.23068, g_loss: 14.54076, category_loss: 0.00002, cheat_loss: 2.81461, const_loss: 0.31861, l1_loss: 11.40752
Epoch: [ 4], [ 200/ 215] time: 6424.60, d_loss: 0.33250, g_loss: 12.77896, category_loss: 0.00002, cheat_loss: 1.03109, const_loss: 0.27153, l1_loss: 11.47633
Epoch: [ 5], [ 0/ 215] time: 6511.78, d_loss: 0.35609, g_loss: 17.78728, category_loss: 0.00002, cheat_loss: 5.55148, const_loss: 0.26762, l1_loss: 11.96817
Epoch: [ 5], [ 100/ 215] time: 7119.03, d_loss: 0.20058, g_loss: 11.88620, category_loss: 0.00002, cheat_loss: 1.46443, const_loss: 0.28879, l1_loss: 10.13297
Epoch: [ 5], [ 200/ 215] time: 7732.42, d_loss: 0.02682, g_loss: 15.94793, category_loss: 0.00002, cheat_loss: 3.92855, const_loss: 0.25941, l1_loss: 11.75997
Epoch: [ 6], [ 0/ 215] time: 7820.20, d_loss: 0.25289, g_loss: 13.19630, category_loss: 0.00002, cheat_loss: 1.58104, const_loss: 0.25611, l1_loss: 11.35915
Epoch: [ 6], [ 100/ 215] time: 8428.00, d_loss: 0.07260, g_loss: 14.08982, category_loss: 0.00002, cheat_loss: 2.64803, const_loss: 0.25747, l1_loss: 11.18432
Epoch: [ 6], [ 200/ 215] time: 9039.00, d_loss: 0.19923, g_loss: 13.51740, category_loss: 0.00001, cheat_loss: 2.52061, const_loss: 0.25316, l1_loss: 10.74363
Epoch: [ 7], [ 0/ 215] time: 9129.58, d_loss: 0.10501, g_loss: 14.10990, category_loss: 0.00001, cheat_loss: 3.65700, const_loss: 0.31224, l1_loss: 10.14066
Epoch: [ 7], [ 100/ 215] time: 9740.36, d_loss: 0.37006, g_loss: 13.18886, category_loss: 0.00002, cheat_loss: 1.39241, const_loss: 0.26942, l1_loss: 11.52702
Epoch: [ 7], [ 200/ 215] time: 10352.41, d_loss: 0.03985, g_loss: 16.22905, category_loss: 0.00002, cheat_loss: 4.23743, const_loss: 0.26772, l1_loss: 11.72391
Epoch: [ 8], [ 0/ 215] time: 10440.27, d_loss: 0.03505, g_loss: 16.62378, category_loss: 0.00002, cheat_loss: 3.80674, const_loss: 0.26524, l1_loss: 12.55178

512×512 thin style:

Epoch: [ 0], [ 100/ 221] time: 606.40, d_loss: 0.06194, g_loss: 16.90748, category_loss: 0.00002, cheat_loss: 2.46379, const_loss: 0.28582, l1_loss: 14.15786
Epoch: [ 0], [ 200/ 221] time: 1184.78, d_loss: 1.69232, g_loss: 22.57175, category_loss: 0.00002, cheat_loss: 7.01814, const_loss: 0.37845, l1_loss: 15.17514
Epoch: [ 1], [ 0/ 221] time: 1310.44, d_loss: 0.10237, g_loss: 20.51014, category_loss: 0.00004, cheat_loss: 3.75793, const_loss: 0.38718, l1_loss: 16.36501
Epoch: [ 1], [ 100/ 221] time: 1893.25, d_loss: 1.28714, g_loss: 28.07741, category_loss: 0.00002, cheat_loss: 11.71363, const_loss: 0.34063, l1_loss: 16.02314
Epoch: [ 1], [ 200/ 221] time: 2479.76, d_loss: 0.21078, g_loss: 19.89846, category_loss: 0.00005, cheat_loss: 3.27212, const_loss: 0.40745, l1_loss: 16.21886
Epoch: [ 2], [ 0/ 221] time: 2597.86, d_loss: 0.03788, g_loss: 22.53658, category_loss: 0.00007, cheat_loss: 5.92009, const_loss: 0.39226, l1_loss: 16.22420
Epoch: [ 2], [ 100/ 221] time: 3180.87, d_loss: 0.28448, g_loss: 17.30076, category_loss: 0.00002, cheat_loss: 2.33146, const_loss: 0.47490, l1_loss: 14.49439
Epoch: [ 2], [ 200/ 221] time: 3762.27, d_loss: 0.03560, g_loss: 19.19828, category_loss: 0.00007, cheat_loss: 4.20593, const_loss: 0.36600, l1_loss: 14.62633
Epoch: [ 3], [ 0/ 221] time: 3880.43, d_loss: 0.02630, g_loss: 18.93944, category_loss: 0.00004, cheat_loss: 3.58121, const_loss: 0.41637, l1_loss: 14.94183
Epoch: [ 3], [ 100/ 221] time: 4464.79, d_loss: 0.57211, g_loss: 17.58680, category_loss: 0.00009, cheat_loss: 1.91106, const_loss: 0.37507, l1_loss: 15.30064
Epoch: [ 3], [ 200/ 221] time: 5047.76, d_loss: 2.81330, g_loss: 15.15771, category_loss: 0.00002, cheat_loss: 0.15222, const_loss: 0.39393, l1_loss: 14.61155
Epoch: [ 4], [ 0/ 221] time: 5165.35, d_loss: 0.56849, g_loss: 23.05492, category_loss: 0.00006, cheat_loss: 6.34440, const_loss: 0.37937, l1_loss: 16.33110
Epoch: [ 4], [ 100/ 221] time: 5744.85, d_loss: 0.17806, g_loss: 17.27935, category_loss: 0.00007, cheat_loss: 3.38671, const_loss: 0.34699, l1_loss: 13.54560
Epoch: [ 4], [ 200/ 221] time: 6322.63, d_loss: 0.18185, g_loss: 17.78687, category_loss: 0.00002, cheat_loss: 2.08401, const_loss: 0.43055, l1_loss: 15.27229
Epoch: [ 5], [ 0/ 221] time: 6442.41, d_loss: 2.13744, g_loss: 21.42826, category_loss: 0.00004, cheat_loss: 5.15124, const_loss: 0.37891, l1_loss: 15.89808
Epoch: [ 5], [ 100/ 221] time: 7021.45, d_loss: 0.08028, g_loss: 19.47682, category_loss: 0.00008, cheat_loss: 3.71971, const_loss: 0.36993, l1_loss: 15.38715
Epoch: [ 5], [ 200/ 221] time: 7601.96, d_loss: 0.02521, g_loss: 19.58899, category_loss: 0.00004, cheat_loss: 3.72400, const_loss: 0.37101, l1_loss: 15.49395
Epoch: [ 6], [ 0/ 221] time: 7719.85, d_loss: 0.09147, g_loss: 21.11709, category_loss: 0.00005, cheat_loss: 4.08192, const_loss: 0.35442, l1_loss: 16.68073
Epoch: [ 6], [ 100/ 221] time: 8303.10, d_loss: 0.02701, g_loss: 16.91349, category_loss: 0.00002, cheat_loss: 2.31061, const_loss: 0.36774, l1_loss: 14.23512
Epoch: [ 6], [ 200/ 221] time: 8882.61, d_loss: 0.15102, g_loss: 19.19582, category_loss: 0.00002, cheat_loss: 3.81039, const_loss: 0.35015, l1_loss: 15.03527
Epoch: [ 7], [ 0/ 221] time: 9000.20, d_loss: 0.02309, g_loss: 22.37927, category_loss: 0.00005, cheat_loss: 5.76501, const_loss: 0.32940, l1_loss: 16.28485
Epoch: [ 7], [ 100/ 221] time: 9581.13, d_loss: 0.02461, g_loss: 22.49588, category_loss: 0.00006, cheat_loss: 5.07542, const_loss: 0.35766, l1_loss: 17.06278
Epoch: [ 7], [ 200/ 221] time: 10160.70, d_loss: 0.55793, g_loss: 19.72526, category_loss: 0.00004, cheat_loss: 4.79006, const_loss: 0.33927, l1_loss: 14.59591
Epoch: [ 8], [ 0/ 221] time: 10278.53, d_loss: 0.02843, g_loss: 18.65423, category_loss: 0.00004, cheat_loss: 4.81907, const_loss: 0.31892, l1_loss: 13.51622
Epoch: [ 8], [ 100/ 221] time: 10858.93, d_loss: 0.03031, g_loss: 20.73404, category_loss: 0.00008, cheat_loss: 5.38640, const_loss: 0.32427, l1_loss: 15.02334
Epoch: [ 8], [ 200/ 221] time: 11441.91, d_loss: 0.04380, g_loss: 19.52982, category_loss: 0.00003, cheat_loss: 4.72400, const_loss: 0.33755, l1_loss: 14.46826
Epoch: [ 9], [ 0/ 221] time: 11559.79, d_loss: 0.11075, g_loss: 18.61131, category_loss: 0.00004, cheat_loss: 2.50399, const_loss: 0.35531, l1_loss: 15.75200
Epoch: [ 9], [ 100/ 221] time: 12142.98, d_loss: 0.04427, g_loss: 21.76070, category_loss: 0.00002, cheat_loss: 4.73563, const_loss: 0.31739, l1_loss: 16.70768
Epoch: [ 9], [ 200/ 221] time: 12726.48, d_loss: 0.01125, g_loss: 22.93324, category_loss: 0.00004, cheat_loss: 6.95069, const_loss: 0.32098, l1_loss: 15.66155
Epoch: [10], [ 0/ 221] time: 12853.06, d_loss: 0.15521, g_loss: 18.61334, category_loss: 0.00007, cheat_loss: 4.33218, const_loss: 0.31655, l1_loss: 13.96458
Epoch: [10], [ 100/ 221] time: 13436.05, d_loss: 0.07787, g_loss: 20.53145, category_loss: 0.00007, cheat_loss: 5.18724, const_loss: 0.33911, l1_loss: 15.00507
Epoch: [10], [ 200/ 221] time: 14016.40, d_loss: 0.08501, g_loss: 20.79124, category_loss: 0.00004, cheat_loss: 5.51355, const_loss: 0.31750, l1_loss: 14.96017

256×256 thin style, with final_channels = 256:

Epoch: [ 1], [   0/  47] time: 360.09, d_loss: 1.81648, g_loss: 8.05295, category_loss: 0.00008, cheat_loss: 0.49621, const_loss: 0.00571, l1_loss: 7.55099
Epoch: [ 2], [ 0/ 47] time: 715.98, d_loss: 1.51517, g_loss: 8.47806, category_loss: 0.00969, cheat_loss: 1.26263, const_loss: 0.00942, l1_loss: 7.20600
Epoch: [ 3], [ 0/ 47] time: 1076.88, d_loss: 1.39151, g_loss: 9.45060, category_loss: 0.00330, cheat_loss: 1.59625, const_loss: 0.00699, l1_loss: 7.84735
Epoch: [ 4], [ 0/ 47] time: 1432.70, d_loss: 1.77968, g_loss: 11.19139, category_loss: 0.00004, cheat_loss: 3.08767, const_loss: 0.01036, l1_loss: 8.09336
Epoch: [ 5], [ 0/ 47] time: 1791.91, d_loss: 1.28329, g_loss: 9.48213, category_loss: 0.00007, cheat_loss: 1.64778, const_loss: 0.01058, l1_loss: 7.82376
Epoch: [ 6], [ 0/ 47] time: 2148.23, d_loss: 1.65145, g_loss: 8.50315, category_loss: 0.00051, cheat_loss: 0.59454, const_loss: 0.00790, l1_loss: 7.89413
Epoch: [ 7], [ 0/ 47] time: 2505.78, d_loss: 1.28920, g_loss: 8.97056, category_loss: 0.00000, cheat_loss: 1.22283, const_loss: 0.00823, l1_loss: 7.73950
Epoch: [ 8], [ 0/ 47] time: 2861.63, d_loss: 1.56068, g_loss: 9.06203, category_loss: 0.00219, cheat_loss: 1.63755, const_loss: 0.00642, l1_loss: 7.41806
Epoch: [ 9], [ 0/ 47] time: 3221.33, d_loss: 1.53459, g_loss: 10.47424, category_loss: 0.00001, cheat_loss: 2.48992, const_loss: 0.00709, l1_loss: 7.97724
Decay net_D learning rate from 0.00050 to 0.00025.
Decay net_G learning rate from 0.00050 to 0.00025.
Epoch: [10], [ 0/ 47] time: 3577.16, d_loss: 2.60345, g_loss: 9.48926, category_loss: 0.00114, cheat_loss: 1.03732, const_loss: 0.01239, l1_loss: 8.43954
Epoch: [11], [ 0/ 47] time: 3940.63, d_loss: 1.34707, g_loss: 8.19656, category_loss: 0.00000, cheat_loss: 0.87049, const_loss: 0.00583, l1_loss: 7.32025
Epoch: [12], [ 0/ 47] time: 4296.30, d_loss: 0.87700, g_loss: 10.14889, category_loss: 0.00126, cheat_loss: 2.26760, const_loss: 0.00385, l1_loss: 7.87743
Epoch: [13], [ 0/ 47] time: 4655.38, d_loss: 1.65686, g_loss: 8.37238, category_loss: 0.00000, cheat_loss: 0.80900, const_loss: 0.00472, l1_loss: 7.55866
Epoch: [14], [ 0/ 47] time: 5011.44, d_loss: 0.78299, g_loss: 10.14690, category_loss: 0.00000, cheat_loss: 2.28178, const_loss: 0.00397, l1_loss: 7.86115
Epoch: [15], [ 0/ 47] time: 5369.26, d_loss: 0.77644, g_loss: 9.11055, category_loss: 0.00001, cheat_loss: 1.03779, const_loss: 0.00357, l1_loss: 8.06919
Epoch: [16], [ 0/ 47] time: 5725.36, d_loss: 1.03227, g_loss: 8.24855, category_loss: 0.00000, cheat_loss: 0.61025, const_loss: 0.00747, l1_loss: 7.63083
Epoch: [17], [ 0/ 47] time: 6081.56, d_loss: 0.84785, g_loss: 8.97112, category_loss: 0.00000, cheat_loss: 1.49794, const_loss: 0.00429, l1_loss: 7.46889
Epoch: [18], [ 0/ 47] time: 6443.13, d_loss: 0.82620, g_loss: 8.52151, category_loss: 0.00000, cheat_loss: 1.33499, const_loss: 0.00764, l1_loss: 7.17889
Epoch: [19], [ 0/ 47] time: 6799.21, d_loss: 0.70555, g_loss: 10.75724, category_loss: 0.00000, cheat_loss: 2.19755, const_loss: 0.00824, l1_loss: 8.55145
Decay net_D learning rate from 0.00025 to 0.00013.
Decay net_G learning rate from 0.00025 to 0.00013.
Epoch: [20], [ 0/ 47] time: 7156.61, d_loss: 2.74217, g_loss: 10.73975, category_loss: 0.00000, cheat_loss: 2.18354, const_loss: 0.01205, l1_loss: 8.54416
Epoch: [21], [ 0/ 47] time: 7512.87, d_loss: 0.90747, g_loss: 9.45579, category_loss: 0.00000, cheat_loss: 1.69721, const_loss: 0.00558, l1_loss: 7.75300
Epoch: [22], [ 0/ 47] time: 7870.22, d_loss: 0.58048, g_loss: 10.02434, category_loss: 0.00004, cheat_loss: 1.84721, const_loss: 0.00537, l1_loss: 8.17176
Epoch: [23], [ 0/ 47] time: 8226.16, d_loss: 0.85992, g_loss: 8.82779, category_loss: 0.00000, cheat_loss: 0.76111, const_loss: 0.00522, l1_loss: 8.06147
Epoch: [24], [ 0/ 47] time: 8586.65, d_loss: 0.48779, g_loss: 9.96469, category_loss: 0.00000, cheat_loss: 1.76908, const_loss: 0.00574, l1_loss: 8.18987
Epoch: [25], [ 0/ 47] time: 8942.70, d_loss: 0.76323, g_loss: 11.38750, category_loss: 0.00000, cheat_loss: 3.29595, const_loss: 0.00522, l1_loss: 8.08633
Epoch: [26], [ 0/ 47] time: 9300.20, d_loss: 0.62175, g_loss: 9.00046, category_loss: 0.00000, cheat_loss: 1.32260, const_loss: 0.00590, l1_loss: 7.67196
Epoch: [27], [ 0/ 47] time: 9656.29, d_loss: 0.41603, g_loss: 10.93949, category_loss: 0.00000, cheat_loss: 2.48615, const_loss: 0.00598, l1_loss: 8.44735

Quite interesting: it turns out each learning-rate decay makes d_loss suddenly jump back up.
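The decay messages in the log correspond to halving the learning rate every 10 epochs (0.00050 → 0.00025 → 0.000125, printed as 0.00013). A minimal sketch of the same schedule, assuming a plain PyTorch StepLR stands in for whatever the repo does internally:

import torch
from torch.optim.lr_scheduler import StepLR

param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.Adam([param], lr=5e-4)      # matches the logged 0.00050
sched = StepLR(opt, step_size=10, gamma=0.5)  # halve every 10 epochs

for epoch in range(20):
    opt.step()    # stand-in for a full epoch of D/G updates
    sched.step()
print(opt.param_groups[0]["lr"])  # 0.000125, shown in the log as 0.00013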

256×256 thin style, with final_channels raised from 256 to 512:

Epoch: [ 1], [ 0/ 47] time: 361.85, d_loss: 8.39628, g_loss: 8.69999, category_loss: 0.00000, cheat_loss: 0.31410, const_loss: 0.01050, l1_loss: 8.37539
Epoch: [ 2], [ 0/ 47] time: 719.53, d_loss: 4.36914, g_loss: 11.11966, category_loss: 0.00000, cheat_loss: 3.60200, const_loss: 0.00824, l1_loss: 7.50941
Epoch: [ 3], [ 0/ 47] time: 1080.39, d_loss: 4.45263, g_loss: 8.06205, category_loss: 0.00000, cheat_loss: 0.47213, const_loss: 0.00634, l1_loss: 7.58358
Epoch: [ 4], [ 0/ 47] time: 1438.55, d_loss: 8.08992, g_loss: 13.24943, category_loss: 0.00008, cheat_loss: 5.26327, const_loss: 0.01430, l1_loss: 7.97185
Epoch: [ 5], [ 0/ 47] time: 1798.56, d_loss: 3.60792, g_loss: 9.88743, category_loss: 0.09016, cheat_loss: 2.15218, const_loss: 0.00835, l1_loss: 7.72689
Epoch: [ 6], [ 0/ 47] time: 2156.66, d_loss: 2.63320, g_loss: 8.65966, category_loss: 0.03130, cheat_loss: 0.41417, const_loss: 0.01390, l1_loss: 8.23007
Epoch: [ 7], [ 0/ 47] time: 2517.04, d_loss: 2.33885, g_loss: 8.99310, category_loss: 0.00000, cheat_loss: 0.96819, const_loss: 0.00884, l1_loss: 8.01606
Epoch: [ 8], [ 0/ 47] time: 2874.65, d_loss: 3.12392, g_loss: 9.07584, category_loss: 0.00826, cheat_loss: 1.57318, const_loss: 0.01214, l1_loss: 7.49052
Epoch: [ 9], [ 0/ 47] time: 3239.68, d_loss: 4.68467, g_loss: 8.65565, category_loss: 0.74174, cheat_loss: 0.56121, const_loss: 0.01337, l1_loss: 8.07804
Decay net_D learning rate from 0.00050 to 0.00025.
Decay net_G learning rate from 0.00050 to 0.00025.
Epoch: [10], [ 0/ 47] time: 3597.89, d_loss: 2.75090, g_loss: 9.95582, category_loss: 0.33848, cheat_loss: 1.28554, const_loss: 0.01345, l1_loss: 8.62938
Epoch: [11], [ 0/ 47] time: 3963.18, d_loss: 2.89727, g_loss: 9.85377, category_loss: 0.01938, cheat_loss: 1.51332, const_loss: 0.00609, l1_loss: 8.33435
Epoch: [12], [ 0/ 47] time: 4349.42, d_loss: 1.54349, g_loss: 8.00276, category_loss: 0.00802, cheat_loss: 0.84313, const_loss: 0.00565, l1_loss: 7.15398
Epoch: [13], [ 0/ 47] time: 4738.65, d_loss: 2.28097, g_loss: 9.60725, category_loss: 0.00000, cheat_loss: 1.86188, const_loss: 0.00477, l1_loss: 7.74061

After about 100 epochs, the light style's d_loss sits between 1.5 and 5.0, and the inference results feel about the same: passable but not great. The good news is that the generated output has not collapsed. Previously, with final_channels = 1, continued training would produce speckle noise, wavy lines, and loss of stroke detail (round stroke ends turning square), and at that point d_loss fell below 0.01 and kept dropping the longer training ran, yet the inference results only got worse!
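Given that observation, d_loss makes a usable health signal: a run whose d_loss collapses below ~0.01 and keeps falling is worth stopping and inspecting, while the 1.5 to 5.0 band accompanied stable results here. A hedged sketch of such a guard (the window and floor values are arbitrary choices of mine, not anything from zi2zi-pytorch):

from collections import deque

class DLossWatch:
    """Track a moving average of d_loss and flag when it collapses."""
    def __init__(self, window=50, floor=0.01):
        self.vals = deque(maxlen=window)
        self.floor = floor

    def update(self, d_loss):
        self.vals.append(d_loss)
        # False -> D has likely overpowered G; consider stopping the run
        return sum(self.vals) / len(self.vals) >= self.floor

watch = DLossWatch(window=3, floor=0.01)
for d in [1.4, 0.009, 0.005, 0.004]:
    print(d, watch.update(d))  # the last update returns False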

256×256 thin style, with final_channels = 512, trained for another 50 minutes (7 epochs):

Epoch: [ 1], [ 0/ 49] time: 465.70, d_loss: 2.23387, g_loss: 8.52372, category_loss: 0.00467, cheat_loss: 1.42327, const_loss: 0.00785, l1_loss: 7.09260
Epoch: [ 2], [ 0/ 49] time: 924.72, d_loss: 2.96535, g_loss: 10.29011, category_loss: 0.00000, cheat_loss: 2.88320, const_loss: 0.00995, l1_loss: 7.39182
Epoch: [ 3], [ 0/ 49] time: 1381.85, d_loss: 2.98210, g_loss: 8.13968, category_loss: 0.00000, cheat_loss: 0.77989, const_loss: 0.00660, l1_loss: 7.35319
Epoch: [ 4], [ 0/ 49] time: 1828.72, d_loss: 1.38947, g_loss: 8.37554, category_loss: 0.00770, cheat_loss: 1.35332, const_loss: 0.00569, l1_loss: 7.01502
Epoch: [ 5], [ 0/ 49] time: 2282.06, d_loss: 1.52924, g_loss: 8.68131, category_loss: 0.00000, cheat_loss: 1.28559, const_loss: 0.00723, l1_loss: 7.38849
Epoch: [ 6], [ 0/ 49] time: 2728.68, d_loss: 1.57393, g_loss: 10.14317, category_loss: 0.00340, cheat_loss: 1.39852, const_loss: 0.04536, l1_loss: 8.69929
Epoch: [ 7], [ 0/ 49] time: 3181.39, d_loss: 1.27554, g_loss: 9.30466, category_loss: 0.00881, cheat_loss: 1.36761, const_loss: 0.02494, l1_loss: 7.91210
