Finished

Instance | Engine | Branch | Diff | TC / Nodes | Results | Notes
---|---|---|---|---|---|---|
World | Reckless | likely-fail-low-2 | diff | 8.0+0.08 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 50734 W: 11931 L: 11881 D: 26922 Ptnml(0-2): 213, 6058, 12754, 6150, 192 | |
World | Reckless | history-pruning-6 | diff | 8.0+0.08 | LLR: -2.37 (-2.25, 2.89) [0.00, 4.00] Games: 49504 W: 11576 L: 11535 D: 26393 Ptnml(0-2): 209, 5860, 12543, 5961, 179 | take 6, depth <= 2 |
World | Reckless | lmr-improvement-2 | diff | 8.0+0.08 | LLR: -2.28 (-2.25, 2.89) [0.00, 4.00] Games: 19230 W: 4481 L: 4527 D: 10222 Ptnml(0-2): 75, 2368, 4772, 2328, 72 | reduction += 512 - improvement |
World | Reckless | history-pruning-4 | diff | 8.0+0.08 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 12112 W: 2773 L: 2839 D: 6500 Ptnml(0-2): 46, 1480, 3082, 1390, 58 | take 4, only conthists |
World | Reckless | lmr-improvement | diff | 8.0+0.08 | LLR: -2.29 (-2.25, 2.89) [0.00, 4.00] Games: 23086 W: 5406 L: 5441 D: 12239 Ptnml(0-2): 89, 2863, 5699, 2778, 114 | reduction += 928 - improvement / 2 |
World | Reckless | fp-improving | diff | 8.0+0.08 | LLR: -2.29 (-2.25, 2.89) [0.00, 4.00] Games: 10768 W: 2470 L: 2542 D: 5756 Ptnml(0-2): 48, 1324, 2715, 1246, 51 | less futility pruning when improving (100 * improving) |
World | Reckless | history-pruning-3 | diff | 8.0+0.08 | LLR: -2.29 (-2.25, 2.89) [0.00, 4.00] Games: 10758 W: 2505 L: 2577 D: 5676 Ptnml(0-2): 56, 1326, 2662, 1304, 31 | history < -1000 * depth - 1000 (see the history-pruning sketch below the table) |
World | Reckless | more-rfp | diff | 8.0+0.08 | LLR: -2.25 (-2.25, 2.89) [0.00, 4.00] Games: 5614 W: 1271 L: 1357 D: 2986 Ptnml(0-2): 26, 724, 1387, 650, 20 | increase depth limit from 8 to 14 |
World | Reckless | v14-46e0bb9a | diff | N=25000 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 24642 W: 7673 L: 7711 D: 9258 Ptnml(0-2): 790, 2899, 4978, 2867, 787 | test fixed nodes for Styx's passed net |
World | Reckless | fix-probcut-2 | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [-5.00, 0.00] Games: 3464 W: 751 L: 848 D: 1865 Ptnml(0-2): 26, 454, 852, 391, 9 | generate quiets when in check |
World | Reckless | history-quant | diff | 8.0+0.08 | LLR: -2.04 (-2.25, 2.89) [0.00, 4.00] Games: 18688 W: 4416 L: 4453 D: 9819 Ptnml(0-2): 105, 2270, 4611, 2273, 85 | test sg's idea on history quantization |
World | Reckless | lmr-depth | diff | 8.0+0.08 | LLR: -2.03 (-2.25, 2.89) [0.00, 4.00] Games: 4638 W: 1048 L: 1127 D: 2463 Ptnml(0-2): 20, 610, 1132, 543, 14 | perform lmr for depth >= 2 |
World | Reckless | styx_v2_s1_150 | diff | 8.0+0.08 | LLR: -1.36 (-2.25, 2.89) [0.00, 4.00] Games: 28194 W: 7016 L: 6992 D: 14186 Ptnml(0-2): 197, 3357, 6938, 3435, 170 | S1_E150 vs. main |
World | Reckless | improving-strict-comparison | diff | 8.0+0.08 | LLR: -2.45 (-2.25, 2.89) [0.00, 4.00] Games: 12182 W: 2831 L: 2906 D: 6445 Ptnml(0-2): 61, 1477, 3083, 1416, 54 | apply the improving reduction strictly if we get worse |
World | Reckless | limit-history-move-count | diff | 8.0+0.08 | LLR: -1.63 (-2.25, 2.89) [0.00, 4.00] Games: 11314 W: 2614 L: 2654 D: 6046 Ptnml(0-2): 50, 1367, 2856, 1341, 43 | limit the maximum number of moves for applying history to (8 + depth) |
World | Reckless | increase-cutoff-formula-limit | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 13650 W: 3184 L: 3246 D: 7220 Ptnml(0-2): 62, 1665, 3426, 1617, 55 | 8 -> 16 |
World | Reckless | asp-weighted-average | diff | 8.0+0.08 | LLR: -2.34 (-2.25, 2.89) [0.00, 4.00] Games: 20086 W: 4711 L: 4757 D: 10618 Ptnml(0-2): 83, 2450, 5031, 2388, 91 | use (2 * avg + score) / 3 in aspiration windows (see the aspiration-window sketch below the table) |
World | Reckless | s1_150_only | diff | 8.0+0.08 | Elo: -14.59 +- 5.83 (95%) [N=4000] Games: 4002 W: 907 L: 1075 D: 2020 Ptnml(0-2): 23, 581, 962, 411, 24 | |
World | Reckless | s1_150_only | diff | N=25000 | LLR: -2.37 (-2.25, 2.89) [0.00, 4.00] Games: 7206 W: 2254 L: 2370 D: 2582 Ptnml(0-2): 257, 844, 1488, 786, 228 | |
World | Reckless | styx_v2_s1_150 | diff | N=25000 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 45046 W: 14285 L: 14241 D: 16520 Ptnml(0-2): 1514, 5168, 9107, 5228, 1506 | S1_E150 vs. main: better on fixed nodes? |
World | Reckless | hindsight-lmr | diff | 8.0+0.08 | LLR: 3.04 (-2.25, 2.89) [0.00, 4.00] Games: 26350 W: 6332 L: 6113 D: 13905 Ptnml(0-2): 111, 3092, 6554, 3303, 115 | decrease node depth if static score in LMR search gets worse |
World | Reckless | linear-cutoff-count-lmr | diff | 8.0+0.08 | LLR: 3.05 (-2.25, 2.89) [0.00, 4.00] Games: 19856 W: 4816 L: 4615 D: 10425 Ptnml(0-2): 101, 2292, 4932, 2511, 92 | |
World | Reckless | lmr-history-table | diff | 8.0+0.08 | LLR: -2.42 (-2.25, 2.89) [0.00, 4.00] Games: 36118 W: 8625 L: 8626 D: 18867 Ptnml(0-2): 178, 4311, 9070, 4334, 166 | history table for LMR results (stm-from-to) |
World | Reckless | styx_v1 | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 12604 W: 3115 L: 3183 D: 6306 Ptnml(0-2): 93, 1559, 3050, 1523, 77 | Styx's first network (retraining on latest bullet + rounding in quantization) |
World | Reckless | styx_v1 | diff | N=25000 | LLR: -2.30 (-2.25, 2.89) [0.00, 4.00] Games: 73364 W: 23453 L: 23295 D: 26616 Ptnml(0-2): 2489, 8357, 14897, 8385, 2554 | Styx's first network (retraining on latest bullet + rounding in quantization) |
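
The `history-pruning-3` row above tests the condition `history < -1000 * depth - 1000`. Below is a minimal, self-contained Rust sketch of that condition only; the function name and the demo in `main` are hypothetical and do not reflect Reckless's actual code.

```rust
// Hypothetical illustration of the depth-scaled history-pruning margin
// tried in `history-pruning-3` (not Reckless's implementation).
fn should_history_prune(history: i32, depth: i32) -> bool {
    // Prune a quiet move once its history score drops below a margin
    // that loosens linearly with depth: history < -1000 * depth - 1000.
    history < -1000 * depth - 1000
}

fn main() {
    // At depth 3 the threshold is -4000, so a history score of -4500 is pruned
    // while -3500 is kept.
    assert!(should_history_prune(-4500, 3));
    assert!(!should_history_prune(-3500, 3));
}
```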
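The `asp-weighted-average` row tests centering the next aspiration window on `(2 * avg + score) / 3` rather than on the latest score alone. The Rust sketch below only illustrates that blend; the names (`window_center`, `delta`) and the surrounding usage are assumptions, not Reckless's code.

```rust
// Hypothetical illustration of the `asp-weighted-average` idea:
// blend the running average score with the latest search score
// to pick the center of the next aspiration window.
fn window_center(running_avg: i32, latest_score: i32) -> i32 {
    (2 * running_avg + latest_score) / 3
}

fn main() {
    // With a running average of 30 cp and a fresh score of 90 cp,
    // the next window is centered on 50 cp instead of 90 cp.
    let center = window_center(30, 90);
    let delta = 20; // hypothetical initial half-width of the window
    println!("window: [{}, {}]", center - delta, center + delta);
    assert_eq!(center, 50);
}
```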