Finished | Engine | Branch | Diff | TC / Nodes | Results | Notes |
---|---|---|---|---|---|---|
World | Reckless | improving-strict-comparison | diff | 8.0+0.08 | LLR: -2.45 (-2.25, 2.89) [0.00, 4.00] Games: 12182 W: 2831 L: 2906 D: 6445 Ptnml(0-2): 61, 1477, 3083, 1416, 54 | apply the improving reduction strictly if we get worse |
World | Reckless | limit-history-move-count | diff | 8.0+0.08 | LLR: -1.63 (-2.25, 2.89) [0.00, 4.00] Games: 11314 W: 2614 L: 2654 D: 6046 Ptnml(0-2): 50, 1367, 2856, 1341, 43 | limit the maximum number of moves for applying history to (8 + depth) (see sketch below the table) |
World | Reckless | increase-cutoff-formula-limit | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 13650 W: 3184 L: 3246 D: 7220 Ptnml(0-2): 62, 1665, 3426, 1617, 55 | increase the cutoff formula limit from 8 to 16 |
World | Reckless | asp-weighted-average | diff | 8.0+0.08 | LLR: -2.34 (-2.25, 2.89) [0.00, 4.00] Games: 20086 W: 4711 L: 4757 D: 10618 Ptnml(0-2): 83, 2450, 5031, 2388, 91 | use (2 * avg + score) / 3 in aspiration windows (see sketch below the table) |
World | Reckless | s1_150_only | diff | 8.0+0.08 | Elo: -14.59 +- 5.83 (95%) [N=4000] Games: 4002 W: 907 L: 1075 D: 2020 Ptnml(0-2): 23, 581, 962, 411, 24 | |
World | Reckless | s1_150_only | diff | N=25000 | LLR: -2.37 (-2.25, 2.89) [0.00, 4.00] Games: 7206 W: 2254 L: 2370 D: 2582 Ptnml(0-2): 257, 844, 1488, 786, 228 | |
World | Reckless | styx_v2_s1_150 | diff | N=25000 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 45046 W: 14285 L: 14241 D: 16520 Ptnml(0-2): 1514, 5168, 9107, 5228, 1506 | is S1_E150 better than main at fixed nodes? |
World | Reckless | hindsight-lmr | diff | 8.0+0.08 | LLR: 3.04 (-2.25, 2.89) [0.00, 4.00] Games: 26350 W: 6332 L: 6113 D: 13905 Ptnml(0-2): 111, 3092, 6554, 3303, 115 | decrease node depth if static score in LMR search gets worse (see sketch below the table) |
World | Reckless | linear-cutoff-count-lmr | diff | 8.0+0.08 | LLR: 3.05 (-2.25, 2.89) [0.00, 4.00] Games: 19856 W: 4816 L: 4615 D: 10425 Ptnml(0-2): 101, 2292, 4932, 2511, 92 | |
World | Reckless | lmr-history-table | diff | 8.0+0.08 | LLR: -2.42 (-2.25, 2.89) [0.00, 4.00] Games: 36118 W: 8625 L: 8626 D: 18867 Ptnml(0-2): 178, 4311, 9070, 4334, 166 | history table for LMR results (stm-from-to) |
World | Reckless | styx_v1 | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 12604 W: 3115 L: 3183 D: 6306 Ptnml(0-2): 93, 1559, 3050, 1523, 77 | Styx's first network (retraining on latest bullet + rounding in quantization) |
World | Reckless | styx_v1 | diff | N=25000 | LLR: -2.30 (-2.25, 2.89) [0.00, 4.00] Games: 73364 W: 23453 L: 23295 D: 26616 Ptnml(0-2): 2489, 8357, 14897, 8385, 2554 | Styx's first network (retraining on latest bullet + rounding in quantization) |
World | Reckless | likely-fail-high | diff | 8.0+0.08 | LLR: -2.28 (-2.25, 2.89) [0.00, 4.00] Games: 32714 W: 7774 L: 7779 D: 17161 Ptnml(0-2): 162, 3857, 8311, 3878, 149 | apply cut node reduction for a potential fail high |
World | Reckless | pawn-hist2 | diff | 8.0+0.08 | LLR: -1.79 (-2.25, 2.89) [0.00, 4.00] Games: 11518 W: 2710 L: 2757 D: 6051 Ptnml(0-2): 50, 1438, 2825, 1401, 45 | static eval pawn history updates |
World | Reckless | nmp-history-2 | diff | 8.0+0.08 | LLR: -2.63 (-2.25, 2.89) [0.00, 4.00] Games: 24740 W: 5978 L: 6024 D: 12738 Ptnml(0-2): 124, 3042, 6078, 3008, 118 | increase history range with offset as compensation |
World | Reckless | likely-fail-high-2 | diff | 8.0+0.08 | LLR: -2.10 (-2.25, 2.89) [0.00, 4.00] Games: 9526 W: 2279 L: 2347 D: 4900 Ptnml(0-2): 52, 1200, 2313, 1160, 38 | more reduction for a potential fail high |
World | Reckless | likely-fail-low | diff | 8.0+0.08 | LLR: -2.38 (-2.25, 2.89) [0.00, 4.00] Games: 4420 W: 1029 L: 1125 D: 2266 Ptnml(0-2): 29, 565, 1104, 497, 15 | no ttpv reduction for a potential fail low |
World | Reckless | granular-ultimate-patch-2 | diff | 8.0+0.08 | LLR: -2.04 (-2.25, 2.89) [0.00, 4.00] Games: 8754 W: 2079 L: 2147 D: 4528 Ptnml(0-2): 43, 1127, 2097, 1075, 35 | scale the ultimate patch linearly - take 2 |
World | Reckless | pawn-hist3 | diff | 8.0+0.08 | LLR: -2.42 (-2.25, 2.89) [0.00, 4.00] Games: 9610 W: 2289 L: 2370 D: 4951 Ptnml(0-2): 42, 1201, 2393, 1134, 35 | factorize pawn history on piece-to |
World | Reckless | granular-ultimate-patch | diff | 8.0+0.08 | LLR: -1.56 (-2.25, 2.89) [0.00, 4.00] Games: 6616 W: 1565 L: 1616 D: 3435 Ptnml(0-2): 18, 840, 1655, 765, 30 | scale the ultimate patch linearly |
World | Reckless | nmp-butterfly-and-pawn-corrhist | diff | 8.0+0.08 | LLR: -1.65 (-2.25, 2.89) [0.00, 4.00] Games: 27170 W: 6546 L: 6539 D: 14085 Ptnml(0-2): 122, 3264, 6799, 3285, 115 | augment butterfly history with correction history based on the pawn structure |
World | Reckless | pv-history-bucket | diff | 8.0+0.08 | LLR: -1.74 (-2.25, 2.89) [0.00, 4.00] Games: 5836 W: 1362 L: 1424 D: 3050 Ptnml(0-2): 31, 729, 1453, 681, 24 | add pv bucket to quiet history |
World | Reckless | pawn-hist | diff | 8.0+0.08 | LLR: -1.01 (-2.25, 2.89) [0.00, 4.00] Games: 6568 W: 1571 L: 1598 D: 3399 Ptnml(0-2): 31, 824, 1611, 777, 41 | pawn history with Pere's fix |
World | Reckless | nmp-butterfly-and-pawn-corrhist-2 | diff | 8.0+0.08 | LLR: -2.10 (-2.25, 2.89) [0.00, 4.00] Games: 14340 W: 3449 L: 3502 D: 7389 Ptnml(0-2): 72, 1747, 3588, 1688, 75 | same as nmp-butterfly-and-pawn-corrhist, but sums histories instead of averaging |
World | Reckless | more-nmp-history | diff | 8.0+0.08 | LLR: -1.44 (-2.25, 2.89) [0.00, 4.00] Games: 22010 W: 5273 L: 5272 D: 11465 Ptnml(0-2): 105, 2639, 5511, 2650, 100 | increase maximum correction |
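
A minimal sketch of the `limit-history-move-count` row above, assuming the idea is to apply history scores only to the first 8 + depth moves searched at a node; the function and parameter names are illustrative, not Reckless's actual code.

```rust
/// Illustrative helper for the `limit-history-move-count` idea: history-based
/// score adjustments are only applied to the first (8 + depth) moves
/// considered at a node.
fn history_applies(moves_searched: usize, depth: i32) -> bool {
    moves_searched < (8 + depth.max(0)) as usize
}

fn main() {
    // At depth 4 the cap is 8 + 4 = 12 moves.
    assert!(history_applies(11, 4));
    assert!(!history_applies(12, 4));
    println!("history move cap at depth 4 = {}", 8 + 4);
}
```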
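A minimal sketch of the `asp-weighted-average` formula above: the aspiration window would be centered on the blend (2 * avg + score) / 3 of the running average score and the latest score. The window construction and the `delta` parameter are assumptions for illustration, not Reckless's implementation.

```rust
/// Illustrative aspiration-window center using the tested blend:
/// two parts running average score, one part latest score.
fn blended_center(avg: i32, score: i32) -> i32 {
    (2 * avg + score) / 3
}

/// Hypothetical window built symmetrically around the blended center.
fn aspiration_window(avg: i32, score: i32, delta: i32) -> (i32, i32) {
    let center = blended_center(avg, score);
    (center - delta, center + delta)
}

fn main() {
    // Example: running average +40 cp, latest score +10 cp, delta 16 cp.
    let (alpha, beta) = aspiration_window(40, 10, 16);
    println!("alpha = {alpha}, beta = {beta}"); // alpha = 14, beta = 46
}
```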
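A minimal sketch of the `hindsight-lmr` row above (the test that passed), assuming "gets worse" means the child's static eval, seen from the parent's point of view, falls below the parent's static eval, in which case the reduced search runs one ply shallower. All names here are illustrative.

```rust
/// Illustrative "hindsight" depth adjustment for a reduced (LMR) search:
/// if the static evaluation after making the move turns out to be worse
/// than it was before the move, search one ply shallower.
fn hindsight_lmr_depth(reduced_depth: i32, parent_eval: i32, child_eval: i32) -> i32 {
    // child_eval is from the child's side to move (the opponent), so negate
    // it to compare from the parent's point of view.
    let eval_after_move = -child_eval;
    if eval_after_move < parent_eval {
        (reduced_depth - 1).max(0)
    } else {
        reduced_depth
    }
}

fn main() {
    // Eval dropped from +30 to +5 after the move: reduce one extra ply.
    assert_eq!(hindsight_lmr_depth(6, 30, -5), 5);
    // Eval improved to +60: keep the original reduced depth.
    assert_eq!(hindsight_lmr_depth(6, 30, -60), 6);
}
```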