| Finished | Engine | Branch | Diff | TC | Result | Notes |
|---|---|---|---|---|---|---|
| World | Reckless | lmp-qsearch | diff | 8.0+0.08 | LLR: 2.96 (-2.25, 2.89) [0.00, 4.00] Games: 10770 W: 2661 L: 2490 D: 5619 Ptnml(0-2): 52, 1246, 2625, 1403, 59 | search at most 2 moves in qsearch (sketch below) |
| World | Reckless | l1-640 | diff | 8.0+0.08 | LLR: 2.89 (-2.25, 2.89) [0.00, 4.00] Games: 6676 W: 1758 L: 1603 D: 3315 Ptnml(0-2): 36, 725, 1668, 866, 43 | |
| World | Reckless | null-move-history-3 | diff | 8.0+0.08 | LLR: -1.21 (-2.25, 2.89) [0.00, 4.00] Games: 9438 W: 2227 L: 2254 D: 4957 Ptnml(0-2): 42, 1184, 2302, 1141, 50 | |
| World | Reckless | history-alpha-raise-2 | diff | 8.0+0.08 | LLR: -0.80 (-2.25, 2.89) [0.00, 4.00] Games: 5200 W: 1241 L: 1262 D: 2697 Ptnml(0-2): 27, 631, 1313, 594, 35 | update quiet history on exact nodes too (sketch below) |
| World | Reckless | history-alpha-raise | diff | 8.0+0.08 | LLR: -2.27 (-2.25, 2.89) [0.00, 4.00] Games: 2704 W: 596 L: 691 D: 1417 Ptnml(0-2): 11, 367, 693, 268, 13 | update all histories on exact nodes too |
| World | Reckless | null-move-history-2 | diff | 8.0+0.08 | LLR: -0.86 (-2.25, 2.89) [0.00, 4.00] Games: 10002 W: 2407 L: 2416 D: 5179 Ptnml(0-2): 53, 1201, 2503, 1190, 54 | adjust NMP margin by null-move history (indexed by stm & previous move), clamped to (-64, 32) (sketch below) |
| World | Reckless | nmp-improving-2 | diff | 8.0+0.08 | LLR: -1.46 (-2.25, 2.89) [0.00, 4.00] Games: 10652 W: 2491 L: 2525 D: 5636 Ptnml(0-2): 50, 1276, 2699, 1260, 41 | reduce NMP margin by 60 when improving |
| World | Reckless | simd-l1-640 | diff | 8.0+0.08 | LLR: 2.39 (-2.25, 2.89) [0.00, 4.00] Games: 8044 W: 2051 L: 1915 D: 4078 Ptnml(0-2): 43, 904, 1999, 1026, 50 | L1=640 with manual AVX2 (sketch below) |
| World | Reckless | simd-force-v3 | diff | 8.0+0.08 | LLR: 2.92 (-2.25, 2.89) [-5.00, 0.00] Games: 8926 W: 2128 L: 2059 D: 4739 Ptnml(0-2): 34, 939, 2447, 1010, 33 | test AVX2 against main (enforce v3) |
| World | Reckless | simd-force-v2 | diff | 8.0+0.08 | LLR: 2.89 (-2.25, 2.89) [0.00, 4.00] Games: 1328 W: 387 L: 250 D: 691 Ptnml(0-2): 3, 92, 346, 211, 12 | test SSE4.1 against main (enforce v2) |
| World | Reckless | razoring-improving-3 | diff | 8.0+0.08 | LLR: -1.93 (-2.25, 2.89) [0.00, 4.00] Games: 4546 W: 1050 L: 1126 D: 2370 Ptnml(0-2): 28, 593, 1095, 541, 16 | clamp(improvement, -512, 512) / 3 (sketch below) |
| World | Reckless | nmp_r_corrplex4 | diff | 8.0+0.08 | LLR: -1.73 (-2.25, 2.89) [0.00, 4.00] Games: 2172 W: 481 L: 555 D: 1136 Ptnml(0-2): 12, 304, 524, 238, 8 | 5 - correction_value / 150 after rebase (sketch below) |
| World | Reckless | razoring-improving-2 | diff | 8.0+0.08 | LLR: -1.16 (-2.25, 2.89) [0.00, 4.00] Games: 5898 W: 1382 L: 1417 D: 3099 Ptnml(0-2): 23, 739, 1464, 696, 27 | 250 * depth * improving |
| World | Reckless | pcm-prior-reduction | diff | 8.0+0.08 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 45682 W: 10695 L: 10660 D: 24327 Ptnml(0-2): 178, 5505, 11453, 5514, 191 | as Pere mentioned, the depth is reduced in the current ply, so undo it proportionally to the prior reduction |
| World | Reckless | two-killers | diff | 8.0+0.08 | LLR: -2.25 (-2.25, 2.89) [0.00, 4.00] Games: 14214 W: 3322 L: 3382 D: 7510 Ptnml(0-2): 62, 1767, 3502, 1721, 55 | |
| World | Reckless | more-base-reduction | diff | 8.0+0.08 | LLR: -2.25 (-2.25, 2.89) [0.00, 4.00] Games: 17298 W: 4076 L: 4127 D: 9095 Ptnml(0-2): 88, 2119, 4278, 2084, 80 | increase base LMR reduction by 200 (1/5th depth) |
| World | Reckless | new-depth-thing | diff | 8.0+0.08 | LLR: -2.25 (-2.25, 2.89) [0.00, 4.00] Games: 25446 W: 6040 L: 6066 D: 13340 Ptnml(0-2): 120, 3070, 6363, 3056, 114 | |
| World | Reckless | simplify-nmp-formula | diff | 8.0+0.08 | LLR: 2.90 (-2.25, 2.89) [-4.50, 0.00] Games: 11966 W: 2889 L: 2815 D: 6262 Ptnml(0-2): 50, 1291, 3231, 1357, 54 | simplify NMP formula regarding TT move |
| World | Reckless | L1-640 | diff | 40.0+0.40 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 6692 W: 1569 L: 1649 D: 3474 Ptnml(0-2): 16, 845, 1696, 781, 8 | |
| World | Reckless | lmr-cn-tt-move | diff | 8.0+0.08 | LLR: -2.40 (-2.25, 2.89) [0.00, 4.00] Games: 5672 W: 1311 L: 1405 D: 2956 Ptnml(0-2): 27, 747, 1382, 653, 27 | |
| World | Reckless | lmr-neg-ext | diff | 8.0+0.08 | LLR: -2.33 (-2.25, 2.89) [0.00, 4.00] Games: 5088 W: 1147 L: 1238 D: 2703 Ptnml(0-2): 29, 645, 1276, 576, 18 | naively use SE (singular extension) results in LMR |
| World | Reckless | eval-scale | diff | 8.0+0.08 | LLR: -2.26 (-2.25, 2.89) [0.00, 4.00] Games: 6530 W: 1505 L: 1589 D: 3436 Ptnml(0-2): 29, 840, 1611, 756, 29 | use 90% of eval |
| World | Reckless | rfp-tt-quiet | diff | 8.0+0.08 | LLR: -1.48 (-2.25, 2.89) [0.00, 4.00] Games: 3142 W: 703 L: 763 D: 1676 Ptnml(0-2): 25, 411, 744, 381, 10 | guard RFP when the TT move is quiet |
| World | Reckless | L1-640 | diff | 8.0+0.08 | LLR: -1.71 (-2.25, 2.89) [0.00, 4.00] Games: 2276 W: 522 L: 593 D: 1161 Ptnml(0-2): 9, 303, 589, 224, 13 | train NNUE with 128 more neurons in L1 |
| World | Reckless | L1-640 | diff | N=25000 | LLR: 2.91 (-2.25, 2.89) [0.00, 4.00] Games: 4374 W: 1474 L: 1274 D: 1626 Ptnml(0-2): 135, 455, 841, 587, 169 | train NNUE with 128 more neurons in L1 |
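A minimal sketch of the passing lmp-qsearch idea: once two moves have been searched in quiescence, skip the rest of the move list. The function name and the `in_check` / `moves_searched` parameters are illustrative assumptions, not Reckless's actual code.

```rust
// Hypothetical pruning condition for the qsearch move loop; only the
// "at most 2 moves" rule comes from the test note.
fn qsearch_should_prune(moves_searched: usize, in_check: bool) -> bool {
    // Skip the remaining moves after two have been searched,
    // unless we are evading check.
    !in_check && moves_searched >= 2
}
```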
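The two history-alpha-raise tests extend history updates from fail-high nodes to exact nodes (the best move raised alpha without failing high). A sketch of the gating condition, with an assumed `Bound` type:

```rust
enum Bound {
    Upper,
    Exact,
    Lower,
}

// Previously histories were updated only on fail-high (Lower bound);
// history-alpha-raise(-2) also updates them when the node ends up Exact.
fn should_update_history(bound: Bound) -> bool {
    matches!(bound, Bound::Lower | Bound::Exact)
}
```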
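null-move-history-2 and nmp-improving-2 both tweak the NMP margin. A combined sketch: the `30 * depth` base is a placeholder, and only the clamped history term and the -60 improving term come from the notes.

```rust
fn nmp_margin(depth: i32, improving: bool, null_move_history: i32) -> i32 {
    // Placeholder base margin, not from the source.
    let mut margin = 30 * depth;
    // null-move-history-2: history entry indexed by (stm, previous move),
    // clamped to the (-64, 32) window from the note.
    margin += null_move_history.clamp(-64, 32);
    // nmp-improving-2: shrink the margin by 60 when the static eval is improving.
    if improving {
        margin -= 60;
    }
    margin
}
```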
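The simd-* rows cover hand-written SIMD for the larger L1 ("v2"/"v3" presumably refer to the x86-64-v2/SSE4.1 and x86-64-v3/AVX2 feature levels). The kernel below only illustrates the kind of AVX2 code involved, an i16 dot product via `_mm256_madd_epi16`, and is not Reckless's actual inference path.

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn dot_i16_avx2(a: &[i16], b: &[i16]) -> i32 {
    use std::arch::x86_64::*;

    debug_assert_eq!(a.len(), b.len());
    debug_assert_eq!(a.len() % 16, 0);

    let mut acc = _mm256_setzero_si256();
    for i in (0..a.len()).step_by(16) {
        let va = _mm256_loadu_si256(a.as_ptr().add(i) as *const __m256i);
        let vb = _mm256_loadu_si256(b.as_ptr().add(i) as *const __m256i);
        // Multiply i16 pairs and add adjacent products into i32 lanes.
        acc = _mm256_add_epi32(acc, _mm256_madd_epi16(va, vb));
    }

    // Horizontal sum of the eight i32 lanes.
    let mut lanes = [0i32; 8];
    _mm256_storeu_si256(lanes.as_mut_ptr() as *mut __m256i, acc);
    lanes.iter().sum()
}
```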
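The two razoring rows vary the razoring margin. `RAZORING_BASE` below is an assumed constant; only the adjustment terms follow the notes of razoring-improving-3 and razoring-improving-2 respectively.

```rust
const RAZORING_BASE: i32 = 300; // placeholder, not from the source

// razoring-improving-3: shift the margin by a clamped improvement term.
fn razoring_margin_v3(depth: i32, improvement: i32) -> i32 {
    RAZORING_BASE * depth + improvement.clamp(-512, 512) / 3
}

// razoring-improving-2: widen the margin by 250 * depth when improving.
fn razoring_margin_v2(depth: i32, improving: bool) -> i32 {
    RAZORING_BASE * depth + 250 * depth * i32::from(improving)
}
```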
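nmp_r_corrplex4 ties the null-move reduction to the correction-history value. Only the `5 - correction_value / 150` term appears in the note; the depth and eval terms usually added on top are omitted here.

```rust
// Sketch: base null-move reduction adjusted by correction history.
fn nmp_reduction(correction_value: i32) -> i32 {
    5 - correction_value / 150
}
```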