
After the idea of letting Marge accept unexpected perf improvements came up, and after looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759 (which failed because a single test, for a single build flavour, crossed the improvement threshold after rebasing), I started wondering: when would accepting an unexpected perf improvement ever backfire?

In practice, either I have a patch that I expect to improve performance for some things, in which case I want to accept whatever gains I get; or I don't expect improvements, in which case it's *maybe* worth failing CI over, in case I accidentally optimized away some code I shouldn't have, or something of that sort.

How could this be actionable? Perhaps having a set of indicators for CI like "Accept allocation decreases" or "Accept residency decreases" would be saner. I have personally *never* gotten value out of the requirement to list the individual tests that improve. Usually a whole lot of them do; some cross the threshold, so I add them. If I'm unlucky, I have to rebase and a new one makes it across the threshold too.

Being able to accept improvements (but not regressions) wholesale might be a reasonable alternative; a rough sketch of what such a check could look like is below. Opinions?
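To make the proposal concrete, here is a minimal, purely hypothetical sketch of the policy (this is not the actual GHC testsuite driver; the function name, parameters, and the `accept_improvements` switch are all made up for illustration): fail CI on regressions past the threshold, but treat any improvement as acceptable without listing the individual tests.

```python
# Hypothetical sketch of a per-metric perf check. Not GHC's real driver code;
# names and parameters are illustrative only.

def check_perf_metric(test_name: str, metric: str,
                      baseline: float, actual: float,
                      threshold_pct: float = 2.0,
                      accept_improvements: bool = True) -> tuple[bool, str]:
    """Return (passed, message) for one perf metric comparison.

    baseline/actual are e.g. allocation or residency figures;
    threshold_pct is the allowed relative change before CI reacts.
    """
    change_pct = (actual - baseline) / baseline * 100.0

    if change_pct > threshold_pct:
        # Regression beyond the threshold: always fail.
        return False, f"{test_name} {metric} regressed by {change_pct:.1f}%"

    if change_pct < -threshold_pct and not accept_improvements:
        # Unexpected improvement: only fail when improvements are not
        # accepted wholesale (roughly today's behaviour, where each
        # improving test has to be listed explicitly).
        return False, (f"{test_name} {metric} improved by "
                       f"{-change_pct:.1f}% (unexpected)")

    return True, f"{test_name} {metric} within threshold ({change_pct:+.1f}%)"
```

With `accept_improvements=True` the rebase scenario above stops being a problem: a test that newly crosses the improvement threshold simply passes, while regressions still fail as before.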