“At a conference a few months before the pandemic, a scholar told me how, in his department, everyone wrote lengthy pre-analysis plans that would, in theory, constrain P-hacking. In practice, he admitted, researchers could give cherry picking free rein, counting on the fact that no one has the time or patience to read a 100-page pre-analysis plan and compare it with the later publication.”
This is from an opinion piece in Nature about the reproducibility of science in general, but the sentiment holds for clinical research (arguably, clinical research is even worse: those who hail the results of a clinical trial as game-changing often can’t be bothered to review even a brief ClinicalTrials.gov registration and its history of changes, let alone a 100-page document). You could argue that open analysis plans at least give authors something to fear, since a deviation from the plan could result in negative letters to the Editor, retractions, or, worse yet, shaming on social networks. In practice, it is those who call out deviations who have more to fear: name calling, Twitter mobs, and employer @-mentions for… being too aggressive, I guess?
Once you incentivize scientists to produce hundreds of pages of text that no human will ever read and no algorithm will parse just so a box or two can be checked, do not be surprised at the outcome.