Too precise to be good enough? Applying statistical methods with sound judgement
The mathematician Abraham Wald described statistics as a way of organizing data that allows reasonable decisions to be made in the face of uncertainty. The word "allows" implies that applying statistical methods will not automatically lead to reasonable decisions. Through our choice of statistical methods and a methodical approach, we influence how sure-footed or uncertain the journey will be. I am deliberately leaving the aspect of "ignorance" aside here; suffice it to say that GxP inspectors are rarely pleased when ignorance is the cornerstone of a validation.
In this blog post, I use a case study to show how important statistical procedures are for a meaningful evaluation and, at the same time, how treacherous they can be in the validation and verification of analytical methods. The fact is: very good precision (i.e. very low scatter) of the measurement results is, viewed objectively, something to be rated positively. In this case study, you will be confronted with the thankless situation in which statistical tests "punish" excellent precision. You will also find an answer to the question: how can these unnecessary complications be avoided from the outset, when choosing the acceptance criteria?
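To make the phenomenon concrete before the case study: a minimal sketch, with invented recovery data (the values, the 100 % target, and the 0.4 % bias are illustrative assumptions, not data from the case study). With very low scatter, a classical one-sample t-test declares even a practically irrelevant bias "statistically significant":

```python
import numpy as np
from scipy import stats

# Hypothetical recovery measurements (%) with excellent precision:
# tiny scatter around 100.4 %, i.e. a practically irrelevant bias of ~0.4 %.
recoveries = np.array([100.35, 100.42, 100.38, 100.41, 100.44, 100.39])

# Classical one-sample t-test against the nominal target of 100 %:
t_stat, p_value = stats.ttest_1samp(recoveries, popmean=100.0)

print(f"mean = {recoveries.mean():.2f} %, sd = {recoveries.std(ddof=1):.3f} %")
print(f"t = {t_stat:.1f}, p = {p_value:.1e}")
# Because the scatter is so small, the standard error is tiny and the test
# rejects the null hypothesis (p far below 0.05): the excellent precision
# is "punished", even though a 0.4 % bias may be entirely acceptable.
```

This is exactly why significance tests against a point value make poor acceptance criteria; the less the results scatter, the smaller the (irrelevant) deviation that gets flagged.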