Consider the following story from Wikipedia’s article on the likelihood principle:
'An engineer draws a random sample of electron tubes and measures their voltage. The measurements range from 75 to 99 volts. A statistician computes the sample mean and a confidence interval for the true mean. Later the statistician discovers that the voltmeter reads only as far as 100, so the population appears to be 'censored'. This necessitates a new analysis, if the statistician is orthodox. However, the engineer says he has another meter reading to 1000 volts, which he would have used if any voltage had been over 100. This is a relief to the statistician, because it means the population was effectively uncensored after all. But, the next day the engineer informs the statistician that this second meter was not working at the time of the measuring. The statistician ascertains that the engineer would not have held up the measurements until the meter was fixed, and informs him that new measurements are required. The engineer is astounded. "Next you'll be asking about my oscilloscope".'
The statistician probably used a hypothesis test, where the whole distribution of possible outcomes matters. If the theoretical distribution changes (even at values that were never actually observed), he gets a different answer. That is, reasoning about events "as extreme or more extreme" than the observed one depends on the distribution over those more extreme events, even though none of them happened.
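Here is a minimal sketch of that dependence. The voltage readings, the null mean, and the spread below are all made up for illustration, and the test is a simple Monte Carlo p-value for the sample mean; the point is only that clipping the *simulated* null distribution at 100 V changes the p-value even though every observed reading is below 100, so the likelihood of what was actually seen is identical under both models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tube voltages, all below 100 V, and an assumed null hypothesis.
observed = np.array([75, 81, 84, 88, 90, 92, 94, 96, 97, 99], dtype=float)
mu0, sigma = 85.0, 8.0          # assumed null mean and (treated-as-known) spread
n = len(observed)
obs_mean = observed.mean()

def p_value(censor_at=None, n_sim=200_000):
    """One-sided Monte Carlo p-value for the sample mean under H0.

    If censor_at is given, simulated readings above that limit are recorded
    as the limit itself, mimicking a voltmeter that tops out at 100 V.
    """
    sims = rng.normal(mu0, sigma, size=(n_sim, n))
    if censor_at is not None:
        sims = np.minimum(sims, censor_at)
    sim_means = sims.mean(axis=1)
    return np.mean(sim_means >= obs_mean)

# Same observed data, same likelihood -- different p-values, because only
# the unobserved tail of the reference distribution differs.
print("p-value, uncensored model:      ", p_value())
print("p-value, meter censored at 100 V:", p_value(censor_at=100.0))
```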
So besides the inability of frequentist statistics to make actual predictions about the next observation (it only gives confidence intervals or tests), we also have this paradox.
That leaves two other schools of statistics, Bayesian and fiducial. They actually give prediction intervals for the next observation -- which is usually what we need -- and they are also asymptotically correct. A Bayesian sketch of such an interval is shown below.
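As a rough illustration of a prediction interval for the next observation, here is a Bayesian posterior predictive sketch. The data are the same hypothetical readings as above, the spread is treated as known, and a flat prior on the mean is assumed purely to keep the example short; none of these choices come from the original story.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical readings; sigma treated as known only to keep the sketch short.
observed = np.array([75, 81, 84, 88, 90, 92, 94, 96, 97, 99], dtype=float)
sigma = 8.0
n, xbar = len(observed), observed.mean()

# Flat prior on the mean: the posterior is Normal(xbar, sigma^2 / n).
post_mu = rng.normal(xbar, sigma / np.sqrt(n), size=100_000)

# Posterior predictive for the *next* observation: draw a new reading for
# each posterior draw of the mean, then take a central 95% interval.
next_obs = rng.normal(post_mu, sigma)
lo, hi = np.percentile(next_obs, [2.5, 97.5])
print(f"95% prediction interval for the next voltage: ({lo:.1f}, {hi:.1f})")
```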