How To Remove the Ad Hoc Features of Statistical Inference Within a Frequentist Paradigm

This page was last edited on 02/07/02 by Malcolm R Forster



Note: You need Adobe Acrobat Reader 3.0 or later to read and print this article. It is free.

frequentism(7).pdf (submitted to BJPS May 5, 2001; rejected July 3, 2001)

frequentismAppendix.pdf (9 pages, 1,000 equations)

Table of Contents

  1. Introduction

  2. The arbitrary features of the Neyman-Pearson theory

  3. Estimation and regression to the mean

  4. Experiment types 

  5. Payoffs and decision problems

  6. Likelihoods  

  7. Decision functions  

  8. Ensembles of token experiments and optimality

  9. Sufficient and necessary conditions for optimality

  10. Optimality, best tests, and likelihood ratio tests

  11. Why Stopping Rules are Irrelevant

  12. Concluding remarks



Our aim is to develop a unified and general frequentist theory of decision-making. The resulting unification of the seemingly unrelated theories of hypothesis testing and parameter estimation is based on a new definition of the optimality of a decision rule within an ensemble of token experiments. It is the introduction of ensembles that enables us to avoid the use of subjective Bayesian priors. We also consider three familiar problems with classical methods: the arbitrary features of Neyman-Pearson tests, the difficulties caused by regression to the mean, and the relevance of stopping rules. We show how each of these problems is solved in our extended and unified frequentist framework. 
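The stopping-rule issue mentioned above (and treated in section 11) can be illustrated with a standard textbook example, not taken from the paper itself: observing k successes in n Bernoulli trials under a fixed-sample-size design versus a stop-at-the-kth-success design yields likelihood functions that differ only by a constant factor, so all likelihood ratios between parameter values coincide. A minimal sketch, with the numbers k = 3 and n = 12 chosen purely for illustration:

```python
from math import comb, isclose

# Illustration of the stopping-rule issue: k = 3 successes in n = 12 trials,
# under two different stopping rules (example values, not from the paper).
k, n = 3, 12

def binomial_lik(theta):
    # Fixed-n design: stop after n trials, however many successes occur.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def neg_binomial_lik(theta):
    # Inverse design: stop at the k-th success (so the last trial is a success).
    return comb(n - 1, k - 1) * theta**k * (1 - theta)**(n - k)

# The two likelihoods differ only by the constant comb(n, k) / comb(n-1, k-1),
# so likelihood ratios between any two parameter values are identical:
for t0, t1 in [(0.2, 0.5), (0.1, 0.9)]:
    r_fixed = binomial_lik(t0) / binomial_lik(t1)
    r_inverse = neg_binomial_lik(t0) / neg_binomial_lik(t1)
    assert isclose(r_fixed, r_inverse)
    print(f"LR({t0} vs {t1}): fixed-n = {r_fixed:.6g}, inverse = {r_inverse:.6g}")
```

Any inference that depends on the data only through likelihood ratios therefore cannot distinguish the two stopping rules, which is the sense in which stopping rules are said to be irrelevant.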

Publication Data

Version 7 was rejected by the British Journal for the Philosophy of Science on July 3, 2001. You can read the decision and judge for yourself.
