
Malcolm R. Forster

 

Last updated on 01/31/07.
ARTICLES ONLINE


A Philosopher's Guide to Empirical Success (PSA 2006, in press):  The simple question—What is empirical success?—turns out to have a surprisingly complicated answer.  We need to distinguish between meritorious fit and “fudged fit”, which is akin to the distinction between prediction and accommodation.  The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed.

The Whewell-Mill Debate in a Nutshell (6 pages):  In the mid-1800s, William Whewell and John Stuart Mill argued about the nature of scientific induction.  Mill's view is the standard philosophical one, epitomized by simple enumerative induction, while Whewell's was designed to fit the patterns he saw in the history of science.  In six pages, I try to explain why the dispute is more than merely terminological.

Counterexamples to a Likelihood Theory of Evidence  (Final version, July 14, 2006) The Likelihood Theory of Evidence (LTE) says, roughly, that only likelihoods matter to the evidential comparison of hypotheses (or models).  There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples demonstrate the power of other forms of scientific reasoning, such as the consilience of inductions (Whewell, 1858). Bayesian and Likelihoodist philosophies of science are more limited in scope.

Unification and Evidence (March, 2005)  Much has been said about unification and explanation, but the connection between unification and evidence deserves more attention.

 March 2005

The Miraculous Consilience of Quantum Mechanics.  Why do hidden variable interpretations of quantum mechanics fail?  Because they do not compete with the ability of QM to predict phenomena of one kind from phenomena of a very different kind (a feature of good theories that Whewell called the consilience of inductions).  (Older version).

In June 2006, I attended Error 2006 in Blacksburg, Virginia.

From May 13 to 15, 2005, I was at the Assessing Evidence in Physics conference, and from May 25 to 29 at the Formal Epistemology Workshop (FEW).

In August 2004, I was at the following conferences: the Third International Summer School in Konstanz, Germany, and the Amsterdam Workshop on Model Selection in the Netherlands.

Manuscript: Occam’s Razor and the Relational Nature of Evidence.
Chapter 1: Introduction to Philosophy of Science (Last updated March 6, 2004)
Chapter 2: Theories, Models, and Curves (Last updated March 6, 2004)
Chapter 3: Simplicity and Unification in Model Selection (Last updated March 6, 2004)
References: (Last updated March 6, 2004)

Nov. 2003

 Philosophy of the Quantitative Sciences: Unification, Curve Fitting, and Cross Validation. (333 KB, pdf)  This is a 30 page summary of my view of confirmation in the quantitative sciences.  It attempts to tie together basic issues such as prediction versus accommodation, counterfactuals and the nature of laws, common cause explanation as an argument for realism, the value of diversified evidence, historical versus logical theories of confirmation, and positive heuristics in scientific research programs.

Reprint

 Forster, Malcolm R. (1988): “Sober’s Principle of Common Cause and the Problem of Incomplete Hypotheses.”  Philosophy of Science 55: 538‑59.

Reprint

 Forster, Malcolm R. (1986):  “Unification and Scientific Realism Revisited.”  In Arthur Fine and Peter Machamer (eds.), PSA 1986.  E. Lansing, Michigan:  Philosophy of Science Association.  Volume 1: 394‑405.

Draft, Oct, 2003

 Percolation:  A Simple Example of Renormalization (2 pages, pdf): Kenneth Wilson won the Nobel Prize in Physics in 1982 for using the renormalization group (originally developed in quantum field theory) to predict the critical exponents in statistical physics. Simpler examples of renormalization may make the philosophical significance of the new physics easier to understand.
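
To convey the flavor of such a simple example, here is a minimal sketch of the textbook real-space renormalization of site percolation on the triangular lattice (my own illustration, which may differ from the example in the paper):

    import math

    # Real-space renormalization for site percolation on the triangular
    # lattice: replace each 3-site block by one super-site, occupied
    # when a majority (at least 2 of 3) of its sites are occupied.
    def rg_map(p):
        return p**3 + 3 * p**2 * (1 - p)  # p' = p^3 + 3p^2(1-p)

    # Iteration drives p to 0 or 1; the nontrivial fixed point p* = 1/2
    # is the percolation threshold.
    p = 0.45
    for _ in range(10):
        p = rg_map(p)
    print(p)  # flows toward 0, since 0.45 < p* = 0.5

    # Critical exponent: nu = ln(b) / ln(lambda), with length rescaling
    # b = sqrt(3) and lambda = dp'/dp at p*, i.e. 6p(1-p) = 3/2.
    nu = math.log(math.sqrt(3)) / math.log(1.5)
    print(nu)  # about 1.35, close to the exact 2D value 4/3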

Under construction

 Unification.  Wayne Myrvold (Philosophy of Science, April, 2003) has captured an important feature of unified theories, and he has done so in Bayesian terms.  What is not clear is whether the virtue of such unification is best understood in terms of Bayesian confirmation.  I argue that it is better understood in terms of other truth-related virtues, such as predictive accuracy.

Reprint

  The Emergence of a Macro-World: A Study of Intertheory Relations in Classical and Quantum Mechanics.  Published in Philosophy of Science (Dec. 2003).

Econophysics:  A Simple Explanation of Two-Phase Behaviour.  This is a very short reply to a recent Brief Communication in Nature.

Reprint

Forster, M. R. (1999): “How Do Simple Rules ‘Fit to Reality’ in a Complex World?”  Minds and Machines 9: 543-564.  A critique of Simple Heuristics that Make Us Smart.

Reprint

  With Eric Saidel (1994) Connectionism and the Fate of Folk Psychology shows how distributed representations in neural networks operate independently of one another.

Reprint

 Forster, M. R. (1988) Unification, Explanation, and the Composition of Causes in Newtonian Mechanics is a paper that applies William Whewell's consilience of inductions to Newton's argument for universal gravitation and contemporary problems in the philosophy of science (Cartwright and Ellis).  Unification is a relational property of a theory, but it is supported directly by relational properties of the evidence!  I've thought about calling the paper "How to be a Realist and an Empiricist at the Same Time."

Reprint

Forster, M. R. and Elliott Sober (1994): “How to Tell When Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions.” The British Journal for the Philosophy of Science 45: 1-35.  Explains why simplicity (qua the paucity of adjustable parameters) helps to maximize the goal of predictive accuracy.
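
Since the paper works within Akaike's framework, a toy sketch of the central trade-off may help (my own construction with invented data, not an excerpt from the paper): AIC rewards fit to the sample but charges for each adjustable parameter.

    import numpy as np

    # Polynomial models with more adjustable parameters fit the sample
    # better, but AIC penalizes them, approximating expected predictive
    # accuracy on new data.
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 30)
    y = 1.0 + 2.0 * x**2 + rng.normal(0, 0.3, x.size)  # true curve: quadratic

    n = x.size
    for degree in range(1, 6):
        coeffs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coeffs, x)
        sigma2 = np.mean(resid**2)                 # ML estimate of noise variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k = degree + 2                             # coefficients plus variance
        print(degree, round(-2 * loglik + 2 * k, 1))
    # AIC is typically lowest near degree 2: extra parameters improve
    # the fit to this sample but not the expected fit to new samples.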

Reprint

 Forster, M. R. (2000) Key Concepts in Model Selection: Performance and Generalizability.  Targeted at working scientists who are interested in comparing the different statistical methods of model selection. The page numbering is now the same as in the published version.  (Other papers in the same volume.)
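
As a hypothetical illustration of generalizability (mine, not the paper's): leave-one-out cross validation scores a model by its error on data it was not fitted to, estimating performance on new data rather than fit to old data.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 25)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

    def loo_error(degree):
        errs = []
        for i in range(x.size):
            mask = np.arange(x.size) != i          # hold out point i
            c = np.polyfit(x[mask], y[mask], degree)
            errs.append((y[i] - np.polyval(c, x[i]))**2)
        return np.mean(errs)

    for degree in (1, 3, 5, 9):
        print(degree, round(loo_error(degree), 4))
    # High-degree polynomials fit the training points better, yet their
    # held-out error rises: performance on old data is no guarantee of
    # generalizability to new data.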

The Einsteinian Prediction of the Precession of Mercury's Perihelion:  A case study in prediction versus accommodation.  Just 3 pages long (PDF 3.0)

Reprint

  Forster, M. R. (2001): “The New Science of Simplicity” in A. Zellner, H. A. Keuzenkamp, and M. McAleer (eds.) Simplicity, Inference and Modelling. Cambridge University Press, pp. 83-119.  A long and careful analysis of predictive accuracy.

Reprint

 Predictive Accuracy as an Achievable Goal of Science.  This is the final version of my presentation at the Akaike Symposium at the PSA 2000 meetings in Vancouver, Canada, Nov. 3, 2000, published in Philosophy of Science 2002.

Sept. 15, 2001

 Book Page: The Meaning of Temperature and Entropy in Statistical Mechanics is expanding into a book.

Forthcoming

 "Why Likelihood" with Elliott Sober.  Is the use of likelihoods to measure the evidence for hypotheses a fundamental postulate, as Fisher once claimed, or is there something more fundamental from which the "likelihood principle" follows?  Now with a reply to commentaries by Robert Boik and Mike Kruse.

Sept. 3, 2001

 The Meaning of Temperature and Entropy in Statistical Mechanics. I have this funny idea that everything in science is connected to predictive accuracy, so (naturally) I'm trying to prove that statistical mechanics is just curve fitting, where temperature is a curve-fitting parameter and entropy measures the degree of fit.  The goodness of fit is not always very good.
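
A toy version of the claim, with invented energies and frequencies (a sketch of the idea only; the manuscript's treatment is more general):

    import numpy as np

    # Fit inverse temperature beta as the one adjustable parameter of a
    # Boltzmann distribution, and measure goodness of fit by the KL
    # discrepancy between observed frequencies and the fitted curve.
    energies = np.array([0.0, 1.0, 2.0, 3.0])
    observed = np.array([0.50, 0.27, 0.15, 0.08])  # empirical frequencies

    def boltzmann(beta):
        w = np.exp(-beta * energies)
        return w / w.sum()

    def kl(p, q):
        return np.sum(p * np.log(p / q))

    # Curve fitting: choose beta minimizing KL(observed || model), which
    # is maximum likelihood estimation for this exponential family.
    betas = np.linspace(0.01, 3.0, 300)
    fits = [kl(observed, boltzmann(b)) for b in betas]
    best = betas[int(np.argmin(fits))]
    print(best, min(fits))  # fitted "temperature" 1/best; residual KL = misfit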

Rejected, July 3, 2001

 How to Remove the Ad Hoc Features of Statistical Inference Within a Frequentist Paradigm, with I. A. Kieseppä.  Our aim is to develop a unified and general frequentist theory of decision-making.  The unification of the seemingly unrelated theories of hypothesis testing and parameter estimation is based on a new definition of the optimality of a decision rule within an ensemble of token experiments. It is the introduction of ensembles that enables us to avoid the use of subjective Bayesian priors. We also consider three familiar problems with classical methods: the arbitrary features of Neyman-Pearson tests, the difficulties caused by regression to the mean, and the relevance of stopping rules; we show how these problems are solved in our extended frequentist framework.

May 12, 2001

 Many Kinds of Confirmation. (6-page PDF file.)  I examine two simple numerical examples that contrast Bayesian confirmation with the kind of predictive confirmation rigorously defined in The Myth of Reduction.  The difference raises some questions about the role of chance probabilities and causal assumptions in confirmation, but don't look to me for the answers.

Conditionally accepted in Phil. of Sci., April 12, 2001.

 The Myth of Reduction: Or Why Macro-Probabilities Average over Counterfactual Hidden Variables.  Co-authored with I. A. Kieseppä.  We argue that reduction in science does not work by the deducibility of macro-descriptions from micro-descriptions, or the supervenience of macrostates on microstates.  Our alternative view of reduction is based on a theorem that shows that a probabilistic average over possible, but not actual, hidden variable distributions maximizes predictive accuracy (defined in terms of the Kullback-Leibler discrepancy) within a context in which only the relative frequencies of hidden variables are known.
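
For reference, the Kullback-Leibler discrepancy of a predictive distribution g from the true distribution f is standardly defined as (my notation, which may differ in detail from the paper's)

    D_{\mathrm{KL}}(f \,\|\, g) \;=\; \int f(x)\,\log\frac{f(x)}{g(x)}\,dx ,

and maximizing predictive accuracy amounts to minimizing the expected value of this discrepancy.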

Nov. 20, 2000

 Whewell's Theory of Hypothesis Testing and a Relational View of Evidence.  We still have a lot to learn from William Whewell about the nature and methodology of science.

Published in 2000

 Hard Problems in the Philosophy of Science: Idealization and Commensurability. The positive lessons for philosophy of science in Kuhn's Structure of Scientific Revolutions.

Published version

 Forster, Malcolm R. (1999): “Model Selection in Science: The Problem of Language Invariance,” British Journal for the Philosophy of Science 50: 83-102.

 Published in 2000

 Prediction and Accommodation in Evolutionary Psychology co-authored with Larry Shapiro.   This is a commentary on an article by Ketelaar and Ellis, "Are Evolutionary Explanations Unfalsifiable?: Evolutionary Psychology and the Lakatosian Philosophy of Science" in Psychological Inquiry.

Aug. 5, 1999

 How do jumping spiders catch up with their prey?   A Model for Pursuit Behaviour (Araneae; Salticidae).  Co-authored with L. M. Forster.

A Note on Deutsch's Quantum Mechanics and Decision.   David Deutsch claims, in an article, that Born's probabilistic interpretation of the wavefunction follows from non-probabilistic assumptions of rational decision making.  It strikes me that the same style of argument has implications for ordinary decision theory and probability.

The Evolution of Inference:  What is the evolutionary purpose of inductive and deductive inference, and are they related?   This incomplete draft aims to apply some of Skyrms's ideas in Evolution of the Social Contract to the meaning of 'if...then' statements.

Time Series and Curve-Fitting: How are they related? In what sense is time series modeling like curve-fitting? A somewhat technical topic, but explained in terms of an illustrated example.
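
To give the flavor of the analogy (a hypothetical example; the page's illustrated example may differ), an AR(1) model can be fitted by ordinary least squares, with each value regressed on its predecessor:

    import numpy as np

    # Fitting an AR(1) time series model is curve fitting with x_{t-1}
    # playing the role of the independent variable and x_t the
    # dependent one.
    rng = np.random.default_rng(4)
    n, phi = 500, 0.7
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, 1.0)  # simulate AR(1)

    # "Curve fitting": regress x_t on x_{t-1}.
    phi_hat = np.polyfit(x[:-1], x[1:], 1)[0]
    print(phi_hat)  # close to the true coefficient 0.7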

Notice: No Free Lunches for Anyone, Bayesians Included.  The no-free-lunch theorems of machine learning show that there are no privileged ways of learning from experience, in the sense that they all have the same probability of success if all possible worlds are equally probable.  This implies that there is no a priori reason that Bayesian conditionalization is any better than any other way of updating probabilities.
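
The flavor of the theorem can be checked by brute force. A minimal sketch (my own construction): average off-training-set accuracy over all Boolean target functions on three bits, weighted equally, and every learning rule comes out the same.

    from itertools import product

    # The 8 possible 3-bit inputs: 4 observed, 4 unseen.
    train = [0, 1, 2, 3]
    test = [4, 5, 6, 7]

    def always_one(f_train, x):
        return 1                    # predicts 1 regardless of the data

    def majority(f_train, x):
        return int(sum(f_train) * 2 >= len(f_train))  # majority vote

    for learner in (always_one, majority):
        total = 0
        for f in product([0, 1], repeat=8):       # every possible world
            f_train = [f[i] for i in train]
            total += sum(learner(f_train, x) == f[x] for x in test)
        print(learner.__name__, total / (2**8 * len(test)))
    # Both print 0.5: averaged over equally probable worlds, no learning
    # rule does better off the training set than any other.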

Extrapolation Error (In progress).  This is a precise mathematical formulation of the problem of generalizability, which is mentioned in Key Concepts of Model Selection.

Optional Stopping (in progress).  This is the name of a problem in the foundations of statistics, which I analyze using computer simulations. The results were surprising (to me), but fairly easy to explain in terms of an analogy.
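
For the shape of the standard setup (a sketch of the usual problem, not the paper's own simulations): test after every new observation and stop as soon as the result looks significant, and the nominal 5% error rate is badly exceeded even when the null hypothesis is true.

    import numpy as np

    # Optional stopping under a true null hypothesis (mean 0, sd 1):
    # testing after each observation and stopping at the first
    # "significant" z inflates the nominal false-positive rate.
    rng = np.random.default_rng(2)
    trials, max_n, rejections = 2000, 100, 0

    for _ in range(trials):
        data = rng.normal(0, 1, max_n)
        for n in range(10, max_n + 1):
            z = data[:n].mean() * np.sqrt(n)   # z-statistic, known sd = 1
            if abs(z) > 1.96:                  # nominal two-sided 5% test
                rejections += 1
                break

    print(rejections / trials)  # well above 0.05 under the null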

The Asymmetry of Backwards and Forwards Regression (86KB PDF).  This one-page analysis of regression has "far-reaching" consequences for causal modeling.
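
The textbook core of the asymmetry is easy to exhibit (my illustration; the paper's analysis may go further): the y-on-x slope is cov(x,y)/var(x), while inverting the x-on-y line gives var(y)/cov(x,y), and the two lines coincide only when the correlation is perfect.

    import numpy as np

    # Forwards and backwards regression give different lines unless
    # x and y are perfectly correlated.
    rng = np.random.default_rng(3)
    x = rng.normal(0, 1, 1000)
    y = 0.8 * x + rng.normal(0, 0.6, 1000)

    slope_yx = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # y regressed on x
    slope_xy = np.cov(x, y)[0, 1] / np.var(y, ddof=1)  # x regressed on y
    r2 = np.corrcoef(x, y)[0, 1] ** 2

    print(slope_yx, 1 / slope_xy)   # two different lines through the data
    print(slope_yx * slope_xy, r2)  # product is r^2, equal to 1 only if r = 1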

Causation, Prediction, and Accommodation. (Draft, July 1997). This article is targeted at scientists and philosophers of science who are interested in inferring causes from correlations, especially using modeling techniques like path analysis.
