The Psychology of Induction

Concept Learning and Concept Formation

**Example 1**: I point to things saying either ‘oogle’ or ‘aagle’ each time. What do ‘oogle’ and ‘aagle’ mean? This is an example in which you must learn a concept.

You assume that these terms refer to the attributes of the objects to which I am pointing, like ‘metal’, ‘wooden’, ‘big’, ‘small’, ‘brown’, ‘rectangular’, and so on. In fact, there is no such connection, because I say ‘oogle’ if my finger points up, and ‘aagle’ if it points down. This is an example in which not all associations are equally learnable.

**Example 2**: Mayer (1992) p. 83. You will be given a series of stimuli, individually, with each item varying in shape (circle or square), size (large or small), color (red or green), and number (one or two). Out of all possible stimuli (there are 16 of them), some are in a *target group* and some are not. You will be shown an item, and you have to predict whether it is in the group (initially it will just be a guess). Then I will give you the correct answer before going on to the next item. Here are 8 training examples:

| # | Stimulus | In group? |
|---|----------|-----------|
| 1 | 1 red large square | NO |
| 2 | 1 green large square | NO |
| 3 | 2 red small squares | YES |
| 4 | 2 red large circles | NO |
| 5 | 1 green large circle | NO |
| 6 | 1 red small circle | YES |
| 7 | 1 green small circle | YES |
| 8 | 1 red small square | YES |

- After examples 1 and 2, you know that there are no single large squares in the target group.
- Examples 3 and 8 tell us that red small squares are in the group.
- Single small circles are also in the group, by examples 6 and 7.
- Example 5 tells us that a single green large circle is not in the group.

**Conjecture**: The group consists of all and only small objects (either 1 or 2, and either red or green).

Here are the 8 test examples:

| # | Stimulus | In group? |
|---|----------|-----------|
| 9 | 2 green large squares | NO |
| 10 | 1 red large circle | NO |
| 11 | 2 green small circles | YES |
| 12 | 2 red small circles | YES |
| 13 | 2 green large circles | NO |
| 14 | 2 green small squares | YES |
| 15 | 2 red large squares | NO |
| 16 | 1 green small circle | YES |

- Our conjecture gives the correct predictions.

**Question**: Have we *proved* that our conjecture is right? Yes: the 8 training examples and 8 test examples together exhaust all 16 possible stimuli, so the conjecture has been verified by complete enumeration, assuming that group membership does not change over time.

**Note**: Example 2 is harder than simple enumerative induction because the associations between ‘yes’ and ‘small’ are mixed in with *accidental* associations between ‘yes’ and other attributes. Those accidental associations are commonly referred to as *noise*.
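To make the task concrete, here is a minimal Python sketch that checks the ‘small’ conjecture against the training data. The encoding of stimuli as tuples, and the function names, are my own choices, not part of the original experiment:

```python
from itertools import product

# Enumerate all 16 possible stimuli as (number, color, size, shape) tuples.
STIMULI = list(product((1, 2), ("red", "green"), ("large", "small"),
                       ("circle", "square")))

def conjecture(stimulus):
    """The 'all and only small objects' conjecture."""
    number, color, size, shape = stimulus
    return size == "small"

# The 8 training examples and their labels, transcribed from the table.
TRAINING = [
    ((1, "red", "large", "square"), False),
    ((1, "green", "large", "square"), False),
    ((2, "red", "small", "square"), True),
    ((2, "red", "large", "circle"), False),
    ((1, "green", "large", "circle"), False),
    ((1, "red", "small", "circle"), True),
    ((1, "green", "small", "circle"), True),
    ((1, "red", "small", "square"), True),
]

# The conjecture is consistent with every training example.
print(all(conjecture(s) == label for s, label in TRAINING))  # True
```

The accidental (noisy) associations show up here as the other tuple fields, which `conjecture` simply ignores.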

**Example 3**: Here is a variation on the previous example. 8 training cases:

| # | Stimulus | In group? |
|---|----------|-----------|
| 1 | 1 red large square | YES |
| 2 | 1 green large square | NO |
| 3 | 2 red small squares | YES |
| 4 | 2 red large circles | YES |
| 5 | 1 green large circle | NO |
| 6 | 1 red small circle | NO |
| 7 | 1 green small circle | NO |
| 8 | 1 red small square | NO |

**Conjecture**: The Much-Red hypothesis: 2 red objects of any size or 1 large red object.

8 test examples:

| # | Stimulus | In group? |
|---|----------|-----------|
| 9 | 2 green large squares | YES |
| 10 | 1 red large circle | YES |
| 11 | 2 green small circles | NO |
| 12 | 2 red small circles | YES |
| 13 | 2 green large circles | YES |
| 14 | 2 green small squares | NO |
| 15 | 2 red large squares | YES |
| 16 | 1 green small circle | NO |

**Discussion**: The conjecture is refuted by the test examples (9 and 13). The group actually consists of much-red objects plus very-much-green objects (2 large green things). Note that shape proved to be irrelevant to group membership. It’s as if the total ‘intensity’ of ‘redness’ has to exceed a certain threshold in order to gain group membership: green objects have some degree of ‘redness’, but not enough unless two large green things are present.
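The threshold idea can be made precise with illustrative weights. The particular numbers below are my own assumption, chosen to fit the data: red counts double green, large counts double small, the weights multiply with the number of objects, and membership requires a total of at least 4:

```python
# Illustrative weights (an assumption, not given in the text).
COLOR_WEIGHT = {"red": 2, "green": 1}
SIZE_WEIGHT = {"large": 2, "small": 1}
THRESHOLD = 4

def redness_intensity(number, color, size):
    """Total 'redness' carried by a stimulus."""
    return number * COLOR_WEIGHT[color] * SIZE_WEIGHT[size]

def in_group(number, color, size):
    # Shape is irrelevant, so it is not a parameter.
    return redness_intensity(number, color, size) >= THRESHOLD

# The YES items of Example 3:
assert in_group(1, "red", "large")    # 1*2*2 = 4
assert in_group(2, "red", "small")    # 2*2*1 = 4
assert in_group(2, "red", "large")    # 2*2*2 = 8
assert in_group(2, "green", "large")  # 2*1*2 = 4
# ...and some NO items:
assert not in_group(1, "green", "large")  # 1*1*2 = 2
assert not in_group(1, "red", "small")    # 1*2*1 = 2
```

Under this weighting, the much-red and very-much-green stimuli, and only those, reach the threshold.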

**Scientific Analogy**:

- Suppose you observe, in many cases, that metallic objects sink while wooden things float.
- You conjecture at first that all metal objects sink and all wooden things float.
- But then you are faced with two refutations: a ceramic bowl that floats, and a piece of ebony that sinks. You must then invent a new classification of the objects.
- You may conjecture that objects will sink if and only if they have a density greater than water.
- This copes with the ebony counterexample, but not with the ceramic bowl, because the bowl has a density greater than water (which is why it sinks when placed into the water sideways).
- So you end up with a third conjecture: an object will sink if and only if it weighs more than the water it displaces.
- This is still subject to the following refutation: a razor blade will float if carefully placed on water, because of surface-tension effects.
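The difference between the second and third conjectures can be sketched in Python. The bowl’s numbers are illustrative assumptions, not measurements:

```python
WATER_DENSITY = 1.0  # g/cm^3

def sinks_by_density(material_density):
    # Second conjecture: an object sinks iff it is denser than water.
    return material_density > WATER_DENSITY

def sinks_by_displacement(mass, displaceable_volume):
    # Third conjecture (Archimedes): an object sinks iff it weighs
    # more than the water it can displace.
    return mass > WATER_DENSITY * displaceable_volume

# Illustrative figures for a ceramic bowl: 300 g of ceramic
# (density about 2.4 g/cm^3), but a hollow shape that can
# displace 800 cm^3 of water before filling.
ceramic_density = 2.4
bowl_mass = 300.0
bowl_hull_volume = 800.0

print(sinks_by_density(ceramic_density))                   # True: wrongly predicts sinking
print(sinks_by_displacement(bowl_mass, bowl_hull_volume))  # False: correctly predicts floating
```

The two conjectures disagree exactly on hollow objects like the bowl, which is why it serves as the decisive counterexample.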

Concept Learning versus Concept Formation

- Example 2 is an example of *concept learning*. The concept ‘small’ defining membership in the target group is already known; it need only be *identified*.
- The oogle-aagle problem, example 3, and the float-sink example are examples of *concept formation*, in which the target is characterized by a new property. The nine-dot problem is another example.

Continuity and Noncontinuity Theories of Learning

Parallel to the distinction between concept learning and concept formation, we have different theories about how inductive learning takes place.

- Continuity theory says that learning is continuous, with no jumps, leaps, or conjectures. It shares Hume’s view that learning is a kind of ‘habit formation’ in which a particular conclusion is *slowly* reinforced as evidence for it accumulates.
- Noncontinuity theory says that learning is noncontinuous, with jumps, or leaps, to particular conjectures. These conjectures are held so long as they continue to predict correctly; when they fail, a leap is made to a different conjecture. This is what philosophers of science refer to as hypothetico-deductivism.

Simple Bayesianism

- A kind of marriage between inductive logic and eliminative induction
- Begins with a set of equally weighted hypotheses
- Updates those weights on the basis of observation

**Note**: In examples 2 and 3, there are 2^{16}, or 65,536, possible hypotheses concerning membership in the target group (one for each subset of the 16 stimuli)!

Simple Updating:

- Lower the probability of refuted hypotheses to zero
- Because the total probability is 1 at all times, the probability of surviving hypotheses goes up
- The relative weighting of surviving hypotheses remains the same.
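Simple updating can be sketched in a few lines of Python, using a toy hypothesis space of my own devising (each hypothesis is just the set of stimuli it predicts to be in the group):

```python
def update(weights, hypotheses, stimulus, observed_in_group):
    """Zero out refuted hypotheses, then renormalize so the total is 1."""
    new = {}
    for name, w in weights.items():
        predicted = stimulus in hypotheses[name]
        new[name] = w if predicted == observed_in_group else 0.0
    total = sum(new.values())
    if total == 0:
        # Every hypothesis is refuted: a non-Bayesian leap is needed.
        raise ValueError("all hypotheses refuted")
    return {name: w / total for name, w in new.items()}

# A toy hypothesis space over four possible stimuli.
hypotheses = {
    "small":  {"small-red", "small-green"},
    "red":    {"small-red", "large-red"},
    "always": {"small-red", "small-green", "large-red", "large-green"},
}
weights = {name: 1 / 3 for name in hypotheses}  # equal initial weights

# Observe that 'large-red' is NOT in the group: refutes "red" and "always".
weights = update(weights, hypotheses, "large-red", False)
print(weights)  # "small" now carries all the probability
```

Note that the surviving hypotheses keep their relative weighting; only the normalization changes.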

**Question**: Does simple Bayesianism involve continuous or noncontinuous learning?

**Answer**:

- Either the list of hypotheses with nonzero probabilities includes the true hypothesis, or it does not.
- If it does, then the procedure will converge to the true hypothesis without jumps. But if it does not, then every hypothesis will be refuted eventually, which will force a Bayesian learner to jump to a new set of hypotheses.
- While this move is noncontinuous, it is *not* Bayesian.
- Therefore *Bayesian* learning is continuous.

**Remark**: In all the recently discussed learning tasks, the observations have been fed to the learner. But in science, the learner has a great deal of control over what observations are made (though not over what the outcomes are). This adds another element to the psychological problem of induction: what strategies do inductive learners use in making such choices, and what effects do they have on learning rates? (See Mayer 1994, around p. 93.)

What Makes Learning Hard?

- Learning by simple enumerative induction is very easy because only the observational premises relevant to the conclusion are listed as premises. In all the examples here, there is added noise, and the learner has to distinguish the relevant associations from the accidental ones (e.g., is shape relevant to target group membership? is color relevant? and so on).
- Concept formation is harder than concept learning, which does not involve the invention of new concepts. Simple enumerative induction is the easiest of all, because there is no need to identify any concept at all!

**A look ahead**: The kind of inductive reasoning typically discussed by psychologists, which involves the invention of a new concept, is quite different from simple enumerative induction, which does not involve the learning of new concepts. The great influence of Hume and Mill has tended to steer philosophers away from this phenomenon, although William Whewell is one notable exception here.

What’s next?

We have looked at inductive inference according to philosophers, logicians, and psychologists. Next, we look at inductive inference in science according to historians of science.