We're always learning from experience by seeing some examples and then applying them to situations that we've never seen before. A single frightening growl or bark may lead a baby to fear all dogs of similar size — or, even, animals of every kind. How do we make generalizations from fragmentary bits of evidence? A dog of mine was once hit by a car, and it never went down the same street again — but it never stopped chasing cars on other streets.
Philosophers of every period have tried to explain how we learn so much from our experiences. They have proposed many theories about this, using names like abstraction, induction, abduction, and so forth. But no one has found a way to make consistently correct generalizations — presumably because no such foolproof scheme exists, and whatever we learn may turn out to be wrong. In any case, we humans do not learn in accord with any fixed and constant set of principles; instead, we accumulate societies of learning-schemes that differ both in quality and kind.
We've already seen several ways to generalize. One way is to construct uniframes by formulating descriptions that suppress details we regard as insignificant. A related idea is built into our concept of a level-band. Yet another scheme is implicit in the concept of a polyneme, which tries to guess the character of things by combining expectations based upon some independent properties. In any case, there is an intimate relationship between how we represent what we already know and the generalizations that will seem most plausible. For example, when we first proposed a recognizer for a chair, we composed it from the polynemes for several already familiar ideas, namely seats, legs, and backs. We gave these features certain weights.
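A minimal sketch of such a weighted-evidence recognizer is shown below; the particular feature names, weights, and threshold are assumptions made here for illustration, not values given in the text.

```python
# A toy evidence-weighing recognizer in the spirit of the chair example:
# it adds up the weights of whichever features are reported present and
# "fires" when the total reaches a required threshold.  The weights and
# threshold below are illustrative guesses, not values from the text.

def make_recognizer(weights, required_total):
    """Return a recognizer that accepts a set of observed features."""
    def recognize(observed_features):
        evidence = sum(weights.get(feature, 0) for feature in observed_features)
        return evidence >= required_total
    return recognize

# A "chair" recognizer built from three already familiar part-features.
chair_weights = {"seat": 1, "legs": 1, "back": 1}
recognize_chair = make_recognizer(chair_weights, required_total=3)

print(recognize_chair({"seat", "legs", "back"}))   # True: all three parts seen
print(recognize_chair({"seat", "legs"}))           # False: no back in view
```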
Changing the values of those evidence weights would produce new recognizer-agents. For example, with a negative weight for back, the new agent would reject chairs but would accept benches, stools, or tables. If all the weights were increased (but the required total were kept the same), the new recognizer would accept a wider class of furniture or furniture with more parts hidden from view — as well as other objects that weren't furniture at all.
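Continuing the sketch above, here is how those two variations might behave; the specific numbers are again only illustrative assumptions, chosen to make the behavior easy to check.

```python
# Variation 1: give "back" a negative weight.  The agent now rejects
# chairs and instead accepts backless furniture such as benches,
# stools, or tables.
backless_weights = {"seat": 1, "legs": 1, "back": -1}
recognize_backless = make_recognizer(backless_weights, required_total=2)

print(recognize_backless({"seat", "legs"}))          # True: a bench or stool
print(recognize_backless({"seat", "legs", "back"}))  # False: a back counts against it

# Variation 2: increase every weight but keep the required total the
# same.  Any two of the three parts now supply enough evidence, so the
# agent tolerates hidden parts and accepts a wider class of objects.
tolerant_weights = {"seat": 2, "legs": 2, "back": 2}
recognize_tolerant = make_recognizer(tolerant_weights, required_total=3)

print(recognize_tolerant({"seat", "legs"}))   # True: even with the back hidden
print(recognize_tolerant({"legs", "back"}))   # True: even with the seat hidden
```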
Why would there be any substantial likelihood that such variations would produce useful recognizers? That would be unlikely indeed, if we assembled new recognizers by combining old ones selected at random. But there is a much better chance for usefulness if each new recognizer is made by combining signals from agents that have already proven themselves useful in related contexts. As Douglas Hofstadter has explained:
Making variations on a theme is the crux of creativity. But it is not some magical, mysterious process that occurs when two indivisible concepts collide; it is a consequence of the divisibility of concepts into already significant subconceptual elements.