Here's a rule that's part of ordinary common sense: If A depends on B, and B depends on C, then — clearly — A depends on C. But what do such expressions mean? And why do we make the same kinds of inferences not only for dependency but also for implication and causality?
If A depends on B, and, also, B depends on C, then A depends on C.
If A implies B, and, also, B implies C, then A implies C.
If A causes B, and, also, B causes C, then A causes C.
What do all these different ideas have in common? All lend themselves to being linked into chainlike strings. Whenever we discover such sequences — however long they may be — we regard it as completely natural to compress them into single links, by deleting all but the beginning and the end. This lets us conclude, for example, that A depends on, implies, or causes C. We do this even with imaginary paths through time and space.
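The compression described above — following a chain of links and then deleting all but the beginning and the end — can be sketched in a few lines of code. The sketch below computes, for an illustrative "depends on" relation (the particular links are made up for the example), every pair of things joined by some chain, however long:

```python
def chain_closure(links):
    """Return every (start, end) pair reachable by chaining links,
    i.e. the compressed single-link version of each possible chain."""
    closure = set(links)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                # If a chain runs a -> b and b -> d, compress it to a -> d.
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# An illustrative dependency chain: A -> B -> C -> D.
depends_on = {("A", "B"), ("B", "C"), ("C", "D")}
print(("A", "D") in chain_closure(depends_on))  # -> True
```

The same routine works unchanged for "implies" or "causes": nothing in it depends on what the links mean, only on their forming a chain.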
Floor holds Table holds Saucer holds Cup holds Tea.
Wheel turns Shaft turns Gear turns Shaft turns Gear.

Sometimes we even chain together different kinds of links:
House walk to Garage drive to Airport fly to Airport.
Owls are Birds, and Birds can Fly. So, Owls can Fly.
The chain containing walk, drive, and fly may appear to use several different kinds of links. But although they differ in regard to vehicles, they all refer to paths through space. And in the Owl-Bird example, are and can seem more different at first, but we can translate them both into a more uniform language by changing Owls are Birds into An Owl is a Typical-Bird and Birds can Fly into A Typical-Bird is a thing-which-can-Fly. Both sentences then share the same type of is-a link, and this allows us to chain them together more easily.
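The Owl-Bird translation can be made concrete: once both sentences are recast as is-a links, a single link-following routine draws the conclusion. This is only an illustrative sketch — the node names and the dictionary representation are choices made for the example, not anything prescribed by the text:

```python
# Both facts recast into the uniform is-a language:
is_a = {
    "Owl": "Typical-Bird",                   # "Owls are Birds"
    "Typical-Bird": "thing-which-can-Fly",   # "Birds can Fly"
}

def conclude(start):
    """Follow is-a links from start to the end of the chain,
    keeping only the final endpoint."""
    node = start
    while node in is_a:
        node = is_a[node]
    return node

print(conclude("Owl"))  # -> "thing-which-can-Fly": so, Owls can Fly
```

The point of the translation step is visible in the code: before it, "are" and "can" would need two different kinds of links and two different rules; after it, one uniform rule suffices.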
For generations, scientists and philosophers have tried to explain ordinary reasoning in terms of logical principles — with virtually no success. I suspect this enterprise failed because it was looking in the wrong direction: common sense works so well not because it is an approximation of logic. Logic is only a small part of our great accumulation of different, useful ways to chain things together. Many thinkers have assumed that logical necessity lies at the heart of our reasoning. But for the purposes of psychology, we'd do better to set aside the dubious ideal of faultless deduction and try, instead, to understand how people actually deal with what is usual or typical. To do this, we often think in terms of causes, similarities, and dependencies. What do all these forms of thinking share? They all use different ways to make chains.