When we watch a ball roll down a slope, we notice it seems to try to get around obstacles that lie in its path. If we didn't know about gravity, we might be tempted to think that the ball has the goal of moving down. But we know that the ball isn't trying to do anything; the impression of intention is only in the watcher's mind.
When we experiment with Builder we also get the sense that it has a goal. Whenever you take its blocks away, it reaches out and takes them back. Whenever you knock its tower down, it rebuilds it. It seems to want a tower there, and it perseveres until the tower is done. Certainly Builder seems smarter than the rolling ball because it overcomes more complicated obstacles. But once we know how Builder works, we see that it's not so different from that ball: all it does is keep on finding blocks and putting them on top of other blocks. Does Builder really have a goal?
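To see how little is inside, here is a toy sketch of the loop that Builder amounts to. The names and the tiny "world" object are inventions for illustration, not the actual Builder program; the point is only that no description of a tower appears anywhere in it.

```python
# Toy sketch only: ToyWorld, find_loose_block, and put_on_top are invented
# stand-ins for real sensors and arms, not the actual Builder program.
class ToyWorld:
    def __init__(self, loose_blocks):
        self.loose = list(loose_blocks)    # blocks lying around, unstacked
        self.pile = []                     # blocks already placed

    def find_loose_block(self):
        return self.loose.pop() if self.loose else None

    def put_on_top(self, block):
        self.pile.append(block)

def builder(world):
    while True:                            # persistence comes from bare repetition
        block = world.find_loose_block()
        if block is None:
            break                          # stops only when no loose block is found
        world.put_on_top(block)            # one rule, applied over and over

world = ToyWorld(["a", "b", "c"])
builder(world)
print(world.pile)                          # ['c', 'b', 'a'] -- a pile appears, yet no
                                           # description of a tower exists anywhere above
```

The loop persists, and that is about all it does.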
One ingredient of having a goal is persistence. We wouldn't say that Builder wants a tower if it didn't persist in its attempts to build one. But persistence alone is not enough, for neither Builder nor that rolling ball has any sense of where it wants to go. The other critical ingredient of a goal is to have some image or description of a wanted or desired state. Before we'd agree that Builder wants a tower, we'd have to make sure that it contains something like an image or a description of a tower. The idea of a difference-engine embodies both ingredients: a representation of some desired outcome, and a mechanism that keeps working until that outcome is achieved.
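A minimal sketch of that idea shows both ingredients side by side: a stored description of the desired state, and a loop that keeps acting until no difference from it remains. The code is hypothetical; the names and the toy "height" example are inventions, not any particular program from the literature.

```python
# Hypothetical sketch of a difference-engine: a stored description of a desired
# state, plus a loop that keeps acting until no difference from it remains.
def difference_engine(current, desired, differences, actions):
    """differences(current, desired) -> list of remaining differences;
    actions[d](state) -> a new state with difference d reduced."""
    while True:
        remaining = differences(current, desired)
        if not remaining:
            return current                 # the wanted outcome has been achieved
        d = remaining[0]                   # pick some difference still standing
        current = actions[d](current)      # act to reduce it, then go around again

# Toy use: the desired state is a tower four blocks high.
state = {"height": 0}
goal = {"height": 4}
diffs = lambda s, g: ["too_short"] if s["height"] < g["height"] else []
acts = {"too_short": lambda s: {"height": s["height"] + 1}}   # add one block

print(difference_engine(state, goal, diffs, acts))            # {'height': 4}
```

Here, unlike in the Builder loop above, the wanted state is written down explicitly, and the acting continues only so long as some difference from it remains.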
Do difference-engines really want? It is futile to ask that kind of question, because it seeks a distinction that exists only in some observer's mind. We can think of a ball as a perfectly passive object that merely reacts to external forces. But the eighteenth-century physicist Jean Le Rond d'Alembert showed that one can also perfectly predict the behavior of a rolling ball by describing it as a difference-engine whose goal is to reduce its own energy. We need not force ourselves to decide questions like whether machines can have goals or not. Words should be our servants, not our masters. The notion of goal makes it easy to describe certain aspects of what people and machines can do; it offers us the opportunity to use simple descriptions in terms of active purposes instead of unmanageably cumbersome descriptions of machinery.
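In that spirit, the same loop can be told about the ball. The sketch below is only a loose illustration of the re-description, with an invented height function, small downhill steps, and no inertia; it is not d'Alembert's formal principle. The "desired state" is the lowest reachable point, and the ball "persists" until the slope beneath it vanishes.

```python
# Loose illustration only: an invented height function and small downhill steps,
# with no inertia; this re-describes the ball, it is not d'Alembert's formalism.
def rolling_ball(x, height, step=0.01, tol=1e-6):
    while True:
        slope = (height(x + 1e-6) - height(x - 1e-6)) / 2e-6   # local steepness
        if abs(slope) < tol:
            return x                       # no difference left to reduce: a resting place
        x -= step * slope                  # move a little downhill, lowering the energy

print(rolling_ball(3.0, lambda x: (x - 1.0) ** 2))             # settles near x = 1.0
```

Both descriptions predict the same resting place; choosing between them is a matter of convenience, not of fact.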
To be sure, this doesn't capture everything that people mean by having goals. We humans have so many ways of wanting things that no one scheme can embrace them all. Nevertheless, this idea has already led to many important developments both in Artificial Intelligence and in psychology. The difference-engine scheme remains the most useful conception of goal, purpose, or intention yet discovered.