CHAPTER 6: COMMON SENSE
QUESTIONS, CRITICISMS, & SOLUTIONS
- Baby machines: If we don't know how to make programs that invent
novel representations, will programmers have to invent all the new
representations and build an adult machine instead?
- How do we generate possible goals to pursue?
- What representations might we employ when analyzing our own
thoughts?
- What representations do our supervising agents have of our
different ways to think?
- Maybe we store our ways to think as small programs, which
various mental agents write, refine, and debug.
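- A minimal sketch of that suggestion, in Python (every name here is
  hypothetical, not from the book): ways to think kept as small named
  procedures that a supervising agent can install, refine, and choose
  among.

      # Hypothetical sketch: "ways to think" kept as small, named
      # procedures that supervising agents write, refine, and pick among.

      class WayToThink:
          def __init__(self, name, procedure, notes=""):
              self.name = name            # a label supervising agents can refer to
              self.procedure = procedure  # the small program itself
              self.notes = notes          # room for an agent's own annotations

          def run(self, problem):
              return self.procedure(problem)

      class Supervisor:
          """An agent that writes, refines, and picks among ways to think."""
          def __init__(self):
              self.ways = {}

          def install(self, way):                 # "write" a new way to think
              self.ways[way.name] = way

          def refine(self, name, new_procedure):  # "debug" an existing one
              self.ways[name].procedure = new_procedure

          def choose(self, problem):
              # Crude selection rule: try each way in turn and keep the
              # first that reports success.  A real mind would need
              # something far richer than this.
              for way in self.ways.values():
                  result = way.run(problem)
                  if result is not None:
                      return way.name, result
              return None, None

      # Toy usage: one way to think "by analogy", one "by brute force".
      sup = Supervisor()
      sup.install(WayToThink("analogy", lambda p: p.get("similar_case")))
      sup.install(WayToThink("brute_force", lambda p: sorted(p.get("options", []))))
      print(sup.choose({"options": [3, 1, 2]}))   # -> ('brute_force', [1, 2, 3])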
- How much structure or information does a child's mind have by the
time it is born?
- It's not enough to count the bytes in DNA: much of the
information is in the machinery for reading and translating the
DNA, not just in the base pairs themselves. If you don't have
the infrastructure for understanding it, the information in DNA
is useless.
- It's also not enough to record how much data our senses receive,
because again there's a lot of information in the brain's machinery for
processing it.
- Should we keep track of the common sense beliefs among different
cultures? (Maybe we should try to catalogue the differences, or to
expunge them from the database so that our information is more
neutral)
- I recommend cataloging the differences, not trying to exclude
them. It's important to take culture into account --- just like
you can't think about thinking without thinking about thinking
about something, you can't think about common knowledge without
thinking about a community in which the knowledge is common.
- In childhood, we acquire a lot of novel common sense knowledge. Is
there a difference between how we apply common sense knowledge when
we first learn it, and then again (years later?) after we've
"internalized" it?
- I suspect there's a difference, because it would be dangerous to
automatically rely on the tentative rules we invent, without
putting them through a trial period first. So, I think we have a
trial period when we first posit a rule, which then leads to the
rule being rejected or adopted (and then not scrutinized as
much, if at all).
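- A toy sketch of that trial-period idea (the thresholds and the
  bookkeeping are invented, purely for illustration): a newly posited
  rule stays provisional until it has been tested enough times, and
  adopted rules are no longer re-examined.

      import random

      class TentativeRule:
          TRIAL_USES = 20          # assumed length of the trial period
          MIN_SUCCESS_RATE = 0.8   # assumed bar for adoption

          def __init__(self, description):
              self.description = description
              self.successes = 0
              self.uses = 0
              self.status = "on_trial"   # later: "adopted" or "rejected"

          def record_outcome(self, worked):
              if self.status != "on_trial":
                  return               # adopted rules are rarely scrutinized again
              self.uses += 1
              self.successes += int(worked)
              if self.uses >= self.TRIAL_USES:
                  rate = self.successes / self.uses
                  self.status = ("adopted" if rate >= self.MIN_SUCCESS_RATE
                                 else "rejected")

      # Toy usage: a rule that holds about 90% of the time will usually
      # survive its trial period and be adopted.
      rule = TentativeRule("unsupported objects fall")
      for _ in range(30):
          rule.record_outcome(random.random() < 0.9)
      print(rule.status)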
- Is there an agent which holds on to deprecated common sense facts
(e.g. theories like "some fish don't have legs" that are later
replaced by more effective rules), so that we avoid generating
them again? If we don't have such an agent, how do we weed out
ineffective common sense rules?
- What different kinds of common sense knowledge are there? For
example, the existence of autism suggests that some parts of the
brain are specialized for handling some kinds of social common
sense.
- I suspect that not all common sense information is represented
or processed in the same way. Thus, it's possible for some of
our abilities to acquire and deploy common sense information to be
compromised while others continue working. This might result
in a situation where a panalogy (like "give") might have its
social sense disabled, while other senses (like the physical
sense) remain intact.
- Can we use common sense knowledge to help a robot learn or
understand language the way that humans do? This might be a good
alternative or supplement to grammar-based theories of language.
- Criticism: This book explains the problem with naive approaches to
AI (like constructing long lists of If-Then rules), and
convincingly describes more promising approaches. But even though
we agree that a truly intelligent program must be able to (for
example) change representations and generate abstractions, we
don't know how to write a program that does so. How do we begin to
know how to do that?
- When using difference engines to achieve our goals, how do we
decide which differences to reduce?
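- One toy way to frame this question, in the spirit of means-ends
  analysis: list the differences between the current and goal
  descriptions and reduce the one judged most significant first.  The
  significance weights and state descriptions below are invented.

      # Toy difference-engine step: find the differences between the
      # current and goal descriptions, then pick the "most significant"
      # one to reduce first.  The weights here are made up.

      SIGNIFICANCE = {"location": 3, "holding": 2, "door_open": 1}

      def differences(current, goal):
          return [k for k in goal if current.get(k) != goal[k]]

      def pick_difference(current, goal):
          diffs = differences(current, goal)
          if not diffs:
              return None   # nothing left to reduce: the goal is achieved
          # One simple policy: attack the most significant difference first.
          return max(diffs, key=lambda d: SIGNIFICANCE.get(d, 0))

      current = {"location": "store", "holding": "groceries", "door_open": False}
      goal    = {"location": "home",  "holding": "groceries", "door_open": False}
      print(pick_difference(current, goal))   # -> "location"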
- Since humans evolved to be able to acquire and master a large
amount of common sense information, wouldn't it be a good idea to
develop a computer program that /evolves/ the ability to handle
common sense in the same way?
- Even if this particular idea has problems, I think it would be
an interesting project to have a population of programs that
evolve all the time -- maybe they will produce interesting
things over the next few billion years.
- Are our towers of difference engines unique to individuals? How
much do they vary over time, and among individuals? Among
cultures? Among the human race?
- I suspect that goals related to survival are nearly universal,
though there are times when humans (and other animals)
will prioritize something other than their own survival.
- I'm impressed by Evans's analogy program. Maybe instead of
searching for new ways to think, our programs could instead search
aggressively for new and better ways to generalize what they
already know?
- This approach might run into the problem of
over-generalization, and would probably give wrong answers in
situations that are dramatically different from anything our
programs have seen before. Even so, it seems that analogy is
fundamental to how humans think; maybe this approach would work
in conjunction with some other techniques.
- If we have many ways of representing knowledge, does that mean
that we also have different strategies for processing each kind of
representation? How many of our ways to think are tailor-made for
specific representations, and how many are general-purpose?
- Having too many special-purpose strategies might lead to too
many things to try, as well as to overcrowding of valuable
brain space with niche hardware. On the other hand, I feel like
I have a number of different ways to think, some of which are
specialized to formal logic, others of which can sift through
complicated inconsistent real-life data --- so there must be
some specialization.
- I know from experience that choosing the right representations can
make the difference between efficient, legible code and slow,
kludgy code, and I see that humans can retrieve, deploy and
combine common sense information incredibly effectively. So, I
want to ask: what representations enable us to handle common sense
knowledge so well?
- How do we interconnect representations that operate at different
levels of abstraction? What representations enable us to switch
between levels of abstraction so easily?
- How can we design a program that can carry out abstractions?
- The Internet reports the following strategies: Domain hiding,
Co-domain hiding, Domain reduction, and Domain aggregation.
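- Whatever the right taxonomy is, here is a small dict-based sketch
  of three such operations on a table of facts (the data, groupings,
  and threshold are invented for illustration).

      # Hypothetical sketch of three abstraction operations on a small
      # table of records.

      records = [
          {"name": "sparrow", "legs": 2, "mass_g": 30,    "habitat": "city"},
          {"name": "ostrich", "legs": 2, "mass_g": 90000, "habitat": "savanna"},
          {"name": "cat",     "legs": 4, "mass_g": 4000,  "habitat": "city"},
      ]

      def hide_attributes(rows, keep):
          """Domain hiding: drop the attributes we decide not to care about."""
          return [{k: r[k] for k in keep} for r in rows]

      def reduce_values(rows, attr, coarsen):
          """Domain reduction: map fine-grained values onto a coarser scale."""
          return [{**r, attr: coarsen(r[attr])} for r in rows]

      def aggregate(rows, key):
          """Domain aggregation: group individuals into collective entities."""
          groups = {}
          for r in rows:
              groups.setdefault(r[key], []).append(r["name"])
          return groups

      small_view = hide_attributes(records, keep=["name", "legs"])
      size_view  = reduce_values(records, "mass_g",
                                 lambda g: "big" if g > 1000 else "small")
      by_habitat = aggregate(records, "habitat")
      print(small_view, size_view, by_habitat, sep="\n")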
- Is art (e.g. playing music) more like a high level activity
or a low level one? It seems to rely on planning and effective
communication like many high level skills, but also on complicated
systems that manage nuances, like the systems that control fine
motor movement or those that enable us to read volumes of
information from subtle facial expressions.
- Many planning systems construct the entirety of their plans before
they execute them. However, in everyday life, our plans are often
not fully developed before we start, and exigencies often
intervene. How would your programming approach differ if you were
to write a program that amends and elaborates on a plan as it
executes it? (for example, design a robot that tries to solve a
non-routine math problem, or that tries to find its way home from
the store based on things it noticed on the way to the store).
- I'm interested in the idea of learning from mistakes, and I wonder
how we process them to make it easier for us to learn from them
and use what we learn from them. What are our most effective
strategies for recognizing, cataloguing, representing,
consolidating, and generalizing mistakes?
- What processes might cause us to "recollect" details that never
happened? (confabulation)
- Suggestion: Humans have begun to pass down knowledge about
mistakes through anecdotes and cautionary tales. I think it would
be helpful if computers could do this, as well. I suspect that
computers might be even more effective than we are at doing this,
perhaps through the use of a central database of mistakes.
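- A minimal sketch of what such a shared database might record (the
  fields and the matching rule are assumptions): each entry keeps the
  context, what went wrong, and the lesson, and retrieval works by
  matching the current context against the recorded one.

      # Minimal sketch of a shared "database of mistakes".

      mistakes = []

      def record_mistake(context_tags, what_went_wrong, lesson):
          mistakes.append({"tags": set(context_tags),
                           "error": what_went_wrong,
                           "lesson": lesson})

      def relevant_lessons(current_tags):
          """Return lessons whose recorded context overlaps the current one."""
          current = set(current_tags)
          return [m["lesson"] for m in mistakes if m["tags"] & current]

      record_mistake(["cooking", "timing"],
                     "left the pan unattended",
                     "set a timer before starting anything else")
      print(relevant_lessons(["cooking", "guests"]))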
- Concern: We learn common sense information over time, which
enables us to process that information and relate it to other
things we know. But the way we generate common sense databases
now, I worry that we produce facts that lack these rich
interconnections. (I guess that's one of the appeals of building a
"baby machine".) How can we resolve this problem?
- What are the different competences encapsulated by the suitcase
word "Knowing"?
- I've read that there are at least two kinds of knowing: knowing
that, and knowing how. But is there a better distinction we
could make?
- Why don't we notice when we use suitcase words? In other words,
what processes enable humans to correctly disambiguate a suitcase
word, or otherwise to avoid noticing or being bothered by words
that don't carry any real meaning?
- Perhaps we use our ability to negotiate multiple representations
at once not only when we are trying to use what we already know,
but also when we are trying to learn something new, or to transfer
something we've learned in one domain to another.
- Experience seems like a critical component of common sense
reasoning --- because our reasoning depends so much on analogy
with things we've experienced before.
- Suggestion: In order for a program to be fully intelligent, it
should have (and be able to use) multiple senses. I think we
should focus on collecting and organizing common sense information
from particular senses, or combinations of senses, like what cats
typically feel like, what desserts often smell like, or what it's
like to be at a fireworks show.
- What skills are required to give computers a three-dimensional view
of the world?
- Perhaps they need representations that include not just color
and illumination, but also depth and form.
- A lot of human behavior is determined by different types of Pain
and Pleasure. Are these concepts helpful/necessary for intelligent
computers? And are they evolutionary hacks for us, or are they
fundamentally useful for some reason?
- Criticism: I doubt we can program computers to do the sorts of
things that took millions of years to evolve in nature. Even if
you could, why would you want to handicap a computer by making it
as (in)capable as a newborn infant? Instead, we should program
computers to do the things that computers surpass humans at,
e.g. tabulating and grinding through large collections of data
without getting tired or making a mistake or being argumentative
or lazy.
- What will common sense systems enable us to do in the near future?
- Maybe we can curate a database of different perspectives on the
world, to broaden our cultural viewpoint.
- Maybe we can collect a list of goals many people have, and the
tricks or hacks people have discovered for effectively
achieving those goals. That would enable us to acquire a lot of
new interconnected common sense information about what things
can be used for.
- How effectively can we induce particular emotional states in
ourselves? What are good strategies for doing so? Relatedly, how
do we manage to train children to feel
certain ways in certain situations? (e.g. happy at a wedding, sad
at a funeral.)
- How would you create a robot that appreciates music in the same
way we do? What processes might be required to appreciate music?
In particular, how is it that certain chords become associated with
certain moods?
- How would you give a computer the ability to feel "gut feelings"?
How much do gut feelings depend on environmental features (like
how darkness conveys a sense of foreboding), and how much on
physiological processes (like how making a certain facial
expression or noticing a surge of adrenaline contributes to the
feeling of fear), and how much on subconscious computations?
- What decision-making procedures do we use to choose appropriate
representations/domains? Do we try everything in parallel, or do
we try what worked last time? Is this search procedure for good
representations a methodical process, or more arbitrary and
random?
- How do we choose which representation to try next, when the
current representation turns out to be unproductive? Perhaps we
start a debugger to explain why the current representation failed,
then try to find a better one, or else we rely on some educated
guesswork.
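- A toy sketch of one possible policy (the bookkeeping is invented,
  not from the book): order the candidate representations by how often
  they have worked before, and when one fails, note why before moving
  on, as a crude stand-in for the "debugger" mentioned above.

      from collections import defaultdict

      success_count = defaultdict(int)
      failure_log = []   # crude record of why each representation failed

      def solve(problem, representations):
          # Try what worked most often in the past first.
          ordered = sorted(representations,
                           key=lambda r: -success_count[r.__name__])
          for rep in ordered:
              try:
                  answer = rep(problem)
                  success_count[rep.__name__] += 1
                  return rep.__name__, answer
              except Exception as why:
                  failure_log.append((rep.__name__, repr(why)))
          return None, None

      # Two hypothetical representations of the same little problem.
      def as_numbers(p):
          return sum(p["values"])

      def as_words(p):
          return " plus ".join(p["names"])   # fails when no names are given

      print(solve({"values": [2, 3]}, [as_words, as_numbers]))
      print(failure_log)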
- Why do young people tend to acquire information more quickly than
older people? Is this a societal tendency, or the result of
biological changes?
- This might be a result of the Investment Principle: Maybe we
spend the first years of our life trying to build a framework
that can explain the bewildering array of experiences we
have. Then, after we refine it for a while, we find that it can
explain pretty much everything, accurately enough, most of the
time, and our internal systems stop investing the energy to
make it better or to add new things to it. If this is true,
maybe the solution is to resist becoming complacent?
- Is the example of "the professor who couldn't remember which
concepts were hard" related to the example of "the child who can
walk, but who can't explain how it works"? That is, how do parts
of our minds manage to develop complicated algorithms without "us"
being conscious of how they work?
- How do we utilize our common sense knowledge to generate facts
on-demand---for example, that you can sit on a diving board or
that classrooms are unlikely to contain space shuttles? Is common
sense information stored in our brains in a way that makes it easy
to generate sentences like these? Is common sense information
stored in a way that resembles sentences like these?
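- A tiny sketch of the "generate on demand" reading: the fact that you
  can sit on a diving board is not stored as a sentence at all, but
  computed from stored properties when asked.  The objects,
  properties, and rule below are invented examples.

      properties = {
          "diving board": {"rigid": True, "roughly_horizontal": True,
                           "supports_a_person": True},
          "soap bubble":  {"rigid": False, "roughly_horizontal": False,
                           "supports_a_person": False},
      }

      def can_sit_on(thing):
          # The "fact" is generated on demand from the object's properties.
          p = properties.get(thing, {})
          return (p.get("rigid", False)
                  and p.get("roughly_horizontal", False)
                  and p.get("supports_a_person", False))

      for thing in ("diving board", "soap bubble"):
          verb = "can" if can_sit_on(thing) else "cannot"
          print(f"You {verb} sit on a {thing}.")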
- When making an abstraction, it's important to decide which
features are important. But how do we decide on a representation
in the first place, even before we decide on the features of the
representation that are meaningful?
- Why does society frown upon "stating the obvious" --- that is,
making common sense knowledge explicit? Shouldn't it be
enlightening to expose the assumptions and background knowledge we
have in common?
- How is the 6-layer division of the mind related to the division of
the mind into specialized domains of knowledge/representation? For
example, are there some domains of knowledge that exist only at
one level? Are there representations that span many levels? Or,
are subdomains of knowledge just another kind of resource that
control structures at any of the six levels might use?
- How do our minds internally represent the reliability of various
common sense facts? For example, do we use probabilities, or
qualitative descriptions (sometimes, rarely, usually, always)?
- Maybe we associate our common sense assumptions with "fallback
plans" for what to do if they fail, or memories of instances
when they failed. Then we can use this metadata either to infer
that the information is highly suspect, or to make reliability
an irrelevant point anyways (because you have a richly
connected fallback plan).
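- A small sketch of that suggestion (the labels, the example, and the
  decision rule are all invented): each assumption carries a
  qualitative reliability label plus a fallback, so a failure has
  somewhere to go.

      RELIABILITY_ORDER = ["rarely", "sometimes", "usually", "always"]

      assumptions = {
          "the bus arrives on schedule": {
              "reliability": "usually",
              "fallback": "walk to the next stop and check the timetable",
              "failures_remembered": ["snow day last winter"],
          },
      }

      def plan_with(assumption):
          info = assumptions[assumption]
          trusted = (RELIABILITY_ORDER.index(info["reliability"])
                     >= RELIABILITY_ORDER.index("usually"))
          if trusted:
              return "rely on it, but keep the fallback handy: " + info["fallback"]
          return "treat it as suspect; prefer the fallback: " + info["fallback"]

      print(plan_with("the bus arrives on schedule"))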
- Although there is no reliable evidence for eidetic (photographic)
memory in general, recent studies (> 2006) suggest that some
individuals have /hyperthymesia/, in which they have an eidetic
memory for "autobiographical" events, and that this might be the
result of time-space synesthesia. What do you think about this,
and what mechanisms might explain how hyperthymesia occurs?
- How might you design a program that can appropriately answer
questions like "what does this remind you of?" or "Have you seen
anything like this before"?
- Why have high-level/self-reflective difference engines not been
studied further? Is there more to learn from difference
engines---could they be an active area of research nowadays---or
do we basically understand them and their limitations?
- Do you think the brain actually uses something very similar to
difference engines?
- Although brain science may be too primitive nowadays, how do you
imagine it might be able to help AI research in the near future?
- What sorts of abilities are children born with? How can we
experimentally determine what children can do, if some of their
abilities are kept in an internal "prototyping stage" without any
sort of behavioral manifestation?
- How is the quality of our decisions affected when we use "gut
feelings" instead of our high-level explicit planning, linguistic,
cognitive procedures?
- I suspect that our gut reactions must not be too irrational, or
else we would have died out. On the other hand, I know that
animals are often specialized for certain precise environments,
and they stop working when taken outside of them. (Consider how
a moth's navigational software breaks near a candle flame). In
any case, I'm suspicious of any software I can't analyze
myself, and I bet we'd make better decisions if we could
introspect and modify the programs responsible for our
rapid-fire decisions.
- To what extent do we perform mental hygiene: to forget useless
information, to clear out bad ideas, maladaptive habits, and
unproductive ways to think? It seems like if all our knowledge is
tightly interconnected, then overzealous cleaning would break too
many things. Maybe we're only able to make a succession of
superficial changes.
- Might computers enable us to perform better mental hygiene?
- Concern: Suppose we make a near-human-level intelligent
computer. Doesn't it interfere with the computer's autonomy if we
provide all of its common sense information, rather than letting
it acquire its own opinions?
- When utilizing multiple realms of thought, does one realm usually
dominate, or can we have several active at once? If we have
several active at once, doesn't that severely constrain the
resources that each realm can use?
- How would you program a computer whose goal is to find
regularities in the environment (and how would you prevent it from
making generalizations that are too large to be useful)?
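- One common guard against over-broad generalizations is to demand a
  minimum amount of supporting evidence and no counterexamples before
  proposing a rule; a toy sketch follows (threshold and data are
  invented).  Of course, "all swans are white" is still exactly the
  kind of over-generalization the question worries about, so a support
  threshold alone is clearly not enough.

      # Toy regularity finder: propose "all X are Y" only when there is
      # enough support and no counterexample.  The threshold is invented.

      MIN_SUPPORT = 3

      observations = [
          ("swan", "white"), ("swan", "white"), ("swan", "white"),
          ("crow", "black"), ("crow", "black"),
          ("cat", "black"), ("cat", "white"),
      ]

      def find_regularities(obs):
          seen = {}
          for kind, colour in obs:
              seen.setdefault(kind, []).append(colour)
          rules = []
          for kind, colours in seen.items():
              if len(colours) >= MIN_SUPPORT and len(set(colours)) == 1:
                  rules.append(f"all {kind}s are {colours[0]}")
          return rules

      print(find_regularities(observations))   # -> ['all swans are white']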
- I don't understand how logic makes it hard to do reasoning with
analogies. What does that mean?
- Although evolution has obscured the inner workings of our minds
from us, should we design computers that are fully capable of
seeing and modifying even the lowest levels of their minds? (Or
would that be too dangerous for them? Maybe we should give them a
switch to turn on direct introspection after they've learned
enough.) Should we make programs that can indirectly modify their
behavior the way that we do (e.g. through music or caffeine or
imagining a peaceful/frustrating/melancholy/inspiring scenario)?
- How do the representations which children use differ from those
which adults use?
- Do children have different realms of expertise than adults?
Perhaps children are specialists in certain skills that adults
don't have or don't need.
- How do Frames and Difference Engines interact?
- To what extent does culture play a role in how general or specific
our metaphors are? Is understanding new metaphors a skill that we
are taught, or does it mostly rely on skills that we already have
in other areas? Are some cultures' metaphors more abstract than
others --- or is there some universal consensus on how abstract
they generally are?
- It seems like Panalogies might sometimes result in duplicated
work, as multiple parts of the brain independently try to do the
same job simultaneously. How do brains confront this problem?
- Perhaps some parts of our minds are designed with single large
centralized processors that perform a unique specialized
task. This prevents duplication of work by ensuring that
exactly one piece of hardware can do any particular job. Plus,
when making improvements, only one piece of hardware needs to
be modified in order for all processes that use it to be
affected.
- Perhaps panalogies are equipped with control structures that
avoid the problem either in advance by managing which parts of
the mind are doing which jobs, or on-the-fly by translating
results between different domains.
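- A minimal sketch of the second suggestion (the registry, the realm
  names, and the job are invented): parts of the mind consult a shared
  record of who has already claimed a job, and reuse that result
  instead of redoing the work.

      claimed = {}   # job -> resource that took it
      results = {}   # job -> finished result

      def claim(job, resource):
          """Return True if this resource gets the job, False if taken."""
          if job in claimed:
              return False
          claimed[job] = resource
          return True

      def finish(job, result):
          results[job] = result

      def get_result(job):
          return results.get(job)

      # Two realms both want "distance to the door"; only one computes it.
      if claim("distance_to_door", "spatial realm"):
          finish("distance_to_door", "about four meters")
      if claim("distance_to_door", "social realm"):
          finish("distance_to_door", "recomputed needlessly")
      print(claimed["distance_to_door"], "->", get_result("distance_to_door"))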
- What are the functions that enable us to acquire and use common
sense information?
- Perhaps we have rules that say "If you don't know something,
try to figure it out." Such primitive rules must be hardwired
in, rather than learned. As such, they might be difficult to
expose to scientific investigation, because we can't access
them with our more recently evolved cognitive and linguistic
agents.
- What do we use our common sense information for, and how could we
design programs to perform those functions?