These resources were developed in anticipation of Ed Fredkin's
lecture this Wednesday (27 Feb). You should look over the topics
beforehand; the more opinions, questions, and criticisms you develop in
advance, the more you will gain from the lecture. In particular (see
topic #3), you may want to think of ways to avert a robot
apocalypse, just in case.
Preparing for intelligent machines
- Architecture. What exotic hardware, if any, is required to build
machines with human-level intelligence? For example, is parallelism
the key to human-like thinking? Is nondeterminism, randomness, or
quantum mechanics required?
Alternatively, could our existing hardware support intelligent
programs if we knew how to design the right software?
- Technical requirements. The technical specs of the human
brain can give an idea of the technical requirements for intelligent
machines. How much information do we gather over the course of a
lifetime? How fast do nerve impulses travel? How much does the
memory capacity of the human genome limit the potential complexity of our brains?
- Avoiding apocalypse. Science fiction presents countless
stories of homicidal intelligent machines.
- What are the real risks involved with the invention of intelligent
machines? Which risks are not real?
- Might we prevent machines from becoming dangerous by building
safety protocols, or laws to obey, into their hardware? Or by
isolating them physically: on a separate network, on a separate
power grid, etc.? What are the obstacles to doing so? How do
security issues with machines that can think differ from security
issues with, for example, nuclear reactors?
- What are the ethics of prescribing different sets of laws for
mechanical intelligence and biological intelligence? Might we prevent machines from
becoming dangerous by teaching them ethics in the same way that we
teach each other ethics? Might machines instead be in a position to
teach humans about ethics?
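The questions in topic #2 lend themselves to back-of-envelope arithmetic. The sketch below uses rough textbook figures (nerve conduction speeds of roughly 1-120 m/s, a genome of about 3.2 billion base pairs at 2 bits each, and on the order of 10^14 synapses); all numbers are order-of-magnitude estimates for discussion, not measurements.

```python
# Back-of-envelope estimates for topic #2. All figures are rough
# textbook values; treat the outputs as order-of-magnitude estimates.

# Nerve impulses travel at roughly 1-120 m/s depending on the fiber;
# electrical signals in a wire travel at a large fraction of c.
nerve_speed_m_s = 120          # fast myelinated fiber (upper bound)
signal_speed_m_s = 2e8         # ~2/3 the speed of light in a wire
print(f"silicon/nerve signal speed ratio: "
      f"{signal_speed_m_s / nerve_speed_m_s:.0e}")

# The human genome holds ~3.2 billion base pairs at 2 bits each.
genome_bits = 3.2e9 * 2
print(f"genome capacity: ~{genome_bits / 8 / 1e6:.0f} MB")

# The brain has on the order of 1e14 synapses; even at one byte per
# synapse, that is vastly more state than the genome can specify
# connection by connection.
synapse_bytes = 1e14
print(f"synaptic state vs. genome capacity: "
      f"~{synapse_bytes / (genome_bits / 8):.0e}x")
```

The last figure suggests one answer to the genome question: development and learning, not the genome alone, must determine most of the brain's wiring.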
Suggested excellent reading
Architecture and technical requirements
- [link] In
his PhD thesis on the Connection Machine, Danny Hillis discusses the paradox that
computers seem so much faster than brains yet currently do so much
less. (See section 1.1, pages 6-8).
- [link]
Prof. Minsky argues that parallel computing makes it harder to do
many things at once, because the available resources must be
rationed among all available processes. (See the section called
"Fragmentation and the parallel paradox.")
- [link]
This article analyzes the energy limits of the brain, showing that the
computational power of the brain must be similarly limited.
- [link]. Some
argue that the brain
is a quantum computer. The authors of this paper conclude that the argument is wrong-headed.
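A standard way to quantify one limit on parallelism, related to (though distinct from) Minsky's fragmentation argument, is Amdahl's law: if a fraction s of a task is inherently serial, no number of processors yields a speedup beyond 1/s. This sketch is offered as background, not as something from the readings above.

```python
# Amdahl's law: with a fixed serial fraction s, speedup on n
# processors is 1 / (s + (1 - s) / n), which approaches 1/s as
# n grows without bound.

def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Maximum speedup from n processors when a fixed fraction is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# A task that is 5% serial tops out near 20x, no matter how many
# processors are available.
for n in (10, 100, 1_000_000):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

The point for topic #1: adding processors helps only as far as the problem can actually be decomposed, which is one reason parallelism by itself may not be "the key" to human-like thinking.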
Avoiding apocalypse
- [link]. The
Machine
Intelligence Research Institute is an organization dedicated to developing
artificial intelligence software responsibly, and to
raising awareness about the benefits and dangers of creating
intelligent machines. This paper discusses the risks of
AI and proposes some solutions.
- [link]. Eliezer
Yudkowsky, one of the co-founders of the Machine Intelligence
Research Institute, has proposed
and run the AI box experiment to challenge the assumption
that humans could successfully incarcerate a superhuman
intelligence.
- [link]
Aaron Sloman argues that it is unethical to impose different laws
for human and non-human intelligent beings.