Last Update: Saturday, 22 October 2022
Note 1: Many of these items are online; links are given where they are known. Other items may also be online; an internet search should help you find them.
Note 2: In general, works are listed in chronological order. (This makes it easier to follow the historical development of ideas.)
On analog computation:
Three older encyclopedias are:
Introductions to Philosophy:
My favorite introduction to philosophy is:
My second favorite is:
On philosophical method, see:
On the nature of philosophy:
What Is Truth?
For more on correspondence theories, see:
The standard logical
analysis of a correspondence theory of truth is due to Alfred Tarski. For
an overview, see
Tarski's own version aimed at a general audience is:
On coherence theories, see:
See also the answers to two questions at AskPhilosophers.org:
Scientific Rationality:
On the experimental philosophy movement ("X-Phi"), see:
Whether or not X-Phi is really philosophy,
it is certainly an interesting and valuable branch of cognitive science.
For a useful discussion of naturalistic
philosophy, see:
And
For a debate on science vs. philosophy,
read:
in that order.
For a discussion of whether philosophy or science is "harder", see:
On why science needs philosophy, see:
Knowing How vs. Knowing That
The knowing-how/knowing-that distinction was first discussed in:
It is examined
in the context of computer programs in:
For surveys of recent work on
the distinction, see:
Is It Always Rational to be Rational?
An interesting discussion of the role — and limits — of
rationality in AI research is:
For more on the nature of rationality and its limits, see:
Philosophies of Anything and Everything
On the AI approach to existence and non-existence, see:
For the AI version of ontology, see
The Buffalo Ontology Site.
On knowledge representation, see:
and my bibliography at
Knowledge Representation and Reasoning: General Resources
A history of the phrase 'computer science' can be found in:
In a response
to a letter that appeared in one of the earliest issues of
Communications of the ACM, an editor (possibly Alan J. Perlis)
listed several, admittedly
"facetious", names, including 'turingineering', 'turology',
'applied meta-mathematics', and 'applied epistemology'
(DATA-LINK (1958). What’s in a name? Communications of the ACM, 1(4):6).
The first two are puns on the name of Alan
Turing, arguably the founder of the discipline, discussed in
Chapter 8. We discuss "applied epistemology" in §3.16.3 of the book.
In 1966, Peter Naur (a winner of the Turing Award)
suggested 'datalogy':
A useful discussion of these terms can be found in "About Names and
Labels", in
Paul Abrahams (1987). What is computer science? Communications of the
ACM, 30(6):472–473, says:
Determining Boundaries:
On Wittgenstein's notion of "game", see:
On categorization, see:
On the inability to carve nature into joints, see:
Computers Are Not Natural:
On the nature of
artifacts in general, see:
On artifacts in CS, see:
But this is not necessarily a positive analogy.
Now, if you search for
'Department of Microscopy' on the World Wide Web, you will, indeed, find
that there are some universities and museums that have one. But, if you
look closer, you will see that they are really departments of
microbiology. Non-biologists who use microscopes (such as some
geologists or even jewelers) are not found in departments of microscopy
today. What has happened, apparently, is that the use of this artifact
by scientists studying widely different phenomena was not
sufficient to keep them in the same academic discipline. The academic
discipline of microscopy splintered into those who use microscopes to
study biology, those who use them to study geology, and so on, as well
as those
who build new kinds of microscopes (who might be found in an engineering
or an optics department).
For over a hundred years, there was a
Quarterly Journal of Microscopical Science
(1853–1965), affiliated with "the Microscopical Society of London".
Its inaugural Preface said:
"The object of this Journal will be the diffusion of information
relating to all improvements in the construction of the
Microscope, and to record the most recent and important researches
made by its aid in different departments of science,
whether in this country or on the continent.
"It is, perhaps, hardly necessary to apologise for the title of
the Journal, as the term "Microscopical," however objectionable
in its origin, has acquired a conventional meaning by its
application to Societies having the cultivation of the use of the
Microscope in view, and so fully expresses the objects of the
Journal, that it immediately occurred as the best understood word
to employ. It will undoubtedly be a Journal of Microscopy and
Histology; but the first is a term but recently introduced into
our language, and the last would give but a contracted view
of the objects to which the Journal will be devoted."
(Preface. Quarterly Journal of Microscopical Science, 1(1) (1853):1–2.)
The first issue of the journal
included (besides many articles on what we now call
microbiology) a paper on "Hints on the Subject of Collecting Objects
for Microscopical Examination" and a review of a book titled The
Microscopist; or a Complete Manual on the Use of the Microscope.
Here is a passage from that review:
But, based on the nature of many of its articles,
the March 1962 issue of the journal announced a change in focus
from microscopy to cytology,
thus apparently changing its interest from the tool to
what can be studied with it.
The change officially occurred in 1966, when the journal
changed its name to the Journal of Cell Science (and restarted
its volume numbers at 1).
(On the (subtle)
"Differences between Histology and Cytology", see
DifferenceBetween.com)
Could the same thing happen to computer science that happened
to microscope science? If so, what would fall under the heading of the
things that can be studied with computers?
A dean who oversaw the Department of Computer Science at my university
once predicted that
the same thing would happen to our department: The computer-theory
researchers would move into the mathematics department; the AI
researchers would find homes in psychology, linguistics, or philosophy;
those who built new kinds of
computers would move (back) into electrical engineering;
and so on. This hasn't happened yet
(although
McBride, N. (2007, 22 January). The death of computing. BCS: The Chartered
Institute for IT; Features,
Press and Policy,
suggests that it is already happening,
while
Mander, K. (2007, February). Demise of computer science exaggerated. BCS:
The Chartered Institute for
IT; Features, Press and Policy,
disagrees).
Nor do I foresee it happening in
the near future, if at all. After all, as the computer scientist George
Forsythe pointed out, a separate discipline is needed
in order to teach "nontechnical students" about computers and
computational thinking, to teach "specialists in other technical fields"
about how to use computers as a tool (alongside "mathematics, English,
statistics"), and to teach "computer science specialists" about how to "lead
the future development of the subject".
On quantum computing, see:
On DNA computing, see:
Only Algorithms?
Besides Knuth, others who argue that CS studies algorithms include:
Tic-Tac-Toe (Noughts and Crosses):
There are at least two implementations of Michie's method online:
On the relationship between programming and teaching, see:
On the difference between classical symbolic programming (where the
programmer "teaches" the computer how to do something) and
machine-learning systems (where the computer "learns" how to do
something without being explicitly taught),
see:
On whether such systems "really" learn, see:
The two "systems" or "types" of thinking are discussed in much
greater detail in:
There, "Type~1" (or
"intuitive") processing is characterized as independent of working
memory (a kind of short-term, conscious memory) and as
"autonomous". And "Type~2" (or "reflective") processing is
characterized as "requir[ing] working memory" and involving "cognitive
decoupling" and "mental simulation". (Cognitive decoupling is,
roughly, the ability to mentally represent a mental representation, so
that the second-order representation can be thought about separately
from the original representation.)
For more on the two "systems" of thinking, and on unconscious cognition
more
generally, see:
CS Studies Information:
For a survey of various senses of 'information'
as it applies to computing, see
Piccinini 2015,
Ch. 14.
On the difficulty of defining 'information', see:
Lots of work has been done on the nature of
information and its relationship to CS, and on the philosophy of
information; see, especially:
In particular,
Dunn 2008
is a very readable survey of the
nature of information and its role in computer science,
covering many of the same topics and issues as this book.
CS as a Mathematical Science:
For more of Dijkstra's observations on mathematics and computer science, see:
For Dijkstra's obituary, see:
Martin Davis says that the theory of
computation is a "branch of applied mathematics":
Rosenbloom 2010
offers an interesting twist on the relationship of
mathematics to CS: Arguing that "computing amounts to a great
scientific
domain, on par with the physical, life, and social sciences",
he "subsum[es] mathematics within computing" (p. 2). In other words,
instead of CS being a branch of mathematics or being a mathematical
science, he sees mathematics as a branch of CS!
CS as a Natural Science of Procedures:
Sheraton, M. (1981, 2 May). The elusive art of writing precise recipes. New York
Times
discusses the difficulties of writing recipes.
Moskin, J. (2018, 9 July). Overlooked no more: Fannie Farmer, modern cookery’s
pioneer. New York Times,
page B5,
notes that it was the cookbook writer Fannie Farmer
who was "the first … to insist that scientific methods and precise
measurements" should be used.
On evolution as an algorithm,
see:
For more on natural computation, see:
CS as an Empirical Study:
On "black boxes":
Rosenblueth, A. and Wiener, N. (1945). The role of models in science.
Philosophy of Science, 12:316–321, esp.
pp. 318–319,
talk about "closed-box"
and "open-box" problems, surely an early version of the notion of
"black" and "glass" "boxes".
von Neumann
1948 characterizes axioms as black boxes. On the history of these
terms, see "Black Box" (Wikipedia)
On the program-process distinction,
see also:
On computers in the desert:
Weizenbaum, J. (1976). Computer Power and Human Reason. W.H. Freeman, New
York, Ch. 5, is the source of the computer-in-the-desert thought experiment.
Real-life examples include
"… Stonehenge, the world's largest undocumented computer"
(Brooks, Jr., F. P. (1975). The Mythical Man-Month. Addison-Wesley,
Reading, MA, p. 163)
and the Antikythera Mechanism (discussed in §6.4.1 of the book).
CS as Art:
On CS as both a fine and a liberal art, see:
CS as the Study of Complexity:
For an overview of these topics, see Rapaport, W.J. (2017). What is computer science?
American Philosophical Association Newsletter on
Philosophy and Computers, 16(2):2–22, esp. §13.2.
CS as Computational Thinking:
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful
Ideas. Basic Books, New York,
only mentions 'computational thinking' on
p. 182 and 'procedural thinking' on p. 155, but his entire book can be
thought of as an extended characterization of this kind of thinking and
learning. For more on Papert and his version of computational thinking, see:
Guzdial, M. (2008). Paving the way for computational thinking.
Communications of the ACM, 51(8):25–27,
is a commentary on Perlis's view of what is now called 'computational thinking'.
There is also a Center for Computational Thinking.
Lu, J. J. and Fletcher, G. H. (2009). Thinking about computational
thinking. SIGCSE Bulletin, 41(1):260–264,
gives examples of how computational thinking
can be introduced in primary- and secondary-school curricula even before
any formal introduction to CS.
Carey, K. (2010, 12 November). Decoding the value of computer science. Chronicle of
Higher Education, 58(12):A88,
argues for the value of algorithmic thinking
in fields other than computer science (including finance and journalism).
Tedre, M. and Denning, P. J. (2016). The long quest for computational
thinking. In Proceedings of the
16th Koli Calling International Conference on Computing Education
Research, Koli Calling '16, pages 120–129, New York. ACM,
gives a good survey of the history of
"computational thinking".
Denning, P. J. and Tedre, M. (2019). Computational Thinking. MIT Press,
Cambridge, MA,
expands on
that history as well as providing a thorough overview of its many
meanings, noting that "computing [in the sense of
"calculating"] is an ancient human process" (p. 11) dating back to at
least the Babylonians (§7.3.1), and
so "computational thinking" is equally ancient. See also
Denning, P. J. and Tedre, M. (2021). Computational thinking: A disciplinary
perspective. Informatics in
Education, 2021(3):361–390.
For a skeptical eye on the notion, see:
Here are some good general introductions to the philosophy of science:
Other (longer!) book-length treatments include:
Two good short introductions are:
Other readings include:
For a detailed defense of the view that science is (merely?)
any systematic study, see:
On physicalism, see:
On noumena and phenomena, see:
The shortest (but by no means the easiest!) introduction to
Kant's philosophy is:
For further discussion, see:
Quantum Mechanics:
For a cultural critic's views on what we can learn about the
nature of science from the paradox of quantum entanglement in physics, see:
For more on quantum mechanics, see:
On realism vs. instrumentalism in science, see:
Scientific Theories:
On scientific theories in general, see:
On the syntactic vs. semantic views, see the debate in:
For a wonderful discussion of the nature of science with respect
to evolution, see the judge's opinion in the celebrated 2005 legal case
of Kitzmiller v. Dover Area School District
(esp. §4, "Whether ID [Intelligent Design] Is Science").
Willingham, D. T. (2011, May). Trust me, I'm a scientist. Scientific American,
discusses "Why so many people choose not to believe what scientists say".
Gopnik, Adam (2015). The evolution catechism. The New Yorker,
discusses the nature of 'theory'
as it is used in science and in ordinary language.
"The" Scientific Method:
On scientific methods, see
Hepburn, Brian and Hanne Andersen, Scientific Method, The Stanford
Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.),
esp. §5.2 on "computer methods".
On Popper's Philosophy of Science:
On pseudo-science, see:
On astrology, see Thagard, P. (1978). Why astrology is a pseudoscience.
PSA: Proceedings of the Biennial Meeting of the
Philosophy of Science Association, 1:223–234.
On Marxist economics, see Hudelson, R. (1980). Popper's critique of Marx.
Philosophical Studies, 37(3):259–270.
On Freudian theories, see Grünbaum, A. (1984). The Foundations of
Psychoanalysis. University of California Press, Berkeley,
and
Prof. Adolf Grünbaum's Publications
on Psychoanalysis and Philosophy of Medicine
On homeopathy — and a very good general survey of the nature of
pseudoscience — see:
Mukerji, N.; Ernst, E. (2022),
Why homoeopathy is pseudoscience. Synthese 200:394.
On rules of logic such as Modus Tollens and DeMorgan's Law, see:
On quantum logic, see Wilce, Alexander, Quantum Logic and Probability
Theory, The Stanford Encyclopedia of Philosophy (Fall 2021 Edition),
Edward N. Zalta (ed.).
On the pessimistic meta-induction, see:
Scientific Revolutions:
On scientific revolutions and paradigm shifts, see:
Other Alternatives:
The AI researcher and sociologist M. Ross Quillian
made remarks similar to those of Feyerabend in:
CS and Science:
On "the unreasonable effectiveness of mathematics in science", see:
Wilson, E. and Frenkel, E. (2013).
Two views: How much math do scientists
need? Notices of the AMS,
60(7):837–838
Mark Steedman, a computational linguist, in
Steedman, M. (2008). On becoming a discipline. Computational
Linguistics, 34(1):137–144,
has some interesting things to say on the differences between a
discipline such as physics and a discipline such as CS (in
general) and computational linguistics (in particular),
especially in §1, "The Public Image of a Science".
Tedre 2011
surveys ways in which computing is a science.
On the relationship between scientific
theories and algorithmic information theory, see:
On the paucity of definitions of 'engineering', see:
Dennett 1995,
p. 188,
also observed that the philosophy of engineering was not
well developed, and singled out the 1969 first edition of
Simon, H. A. (1996). The Sciences of the Artificial, Third Edition. MIT
Press, Cambridge, MA,
as a pioneering work in the philosophy of engineering (not
philosophy of science?).
Recently, there has been more work on
it: In 2009, the philosophy journal The Monist published a special
issue on philosophy and engineering:
Staples, M. (2014). Critical rationalism and engineering: Ontology.
Synthese, 191(10):2255–2279,
presents a deductive view of theories in engineering;
definitions of engineering are discussed in §2,
and the relation of engineering to science is discussed in §4.
Staples, M. (2015). Critical rationalism and engineering: Methodology.
Synthese, 192(1):337–362,
is a sequel containing a useful "taxonomy"
(§3) of ways in which artifacts can fail to conform
to specifications, a topic that is relevant to our discussions of implementation
(Chapter 13) and of intended behavior (§15.1).
Parts of Popper, K. (1972). Objective Knowledge: An Evolutionary
Approach. Oxford University Press, Oxford,
discuss the relation of engineering to science.
Putnam 1987,
p. 48,
suggests that there are three views of engineering: as design, as
application, and as construction.
Vincenti, W. G. (1990). What Engineers Know and How They Know It:
Analytical Studies from Aeronautical
History. Johns Hopkins University Press, Baltimore,
argues that engineering is a kind of knowledge that
is different from scientific knowledge.
Hoare, C. A. R. (2009). Retrospective: An axiomatic basis for computer
programming. Communications of the ACM, 52(10):30–32,
has some interesting comments on the complementary
nature of pure academic research (science) and applied industrial research
(engineering).
On the relationship between science and engineering in the
context of physical computation, see:
Mitcham, C. (2009). A philosophical inadequacy of engineering.
The Monist, 92(3):339–356,
commenting on Tredgold's 1828 definition of engineering, says:
Heuristics:
Two important surveys of meanings of the term 'heuristic' are:
Simon, H. A. and Newell, A. (1958). Heuristic problem solving: The next
advance in operations research.
Operations Research, 6(1):1–10
Other discussions include:
and the classic
Polya, G. (1957). How to Solve It: A New Aspect of Mathematical Method,
2nd edition. Doubleday Anchor, Garden City, NY.
Thagard, P. (2007). Coherence, truth, and the development of scientific
knowledge. Philosophy of Science, 74:28–47,
is not about heuristics, but presents a theory
of "approximate" truth that
bears a close resemblance to the idea that a heuristic
is an algorithm that computes an approximately correct answer.
Software Engineering:
For more on software engineering and software-engineering education, see:
CS and Engineering:
Tedre, M. (2009). Computing as engineering. Journal of Universal Computer
Science, 15(8):1642–1658
For a computer scientist's take on design, see:
Staples 2014,
§6.2, is a reply to Koen.
Kaag, J. and Bhatia, S. K. (2014). Fools for tools: Why engineers need to
become philosophers. The Chronicle
[of Higher Education] Review, 61(13):B13–B15,
argues that "engineers need to become philosophers".
A nice brief history of computers is in:
Copeland, B. J. (2004). Computation. In Floridi, L., editor, The
Blackwell Guide to the Philosophy of
Computing and Information, pages 3–17. Blackwell, Malden, MA,
esp. pp. 3–4, discusses "The Birth of the Modern
Computer".
Haigh, T. (2014). Actually, Turing did not invent the computer.
Communications of the ACM, 57(1):36–41,
discusses the (ir)relevance of the mathematical
history of computation to the engineering history.
Despite its title,
Mahoney, M. S. (2011). Histories of Computing. Harvard University Press,
Cambridge, MA. Edited by Thomas Haigh,
is not so much a history of computing or computers as
a history of CS. Chs. 10 and 11
are especially good on some of the recent mathematical history.
The Engineering History:
Good sources of information on the engineering history include:
Useful websites include:
Sloman, A. (2002). The irrelevance of Turing machines to AI. In Scheutz,
M., editor, Computationalism:
New Directions, pages 87–127. MIT Press, Cambridge, MA,
§2, argues that even the engineering
history of computers has "two strands": the "development of machines
for controlling physical mechanisms and [the] development of machines
for performing abstract operations, e.g. on numbers."
Husbands, P., Wheeler, M., and Owen, H. (2008). Introduction: The
mechanical mind. In Husbands, P.,
Wheeler, M., and Owen, H., editors, The Mechanical Mind in History,
pages 1–17. MIT Press, Cambridge, MA,
is an overview of attempts to make mechanical minds.
On the Antikythera Mechanism, see:
17th-Century Calculating Machines:
On Leibniz's contributions, see:
For pictures of early calculating machines, including Pascal's and Leibniz's, see:
Babbage's Machines:
Adam Smith,
"On the Division of Labor" (Book I, Ch. I).
Smith's pin-factory story is reprinted in
Lawson, R. (2004). Division of labour.
See also:
Babbage was inspired by de Prony, who was inspired by Smith.
Smith, in turn, may have been inspired by the Talmud — the ancient
Jewish commentaries on the Torah:
The recursive nature of top-down design and stepwise refinement has been
identified with the notion of scientific progress by
Rosenblueth and Wiener 1945,
p. 319:
Hyman, A. (1982). Charles Babbage: Pioneer of the Computer. Princeton
University Press, Princeton, NJ,
is a biography of Babbage (reviewed in
O’Hanlon, R. (1982). A calculating man. New York Times Book Review,
pages BR26–BR27).
Useful websites on Babbage include:
Simon and Newell
1958,
pp. 1–3, contains a brief history of Babbage's work.
For the story of the Jacquard loom, see
Keats, J. (2009, September). The mechanical loom.
In "The Start of Everything". Scientific American, page 88.
Gandy, R. (1988). The confluence of ideas in 1936. In Herken, R., editor,
The Universal Turing Machine:
A Half-Century Survey, Second Edition, pages 51–102. Springer-Verlag,
Vienna, esp. pp. 53–54 — written by
Turing's only PhD student — notes that
Babbage's Analytical Engine can be considered as a kind of register machine
(see §§9.3.2, 11.9, and 13.3.3),
in which case it is equivalent to a Turing Machine, and he considers
Babbage's statement that "the whole of the development and operations of
analysis are now capable of being executed by machinery" to be
"Babbage's Thesis" (perhaps on a par with the Church-Turing
Computability Thesis).
On the Difference Engine, see:
A partial physical model of the Difference Engine was finally built
around 1991; see Swade, D. D. (1993, February). Redeeming Charles Babbage's
mechanical computer. Scientific American, pages 86–91.
Efforts have also been underway to build a version of the
Analytical Engine; see
Markoff, J. (2011). It started digital wheels turning. New York
Times, pages D1, D4.
Green, C. D. (2005). Was Babbage's analytical engine intended to be a
mechanical model of the mind?
History of Psychology, 8(1):35–45.
Ada Lovelace's commentary can be found in
her notes
to her translation of a description of the Analytical Engine:
For more on Lovelace, see:
On the history of programming, see
Randell, B. (1994). The origins of computer programming. IEEE Annals of
the History of Computing, 16(4):6–14.
For "Reflections on the first textbook on programming",
see
Campbell-Kelly, M. (2011). In praise of 'Wilkes, Wheeler, and Gill'.
Communications of the ACM, 54(9):25–27.
Ensmenger, N. (2011). Building castles in the air. Communications of the
ACM, 54(4):28–30,
contains "Reflections on recruiting and training programmers
during the early period of computing."
Electronic Computers:
For more on Colossus, see:
On the Enigma, see Kernan, M. (1990, May). The object at hand.
Smithsonian, 21:22, 24, 26.
Martin, D. (2013). Mavis Batey, 92, Allied code breaker in World
War II.
New York Times, page 32,
an obituary of Mavis Batey, a code breaker
who worked with Turing at Bletchley Park.
On Atanasoff, see Baranger, W. R. (1995).
John V. Atanasoff, 91,
dies; early computer researcher. New York Times, page 11.
On Zuse, see:
On the ENIAC, see:
For Eckert's obituary, see
Baranger, W.R. (1995). J. Presper Eckert, co-inventor of early computer,
dies at 76. New York Times, page B12.
On the ENIAC-ABC controversy, with a discussion
of an attempt to replicate the ABC, see
Wheeler, D.L. (1997). An ancient feud: Who invented the computer?
Chronicle of Higher Education, page B2.
A useful summary
of some of the issues involved can be found in
Ensmenger, N. (2003).
Bits of history: Review of A.R. Burks's Who Invented
the Computer? The Legal
Battle that Changed Computing History. In American
Scientist, 91:467–468.
Ensmenger observes that identifying Atanasoff as " the inventor of
the computer" (my phrasing and italics) is an "answer to what
is fundamentally the wrong question"
because "any particular claim to priority of
invention must necessarily be heavily qualified: If you add enough
adjectives, you can always claim your own favorite".
For another take on this kind of question, by a computer
scientist, see
Hamming, R. (1980). We would know what they thought when they did it. In
Metropolis, N., Howlett, J.,
and Rota, G.-C., editors, A History of Computing in the Twentieth Century:
A Collection of Essays, pages 3–9. Academic Press, New York.
Halmos, P.R. (1973). The legend of John von Neumann. American
Mathematical Monthly, pages 382–394,
is a very readable, short biography of
von Neumann, with a
heavy emphasis on the humorous legends that have grown up around him.
The story of von Neumann's involvement in the development of computers
can be found in
Dyson, G. (2012). Turing’s Cathedral: The Origins of the Digital
Universe. Pantheon, New York.
For commentaries on Dyson 2012, see:
See also
Bacon 2010.
The original document on the "von Neumann architecture" is
von Neumann, J. (1945).
First draft report on the EDVAC. IEEE Annals of
the History of Computing, 15(4) (1993):27–75. Michael D. Godfrey (ed.).
Modern Computers:
For the history of personal computers, see:
On precursors of the Internet and the Web, see:
For a 1909(!) version of the Internet, see
Forster, E.M. (1909). The machine stops. In The Collected Tales of E.M.
Forster, pages 144–197. Modern
Library, 1968, New York.
For more recent histories of the Internet and the Web, see:
For brief biographies of two computer pioneers — Grace Murray
Hopper and
Jean E. Sammet — see:
And for
a history of computers as shown in cartoons, see
Mathews, W.M. and Reifers, K. (1984). The computer in cartoons: A
retrospective from The Saturday
Review. In Communications of the ACM, 27(11):1114–1119.
As for "higher purposes", see
Hafner, K. (2002). Happy birthday :-) to you:
A smiley face turns 20.
New York Times, page G4.
See also this cartoon.
The Scientific History:
The best overview of the scientific-logical history is
Davis, Martin D. (2012). The Universal Computer: The Road from Leibniz to
Turing; Turing Centenary Edition.
CRC Press/Taylor & Francis Group, Boca Raton, FL. Originally published as
Engines of Logic: Mathematicians
and the Origin of the Computer (New York: W.W. Norton, 2000),
written by one of the leading
mathematicians in the field of theory of computation.
For reviews of the first edition (2000), see:
Davis, M.D. (1995). Mathematical logic and the origin of modern
computers. In Herken, R., editor, The
Universal Turing Machine: A Half-Century Survey, Second Edition,
pages 135–158. Springer-Verlag, Vienna,
is an article-length
version of the story told in his book.
Davis, M. D. (2000). Overheard in the park. American Scientist,
88:366–367, is a somewhat negative review of
Berlinski, D. (2000). The Advent of the Algorithm. Harcourt, New York,
correcting some of the historical errors in that book.
Davis, M. D. (2003). Paradoxes in paradise. American Scientist,
91:268–269 is a review of
Giaquinto, M. (2002). The Search for Certainty: A Philosophical Account of
Foundations of Mathematics. Oxford University Press.
Early sections of:
An enjoyable graphic-novel treatment of the Russell-Frege story, with text
by a well-known computer scientist, is:
An excellent, brief overview of the history of logic and the
foundations of mathematics that led up to Turing's
analysis can be found in
Henkin, L. (1962). Are logic and mathematics identical? Science,
138(3542):788–794.
See also:
For the logical history as written by one of its chief players, see
Kleene, S. C. (1981). Origins of recursive function theory. Annals of the
History of Computing, 3(1):52–67.
Robinson, J. A. (1994). Logic, computers, Turing, and von Neumann. In
Furukawa, K., Michie, D., and Muggleton,
S., editors, Machine Intelligence 13: Machine Intelligence and Inductive
Learning, pages 1–35.
Clarendon Press, Oxford,
is a personal history of the development of computers and the
related logical history, by the developer of the resolution
method of automated theorem proving.
On Church, see:
For very elementary introductions to the lambda-calculus, see:
The role of philosophy in the history of computers is told in
George, A. (1983). Philosophy and the birth of computer science. Robotics
Age, pages 26–31.
For a somewhat controversial take on the history of computing (and the
notion of a stored-program computer), see a debate between computer
scientist Moshe Vardi and philosopher B. Jack Copeland:
Davis 1995
overlaps and extends Henkin's history, and provides a
useful summary of Turing 1936,
which we will discuss in great detail in the next chapter.
Harnad, S., editor (1994).
What Is Computation? is a special issue of
the journal
Minds and Machines (Vol. 4, No. 4),
including, among others:
Shagrir 1999
discusses what computers are and what computation is.
§§1–3, 4.5–4.6 of
Soare, R.I. (1999). The history and concept of computability. In
Griffor, E., editor, Handbook of
Computability Theory, pages 3–36. North-Holland, Amsterdam,
is an excellent source on the history and terminology of computability.
Herman, G.T. (2000). Algorithms, theory of. In Ralston, A.,
Riley, E.D.,
and Hemmindinger, D., editors,
Encyclopedia of Computer Science, 4th edition, pages 51–53.
Grove's Dictionaries/Nature Publishing Group, New York/London,
discusses the informal notions of "algorithm"
and "effective computability"; good background for Turing 1936.
Smith, B.C. (2002). The foundations of computing. In Scheutz, M., editor,
Computationalism:
New Directions, pages 23–58. MIT Press, Cambridge, MA,
discusses the foundations of computing.
"PolR" 2009
is an excellent introduction for lawyers(!) to computation theory.
Bernhardt, C. (2016). Turing's Vision: The Birth of Computer
Science. MIT Press, Cambridge, MA,
is an excellent guide for the general reader to
the mathematics of computation theory.
What Is 'Computation'?
'compute':
The Latin word 'computare', in turn, comes from the
Latin morpheme 'com', meaning "together with", and the
Latin word 'putare', meaning "to cleanse, to prune, to reckon,
to consider, think" (and 'putare' came from a word meaning "clean,
pure"). So, in ancient Rome at least, to "compute" seems to have meant, more or
less, something like:
"to consider or think about things together", "to clear things up together",
or maybe "to reckon with (something)".
'reckon':
'count':
'calculate':
'figure':
The bottom line seems to be this: 'Computation' originally meant something
very closely related to our modern notion of "symbol (that is, shape)
manipulation",
which is another way of describing syntax — the "grammatical" properties
of, and relations among, the symbols of a language. (We discuss syntax
in §§13.2, 15.2.1, 16.10, and 18.8.3.)
On the history of the terms 'computable' vs. 'recursive', see
§11.5, p. 391 of:
We discuss recursive functions in §7.6.
Functions Described Intensionally:
For more on the notion of understanding in terms of the
internal workings of a black box, see:
which also suggests an analogy between computation and causation.
Ancient Algorithms:
For more on Al-Khwarizmi and his algorithms, see:
"Effectiveness":
Church, A. (1956). Introduction to Mathematical Logic. Princeton
University Press, Princeton, NJ,
calls 'effective' an "informal notion". See:
See also:
Another explication of 'effective' is on p. 124 of:
Procedure vs. Algorithm:
Hopcroft, J.E. and Ullman, J.D. (1969). Formal Languages and Their
Relation to Automata. Addison-Wesley, Reading, MA,
pp. 2–3, also distinguishes a "procedure" from an "algorithm".
In their "vague … definition", a procedure is "a finite
sequence of instructions that can be mechanically carried out, such as a computer
program" (to be formally defined in their chapter on Turing Machines),
and an algorithm is "a procedure which always terminates".
For more on the attempts to make the notion of "algorithm" precise, see
Sieg 1994
(containing a detailed history and analysis of the
development of the formal notions of algorithm in the 1930s and after) and
Copeland 1997
(an essay on hypercomputation — or
"non-classical" computation — whose introductory section
(pp. 690–698) contains an enlightening discussion
of the scope and limitations of Turing's accomplishments).
See also:
In §12.3.4, we look at whether
computer programs can be copyrighted or patented. In order to answer this
question, many
legal experts have tried to give a definition of 'algorithm'. One such
attempt is Chisum, D. S. (1985-1986). The patentability of algorithms.
University of Pittsburgh Law Review, 47:959–1022.
Farkas, D.K. (1999). The logical and rhetorical construction of
procedural discourse. Technical Communication,
46(1):42–54,
contains advice on how to write informal procedures.
Insight 1: Representation:
For more on binary representation, link to
Great Idea I: Binary Representation.
Wheeler, J. A. (1989). Information, physics, quantum: The search for
links. In Proceedings III International
Symposium on Foundations of Quantum Mechanics, pages 354–358.
Tokyo,
argues against the existence of the "continuum" of real numbers.
Chaitin, G.J. (2006). How real are real numbers? International Journal of
Bifurcation and Chaos,
"discuss[es] mathematical and physical arguments
against continuity and in favor of discreteness".
On Leibniz's binary insight, see:
See
Cerf, V.G. (2014). Virtual reality redux. Communications of the
ACM, 57(1):7,
for some interesting comments that are relevant to
the insight about binary representation of information.
On Shannon, see:
Shapiro, Stewart; Snyder, E.; & Samuels, R. (2022), Computability, notation, and de re knowledge
of numbers. Philosophies 7(1), p. 6,
discusses the relative
advantages of binary representation of natural numbers vs. unary representation.
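For a quick sense of what is at stake in that comparison, the following sketch (mine, not from Shapiro et al.) shows how the length of a numeral grows with the number it denotes under the two notations: unary length grows linearly, binary length only logarithmically.

    # Length of the numeral for n in unary vs. binary notation (illustrative only).
    for n in [5, 50, 500, 5000]:
        unary_length = n                 # e.g., 5 is '|||||'
        binary_length = n.bit_length()   # e.g., 5 is '101'
        print(n, unary_length, binary_length)
    # 5000 needs 5000 strokes in unary but only 13 bits in binary.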
Insight 2: Processing:
Wang, H. (1957). A variant to Turing's theory of computing machines.
Journal of the ACM, 4(1):63–92, p. 63,
"offer[s] a theory which is closely
related to Turing's but is more economical in the basic
operations. … [A] theoretically simple basic machine can
be … specified such that all partial recursive functions (and hence
all solvable computation problems) can be computed by it and that only
four basic types of instruction are employed for the programs: shift
left one space, shift right one space, mark a blank space, conditional
transfer. … [E]rasing is dispensable, one symbol for marking is
sufficient, and one kind of transfer is enough. The reduction
is … similar to … the definability of conjunction and
implication in terms of negation and disjunction … ."
A version of Wang's machine, with many examples, is given in
Dennett 2013,
Ch. 24.
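To illustrate how spare Wang's instruction set is, here is a minimal interpreter sketch (mine, in Python, not Wang's own formalism; the instruction names and program syntax are merely suggestive). A program is a numbered list of instructions; a conditional transfer jumps to a given instruction only if the scanned square is marked; and there is no erase instruction.

    # A sketch of a Wang-style non-erasing machine (after Wang 1957).
    # Instructions: ("LEFT",), ("RIGHT",), ("MARK",), ("JUMP", n)
    # ("JUMP", n) transfers control to instruction n iff the scanned square is marked.
    from collections import defaultdict

    def run(program, tape=None, max_steps=1000):
        tape = defaultdict(int, tape or {})   # 0 = blank, 1 = marked
        head, pc = 0, 0
        for _ in range(max_steps):            # bound the run; halting is undecidable
            if pc >= len(program):            # running off the end counts as halting
                break
            op = program[pc]
            if op[0] == "LEFT":
                head -= 1
            elif op[0] == "RIGHT":
                head += 1
            elif op[0] == "MARK":
                tape[head] = 1
            elif op[0] == "JUMP" and tape[head] == 1:
                pc = op[1]
                continue
            pc += 1
        return dict(tape)

    # Example: mark three consecutive squares, starting at the initial position.
    print(run([("MARK",), ("RIGHT",), ("MARK",), ("RIGHT",), ("MARK",)]))
    # {0: 1, 1: 1, 2: 1}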
Insight 3: Structure:
Two cartoons that illustrate the benefits of good structure in programming
are online at XKCD: Good Code
and Abstruse Goose: O.P.C.
For more on the structure insight, link to
Great Idea III: The Boehm-Jacopini Theorem
and Structured Programming.
On abstraction, see:
One of the best introductions to abstraction is
Pattis, R.E., Roberts, J., and Stehlik, M. (1995).
Karel the Robot: A Gentle Introduction to the Art of
Programming, Second Edition. John Wiley & Sons, New York.
See also §5.2.2 of
Rapaport, W.J. (2017). On the relation of computing to the world. In
Powers, T.M., editor, Philosophy
and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and
Ethics, pages 29–64. Springer, Cham, Switzerland.
For more on the power of abstraction, see the first few
paragraphs of Antoy, S. and Hanus, M. (2010). Functional logic
programming. Communications of the ACM, 53(4):74–85.
Insight 4: The Church-Turing Computability Thesis:
Recommended textbooks on computability theory include:
For discussion of the appeal to Gödel's authority, see
Shagrir, O. (2006). Gödel on Turing on computability. In
Olszewski, A.,
Wołenski, J., and Janusz, R.,
editors, Church's Thesis after 70 Years, pages 393–419.
Ontos-Verlag,
which explores why Gödel believed both that
"the correct definition of mechanical computabilty was established beyond any
doubt by Turing"
(Gödel, K. (1938). Undecidable Diophantine propositions. In
Feferman, S.
et al., editors, Kurt Gödel:
Collected Works, Vol. III, pages 164–175.
Oxford University Press, 1995, Oxford,
p. 168) and that "this definition
… rest[s] on the dubious assumption that there is a finite number
of states of mind" (Shagrir 2006, §1).
Copeland, B.J. and Shagrir, O. (2013). Turing versus Gödel on
computability and the mind. In Copeland, B. J., Posy, C. J.,
and Shagrir, O., editors, Computability: Turing,
Gödel, Church, and Beyond, pages 1–33.
MIT Press, Cambridge, MA,
explores both Gödel's interpretations
of Turing's work and the relation of the human mind to Turing Machines.
See also
Sieg, W. (2006). Gödel on computability. Philosophia
Mathematica, 14:189–207,
as well as
Soare 2009,
§2, "Origins of Computabilty and Incomputablity",
which contains a good summary of the history of both
Turing's and Gödel's accomplishments.
Turing cites
Church, A. (1936). An unsolvable problem of elementary number theory.
American Journal of Mathematics,
58(2):345–363,
for the definition of
lambda-definability. He cites
Kleene, S.C. (1935-1936). General recursive functions of natural
numbers. Mathematische Annalen, 112:727–742,
for the definition of general
recursiveness. And he cites
Kleene, S.C. (1936). λ-definability and recursiveness. Duke Mathematical
Journal, 2:340–353.
for the proof of their equivalence.
For statements of equivalence of general recursive, μ-recursive,
lambda-definable, etc., see
Soare, R.I. (2012). Formalism and intuition in computability.
Philosophical Transactions of the Royal
Society A, 370:3277–3304, esp. p. 3284.
Kleene, S.C. (1995). Turing's analysis of computability, and major
applications of it. In Herken, R., editor,
The Universal Turing Machine: A Half-Century Survey, Second Edition,
pages 15–49. Springer-Verlag, Vienna,
shows how to "compile" (or translate the language
of) recursive functions into (the language of)
Turing Machines, i.e., how a Turing Machine can compute recursive functions.
Kreisel, G. (1987). Church's thesis and the ideal of informal rigour.
Notre Dame Journal of Formal Logic, 28(4):499–519,
is a paper by a well-known logician arguing that
Church's Thesis can be proved. Similar arguments are made in:
For more on the Church-Turing Computability Thesis, see:
Soare 2016,
§17.3.3, argues that the Computability Thesis
should properly be understood as a "claim with demonstration" and not
as a proposition "in need of continual verification".
Rescorla, M. (2007). Church's thesis and the conceptual analysis of
computability. Notre Dame Journal of Formal Logic,
48(2):253–280,
identifies
Church's Thesis as the proposition that a number-theoretic function
is intuitively computable iff it is recursive. And he identifies
Turing's Thesis as the proposition that a number-theoretic
function is intuitively
computable iff a corresponding string-theoretic function that
represents the number-theoretic one is computable by a Turing Machine.
He concludes that Church's Thesis is therefore
not the same as Turing's Thesis.
(On representing numbers by strings, see
Shapiro, Stuart C. (1977).
Representing numbers in semantic networks:
Prolegomena. In Proceedings of the 5th
International Joint Conference on Artificial Intelligence, page 284.
Morgan Kaufmann, Los Altos, CA.)
Tharp, L.H. (1975). Which logic is the right logic? Synthese,
31:1–21,
investigates an analogous issue in logic: Is our
informal or pre-theoretical conception of logic best captured by
first-order predicate logic or by some other (more powerful?) logic?
Insight 5: Implementation:
Physical implementations of computation are discussed in great detail in:
A Recursive Definition of Natural Numbers:
Peano's axioms were originally proposed in:
For more on what are also sometimes called the "Dedekind-Peano" axioms, see:
Recursive Definitions of Recursive Functions:
A good general overview of recursive functions that is pretty consistent
with the presentation in this section is
Dean, Walter, Recursive Functions, The Stanford Encyclopedia of
Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.).
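As a tiny illustration of the style of definition at issue (my example, in Python, not Dean's), addition and multiplication can be defined by primitive recursion from zero and the successor function alone:

    # Primitive-recursive-style definitions over the natural numbers,
    # using only zero, successor, and recursion on the second argument.
    def succ(n: int) -> int:
        return n + 1

    def add(m: int, n: int) -> int:
        # add(m, 0) = m;  add(m, n+1) = succ(add(m, n))
        return m if n == 0 else succ(add(m, n - 1))

    def mult(m: int, n: int) -> int:
        # mult(m, 0) = 0;  mult(m, n+1) = add(m, mult(m, n))
        return 0 if n == 0 else add(m, mult(m, n - 1))

    print(add(2, 3), mult(2, 3))   # 5 6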
The Halting Problem:
Chaitin 2006
(and
Chaitin, G.J. (2006b, March). The limits of reason. Scientific American,
pages 74–81, which is aimed at a
more general audience) discusses the Halting Problem, the non-existence of real
numbers(!), and the idea that "everything is
software, God is a computer programmer, … and
the world is … a giant computer"(!!).
On the non-"reality" of "real" numbers, see also
Knuth 2001,
pp. 174–175.
The Busy Beaver function was first described, and proved to be
non-computable, in Radó, T. (1962, May), On non-computable functions, The Bell System
Technical Journal, pages 877–884. This paper is a wonderfully
readable introduction to the Busy Beaver "game". §II gives a very
simple example, aimed at students, of a Turing Machine,
and §§I–III
and, especially, §VIII are amusing and well worth reading!
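For reference, one common informal way to state the function (often written Σ; this is a paraphrase, and formulations vary in their details) is:

    \Sigma(n) = \max\{\, \text{number of 1s left on an initially blank tape by a
                halting } n\text{-state, two-symbol Turing Machine} \,\}

Radó showed both that Σ is not computable and that it eventually exceeds every computable function.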
For more information, see:
On countable and uncountable infinities, see:
For more on the difference between a single program that can determine
whether any program halts and programs that can determine whether a given
program halts, see:
Gödel Numbering:
"Gödel numbering" is actually a bit more complicated
than presented in the text. For more information, see
"Gödel Number" (Wolfram MathWorld)
or
"Gödel Numbering" (Wikipedia).
For a comparison of it with "Turing numbering", see
p. 492 of
Kleene, S.C. (1987). Reflections on Church's thesis. Notre Dame Journal
of Formal Logic, 28(4):490–498.
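The core idea, in the prime-power style usually associated with Gödel (a simplified sketch in Python; the encodings discussed in the sources above differ in their details), is to code a finite sequence of symbols, each already assigned a number, as a single number from which the sequence can be uniquely recovered by factorization:

    # A simplified prime-power Gödel numbering of a finite sequence of
    # symbol codes (each code is assumed to be >= 1 so that decoding
    # recovers the full sequence).
    def first_primes(k):
        # The first k primes, by trial division (fine for small k).
        ps, n = [], 2
        while len(ps) < k:
            if all(n % p != 0 for p in ps):
                ps.append(n)
            n += 1
        return ps

    def godel_number(codes):
        # Encode [c1, c2, c3, ...] as 2**c1 * 3**c2 * 5**c3 * ...
        g = 1
        for p, c in zip(first_primes(len(codes)), codes):
            g *= p ** c
        return g

    def decode(g):
        # Recover the exponents by repeated division; the result is unique
        # by the Fundamental Theorem of Arithmetic.
        codes, p = [], 2
        while g > 1:
            c = 0
            while g % p == 0:
                g //= p
                c += 1
            codes.append(c)
            p += 1
            while any(p % q == 0 for q in range(2, p)):   # advance to the next prime
                p += 1
        return codes

    print(godel_number([3, 1, 2]))   # 2**3 * 3**1 * 5**2 = 600
    print(decode(600))               # [3, 1, 2]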
For more information on relative computability, see
Soare 2009,
Soare 2016,
and
Homer and Selman 2011.
"Writing Symbols on Paper":
"Man" and "Machine":
Hilbert's observation about tables, chairs, and beer mugs appears in his
Gesammelte Abhandlungen ("Complete Works"),
vol. 3, p. 403, as cited on p. 135 of
Coffa, J. (1991). The Semantic Tradition from Kant to Carnap: To the
Vienna Station. Cambridge University Press, Cambridge, UK.
See also
Shapiro, Stewart (2009). We hold these truths to be self-evident: But what do
we mean by that? Review of
Symbolic Logic, 2(1):175–207.
Elsewhere, Hilbert used a different example:
On Turing's claim that "by altering its m-configuration the machine can
effectively
remember some of the symbols which it has "seen" (scanned) previously", see:
"The Universal Computing Machine":
The Universal Turing Machine is covered in detail in:
Another presentation, using the notion of a register machine, can be found in
Dennett 2013,
pp. 126–128.
Lammens, J. (1990). Universal program is a Common Lisp
implementation of Turing's universal program as specified
in
Davis and Weyuker 1983.
Davis 2012,
p. 147, notes that
"Turing's universal machine showed that the distinctness of these
three categories [machine, program, and data] is an illusion."
Turing's Writings:
See also Billock, V.A. and Tsou, B.H. (2010, February). Seeing forbidden colors.
Scientific American, 302(2):72–77, esp.
pp. 75–76.
In this article, he also proves a version of the Halting Problem, in the
sense that he proves "that there cannot be any systematic procedure
for determining whether a puzzle be solvable or not" (p. 590).
Biographical Information:
Dramatizations:
Others include:
Turing's Legacy:
Pedagogy:
Implementations of Turing Machines:
There are several
Turing Machine simulators and implementations, some quite curious:
"[O]ne using servo motors, a Parallax
Propeller, felt-tipped pen, and 1000 feet of film leader" is described at
A Turing Machine in the Classic Style
and at
Turing Machine a Masterpiece of Craftsmanship.
A Turing Machine simulator app for iPhones and iPads is available at
Turing — Turing Machine Simulator
Shagrir 1999
is a short, but wide-ranging, paper on the nature
of computers, hypercomputation, analog computation, and computation as not being
purely syntactic (a topic that we look into in Chapter 16).
Shagrir, O. (2006). Why we view the brain as a computer. Synthese,
153:393–416,
§1, "The Problem of Physical Computation:
What Does Distinguish
Computers from Other Physical Systems?",
contains a good survey of various theories of what a computer is.
Chalmers, D.J. (2011). A computational foundation for the study of
cognition. Journal of Cognitive Science
(South Korea), 12(4):323–357,
§"What about computers?", pp. 335–336, suggests that
a computer is a device that can implement multiple computations by being
programmable. (The 2011 version of this essay — originally written in
1993 — was accompanied by
commentaries, including:
and a reply:
Chalmers, D. J. (2012). The varieties of computation: A reply. Journal of
Cognitive Science (South Korea),
13(3):211–248.)
It is also useful to see what introductory CS texts say. A good (early)
example is
Ralston 1971.
A textbook survey of numerous definitions is in
Harnish, R.M. (2002). Coda: Computation for cognitive science, or what IS
a computer, anyway?
In Minds, Brains, Computers: An Historical Introduction to the Foundations
of Cognitive Science,
pages 394–412. Blackwell, Malden, MA.
See also the website
Anderson, D.L. (2006). The nature of computers. The Mind Project.
On virtual machines, see:
and — especially — Pollock, J.L. (2008). What am I? Virtual
machines and the mind/body problem. Philosophy and Phenomenological
Research, 76(2):237–309,
who argues that we are virtual machines!
See also
Chalmers, D.J. (2017).
The virtual and the real. Disputatio,
9(46):309–351.
Sloman, A. (2008). Why virtual machines really matter — for several
disciplines,
views virtual machines as mathematical abstractions
that can have causal relations with the physical world.
Stored Program vs. Programmable:
Haigh, T. (2013). 'Stored program concept' considered harmful: History and
historiography. In Bonizzoni, P., Brattka, V., and
Löwe, B., editors, CiE 2013, pages 241–251.
Springer-Verlag Lecture Notes in
Computer Science 7921, Berlin,
argues that the phrase 'stored-program
computer' is ambiguous between several different readings and often
conflated with the notion of a Universal Turing Machine, hence
that it would be better to refrain from using it.
See also Haigh, T. and Priestley, M. (2016).
Where code comes from:
Architectures of automatic control from Babbage
to Algol. Communications of the ACM, 59(1):39–44.
Randell 1994,
p. 12, argues that the analogy with Universal
Turing Machines is
more central (p. 13), and that "EDVAC does not qualify as a
stored-program computer" because the "representations [of data and
instructions] were quite distinct, and no means were provided for
converting data items into instructions" or vice versa (p. 13).
According to
Daylight 2013,
p. XX, note 3, the phrase 'stored
program' was not commonly used in the early 1950s. Furthermore,
according to
Daylight 2013,
§3, p. VII, storing data and
instructions together "was based on practical concerns, not theoretical
reasoning" of the sort that might have been inspired by Turing's notion
of a Universal Turing Machine.
John Searle's "Pancomputationalism": Everything Is a Computer:
For a related argument on pancomputationalism, see:
For more detailed critiques and other relevant commentary, see:
Being a Computer Is Not an Intrinsic Property of Physical
Objects:
For more discussion on what 'intrinsic' means, see:
On the intrinsicness of mass in this connection, see
Shagrir 2016,
p. 31.
For more detailed objections to Searle from the nature of measurement, see:
Patrick Hayes: Computers as Magic Paper:
For more on computers as switch-setting devices, see the discussions
in
Stewart 1994
and
Hayes 2007
of how train switches can implement computations.
Both of these are also examples of Turing Machines
implemented in very different media than silicon (namely, trains)!
Is the Brain a Computer?
Long before computers, the brain and nervous system were also likened to
an electronic communications network, as in
Fritz Kahn's artwork.
On computationalism, see:
On the brain as a computer, see:
It is one thing to argue that brains are (or are not) computers of some
kind. It is quite another to argue that they are Turing Machines, in
particular. The earliest suggestion to that effect is:
More recently, the cognitive neuroscientist
Stanislas Dehaene and his colleagues have made similar arguments; see:
See also:
Is the Universe a Computer?
For a different take on the question of whether the solar system computes
Kepler's laws of motion, in the context of "pancomputationalism" (the
view that "every deterministic physical system computes
some function"), see:
On cellular automata that are Turing Machine-equivalent, see these
Wikipedia articles:
You can read more about Wolfram and his theories at his
homepage and in:
For a critical review, see:
Aaronson, S. (2011, 9 December). Quantum computing promises new insights, not just
supermachines.
New York Times, page D5,
claims that quantum computing has "overthrown"
views like those of
Lloyd, S. (2000). Ultimate physical limits to computation. Nature,
406:1047–1054,
investigates "quantitative bounds to the computational
power of an 'ultimate laptop' with a mass of one kilogram
confined to a volume of one litre."
Lloyd, S. (2002). Computational capacity of the universe. Physical Review
Letters, 88(23):237901–1 — 237901–4,
argues that the universe itself can be regarded as a computer and estimates
its total computational capacity.
And see
Konrad Zuse also argued that the universe is a computer; see:
For more on the universe as a computer, see:
Related to Lloyd is Bostrom's
work on whether we are living in a
computer simulation:
For an argument that simulation theories are not
scientific, see Dunning, B. (2018, 8 May).
Are you living in a simulation?
Good general discussions of the various Computability Theses (Church's,
Turing's, et al.) and their history can be found in:
Kripke, S.A. (2013). The Church-Turing "thesis" as a special corollary of
Gödel's completeness theorem. In
Copeland, B.J., Posy, C.J., and Shagrir, O., editors,
Computability: Turing, Gödel, Church, and Beyond,
pages 77–104. MIT Press, Cambridge, MA,
contains a lot of useful historical remarks and offers an argument that
the Thesis can be proved as a corollary of Gödel's
Completeness Theorem.
On the other hand,
Folina 1998
argues — against
Gandy 1988,
Mendelson 1990,
Stewart Shapiro 1993,
Sieg 1994,
and others — that the thesis is true but unprovable
(perhaps as in Gödel's Incompleteness Theorem?).
See also:
Stewart Shapiro 2013
is a very readable discussion of the provability of
the Computability Thesis; of the nature of the
"informality", "vagueness", or "open texture" of the notion
of computability; and of the difference between
human, mechanical, and mathematical computability,
with observations on many of the other readings
discussed or mentioned in this chapter.
Mendelson 1990
has one of the clearest discussions of the
nature of the Computability Thesis.
For responses to Mendelson, see
Bringsjord 1993
and
Stewart Shapiro 1993.
Robin Gandy — Turing's only Ph.D. student — gave this statement
of what (ironically) he referred to as Church's Thesis:
Gandy goes on to be concerned with a mechanical version of the
Thesis: whether "What can be calculated by a machine is computable"
(Gandy 1980,
p. 124), where by 'machine' he says that he is …
One difference between a human
(even an "abstract" one) and a machine is that the latter can easily
perform parallel operations such as printing "an arbitrary number of
symbols simultaneously"
(Gandy 1980,
p. 125).
For discussion of Gandy's Thesis, see
Shagrir 2022,
§3.1.
On physical versions of the Computability Thesis, see
Piccinini and Maley 2021.
On the Computability Thesis and AI, see:
On the other hand,
Rey, G. (2012). The Turing thesis vs. the Turing test. The Philosopher's
Magazine, 57:84–89,
distinguishes the Turing Thesis from the Turing Test.
Carol Cleland: Some Effective Procedures Are Not Turing Machines:
Cleland's arguments have generated a lengthy debate:
Other articles on the issues appear in:
Beth Preston: Recipes, Algorithms, and Specifications:
On the notion of a "specification" (the general requirements for an
algorithm), see
Turner, R. (2011). Specification. Minds and Machines,
21(2):135–152.
Petroski, H. (2010). Occasional design. American Scientist,
98(1):16–19,
is a nice follow-up to Preston, in which he describes
how "an everyday challenge provides lessons in the processes of
planning and execution".
Daly, I. (2010, 24 February). Just like mombot used to make. New York Times,
pages D1, D5 — about robots that not only follow recipes,
but actually
cook the meals ("Fear not, humans, these robots are here to be our friends.
To prove it, they serve you food") — is interesting reading in connection
with what both Cleland and Preston have to say about the computability of recipes.
For more on sub-Turing computation, see
Bernhardt 2016,
Ch. 3, or
any text on the theory of computation (such as those cited
above),
as well as discussions of the
"Chomsky hierarchy".
For more on Copeland's views on hypercomputation, see:
Yet another form of hypercomputation, which does not fit easily into the
categories of this chapter, is a generalization of Turing
computability — which can be considered to be a branch of discrete
mathematics — to the
real numbers, so that there can be a theory of computation within
continuous mathematics. This has been investigated in:
Their form of computation
over the real numbers contains Turing computation as a special case.
On what Turing might have thought about hypercomputation, see
Hodges, A. (2004). What would Alan Turing have done after 1954? In
Teuscher, C., editor, Alan Turing:
Life and Legacy of a Great Thinker, pages 43–58. Springer, Berlin.
For more on hypercomputation, see the following:
Welch, P.D. (2004). On the possibility, or otherwise, of
hypercomputation. British Journal for the Philosophy
of Science, 55:739–746,
is a reply.
"Newer Physics" Hypercomputers:
Copeland 1998,
p. 151, footnote 2,
distinguishes between "accelerating" Turing Machines and "Zeus" machines.
See also
Copeland 2002a
and
Copeland and Shagrir 2011.
Davies, E.B. (2001). Building infinite machines. British Journal for the
Philosophy of Science, 52:671–682,
argues that an accelerating computer could be built
"in a continuous Newtonian universe" (as opposed
to the Malament-Hogarth spacetime mentioned in the
epigraph to this section of the book), though not "in the real universe".
However,
Cockshott, P. and Michaelson, G. (2007). Are there new models of
computation? Reply to Wegner and
Eberbach. The Computer Journal, 50(2):232–247,
§2.5, p. 235,
reject such relaxations of the laws of classical physics out of hand.
They also give an excellent summary of Turing Machines and complexity theory, and
they argue against a number of other hypercomputation proposals,
including Wegner's interaction machines.
Here are some readings
(in addition to those
above on
quantum computing in general) for readers who would like to pursue the topic:
Further Reading on Malament-Hogarth Hypercomputation:
And for those of you curious about the epigraph to this section concerning
Malament-Hogarth hypercomputation, here are some suggestions:
His machines turn out to be related to Zeus
machines. The introduction is worth quoting:
And §2 ("The Story of Dave, HAL, and Goldbach") — a science fiction
tale to illustrate his argument — is very much worth reading!
Batch vs. Online Processing:
There are several other names for interactive computing, including
"coupled" Turing Machines, each of
whose outputs serve as inputs for the other
(Copeland and Sylvan 1999
and
Zenil and Hernández-Quiroz 2007)
and "concurrent" computation.
On reactive systems, see also
Knuth, D.E. (2014, 20 May). Twenty questions for Donald Knuth, Question 13.
Peter Wegner: Interaction Is Not Turing-Computable:
Gurevich, Y. (1999). The sequential ASM thesis. Bulletin of the European
Association for Theoretical
Computer Science, 67:93–124, esp. pp. 93, 98,
talks both about "algorithms that
are closed in the sense that they do not interact with their
environments" and about ones that do so interact
(§5, pp. 111–115).
He also notes that, in the case of non-deterministic algorithms,
"the active environment will make the choices" (p. 116).
Can Interaction Be Simulated by a Non-Interactive Turing Machine?
The S-m-n Theorem was first stated and proved by
Kleene 1952,
Theorem XXIII, p. 342. It gets its name from the form that Kleene
expressed it in, using a function that he called 'S^m_n'.
It is also sometimes called the "Parameter Theorem" or the "Iteration
Theorem"
(Davis and Weyuker 1983,
p. 64).
Rogers 1967,
Theorem IV, p. 22,
calls it the "enumeration theorem" (but distinguishes it from what
he calls the "s-m-n theorem";
Rogers 1967,
Theorem V, pp. 23–24).
My interpretation of the theorem is due to
Case, J. Motivating the proof of the Kleene recursion theorem.
(In lectures at the University at Buffalo in the early 1980s,
Case used the mnemonic
"Stuff-em-in} theorem" to emphasize that the external
inputs could be "stuffed into" the computer program.)
Similar interpretations can be found in
Cooper 2004,
p. 64, and
Homer and Selman 2011,
p. 53.
And
Kfoury et al. 1982,
p. 82, give a version of it stated in terms of computer programs.
Cooper 2004,
p. 64,
also notes that the existence of the Universal Turing Machine can be
expressed — using our notation above (instead of his) — by taking
y as a
("hardwired", non-universal) Turing Machine and s(x,y) as a
Universal Turing Machine with y stored on its tape (as its software).
And
Rogers 1967,
p. 23, notes that it "shows that the computing
agent … need not be human" — because the human "computing agent"
in Turing's informal analysis can be replaced by a Universal Turing Machine.
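Here is a minimal Python sketch of the "stuff-em-in" reading (my illustration, not Case's, Cooper's, or Rogers's): a two-input program can be turned, for any fixed value of its first input, into a one-input program with that value built in; the function stuff_in below plays the role of the theorem's computable function s.

    def two_input_program(x, y):
        # stands in for an arbitrary program of two inputs
        return 10 * x + y

    def stuff_in(program, x):
        # plays the role of s: given a program and a fixed external input x,
        # return a one-input program with x "stuffed into" it
        def specialized(y):
            return program(x, y)
        return specialized

    p7 = stuff_in(two_input_program, 7)       # x = 7 is now hard-wired
    assert p7(3) == two_input_program(7, 3)   # same behavior, one fewer input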
van Leeuwen, J. and Wiedermann, J. (2000). The Turing machine paradigm in
contemporary computing.
In Engquist, B. and Schmid, W., editors, Mathematics
Unlimited — 2001 and Beyond, pages 1139–1155.
Springer-Verlag, Berlin,
formalizes Wegner's ideas about
interaction machines, arguing that "'interactive Turing
machines with advice' … are more powerful than ordinary Turing machines."
For a debate between a hypercomputation skeptic and a believer, see
Interactive Computation Is More Powerful Than Non-Interactive (2014).
For more by Wegner and his colleagues, see:
Hewitt, C. (2019, 10 July). Computer science must rely on strongly-typed Actors and
theories for cybersecurity.
Oracle Computation:
For more on oracle machines, see:
An excellent, relatively informal overview of relative
computability
for philosophers is
Jenny, M. (2018). Counterpossibles in science: The case of relative
computability. Noûs, 52(3):530–560, §2.
Piccinini, G. (2003). Alan Turing and the mathematical objection. Minds
and Machines, 13:23–48,
is primarily about Turing's views on AI,
but also discusses his theory of computation and the role of "oracle"
machines.
For a science fiction treatment of oracles, see Egan, G. (2000).
Oracle.
Asimov's Science Fiction.
Trial-and-Error Computation:
Philosophers, logicians, and computer scientists have all contributed to
this theory:
For more on Kugel's views, see:
On the mathematical abilities of humans vs. machines, see:
On Gödel, Turing, and mathematical ability, see:
Inductive Inference:
For more information on "computational learning theory" and
"language learning in the limit", see:
Gemignani, M. (1981). What is a computer program? American Mathematical
Monthly, 88(3):185–188.
Haigh and
Priestley 2016
is an interesting history of both programming and
the term 'program', arguing that Ada Lovelace
was probably not the first computer
programmer and that programming computers originally
had a lot in common with concert "programming" or radio "programs".
Programs and Algorithms:
A nice description — for readers who are not computer
scientists — of how algorithms are implemented in computers can be
found in
Colburn, T.R. (1999). Software, abstraction, and ontology.
The Monist, 82(1):3–19, esp. pp. 6–9.
Dennett 2017,
p. 256, discusses how Universal Turing Machines
running software S often "evolve" into "'dedicated' hardware"
Turing Machines that only execute S.
Software and Art:
There are other interesting relationships between software and art:
For some examples of "aesthetic" or "playful" programs, see:
Copyright vs. Patent:
For general discussion of the issues
surrounding legal protection of programs, see:
For follow-ups to Newell's essay
(including links to commentary by Donald Knuth), see
"Newell 1986: The Models Are Broken"
The ontological task is investigated in:
On ontologies more generally, see the work by philosopher Barry Smith and
his colleagues at
The Buffalo Ontology Site,
which includes Duncan's ontology of software vs. hardware:
Levels of Understanding:
For more on the intentional stance, see:
If you are uneasy about the intentional stance's use of
psychological terms to describe computers, you might
treat the terms 'desired', 'believed', and 'formed the intention' as
metaphorical; see:
See also p. 59, where he talks about
"realizations or embodiments of a Turing machine".
"Following Instructions":
On center embedding:
A simpler example of center embedding is the sentence 'Mice cats chase eat cheese',
where the sentence 'Cats chase' is embedded in the center of the
sentence 'Mice eat cheese', qualifying the kind of mice that eat cheese,
namely, the ones that cats chase.
If we replace these words with others,
we get a center-embedded sentence that is harder to understand: 'Dogs dogs dog dog dogs'.
Here, the sentence 'Dogs dog' (meaning "Dogs follow closely") is
embedded in the similar sentence 'Dogs dog dogs' (meaning "Dogs follow
other dogs closely").
Finally, if we replace these words with still others, we get the
following perfectly grammatical and meaningful sentence of English: 'Buffalo buffalo buffalo buffalo buffalo'.
Instead of cats chasing and dogs dogging, we have buffalo
buffaloing (that is, bison intimidating; note that the plural of
'buffalo' is also 'buffalo', just as the plural of 'sheep' is also 'sheep').
Embedding 'Buffalo
buffalo' into 'Buffalo buffalo buffalo' (meaning "Some bison intimidate
other bison"), we get the sentence above, which means "Bison that other
bison intimidate intimidate in turn still other bison". See
A History of the Sentence "Buffalo buffalo buffalo buffalo
buffalo."
On following instructions:
Software Is a Concrete Abstraction:
For physiological examples of multiple realizations, see:
Semantic Interpretation:
For good descriptions of the syntax and semantics of formal
systems and their relationship to Turing Machines, see:
Gödel also identified them (see §15.2.3).
And
Kripke 2013,
p. 80, has advocated for this point of view:
For more on formal systems, see:
Sometimes, the entities of such systems are called
'constructive objects'.
On "marks" and "mark manipulation"
systems, see
Kearns 1997,
§2, pp. 273–274.
Chalmers's Theory of Implementation:
Originally written in 1993, the 2011 version of
Chalmers 2011
was accompanied by commentaries (including
Egan 2012,
Rescorla 2012,
Scheutz 2012,
and
Shagrir 2012),
and a reply,
Chalmers 2012.
Chalmers 2011,
§2, was published in slightly different form as
Chalmers 1994.
See also:
(Chalmers corrects "an error in my arguments" in
Chalmers 2012,
pp. 236–238.)
On implementation and its relationship to the concept of a virtual
machine, see
Sloman 2008
and
Sloman 2019.
Ladyman, J. (2009). What does it mean to say that a physical system
implements a computation?
Theoretical Computer Science, 410(4–5):376–383,
esp. p. 379.
Dresner 2010
examines "the association between numbers and the physical
world that is made in measurement" and argues that implementation
"and (measurement-theoretic) representation" are "a single
relation (or concept) viewed from different angles" (p. 276).
Note that "representation" is a semantic concept.
Shagrir, O. (2012). Computation, implementation, cognition. Minds and
Machines, 22(2):137–148.
The philosopher and computer scientist Matthias Scheutz has written
extensively on implementation; see:
On simulated hurricanes, see:
On simulated digestion, see:
Computer simulations and computer models are discussed in:
In the context of ethical computing,
Green, C.D. (2001). Scientific models, connectionist networks, and
cognitive science. Theory and Psychology, 11:97–117.
For arguments that there are limitations to simulations, see:
See also:
Theories:
Partridge, D. and Wilks, Y., editors (1990). The Foundations of Artificial
Intelligence: A Sourcebook. Cambridge University Press, Cambridge, UK.
Downes, S. (1990). Herbert Simon's computational models of scientific
discovery. PSA: Proceedings of the
[1990] Biennial Meeting of the Philosophy of Science Association,
1:97–108.
Tymoczko, T. (1979). The four-color problem and its philosophical
significance. Journal of Philosophy, 76(2):57–83.
For a different view of programs as theories,
see
Chaitin 2009,
esp. pp. 8–9.
Turner, R. (2010). Programming languages as mathematical theories. In
Vallverdú, J., editor, Thinking
Machines and the Philosophy of Computer Science: Concepts and
Principles, pages 66–82. IGI Global
On the history of the term 'bug', see:
The idea that the first computer bug
was really a bug (actually, a moth) is an urban legend, because the
term was used in the non-entomological sense as early as 1875; see:
For a photo of the allegedly first "bug", see
The Jargon File, which traces the term back to Shakespeare!
Proofs and Programs:
Dijkstra 1972,
pp. 9–11, has a discussion about the nature and
value of formal proofs of "obvious" theorems that is relevant to the
issues raised by De Millo et al. 1979 and Rosser's time-traveler.
Devlin, K. (1992). Computers and mathematics (column). Notices of the
American Mathematical Society, 39(9):1065–1066.
Lipton, R.J. (2019, 21 October). A polemical overreach? Gödel's Lost Letter and
P=NP, is a more recent commentary on
De Millo et al. 1979 by one of its authors.
Thurston, W.P. (1994). On proof and progress in mathematics. Bulletin of
the American Mathematical
Society, 30(2):161–177,
is an interesting contrast to
Suber 1997b
on the relationship between computer programs and mathematical proofs.
Davis, P.J. and Hersh, R. (1998).
"The Ideal Mathematician", in their
The Mathematical Experience. Houghton Mifflin, Boston,
pp. 34–44,
is a partly facetious description of the behavior of (some)
mathematicians, including a discussion of the nature of
proof as carried out by mathematicians.
For an antidote to their characterization of mathematicians,
read
Program Verification:
Colburn, T.R., Fetzer, J.H., and Rankin, T.L., editors (1993). Program
Verification: Fundamental Issues in
Computer Science. Kluwer Academic Publishers, Dordrecht, The Netherlands,
is an anthology containing many of the papers discussed
in this chapter, as well as an extensive
bibliography on program verification. And Colburn himself, some of
whose views on the philosophy of CS we have examined
in earlier chapters, has two papers on program
verification that are worth reading:
The origins of program verification can be found in:
For Hoare's later views, see:
Dijkstra, E.W. (1974). Programming as a discipline of mathematical
nature. American Mathematical
Monthly, 81(6):608–612,
argues "that the correctness of programs could and
should be established by proof", that structured programs are simpler
to prove than unstructured ones (on that point, see
Dijkstra, E.W. (1968). Go to statement considered harmful.
Communications
of the ACM, 11(3):147–148), that theorems
about programs make program proofs easier, and that "to prove
the correctness of a given program was … putting the cart before
the horse. A much more promising approach turned out to be letting
correctness proof and program grow hand in hand"
(pp. 609–610).
See also
Verity, J.W. (1985). Bridging the software gap. Datamation,
pages 84–88,
which contains interviews with Dijkstra, Hoare, and Gries.
Mili, A., Desharnais, J., and Gagné, J.R. (1986). Formal models of
stepwise refinement of programs. ACM
Computing Surveys, 18(3):231–276,
§1, contains a formal presentation of program
correctness.
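To make the notion of program correctness slightly more concrete, here is a minimal sketch in Python (my example, not taken from Dijkstra, Hoare, or Mili et al.): the comments state a loop invariant, and the informal correctness argument is that the invariant holds initially, is preserved by each iteration, and yields the postcondition on exit.

    def max_of(ns):
        # precondition: ns is a non-empty list of numbers
        assert len(ns) > 0
        best, i = ns[0], 1
        # invariant: best == max(ns[:i])
        while i < len(ns):
            if ns[i] > best:
                best = ns[i]
            i += 1
            assert best == max(ns[:i])   # runtime check of the invariant (not a proof)
        # on exit, i == len(ns), so the invariant gives: best == max(ns)
        return best

    assert max_of([3, 1, 4, 1, 5]) == 5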
Two papers show how to use program-verification techniques to develop
programs:
MacKenzie 1992
discusses a legal challenge to a claim that a certain
computer program had been verified. The claim was that
the verification was of the relatively informal,
De Millo et al. 1979-style variety of proof; the challenge
was that a formal, mathematical proof was necessary.
Frenkel, K. (1993). An interview with Robin Milner. Communications of the
ACM, 36(1):90–97,
discusses program verification, among other topics.
The Fetzer Controversy:
The following letters to the editor (with replies by Fetzer) are representative
of the battle that resulted from Fetzer's article:
See also:
For Fetzer's later writings on the controversy, see:
MacKenzie, D. (2001). Mechanizing Proof: Computing, Risk, and Trust. MIT
Press, Cambridge, MA,
Ch. 6, "Social Processes and Category Mistakes",
concerns the De Millo et al.-Fetzer
controversy. For a review, see Hayes, B. (2002, July-August).
The search for rigor.
American Scientist, 90:382–384.
Glass, R.L. (2002). The proof of correctness wars. Communications of the
ACM, 45(8):19–21,
is a historical survey, with some useful references, arguing
that the controversies over program verification are "extremely
healthy for the field" of computing, roughly for the same reasons
that (as argued in §2.4) philosophy is
important: the challenging of assumptions.
Long after the Fetzer controversy, articles for and against
program verification continue to be published:
Lamport, L. (2015). Who builds a house without drawing blueprints?
Communications of the ACM, 58(4):38–41.
Models: Putting the World into Computers:
The earliest discussion of a map that is the same size as what it
maps is in:
All of these concern space, but there is an analogous
problem for time: Weather prediction is based on models of the world that
have to be "capable of scrolling ahead to the future faster than time can
progress" (Fry, H. (2019, 1 July). Looks like rain: How weather forecasting got
good. The New Yorker, pages 61–65).
On what I am calling "Smith's gap" between the model and the world, see:
Smith's gap is also related to the issues surrounding attempts to prove
the Computability Thesis from axioms that are intended to capture the
informal notion of computability (such as
Dershowitz and Gurevich 2008
and
Sieg 2008).
Other examples, concerning, e.g., the
applicability of group theory to physics, are discussed in
Frenkel 2013.
The mathematician Paul Halmos
(Halmos 1973,
p. 384) points out that,
once aspects of Hilbert spaces are seen to be structurally identical to
aspects of quantum-mechanical systems, "the difference between a
quantum physicist and a mathematical operator-theorist becomes one of
language and emphasis only".
Eugene Wigner's classic essay on
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
(Wigner 1960)
is also relevant; computer scientists should
be sure to read R.W. Hamming's response
(Hamming 1980).
On the "indeterminacy of computation", see:
Two Views of Computation:
Robin K. Hill's distinction between
"Do A" and "In order to accomplish goal G,
do A" echoes something that Herbert Simon said three quarters of a
century ago. According to:
Making a distinction akin to Hill's,
Putnam, H. (1999). The Threefold Cord: Mind, Body, and World. Columbia
University Press, New York,
p. 6, says:
"the world is as it is independently of the interests of describers."
Similarly, an algorithm (A)
"is as it is independently of" any purpose (G)
that it might be used for. For further discussion, see Putnam 1999,
pp. 45–46.
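A small Python illustration (mine, not Hill's, Simon's, or Putnam's) of how one and the same algorithm A can serve different goals G: the function below just "does A" (it accumulates a running total); the two calls attach two different purposes to it.

    def a(ns):
        # "Do A": run through the list, accumulating a total
        total = 0
        for n in ns:
            total += n
        return total

    # "In order to compute the sum of a list of numbers, do A":
    print(a([1, 2, 3]))        # 6

    # "In order to count the 1s in a bit string, do A":
    print(a([1, 0, 1, 1, 0]))  # 3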
Are Inputs Needed?
Besides the basic philosophical "theorem" that, for any X, there is a
philosophy of X (§2.7), there is another that says, "For
any possibility you can name, there exists a philosopher who turned it
into a theory" (Casati, R. (2000). Shadows: Unlocking Their Secrets, from
Plato to Our Time. Vintage/Random House,
2004, New York. Translated by Abigail Asher;
p. 65).
Concerning the relation of an output to a non-existent input, there are
philosophical theories about relations to non-existent entities; see:
Algorithms Don't Need a Purpose:
The philosopher Frances Egan takes the mathematical functional view,
focusing (in our terminology) on A, not G:
On that view, Marr's "computational" level is not teleological
(Egan 1991, p. 201).
Shagrir, O. and Bechtel, W. (2015). Marr's computational-level theories
and delineating phenomena. In Kaplan, D., editor,
Integrating Psychology and Neuroscience: Prospects and
Problems. Oxford University Press, Oxford,
§2.2,
suggests that Marr's "computational" level
conflates two separate, albeit related, questions: "why" and "what".
On mathematical understanding, see:
and Albert Goldfain's work on how to get AI computer systems to
understand mathematics in addition to merely doing it:
Algorithms and Goals:
On "succinct ways of saying what an algorithm is for", see
Chaitin 2002
and
Chaitin 2006b.
Turner, R. (2019). Correctness, explanation and intention. In Manea, F.,
Martin, B., Paulusma, D., and
Primiero, G., editors, Computing with Foresight and Industry. CiE
2019, pages 62–71. Springer, Cham, Switzerland,
is a useful discussion of the intentionality or purposefulness of
computer programs in the context of program verification.
Harbecke, J. and Shagrir, O. (2019). The role of the environment in
computational explanations. European
Journal for Philosophy of Science, 9(37),
§§1, 2,
offers an excellent summary of the Do A/To G,
do A distinction.
Computing with Symbols or Their Meanings:
The distinction between symbols and meanings also appears
in the philosophy of mathematics concerning "structuralism":
Is the pattern, or structure, of the natural numbers all that matters? Or does
it also matter what the natural numbers in the pattern "really"
are? For discussion, see:
Debates in the philosophy of mathematics concerning "pure" or
"abstract" math vs "applied" math
(Marshall, R. (2019). Philosophy, maths, logic and computers. 3:16)
are also relevant to the
"abstract" Do A vs the more "applied" To G,
do A.
It may also be related to the distinction between
"pure" syntax vs "applied" semantic interpretation.
Syntactic Semantics:
For more on syntactic semantics, see:
Rescorla's "indigenous semantics":
emphasizes causal relations, whereas syntactic semantics
emphasizes the importance of conceptual-role semantics:
For good introductions to computer ethics in general, see:
AAAI's
"AI Topics"
website is an excellent source, with many links.
See also
The Research Center on Values in Engineering, Science, and
Technology's pages on computer ethics.
Automated Vehicles:
Halpern, S. (2016). Our driverless future. New York Review of Books,
63(18):18–20,
discusses technical and ethical issues concerning the
automated decisions made by driverless vehicles.
Monticello, M. (2016). The state of the self-driving car,
and
Will self-driving cars make our roads safer? Consumer
Reports, 81(5):44–49,
offers observations on their relative safety.
There are other automated vehicles besides self-driving cars, for which
some of the same issues arise:
Game-Playing Computers:
On checkers, see:
On chess and other games, see:
Should We Trust Decisions Computers Make?
For a philosophical investigation of the nature of trust, see:
Heingartner, D. (2006, 18 July). Maybe we should leave that up to the computer. New
York Times
For scientific research on wrong or illogical explanations by humans,
see:
For other readings on humans' difficulty in reasoning, see
my bibliography on
"Reasoning"
Introductions to "Deep Learning"
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature,
521:436–444,
is an introduction to "deep" machine learning
by three of its pioneering Turing Award winners.
Buckner, C. (2019). Deep learning: A philosophical introduction.
Philosophy Compass, 14(10),
is an introduction to deep learning for philosophers.
The Bias Problem:
Savage, N. (2016). When computers stand in the schoolhouse door.
Communications of the ACM, 59(3):19–21,
is a discussion of how "researchers are trying to
identify … and root … out" biases found in classification
algorithms.
In contrast,
Metz, C. (2019, 16 August). A.I. is learning from humans. Many humans. New York
Times,
reports on minimally trained non-professionals who supply the training for
machine-learning systems, which raises the question of how accurate those
systems' algorithms could be.
Singer, N. and Metz, C. (2019, 19 December). Many facial-recognition systems are
biased, says U.S. study. New York Times,
discusses evidence
of bias in facial-recognition algorithms.
Smith, C.S. (2020, 2 January). Dealing with bias in artificial intelligence.
New York Times,
contains
interviews with three female AI researchers on how they deal with such
bias in algorithms.
For a detailed analysis of bias problems in natural-language
processing
systems (such as GPT-3) — as opposed to
natural-language understanding systems — see:
The Black Box Problem:
Brachman, R.J. (2002). Systems that know what they're doing. IEEE
Intelligent Systems, pages 67–71,
by a leading AI researcher, suggests how and why
decision-making computers should be able to explain their decisions.
For discussions of attempts to get systems to be able to "account"
for themselves, see:
For other arguments on the value of symbolic computation, see:
Are There Decisions Computers Must Make for Us?
For contrasting discussions of the US Airways case, see:
For another airplane incident involving
"rogue" computers, see:
The 2019 crashes of two Boeing 737 Max 8 jets are another important case study.
The crashes seem to have been due to some combination of one or more
of the following: a faulty sensor that input erroneous information to
certain software, the software itself that may have been flawed, or lack
of proper pilot training in the use of the software. For a summary, see
"Boeing 737 MAX Groundings"
(Wikipedia).
See also:
Zremski, J. (2009, 18 February). Perspectives differ on autopilot, icing. Buffalo
News, pages A1‐A2,
explores the possibility that a decision-making computer
might make things more difficult for a human who is in the decision-making loop.
Halpern, S. (2015, 2 April). How robots & algorithms are taking over. New York
Review of Books, 62(6):24, 26, 28,
points out, among other things, that an "overreliance
on automation, and on a tendency to trust computer data
even in the face of contradictory physical evidence, can be dangerous",
in part because the human in the decision-making loop might not be paying
attention: "over half of all airplane accidents were the result of the
mental autopilot brought on by actual autopilot".
Are There Decisions Computers Shouldn't Make?
Asimov, I. (1950). The evitable conflict. Astounding Science
Fiction.
Reprinted in Isaac Asimov, I, Robot
(Garden City, NY: Doubleday, 1950), Ch. 9, pp. 195–218,
about decisions made by machines, is a fictional approach to Moor's question.
OHeigeartaigh, S. (2013, 9 August). Would you hand over a moral decision to a
machine? Why not? Moral
outsourcing and Artificial Intelligence. Practical Ethics,
is a blog that considers many of the issues
discussed in Moor 1979.
Friedman and Kahn argue that programmers should
not design computer systems so that users think that the
systems are "intelligent". The April 2004 issue of
Communications of the ACM
(Miller 2004)
has a whole section devoted to this.
On belief-desire-intention theory, see, e.g.:
As a "classic theme" in AI, I am thinking primarily of Joseph Weizenbaum's
"Eliza" program, which, in its most famous version, allegedly
simulated a Rogerian psychotherapist. See:
In literature, I highly recommend:
In cinema, there are:
to name just three.
Artificial Intelligence as Computational Cognition:
On AI and IQ, see my
"Artificial I.Q. Test" from
Rapaport 1986b,
as well as:
and there are AI-related discussions in:
Paton, E. and Friedman, V. (2018, 29 May). 'Diamonds are forever,' and made by machine. New
York Times,
observes that the distinction between
synthetic and natural diamonds raises "an almost metaphysical question of
what defines a diamond. Is it its chemical structure …? Or is it
its provenance: created deep in the ground … rather than cooked up
in a machine?".
The Turing Test:
Encyclopedia articles on the Turing Test include:
Several anthologies of essays on the Turing Test have appeared:
Wilkes, M. (1953). Can machines think? Discovery, 14:151. Reprinted in
Proceedings of the Institute of
Radio Engineers 41(1953)(10) (October):1230–1234,
contains speculations by one of the pioneers of computers on
the Turing Test, learning machines, and the role of external input.
The differences between various versions of the Turing Test are discussed in:
Argamon, S., Koppel, M., Fine, J., and Shimoni, A.R. (2003). Gender, genre, and writing style in formal
written texts. Text & Talk, 23(3):321–346,
provides evidence that women can be distinguished from men on
the basis of their writing style.
Piccinini 2003
is primarily about Turing's views on AI,
but also discusses his theory of computation and the role of "oracle" machines.
Aaronson 2006
discusses the Turing Test in the context of quantum hypercomputation.
Shieber, S.M. (2007). The Turing test as interactive proof.
Noûs, 41(4):686–713,
proposes an interpretation of the Turing Test as
an interactive proof.
McDermott, D. (2014, 31 December). What was Alan Turing's imitation game?,
is a critique of Turing 1950 written by a well-known AI researcher
as background for the film The Imitation Game.
There have been several programs that are thought by some people to have
passed a Turing Test:
No current natural-language-understanding computer has achieved the
"understanding" exhibited by the fictional computer HAL in the movie
2001 (Stork, D. G. (1997). HAL's Legacy: 2001's Computer as Dream
and Reality. MIT Press, Cambridge, MA.),
not even Apple's Siri or Amazon's Alexa.
The question of
whether some robots and softbots may have passed a kind of Turing Test is
raised in:
Walsh, T. (2016). Turing's red flag law. Communications of the
ACM, 59(7):34–37,
proposes a "Turing Red Flag law: An autonomous system
should be designed so that it is unlikely to be mistaken for
anything besides an autonomous system, and should
identify itself at the start of any interaction with another agent."
Berkeley, I.S. (2020). AI: A crowd-sourced criterion. A commentary on Pei
Wang's paper "On defining
Artificial Intelligence". Journal of Artificial General
Intelligence, 11(2):24–26,
compares Turing's
"use of words and general educated opinion" to
Dennett's intentional
stance; see also:
The Chinese Room Argument:
Too much has been written on the CRA to cite here, but you might start
with these:
Weinberg, J. (2019, 15 May). Did a story about a computer made of humans scoop
Searle's "Chinese room" by 20 years? Daily Nous.
Semiotics:
For more details on syntactic semantics, see
"Syntactic Semantics",
above.
Any study of semantics in linguistics — in the sense
of a study of the meanings of linguistic expressions — that
focuses only on relations either among the expressions or between the
expressions and mental "conceptual structures" — and not on relations
between expressions and aspects of the external
world — is a syntactic enterprise. This is the nature
of semantics in, for instance, cognitive linguistics:
For more about Cassie, see:
Points of View:
For arguments why AI should take the first-person point of view, see:
On antecedent understanding, see:
It also plays an important role in Michael Dummett's
theories about understanding the meaning of linguistic expressions:
On syntactic understanding, see
Rapaport 1986a
On meaning holism, see:
Leibniz's Mill and Turing's "Strange Inversion":
For more on what Searle-in-the-room might know, see:
A Better Way:
Levesque, H.J. (2009). Is it enough to get the behaviour right? In
IJCAI'09: Proceedings of the 21st International
Joint Conference on Artificial Intelligence, pages 1439–1444.
Morgan Kaufmann, San Francisco,
agrees that "in the
end, it all depends on the [rule] book. … [W]e need to ask what a
real book would have to be like … ". See also
Levesque 2017.
For another list of things that a (computational) cognitive agent must be
able to do in order to understand natural language, see:
But their conclusion is much more pessimistic than mine!
Robots:
Robots were introduced in Karel Čapek's 1920 play R.U.R.: Rossum's
Universal Robots:
Mendelsohn, D. (2015). The robots are winning! New York Review of
Books, 62(10):51–54.
The most famous fictional treatment of
robot ethics is in the stories of Isaac Asimov that discuss his "Three
Laws of Robotics":
Later, he added a "zeroth" law, specifying that the other three laws
only held if they did not conflict with it:
For an extended discussion of Asimov's Laws, in a computer engineering
journal, see:
James Bridle — quoted in
Lucas, J. (2019, 22 April). Madam, I'm Adam (Man, woman, and robot in Ian McEwen's
new novel). The New Yorker, pages 68–70 — adds
another law (recall the Black Box problem):
A-Life:
More neutrally,
A-Life is also called "complex adaptive systems" or "simulation of
adaptive behavior". There are:
For an overview, see
"Artificial Life" (Wikipedia).
Lem's story is reprinted and discussed in
Hofstadter, D.R. and Dennett, D.C. (1981). Reflections [on Lem 1971]. In
Hofstadter, D.R. and Dennett, D.C., editors, The Mind's I:
Fantasies and Reflections on Self and Soul,
pages 317–320. Basic Books, New York.
Many other stories investigate the relationships between humans and their
robotic creations; I highly recommend:
On the "reality" vs the "virtuality" of Lem-like entities, see:
Is AI Possible in Principle?
For a good overview of functionalism, see
On the related problem of qualia, see:
On computational personality, see:
On computational theories of emotion, see:
On the cognitive science of emotion, see
my bibliography on the topic.
What Is a Person?
On ethical issues of personhood and moral responsibility,
see
Müller 2020
As part of a longer study on "European Civil Law Rules in Robotics",
Nevejans 2016
discusses the "incongruity of establishing robots as liable legal persons".
For a fictional treatment of many of these issues, see:
McEwan 2019.
Rights:
A predecessor of Petersen's paper is:
Petersen continues his argument in:
For a response, see:
Yampolskiy, R.V. and Fox, J. (2013). Safety engineering for artificial
general intelligence. Topoi, 32(2):217–226.
Responsibilities:
Rini, R. (2017, 18 April). Raising good robots. Aeon
Are We Personal AIs?
Bostrom's papers are available at "The Simulation Argument" website.
For critiques of his theory, see:
Perhaps the first treatment of these issues was Descartes's "evil
genius" (or "evil demon") argument in his
Meditations:
Descartes argues that the one thing the evil genius can't fool him about
is that he is doubting things, that doubting is a kind of thinking, and
that, if he thinks, then he exists.
One of the earliest contemporary philosophical treatments of a simulation
argument is Hilary Putnam's
"Brains in a Vat" argument:
For an overview, see the above link, as well as:
The movie The Matrix (and its sequels) is the most famous recent
science-fiction treatment of this; for a philosophical analysis, see:
Hoffman, D.D. (2009). The interface theory of perception: Natural
selection drives true perception to swift
extinction. In Dickinson, S., Tarr, M., Leonardis, A.,
and Schiele, B., editors, Object Categorization:
Computer and Human Vision Perspectives, pages 148–165. Cambridge
University Press, Cambridge, UK.
As with so much else in CS and AI, Turing was among the first to discuss
what is now called "The Singularity":
For overviews of The Singularity, see:
For discussions, see:
Musk's observations are discussed at
"Elon Musk — Artificial intelligence, metaverse, and public
transit" (Wikipedia).
Hawking's observations are discussed at
"Stephen Hawking — Future of humanity"
(Wikipedia).
Bundy, A. (2017). Smart machines are not a threat to humanity.
Communications of the ACM, 60(2):40–42,
argues that "Worrying about machines that are too
smart distracts us from the real and present threat from machines that are
too dumb."
A related topic concerns robots that kill: They might be industrial
robots that accidentally kill
or harm human workers (something that has already happened),
military robots designed to kill or harm enemies during a battle,
or post-Singularity robots that decide to kill or harm humans.
For discussion, see:
For humorous takes on both simulation arguments and the Singularity, see
the cartoons at
Dilbert (3/3/2019),
Abstruse Goose, "Sandbox",
and
Abstruse Goose, "Sandbox, Part 2"
For non-technical discussions, see:
On "practical" computability, see:
Chapter 1: An Introduction to the Philosophy of Computer Science
General readings on the philosophy of computer science:
Reviewed in:
Chapter 2: Philosophy: A Personal View
Standard reference works in philosophy:
But the most respected and most widely cited
encyclopedia is online and continually updated:
Can There Be Progress in Philosophy?
argues that there's nothing wrong with "armchair" philosophy.
Chapter 3: What Is Computer Science?
Naming the Discipline:
"My personal definition of the field and its name would be
'computology: the study of computational
processes and the means by which they may be realized.'
But alas, the name 'computer science,' like OS/360 Job
Control Language, will probably persist until the sun grows cold."
The Once-upon-a-Time Science of Microscopy:
"… Marcello Malpighi (1628–1694),
was a great scientist whose work had no dogmatic
unity. [I.e., Malpighi did not study
any single, natural phenomenon; rather, he studied all
phenomena that are only visible with a microscope.]
He was one of the first of a new breed of explorers who defined their
mission neither by the doctrine of their master nor by the subject that
they studied. They were no longer 'Aristotelians' or 'Galenists.'
Their eponym, their mechanical godparent, was some device that extended
their senses and widened their vistas. What gave his researches
coherence was a new instrument. Malpighi was to be a 'microscopist,'
and his science was 'microscopy' … .
His scientific career was held together not by
what he was trying to confirm or to prove, but by the vehicle which
carried him on his voyages of observation."
In a similar fashion, surely
computers are "device[s] that [have]
extended [our] senses and widened [our] vistas", and the science of
computer scientists is, well, computer science.
('Surely', by the way, is a word that
any philosopher should surely(?) take with a grain of salt!
(Dennett, D. C. (2013). Intuition Pumps and Other Tools for
Thinking. W.W. Norton, New York. Ch. 10))
After all, one of the two
principal professional associations is the Association for Computing
Machinery (ACM).
What "holds" computer scientists "together … [is] the
vehicle which carrie[s them] on [their] voyages of observation".
— Boorstin, D. (1983). The Discoverers. Random House, New
York. Ch. 49: "The Microscope of Nature", p. 376.
"The applications of computers to a discipline should be considered
properly a part of the natural evolution of the discipline. … The
mass spectrometer has permitted significant advances in chemistry, but
there is no 'mass spectrometry science' devoted to the study of this
instrument."
(Loui, M. C. (1987). Computer science is an engineering discipline.
Engineering Education, 78(3):177)
Similarly, the microscope has permitted significant advances in biology
(and many other disciplines) but, arguably, microscopy no longer exists as
an independent science
devoted to the study of that instrument or the things studied with it.
"Recent improvements in the Microscope having rendered that
instrument increasingly available for scientific research, and
having created a large class of observers who devote themselves
to whatever department of science may be investigated by its
aid, it has been thought that the time is come when a Journal
devoted entirely to objects connected with the use of the Microscope
would contribute to the advancement of science, and secure
the co-operation of all interested in its various applications."
If you replace 'microscope' with 'computer' (along with their
cognates), and 'histology' with something like 'mathematical
calculations' (or 'algorithms'!), then
this reads like a manifesto for the ACM.
"As cutting with a sharp
instrument is better than tearing with the nails, so vision with
the microscope is better than with the naked eye. Its use
[i.e., the microscope's use] is,
therefore, as extensive as that of the organ which it assists, and
it cannot be regarded as the property of one branch of science
more than another."
(1853b). Review of J.H. Wythes, The Microscopist; or a Complete
Manual on the Use of the
Microscope. In Quarterly Journal of Microscopical Science,
1(1)(1853):51–53, my italics.)
And here is a paraphrase:
As vision with the microscope is better than with the naked eye,
so thinking with
the computer is better than with the mind alone. Its use [i.e.,
the computer's use] is,
therefore, as extensive as that of the organ which it assists, and
it cannot be regarded as the property of one branch of science
more than another.
This is reminiscent of the philosopher Daniel Dennett's arguments for the
computer as a
"prosthesis" for the mind (Dennett, D. C. (1982). Notes on prosthetic
imagination. Boston Review, 7(3):3–7),
i.e., as a tool to help us think better.
"The first major step … is to create a department of computer
science … Without a department, a university may well
acquire a number of computer scientists, but they will be scattered and
relatively ineffective in dealing with computer science as a whole."
(Forsythe, G. E. (1967). A university’s educational program in computer
science. Communications of the ACM, 10(1):5.)
But the breakup of CS into component disciplines is something to ponder.
and the bibliography at:
On how Shannon's syntactic analysis of information
differs from the novelist Jane Austen's semantic characterization, see:
On semantic vs. syntactic (or non-semantic)
theories of information more generally, see:
Chapter 4: Science
. Cambridge
University Press, 2004, New York. Trans. by Gary Hatfield.
Quillian's essay is an explanation, in terms of the communication of
information, of why the natural sciences are more
"effective" than the social sciences. Although written in the
early days of the World Wide Web, his paper has some
interesting implications for the role of social media in political discourse.
For a response by
Turing Award-winner Richard Hamming, see:
Related to the above and to the discussion of CS as art,
in §3.16.1 of the book, see:
Tedre and
Moisseinen 2014 is a
survey of the nature of experiments in science,
and whether CS is experimental in nature.
Chapter 5: Engineering
"Engineering is commonly defined as the art or science of
'directing the great sources of power in nature for the
use and the convenience of humans …' But there is
nothing in engineering education or knowledge that contributes to
any distinct competence in making judgments about what
constitutes 'human use and convenience.' Engineering as a profession
is analogous to what medicine might be if physicians had no
expert knowledge of health or to law if attorneys knew
nothing special about justice." (p. 339)
discusses "computing as engineering".
Tedre, M. (2011). Computing as a science: A survey of competing
viewpoints. Minds and Machines, 21(3):361–387,
Chapter 6: Computers: A Brief History
Browse the linked websites at "A Very Brief History of Computers"
"Scientific progress consists in a progressive opening of …
["closed", i.e., "black"] boxes" and subdividing closed boxes into
"several smaller shut compartments" some of which "may be … left
closed, because they are considered only functionally, but not
structurally important."
contain a good summary of the history of computation.
For more on impossibility proofs, see:
Soare, R. I. (2016). Turing Computability: Theory and Applications.
Springer, Berlin.
Chapter 7: Algorithms and Computability
Henkin 1962,
pp. 788–791, is an excellent, brief overview of the
history of logic and the
foundations of mathematics that led up to Turing's analysis.
"Many now view computation as a fundamental part of nature, like atoms or
the integers."
Although the text talks about "computing", other terms that are used
to describe more or less the same territory are 'computation' and
'algorithms'. Here, we take a brief detour into the etymology of
these and some related terms.
(We look at the etymology of 'algorithm' in §7.3.1.)
— Fortnow, L. (2010). What is computation? Ubiquity, Article 5,
p. 2.
According to the
OED,
the verb 'to compute' comes from the Latin verb 'computare',
meaning "to calculate, reckon, to count up".
But when people talk about "computing" today, they mean a great deal
more than mere counting. Computing has come to include everything we
can do with computers, including text processing, watching videos, and
playing games. So, clearly, the meaning has been extended to include non-numerical
"reckoning".
The verb 'to reckon' originally meant "to give an account of, recount; to
tell; to describe", and later came to mean "to count, to calculate".
'Reckon' is from an
Indo-European
root 'rek', possibly meaning "to reach" or
"to tell, to narrate, to say" (as in "to recite" or "to recount").
These meanings, in turn, may derive from an earlier meaning "to
arrange", "to put right", "to move in a straight line".
'Count' also came from 'computare' and originally meant
"to enumerate", "to recite a list"
(and, as we just saw, 'recite' is probably related to 'reckon').
Note that when you "count", you "recite" a list of number words.
'Calculate' came from Latin 'calculus'. This certainly did not
mean the contents of a certain high school or college
course that studies the branch of mathematics concerned with
differentiation and integration, and invented by Newton and
Leibniz in the 17th century! Rather, it meant "pebble" or "small stone",
since counting was done with stones originally.
(As a
Rhymes with Orange comic strip from 22 August 2011 put it,
"There is more computing power in this little abacus than in all the giant
rocks we move around to do addition.")
Even today, a "calculus" in medicine is an accumulation of minerals in the body,
forming a small, stone-like object. The root 'calc' came from
'calx', meaning "chalk, limestone", and is related to 'calcium'.
The verb 'to figure' means "to use figures to reckon".
The earliest citation in the OED for the noun 'figure'
is from 1225, when it meant "numerical symbol".
A citation from 1250 has the meaning "embodied (human) form".
And a citation from 1300 has the more general meaning of "shape".
(This conversion of the noun 'figure' to a verb is an
example of what Alan Perlis meant when he joked,
"In English, every word can be verbed".)
Chapter 8: Turing's Analysis of Computation
A wonderful guide to reading all of Turing 1936 slowly
and carefully is
Petzold, C. (2008).
The Annotated Turing: A Guided Tour through Alan
Turing’s Historic Paper on Computability
and the Turing Machine. Wiley, Indianapolis.
For a review of Petzold's book,
see Davis, M.D. (2008). Touring Turing. American Scientist, 96(6):520.
"A 1923 play called The Adding Machine lampooned the monotony of
assembly-line office work and prefigured fears about machine automation.
Its main character, 'Mr. Zero,' writes down numbers all day long, 'upon
a square sheet of ruled paper.'"
(Lepore, J. (2018). These Truths: A History of the United States. W.W.
Norton, New York, p. 404.)
For more information on the play, by Elmer Rice,
see "The Adding Machine" (Wikipedia)
"… it is surely obvious that every theory is only a scaffolding or
schema of concepts together with their necessary relations to one another,
and that the basic elements can be thought of in any way one likes. If in
speaking of my points, I think of some system of things, e.g., the system:
love, law, chimney-sweep … and then assume all my axioms as
relations between these things, then my propositions, e.g., Pythagoras'
theorem, are also valid for these things … [A]ny theory can always
be applied to infinitely many systems of basic elements."
(Quoted on p. 168 of:
Excellent discussions of this kind of abstraction can be found in
Cohen, M.R. and Nagel, E. (1934).
An Introduction to Logic and Scientific Method. Harcourt, Brace and
Co., New York, Ch. 7, "The Nature of a Logical or Mathematical System",
and in
Hempel, C. G. (1945). Geometry and empirical science. American
Mathematical Monthly, 52(1):7–17.
"… statement is moreover one which one does not attempt to
prove. Propaganda is more appropriate to it than proof, for its status is
something
between a theorem and a definition. In so far as we know a priori what is a
puzzle and what is not, the statement is a theorem. In so far as we do not
know what puzzles are, the statement is a definition which tells us something
about what they are." (p. 588).
See also Hodges's "Alan Turing: The Enigma" website.
Reviews of the first edition of Hodges's biography can be found in:
Several websites offer background and critiques of the film.
One is
"The Imitation Game: The Philosophical Legacy of Alan
Turing",
The Critique (31 December 2014).
and
Hayes, B. (2007). Trains of thought. American Scientist, 95(2),
are implementations of Turing Machines using subway
trains and railroad trains, respectively.
Rendell, P. (2001). A Turing Machine in Conway's Game of Life;
Rendell, P. (2010). This is a Universal Turing Machine (UTM) implemented
in Conway's Game of Life.
are implementations
of Turing Machines in John Conway's Game of Life.
Smith, A. R. (2014b).
Turingtoys.com
are
implementations on a business card! (For more information on the
business-card Turing Machines, see
Casselman, B. (2014).
About the cover: Smart card. Notices of the AMS,
61(8):872.)
Chapter 9: Computers: A Philosophical Perspective
What Is a Computer?
and the video Chirimuuta, M., Boone, T., and DeMedonsa, M. (2014).
Is your brain a computer?, Instant HPS
Sloman, A. (2019). What is it like to be a rock?,
notes that
"virtual machines … are implemented in, but not equivalent (or
reducible) to any underlying physical machine, in part because the terms
used to describe the properties and functions of the virtual machine
(e.g. the internet-based email system now used all over our planet) are not
definable in the language of physics, and the virtual machine that runs
for an extended time is not equivalent to or reducible to the collection
of physical machinery that happens to implement the email system at any
time. For example, parts of the physical network can be temporarily
unavailable causing messages to be routed differently, and over time parts
of the physical network are replaced using new physical and software
technology that was unknown a few years earlier, providing cheaper, faster
and more reliable hardware running the same virtual machine."
Putnam, H. (1988). Representation and Reality. MIT Press, Cambridge, MA.
There are also analogies between telephone systems and computers
themselves:
On metaphors for the brain (and mind), see:
For a critical and historical
review of that classic paper, see:
On cellular automata in general, see:
Burks, A.W., editor (1970). Essays on
Cellular Automata. University of Illinois Press, Urbana, IL.
(Philosopher and mathematician Arthur Burks was one of the people
involved in the construction of ENIAC and EDVAC.)
that "the universe itself is
basically a giant computer … by showing that if [it is, then]
it's a vastly more powerful kind of computer than any yet constructed by
humankind."
"All physical systems register and process information.
The laws of physics determine the amount of
information that a physical system can register (number of bits) and the
number of elementary logic
operations that a system can perform (number of ops). The Universe is a
physical system. The amount
of information that the Universe can register and the number of
elementary operations that it can have
performed over its history are calculated. The Universe can have
performed 10^120 ops on 10^90 bits
(10^120 bits including gravitational degrees of freedom)."
along with two reviews of it:
If Lloyd is right, then the universe is a computer,
and we are data structures in its program, brought to life as it were by
its execution. If Bostrom is right, then we are data structures in
someone else's program.
If theists (computational theists?) are right,
then we are data structures in God's program.
Chapter 10: Procedures
The Church-Turing Computability Thesis:
"What is effectively calculable ["by an abstract human being using
some mechanical aids (such as paper and pencil)"] is computable …
[where] "computable" … mean[s] "computable by a Turing
machine"[,] … "abstract" indicates that the argument makes no
appeal to the existence of practical limits on time and space … [and]
"effective" in the thesis serves to emphasize that the process of calculation is
deterministic — not dependent on guesswork — and that it must
terminate after a finite time."
(Gandy 1980,
pp. 123–124, my italics)
"… using the term with its nineteenth century meaning; the reader may
like to imagine some glorious contraption of gleaming brass and polished
mahogany, or he [sic] may choose to inspect the parts of Babbage's
"Analytical Engine" which are preserved in the Science Museum at South
Kensington."
(Gandy 1980,
p. 125)
Kearns, J. (1997). Thinking machines: Some fundamental confusions. Minds
and Machines, 7(2):269 287,
argues that the Computability Thesis "has no interesting consequences for
human cognition" because "carrying out a procedure is
a purposive, intentional activity. No actual machine does, or can do, as much."
Abramson, D. (2011). Philosophy of mind is (in part) philosophy of
computer science. Minds and Machines,
21:203–219,
argues that the Computability Thesis
is relevant to questions about the Turing Test for AI.
Chapter 11: Hypercomputation
On origami considered as an investigation of what paper-folding
constructions are possible using only certain basic operations, see
Landesman, C. (2021). A deep dive into the fold.
The Paper, page 30.
"Any computer primed to perform an infinite number of computational
steps must take an eternity to complete the task, because
completion in a finite time would imply an unbounded signal
velocity — conflicting with relativity theory. This would seem to suggest
that the full potential of these computers is available only to immortal
computer users. But … there is no reason why the computer user must remain
beside the computer. If he follows a different worldline his clock
will tick at a rate different to that of the computer's clock, and perhaps
an extreme case could be organized in which the rates are such that the
finite proper time as measured by the computer user "corresponds"
to an infinite proper time as measured by the computer. In
this case, and granting also that the computer can always signal to
the computer user, the computer user will take only a finite time to
view the eternity of the computer's life and with it the results of its
computations." (pp. 173–174)
Trial and error predicates and the solution to a
problem of Mostowski. Journal of
Symbolic Logic, 30(1) (1965):49–57,
originated the term 'trial-and-error predicate'.
echoed that in 'trial-and-error computability'.
called such computation 'inductive inference'.
claims to have originated the terms 'trial-and-error machine' and
'Putnam-Gold machine'.
and
Davis, M.D. (1993). How subtle is Gödel's theorem? More on Roger
Penrose. Behavioral and Brain Sciences, 16(3):611.
Chapter 12: Software and Hardware
The Nature of Computer Programs:
"A programming language is a language after all, albeit a
highly constrained one. As such, it is a perfect medium for
the poet comfortable with other highly constrained poetic forms like the
sonnet or haiku". (p. 123)
Duncan, W.D. (2017). Ontological distinctions between hardware and
software. Applied Ontology, 12(17):5–32.
"Intentional system theory deals just with the performance specifications of
believers while remaining silent on how the systems are to be implemented."
Mice cats chase eat cheese.
Dogs dogs dog dog dogs.
Buffalo buffalo buffalo buffalo buffalo.
Chapter 13: Implementation
For more on the theory of implementation as semantic interpretation,
see:
"[A] computation is a special form of mathematical argument. One is
given a set of instructions, and the steps in the computation are supposed
to … follow deductively … from the instructions as given.
So a computation is just another mathematical deduction … "
Real examples of formal systems include:
Hofstadter, D.R. (1979). Gödel, Escher, Bach: An Eternal Golden
Braid. Basic Books, New York.
Chalmers, D.J. (1996). Does a rock implement every finite-state
automaton? Synthese, 108:309–333
which is a reply to Putnam's "theorem" that
"Every ordinary open system is a realization of every abstract finite
automaton"
(Putnam 1988,
Appendix).
Putnam's argument for this "theorem" is
related to
Searle's 1990
argument about
the wall behind me implementing Wordstar (discussed in §9.4.6),
but it is much more detailed.
Scheutz, M. (2001). Computational versus causal complexity. Minds and
Machines, 11:543–566,
and
Piccinini, G. (2006). Computation without representation. Philosophical
Studies, 137(2):204–241,
argue that
there can be "systems that simultaneously implement different
complex automata" (p. 137).
Chapter 14: Computer Programs as Scientific Theories
Simulations:
Neumann, P.G. (1993). Modeling and simulation. Communications of the
ACM, 36(6):124,
contains useful, real-life examples of ways in which
simulations (and theories) can fail to be precise models of reality, and it
discusses "the illusion that the virtual
is real" (quoting Rebecca Mercuri).
"Just because two things share some properties
in common does not mean that one models the other.
Indeed, if it did, it would mean that everything models
everything else. There must be at least a plausible
claim of some similarity in the ways in which such properties
are realized in the model and the thing being modeled."
(§IV, final paragraph)
Frigg, R., Hartman, S., and Imbert, C. (2009). Special issue on models and
simulations. Synthese, 169(3):425–626.
has two sections on the nature of theories:
Scherlis, W.L. and Scott, D.S. (1983). First steps towards inferential
programming. In Mason, R., editor,
Information Processing 83, pages 199–212.
IFIP and Elsevier North-Holland, §3
Chapter 15: Program Verification
Bugs and Intended Behavior:
Frenkel, E. (2013). Love and Math: The Heart of Hidden Reality. Basic
Books, New York.
Hoare, C.A.R. (1969). An axiomatic basis for computer programming.
Communications of the ACM, 12(10):576–580, 583.
Reprinted in Colburn et al. 1993, pp. 83–96.
Dijkstra, E.W. (1983). Fruits of misunderstanding (EWD-854).
Reprinted in Datamation (15 February 1985): 86–87.
Dudley, R. (1990). Program verification. Notices of the AMS,
37:123–124. Letter to the editor, with reply by J. Barwise.
Morrisett, G. (2009). A compiler's story. Communications of the ACM,
52(7):106
containing philosophical remarks on the value of program verification.
Other discussions of the idea can be found in:
See also "Quote of the Day: Lewis Carroll's Paradox of the Complete
Map".
Rapaport, W.J. (2012). IntenSionality vs. intenTionality.
"… how [can] we tell whether a given piece of live mathematical
reasoning corresponds to a given actual or envisioned formal proof …
How does one guarantee that the stated axioms or premises of the formal
proof are in fact necessary for the intuitive, pre-theoretic notions
invoked in the informal text? That is, how do we assure ourselves that
the formalization is faithful? This question cannot be settled by a
formal derivation. That would start a regress, at least potentially.
We would push our problem to the axioms or premises of that
derivation. Moreover, any formalization of a real-life argument reveals
missing steps, or gaps, and plugging those would sometimes require theses
much like Church's thesis"
(Stewart Shapiro 2013,
pp. 158–159).
Chapter 16: Programs and the World
Internal vs. External Behavior: Some Examples:
"In his exposition of science,
Simon … divides it into two kinds: practical and theoretical.
Scientific propositions are practical if they
are stated in some such form as 'In order to produce such and such a
state of affairs, such and such must be done'
(Simon, H. A. (1947). Administrative Behavior.
Macmillan, New York, p. 248).
The equivalent
theoretical proposition with the same conditions of verification
can be stated in a purely descriptive form: 'Such and such a state of
affairs is invariably accompanied by such and such conditions'
(Simon 1947, p. 248)."
(Brown 2004, p. 1247, my italics)
Rapaport, W.J. (2002). Holism, conceptual-role semantics, and syntactic
semantics. Minds and Machines, 12(1):3–59
Chapter 17: Computer Ethics I: Should We Trust Computers?
Further Reading on Computer Ethics:
Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; &
Shmitchell, Shmargaret (2021),
"On the Dangers of Stochastic Parrots: Can Language Models
Be Too Big? 🦜", FAccT '21: Proceedings of the 2021 ACM Conference
on Fairness, Accountability, and Transparency: 610–623.
Chapter 18: Philosophy of Artificial Intelligence
For more on the philosophy of AI, see:
The nature of intelligence is beyond our
scope, but there are useful general discussions in:
Similar views are discussed in:
Chapter 19: Computer Ethics II: Should We Build Artificial
Intelligences?
On AI ethics in general, see:
0.
A robot may not injure humanity, or, through inaction, allow humanity to
come to harm.
For some of its more contemporary versions and objections to them, see:
This is treated fictionally in
Chiang 2019
"[I]t seems probable
that once the machine thinking method had started, it would not take long
to outstrip our feeble powers. There would be no question of the
machines dying, and they would be able to converse with each other to
sharpen their wits. At some stage therefore we should have to expect
the machines to take control, in the way that is mentioned in Samuel
Butler's Erewhon."
(Turing, A. M. (1951). Intelligent machinery, a heretical theory.
Philosophia Mathematica, 4(3):256–260; quotation from
pp. 259–260.
Reprinted in
Copeland 2004,
Ch. 12, and
Shieber 2004,
Ch. 5)
Chalmers, D.J. (2012). The singularity: A reply. Journal of
Consciousness Studies
Chapter 20: Computer Science: A Personal View
For semi-technical discussions of computational complexity and P=NP, see:
Copyright © 2022 by
William J. Rapaport
(rapaport@buffalo.edu)