A note on the Chinese Room

by Hanoch Ben-Yami
Searle's Chinese Room was supposed to prove that computers can't understand: the man in the room, following, like a computer, syntactical rules alone, though indistinguishable from a genuine Chinese speaker, doesn't understand a word. But such a room is impossible: the man won't be able to respond correctly to questions like 'What is the time?', even though such an ability is indispensable for a genuine Chinese speaker. Several ways to provide the room with the required ability are considered …
The logic of Searle’s Chinese room argument
  • R. Damper
  • Philosophy, Computer Science
  • Minds and Machines
  • 2006
Treating the CRA as a prototypical example of a 'destructive' thought experiment, this paper attempts to set it in a logical framework (due to Sorensen), which allows us to systematise and classify the various objections.
Syntactic Semantics: Foundations of Computational Natural-Language Understanding
It is argued that although a certain kind of semantic interpretation is needed for understanding natural language, it is a kind that only involves syntactic symbol manipulation of precisely the sort of which computers are capable, so that it is possible in principle for computers to understand natural language.
Could a machine think?
The authors reject the Turing test as a sufficient condition for conscious intelligence and base their position on the specific behavioral failures of the classical SM machines and on the specific virtues of machines with a more brain-like architecture, which show that certain computational strategies have vast and decisive advantages over others where typical cognitive tasks are concerned.
Is the brain's mind a computer program?
  • Searle
  • Physics, Medicine
  • Scientific American
  • 1990
The goal is to design programs that simulate human cognition in such a way as to pass the Turing test; to distinguish the two approaches, the authors call the first strong AI and the second weak AI.
Behaviorism and psychologism: Why Block's argument against behaviorism is unsound
Ned Block (1981, 'Psychologism and behaviorism', Philosophical Review, 90, 5–43) argued that a behaviorist conception of intelligence is mistaken, and that the nature of an agent's internal …
Minds, Brains, and Programs
  • J. Searle
  • Psychology, Computer Science
  • The Philosophy of Artificial Intelligence
  • 1990
Only a machine could think, and only very special kinds of machines at that: brains, and machines with internal causal powers equivalent to those of brains; no program by itself is sufficient for thinking.
Psychologism and Behaviorism
Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it. More specifically, I mean …
Not a trivial consequence
Comments on a preceding article devoted to the consequences of two propositions: 1. The intentionality of human (or animal) existence is a product of the causal characteristics of the brain …
Minds, Brains and Science
Introduction. 1. The Mind-Body Problem. 2. Can Computers Think? 3. Cognitive Science. 4. The Structure of Action. 5. Prospects for the Social Sciences. 6. The Freedom of the Will. Suggestions for Further …
The right stuff.
'Fast Thinking', in his The Intentional Stance
  • 1987