Functionalism

Posted by Ali Reda | Posted on 5/12/2015

Some people said that the fact that neurons are either firing or not firing indicates that the brain is a binary system, just like any other digital computer. Thus came the idea that mental states are computational states of the brain. But when we consider the computational operations a computing machine performs, the manipulation of symbols in accord with formal rules, we 'abstract' them from their underlying base: whatever the hardware structure and the software running on that hardware, the calculations are always the same. For example, a function like showing a character moving from point X to point Y can be realized by Windows plus a game engine on PC hardware, or on Android, or on an iPhone, and so on. So, in theory, the same calculations (functions) can be run on any hardware. This is known as "multiple realizability". So pain, for example, is unlikely to be just C-fiber stimulation (or some other particular brain state), because octopuses and other such creatures can probably feel pain despite not having C-fibers. This led to the development of functionalism, which promised to unify physically different phenomena under the banner of causal (functional) similarity.
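
To make the multiple-realizability analogy concrete, here is a minimal sketch in Python; the class names and platform details are invented for the example, and the point is only that one functional role can have many different realizers.

    # Minimal sketch of multiple realizability: one functional role, many realizers.
    # The names (MoveCharacter, WindowsRealizer, AndroidRealizer) are invented for illustration.
    from abc import ABC, abstractmethod

    class MoveCharacter(ABC):
        """The functional role: get a character from point X to point Y."""
        @abstractmethod
        def move(self, x: float, y: float) -> str: ...

    class WindowsRealizer(MoveCharacter):
        def move(self, x, y):
            return f"PC game engine draws the sprite moving from {x} to {y}"

    class AndroidRealizer(MoveCharacter):
        def move(self, x, y):
            return f"Android renderer draws the sprite moving from {x} to {y}"

    # Anything that fills the role counts as "moving the character",
    # whatever the hardware and software underneath.
    for realizer in (WindowsRealizer(), AndroidRealizer()):
        print(realizer.move(0, 10))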

But what is the relation between functions (computations) and the underlying brain, neurons, and synapses (the software and hardware)? A functionalist prefers to say that computational processes are 'realized' in material systems but are not reducible to them. The functionalist's point is just that higher-level properties such as being in pain or computing the sum of 7 and 5 are not to be identified with, 'reduced to', or mistaken for their realizers (the lower, material level). Individual neurons are not conscious, but portions of the brain system composed of neurons are conscious. We may compare the brain with other organs, such as the eye: the parts of the eye together allow us to see, but the state of any individual part is not what we mean by "seeing".

Various reasons against reductive versions of physicalism have led many to accept some form of "nonreductive physicalism": the view that, although everything depends on the physical, mental properties are not identical to physical properties. Minds are not identifiable with brains; but neither are minds distinct immaterial substances mysteriously linked to bodies. Minds are functional states characterizable by their place in a large, structured causal network. Each such state has a particular role, a job description, which is its function: it responds to causal inputs (stimuli and other mental states such as beliefs and desires) with particular kinds of output (other mental states and external behavior), like a finite state machine.

Pains, for instance, might be characterized by reference to typical causes (tissue damage, pressure, extremes of temperature), their relations to other states of mind (they give rise to the belief that you are in pain, and to a desire to rid yourself of the source of pain), and behavioral outputs (you move your body in particular ways, groan, perspire). Consider your being in pain as a result of grasping the handle of a cast iron skillet that has been left heating on the stove. Here, your being in pain is a matter of your being in a particular state, one that stands in appropriate causal relations to sensory inputs, to output behavior, and to other states of mind. These other states of mind are themselves characterizable by reference to their causal roles. For another example, to say that Jones believes that it is raining is to say that there is a certain state or process going on in him that is caused by certain sorts of inputs (external stimuli—for example, he perceives that it is raining), and that this state, in conjunction with certain other factors, such as his desire to stay dry, will cause a certain sort of behavior on his part: carrying an umbrella.
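
As a rough illustration of this "finite state machine" picture, here is a toy sketch in Python; the states, stimuli, and outputs are invented stand-ins, not a claim about real psychology.

    # Toy finite state machine: each mental state is individuated by the inputs
    # that produce it and the outputs (behavior plus further states) it yields.
    # All names below are illustrative assumptions.
    TRANSITIONS = {
        # (current state, input)            -> (next state, outputs)
        ("calm", "grasp_hot_skillet"):         ("pain", ["withdraw hand", "groan",
                                                          "believe I am in pain",
                                                          "desire to remove the cause"]),
        ("pain", "cause_removed"):             ("calm", ["relax"]),
        ("calm", "perceive_rain"):             ("believes_it_is_raining", []),
        ("believes_it_is_raining", "desire_to_stay_dry"):
                                               ("believes_it_is_raining", ["carry umbrella"]),
    }

    def step(state, stimulus):
        """Return the next state and outputs for a stimulus, if the table defines one."""
        return TRANSITIONS.get((state, stimulus), (state, []))

    state = "calm"
    for stimulus in ["grasp_hot_skillet", "cause_removed", "perceive_rain", "desire_to_stay_dry"]:
        state, outputs = step(state, stimulus)
        print(stimulus, "->", state, outputs)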

But how can we know the functions of the mind if we abstract away from its hardware? Imagine you are a scientist confronted with a computing machine deposited on Earth by an alien starship. You might want to know how the device was programmed. Finding out would involve a measure of 'reverse engineering'. You would 'work backwards' by observing inputs and outputs, hypothesizing computational operations linking inputs to outputs, testing these hypotheses against new inputs and outputs, and gradually refining your understanding of the alien device's program. This picture also seemed to handle other issues. For example, a computing machine can 'crash' because of a software 'bug' or because of a hardware defect or failure; likewise, a person's mental defects are due either to brain problems or to malfunctions of the mind itself, like going crazy.
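
A rough sketch of that reverse-engineering loop, assuming all we can observe are input-output pairs; the hidden program and the candidate hypotheses here are invented for the example.

    # Toy 'reverse engineering' of a black box from its input/output behavior.
    def black_box(x):
        return 2 * x + 1          # the hidden program we are trying to recover

    hypotheses = {
        "identity":   lambda x: x,
        "double":     lambda x: 2 * x,
        "double+1":   lambda x: 2 * x + 1,
    }

    observations = [(x, black_box(x)) for x in range(5)]   # observed I/O pairs

    # Keep only the hypotheses consistent with everything observed so far;
    # new inputs and outputs would refine or reject the survivors.
    surviving = {name for name, f in hypotheses.items()
                 if all(f(x) == y for x, y in observations)}
    print(sorted(surviving))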

Now, the million-dollar question that everyone has been avoiding until now is: how are those mind functions realized in the brain and the nervous system? How are these functions realized in the underlying software and hardware of a specific kind of creature, say humans? A functionalist would answer that this is outside the scope of his study, because the black box's inner workings are the responsibility of neuroscience. Functionalism made philosophy of mind similar to computer engineering.

The first version of functionalism, machine functionalism, was presented by Hilary Putnam in the early 1960s. It argues that mental states are, more specifically, states of a hypothetical machine called a Turing machine. Turing machines are automatons which can, in principle, compute anything computable, and which do so in virtue of what are called 'system states', which are tied to instructions for computational steps (e.g., "If in system state S1, perform computation C and then transition into system state S2", and so on). In doing this it uses a computer model which describes the mind as 'multiply realisable': the mind is like the calculations and rules that make up a software program that can be run on any machine, or, in our case, on animals and humans. Furthermore, we have a test that will enable us to tell when we have actually duplicated human cognition: the Turing test. On this view, the Turing test gives us conclusive proof of the presence of cognitive capacities; to find out whether or not we have actually invented an intelligent machine, we need only apply the Turing test.
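
Here is a minimal Turing machine sketch of the "if in state S1, do C, then go to state S2" idea; the particular table below (it just appends a 1 to a unary number) is an invented example.

    # Toy Turing machine: behavior is fixed entirely by a table of the form
    # (state, symbol read) -> (symbol to write, head move, next state).
    TABLE = {
        ("S1", "1"): ("1", +1, "S1"),    # skip over the existing 1s
        ("S1", "_"): ("1",  0, "HALT"),  # write a 1 on the first blank, then halt
    }

    def run(tape, state="S1", head=0):
        tape = list(tape)
        while state != "HALT":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = TABLE[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return "".join(tape)

    print(run("111_"))   # '1111': the unary number three becomes four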

To differentiate this model from behaviourism: this model assumes that the functional states cause (and are therefore not identical with) behaviour, while acknowledging the insight (often attributed to Ryle) that the mental is importantly related to behavioural output or response (as well as to stimulus or input). The difference is that functionalism also refers to other mental states; further, these other mental states are interlinked with each other, with stimuli, and with behavior in a web of causal relations. This allows both an appearance of choice ("Shall I respond in this way?") and the presence of beliefs independent of any possible behaviour.

The model also differs from identity theory in that it does not matter what the physical realization of the mental state is, because a causal role can be defined independently of its physical realization (that is, because functional states are multiply realizable). So whether my brain state is always the same when I do a particular thing, or whether it matches other people's or other animals' brain states when they do it, is immaterial, because there are any number of different ways in which such an experience might be "realised". Rather than define pain in terms of C-fiber firing, functionalism defines pain in terms of the causal role it plays in our mental life: causing avoidance behavior, warning us of danger, and so on, in response to certain environmental stimuli.

Problems

  1. Consciousness remains deeply mysterious on anyone's view: we have no idea how to accommodate consciousness to the material world, no idea how to explain the phenomenon of consciousness. Chinese Mind Argument: the philosopher Ned Block has argued that, according to the functional definition, a mind could be created on a grand scale if the population of China were fitted with radios connected up in just the same way that the neurons in the brain are connected, with messages passed between them in the same way as between neurons. According to functionalism, this should create a mind; functionalism relies on the idea that functional states are "multiply realisable", which means that not only may aliens and animals experience pain, but robots and the whole Chinese nation as well. But it is very difficult to believe that there would be a 'Chinese consciousness'. If the Chinese system replicated the state of my brain when I feel pain, would something be in pain?
  2. We said that your being in pain is a matter of your being in a particular state, one that stands in appropriate causal relations to sensory inputs, to output behavior, and to other states of mind. But if we keep analyzing states of mind in terms of other states of mind, we end up with circular, regress-prone accounts. Solution: the idea is that because the identity of every state depends on the relations it bears to other states, we cannot characterize mental items piecemeal, but only 'holistically' – all at once.
  3. Qualia Problem. Solution: You are able to describe your experience as of a spherical red object, but it is the tomato that is spherical and red, not your experience. So the first distinction is between:
    1. Qualities of experiences (seen from third person perspective, like a scientist looking at your brain while you are seeing a tomato) 
    2. Qualities of objects experienced. (Seen from a first person perspective like you seeing a red and round tomato)
    A functionalist might contend that a pain experience is a matter of your representing a throbbing occurrence in your big toe, though nothing in fact throbs. Likewise, in the case of the tomato, nothing in the experience is red or round; we merely represent the tomato that way. These are qualities we represent objects as having, but it does not follow that anything actually has the qualities – any more than it follows, from the fact that we can represent mermaids, that mermaids exist. What opponents of functionalism describe as qualities of conscious experiences – qualia – are qualities of nothing at all! They are rather qualities we mistakenly represent objects and occurrences as having. Alternatively, to say that your experience possesses such qualities is just to say that you are representing something as having them. Problem: But why do we represent them like this? Why do different conscious experiences have different qualities? And why can this representing itself be felt?
  4. Chinese Room Argument: any theory of mind that includes multiple realizability allows for the existence of strong AI – the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense that human beings have minds. Searle's Chinese Room argument, discussed in the next section, is aimed mainly at refuting this claim, and with it functionalism.

Chinese Room

Searle's Chinese Room argument holds that a program cannot give a computer a "mind", "understanding", or "consciousness", regardless of how intelligently it may make it behave. The question Searle wants to answer is this: does the machine literally "understand" Chinese, or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI". The argument is directed mainly at refuting functionalism.
Suppose that I'm locked in a room and given a large batch of Chinese writing. I know no Chinese, either written or spoken. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Unknown to me, the people who are giving me all of these symbols call the [first] batch "questions." Furthermore, they call the symbols I give them back in response to the [first] batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view – that is, from the point of view of somebody outside the room in which I am locked – my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. As regards the [claims of strong AI], it seems to me quite obvious in the example that I do not understand a word of Chinese. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. (Searle, 417–18)
Any account of meaning has to recognize the distinction between the symbols, construed as purely abstract syntactical entities, and the semantics, the meanings attached to those symbols. The symbols have to be distinguished from their meanings. For example, if I write down a sentence in German, "Es regnet," you will see words on the page and thus see the syntactical objects, but if you do not know German, you will be aware only of the syntax, not of the semantics. A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols; our thoughts, by contrast, have meaning: they represent things, and we know what it is they represent.
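
A toy illustration of rule-following without semantics; the tiny rule table below is an invented stand-in for Searle's rule book, and nothing in the code attaches a meaning to any symbol.

    # Toy 'Chinese Room': answers are produced purely by matching symbol shapes
    # against a rule book; the program never deals with what the symbols mean.
    RULE_BOOK = {
        "你好吗": "我很好",          # pairs of shapes, nothing more
        "今天下雨吗": "没有下雨",
    }

    def room(question: str) -> str:
        """Look the question up by its shape and hand back the paired symbols."""
        return RULE_BOOK.get(question, "请再说一遍")

    print(room("你好吗"))   # fluent-looking output, no understanding inside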

If Searle doesn't understand Chinese solely on the basis of running the right rules, then neither does a computer solely on the basis of running the right program. All that is ever happening is rule-based activity (which is not how humans work), so manipulating symbols according to a program is not enough by itself to guarantee cognition, perception, understanding, thinking, and so forth – that is, the creation of minds. And if Searle's room can pass the Turing test yet still does not have a mind, then the Turing test is not sufficient to determine whether something has a mind.

Replies to the Chinese Room Argument


The System Reply


The basic "system reply" argues that it is the "whole system" that understands Chinese. While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese.

Searle responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head. Then the whole system consists of just one object, the man himself, and he still would have no way to attach "any meaning to the formal symbols". The man would now be the entire system, yet he still would not understand Chinese; for example, he would not know the meaning of the Chinese word for hamburger. Searle argues that if the man doesn't understand Chinese, then the system doesn't understand Chinese either, because now "the system" and "the man" describe exactly the same object.

But what do we mean by understanding the symbols of a language? Is it the link between a word and an idea in memory? Can't a computer do that? We learn rules of manipulation and when to use them; a computer can also learn them. When we hear a word, we try to recall its meaning; a computer can also do that. So it all depends on what one means by "understand".
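
If "understanding" just meant linking a word to a stored idea, a computer can do something superficially similar, as in this deliberately naive sketch; the toy "memory" is an invented assumption, and whether such a link amounts to understanding is exactly what is in dispute.

    # Deliberately naive sketch: a word linked to a stored 'idea' in memory.
    MEMORY = {
        "hamburger": {"kind": "food", "parts": ["bun", "patty"], "can_be_eaten": True},
    }

    def recall(word):
        """Retrieve whatever 'idea' is associated with the word, if any."""
        return MEMORY.get(word, "no associated idea")

    print(recall("hamburger"))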

The Robot Reply


Some critics concede Searle's claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. But these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“The Robot Reply”).

The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger—Searle's example of something the room operator would not know. It seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot—a computer with a body—could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language. 
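
A crude sketch of how the Robot Reply imagines meaning getting a foothold; the sensor episodes and the grounding structure are invented for illustration.

    # Crude symbol-grounding sketch: the robot's 'concept' of a word is built up
    # from its own recorded perceptions and actions, not from a rule book alone.
    from collections import defaultdict

    groundings = defaultdict(list)   # word -> episodes of seeing, tasting, doing

    def experience(word, modality, reading):
        """Record a perceptual or motor episode and attach it to the word."""
        groundings[word].append((modality, reading))

    experience("hamburger", "vision", "round, brown, about 10 cm across")
    experience("hamburger", "taste",  "salty, savory")
    experience("hamburger", "action", "grasped and lifted with the gripper")

    # On the Robot Reply, a history like this is what would let the symbol
    # 'hamburger' mean something to the system.
    print(groundings["hamburger"])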

Tim Crane discusses the Chinese Room argument in his 1991 book, The Mechanical Mind. Crane appears to end with a version of the Robot Reply: “Searle's argument itself begs the question by (in effect) just denying the central thesis of AI—that thinking is formal symbol manipulation. But Searle's assumption, none the less, seems to me correct … the proper response to Searle's argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But if you let the outside world have some impact on the room, meaning or ‘semantics' might begin to get a foothold. But of course, this concedes that thinking cannot be simply symbol manipulation.”

Conclusion


The theory is obviously lacking, but in the absence of clear competitors, many theorists have opted to stick with functionalism despite what they admit are gaps and deficiencies, at least until something better emerges. In this way, functionalism wins by default.
