Extending our View of Cognition
Matjaž Potrč
The University of Ljubljana
Department of Philosophy
Aškerčeva 2
SLO-1000 Ljubljana; Slovenia
E-mail: matjaz.potrc@guest.arnes.si
Matjaz.potrc@uni-lj.si
Abstract
A simplified compositional account of language is presented first. According to this account, grasping a language requires only a semantic grasp of its single elements together with mastery of the syntactic rules governing their appropriate composition. Some examples of data typically available to human cognizers, such as metaphor and joke, are then presented. It is argued that these data cannot be accounted for by the simplified account. Metaphor and joke obviously require some background cognitive information, additional to the compositional information, in order to be effective. Examples of garbled perceptual input, and of partial perceptual data which cognizers reconstruct without difficulty, show that the extended cognitive system under discussion is not limited to the linguistic but is a much wider phenomenon. The cases discussed turn out to be illustrations of morphological content, which was proposed within the generic model of dynamical cognition. Morphological content appears at the middle level of the dynamical system's description and generalizes the classical computational model's algorithmically conceived description. Morphological content forms the context, at the middle mathematical level of description, of total cognitive states realized as points on the multidimensional landscape of possible cognitive developments. The efficacy of morphological content, distinct from both occurrent and dispositional realization, is substantial in understanding the majority of data in a rich cognitive environment. Morphological content is a way of possessing intentional information which is accommodated in processing without being explicitly represented. In connectionist terms, this would be information in the weights, and neither the actual activation of occurrent patterns nor a network's disposition to generate some kind of intentional tokens. Although this information is neither explicit nor occurrent, it can be accounted for as effective if exceptionless classical rules are replaced by cognitive dynamical forces, preserving the requirements of productivity and systematicity in a nonclassical setting.
Here is a simplified compositional semantic account of language grasping. In order to understand the meaning of the sentence "The cat is on the mat", you first have to understand the meaning of each single word composing it, namely of "cat", "mat", "is", "on" and "the". Then you have to master the principles of syntactic structure according to which the elements are composed in an appropriate manner. So you first have to understand that the symbol "cat" refers to cats, and that "on" names the relation in which one item is placed upon another. Then you have to master the principles of syntactic structure according to which "Cat mat is on the" is not a well-formed sentence, in contradistinction to the sentence quoted above, even though it consists of the same ingredients. A compositional account of grasping sentences thus relies on the general principle that the meaning of a complete sentence depends on the meanings of its parts together with the principles guiding their appropriate composition.
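To make the compositional picture concrete, here is a minimal, purely illustrative sketch in Python. The lexicon, the toy "world" and the single composition rule are my own assumptions, not part of any theory discussed here; the point is only that the meaning (here, the truth value) of the whole sentence is computed strictly from the meanings of its parts plus one rule of composition.

```python
# Toy compositional semantics: sentence meaning = word meanings + one composition rule.
# WORLD, LEXICON and sentence_meaning are illustrative stand-ins only.

WORLD = {("the cat", "the mat"): "on"}   # one fact: the cat is on the mat

LEXICON = {
    "cat": "the cat",                                   # noun phrases denote individuals
    "mat": "the mat",
    "on": lambda x, y: WORLD.get((x, y)) == "on",       # "on" denotes a two-place relation
}

def sentence_meaning(subject, relation, obj):
    """Compose the truth value of 'The SUBJECT is RELATION the OBJ' purely from
    lexical meanings plus the rule: apply the relation's denotation to the
    denotations of the two noun phrases, in that order."""
    return LEXICON[relation](LEXICON[subject], LEXICON[obj])

print(sentence_meaning("cat", "on", "mat"))   # True
print(sentence_meaning("mat", "on", "cat"))   # False: composition is order-sensitive
```

On this picture nothing beyond the lexicon and the composition rule is consulted; that restriction is exactly what the rest of the paper puts under pressure.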
Things go similarly for the computational representational approach to the mind. In this case there are intentional representations or symbols, directed at some object or content. Then there are syntactic rules determining computation over the symbols. Fodor (1998) defends a particularly clear version of this theory, which rests on a sharp distinction between the semantic or intentional and the syntactic ingredients of the theory. The meaning of concepts, for him, comes atomistically. This means that the concept "cat" has its meaning independently of other concepts. This atomistic view is opposed to holism and to localism. Holists (see Fodor and Lepore, 1992) hold that the meaning of a symbol depends on the entire system of symbols, whereas localists (Devitt, 1996) propose that a particular symbol's meaning depends on the meanings of some symbols in its local semantic vicinity. Notice that for holists and localists the meaning of a symbol cannot be directly related to the world, because it points towards other symbols. The direct link, however, is natural for an atomist, who may propose a vertical semantic locking relation, understood naturalistically as being of a teleological or causal kind. On the other hand, there is a syntactic system determining permissible rules about how to combine symbols.
Ultimately, the syntactic or horizontal paradigm proves to be predominant with respect to the vertical or intentional branch. Discussing mental efficacy in terms of the classicist's problems of productivity and systematicity, Fodor treats it as the answer to the questions of how to explain the existence of an infinite number of propositional attitudes with distinctive objects and causal powers. He makes both answers depend on syntax.
It is interesting to notice that despite his excellent and witty style of writing, Fodor's examples pertaining to the semantics of concepts are extremely simplistic. He talks about "red", "pet fish", and offers various similar simple examples. These examples are basic and clear. Yet human communication and the perceptual situation are not always basic and clear. Fodor's examples have to be simplistic, for his representational computational model cannot account for most of the data available to human cognizers.
The data mostly available to human cognizers are metaphors, jokes and garbled visual or acoustic inputs. This is the stuff we live in. Most of our perception is not clearly separable into semantic and syntactic parts, and in particular it cannot easily be subsumed under principles of compositionality.
Let us look at the case of a metaphor, such as "Juliet is the sun". Applying compositional principles to it exclusively will not suffice for grasping its meaning. We may know the meaning of each single constituent and also the syntactic means of composition. Classical theory does not offer us anything else. So we will know the meaning of the words "Juliet", "sun" and "is". Besides this, the theory will provide syntactically appropriate means of bringing the elements together. Armed with all this, however, we will only arrive at what is sometimes known as the literal meaning of the sentence. But the literal meaning is not what the author had in mind. He did not mean to say that Juliet is an enormous bulk of overheated mass destroying everything alive in its vicinity, supposing this to be the literal meaning. He intended to use the expression "sun" in what is sometimes called its metaphorical meaning, implying perhaps that Juliet delivers life, joy and love to everything around her. Now, it is surely not difficult for us to grasp the meaning of the metaphor, whatever it happens to be. But it seems impossible for the theory of compositionality to deliver anything but literal meaning when faced with the metaphor. Davidson (1978), as the most notable proponent of the theory, has bitten the bullet and affirmed that metaphors indeed have literal meaning and nothing else besides. However, this is not plausible, for we will at least have to introduce a metaphorical attitude, parallel to or a kind of propositional attitude, in order to account for the phenomenon of metaphor. It is just not clear how compositionality theory, dealing with elements and their syntactic ways of composition, will be able to arrive at what we humans easily do: understand a metaphor's meaning.
The point here is that we will need a mechanism other than the mechanism of compositionality alone in order to account for metaphorical data.
Given that human cognizers very rarely have data delivered in a manner fitting the requirements of compositionality theory, i.e. consisting exclusively of the meanings of elements and of the syntactic ways these elements are composed, this other kind of mechanism will have to introduce principles of composition resting on some wider and different basis. The structure of compositional data need not be rejected. On the contrary, we wish to stay with it. But it will be recognized as the product of this different mechanism.
Let us take the example of the metaphor again, and let us presume that the classical principle of compositionality holds. In this case we need something more in order to understand the metaphor as a metaphor. We need some semantic surroundings of the expression "sun" in order to account for its metaphorical meaning in this context. In other words, we are in need of some additional semantic knowledge, knowledge additional to the explicit semantic meaning of the expression "sun". This additional knowledge may be called the background knowledge of the cognitive system. Such background knowledge will still be a kind of content. But it will not be the explicit or occurrent kind of content used in the classical account of compositionality. It will have to be background or morphological content, a term to be explained.
Let us first look, though, at some other examples, which illustrate the ubiquitous need for such additional background knowledge in explaining most of the data available to human cognizers.
The next example is a joke featuring Clinton as a womanizer. As with any joke, in order to grasp it it is not enough that we understand only the meaning of its elements and the principles of their composition. In order for the joke to make us laugh, we need some additional information, which may be called our background knowledge. Otherwise we could never laugh at any joke or find it funny (Tienson, 1999).
Now that we have introduced background knowledge, we may notice that the kind of input which interests us is not limited to language exclusively. Let us stay, though, for a moment with the perception of linguistic input. If somebody is talking in a thick foreign accent, we may still understand them comparatively easily. Again, besides the explicit elements and their composition, we need some additional background knowledge, which is not occurrent, but which helps us to sort out the garbled input.
Things go similarly for the perception of visual data. Just on the basis of a fraction of the data needed, one easily and automatically reconstructs information about the item present. Such is the case when we reconstruct visual information concerning a rabbit just on the basis of some partial and garbled data, say by seeing only the rabbit's ears in the field. Here we make use of background information that aids our visual perception.
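The kind of reconstruction at work here can be loosely illustrated, in connectionist terms, by a standard Hopfield-style network: the full pattern is stored only in the connection weights, and a partial, partly corrupted cue suffices to recover it. The sketch below is merely illustrative (the 64-unit "rabbit" pattern and all parameters are my own assumptions, not part of the dynamical cognition model); it anticipates the later point that such information sits "in the weights" rather than being occurrent or explicitly represented.

```python
# Minimal sketch (not the paper's own model): a Hopfield-style network in which a
# stored pattern lives only "in the weights", yet completes a partial, garbled cue.
import numpy as np

rng = np.random.default_rng(0)
N = 64
rabbit = rng.choice([-1, 1], size=N)      # stands in for the full "rabbit" percept

# Hebbian storage: the pattern is encoded in the weight matrix, not held occurrently.
W = np.outer(rabbit, rabbit) / N
np.fill_diagonal(W, 0)

# Partial, garbled input: part of the pattern (the "ears") is intact,
# but more than a third of the units are flipped to noise.
cue = rabbit.copy()
corrupted = rng.choice(N, size=24, replace=False)
cue[corrupted] *= -1

# Asynchronous threshold updates let the information in the weights
# reconstruct the whole pattern from the fragment.
state = cue.copy()
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("fraction of units recovered:", np.mean(state == rabbit))   # close to 1.0
```

Nothing like an explicit rule or an occurrent representation of the rabbit is consulted during recall; the relevant information resides in W, the connectionist analogue of the background content discussed below.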
If this is the case, then the data involving background knowledge are not limited to language only, and by this fact are most probably not limited to humans exclusively, but are also characteristic of other cognizers, such as animals.
Grasping a joke, understanding someone despite their very thick foreign accent, and reconstructing garbled and partial visual input are all examples of morphological content (MC) (Horgan and Tienson, 1996; Henderson and Horgan, 1999).
MC invites a new model of cognition, quite different from the model of occurrent compositionality. The model of dynamical cognition (DC), within which MC was proposed, will now be briefly presented.
MC is to be distinguished from two other forms in which content tends to be presented.
Presenting content in a propositional form is just one customary way to account for content occurrently, i.e. as directly accessible to the cognizer's system.
Another way content appears is dispositionally. In this case content is accounted for as residing in the counterfactual profile of the cognitive system's physical realization.
Marr proposed three levels of description for intelligent systems. Computers, humans or hyenas may be described at the top level, where their cognitive function is specified, involving a description of states in terms of propositional attitudes such as beliefs and desires. The same systems also allow for description at the middle level, which specifies their states in a purely formal way, or syntactically, including a specification of the procedure leading from one state to another. Finally, there is the level of implementation, where the hardware or physical basis of the algorithm and of the cognitive function is described.
The genericizing of such a model is most clearly visible at the middle level of description. Mathematical-state transitions form a set that includes algorithms as a subset: algorithms are just part of a much wider mathematical picture.
Causal mechanisms are easily provided by the classical computational approach: rules account for transitions from one representational state to another. But such transitions cannot be what is really happening, because the richness of human cognition is not tractable and so cannot be appropriately accounted for by programmable representation-level (PRL) rules. One needs some other kind of causal efficacy, of the sort found in connectionist models.
One can claim that causal efficacy (systematicity and productivity) comes down to the requirement of having an efficient syntax. But the classical efficient syntax is not the only one. An efficient syntax may be nonclassical, i.e. one may not subscribe to PRL rules, and yet there would still be systematicity and productivity.
A connectionist-inspired approach to causal efficacy is provided by the view of DC, which draws on the mathematical basis of connectionism. In connectionism, the relations between neurons, as the biological building blocks of the brain, are characterized by mathematical features specified by the values of multiple weights.
The idea of DC is to use the mutual relations between weights to produce either cooperating or competing tendencies. Cooperation or competition does not happen according to hard computational rules, but according to the overall tendencies of dynamical movement in the cognitive bowl.
Such cooperative or competing tendencies are best illustrated by the conspiring of mental states, i.e. of what are usually known as propositional attitudes. I wish to have a beer from the fridge, and I will go for it -- ceteris paribus. For there may be somebody in the kitchen whom I do not wish to encounter. And there are several similar considerations. The conspiring of several contents, their competition or cooperation, is a matter of semantic interaction. The main difference from classicism is that several contents or semantic contributions get thrown into one cognitive hopper, and the result is obtained not by PRL rules, but by the mentioned tendencies of the several semantic ingredients to compete or cooperate on a dynamical, mathematically describable basis.
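As a loose illustration only (the units, weights and update rule below are my own hypothetical choices, not the DC formalism), the beer example can be rendered as a tiny constraint-satisfaction network: the desire for beer supports going to the kitchen, the wish to avoid an encounter opposes it, and the outcome is whatever state the dynamics settle into once the weighted pushes and pulls balance out, rather than the output of an exceptionless rule.

```python
# Toy sketch: competing semantic "forces" settling into an outcome dynamically.
# All units, weights and parameters are illustrative assumptions.
import numpy as np

units = ["want_beer", "avoid_person", "go_to_kitchen"]
act = np.zeros(3)

# Weighted connections: the beer desire pushes towards the kitchen,
# the wish to avoid the person in it pushes against the move.
W = np.array([
    [0.0, 0.0,  0.8],    # want_beer    -> go_to_kitchen (cooperation)
    [0.0, 0.0, -1.2],    # avoid_person -> go_to_kitchen (competition)
    [0.0, 0.0,  0.0],
])
external = np.array([1.0, 0.6, 0.0])   # how strongly each content is engaged

# Settle: repeatedly nudge activations towards their net weighted input.
for _ in range(50):
    net = external + act @ W
    act += 0.2 * (np.tanh(net) - act)

print(dict(zip(units, act.round(2))))
# Whether "go_to_kitchen" ends up positive depends on the balance of the
# competing pushes, not on any exceptionless rule.
```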
One important issue concerns the relations of realization between levels. Total cognitive states are realized as points at the level of mathematical-state transitions. And mathematical states are in turn realized at the physical level of implementation.
In classicism the middle level provided causal efficacy by adhering to PRL rules specified in algorithms. In the generic picture, causal efficacy is provided at the middle level not by rules but by competing and cooperating cognitive forces. The result is a language of thought (LOT), i.e. a structured syntactic picture allowing for systematicity and productivity. But it is not a classical LOT, because it does not involve algorithms and whole-part relations between constituents.
The middle level specifies the contoured multidimensional landscape of MC. This landscape has causal efficacy in that it pushes towards total cognitive states (TCSs) along many rich dimensions. With the passage of time the MC landscape changes. These changes provide new clues about causal efficacy, i.e. about how transitions proceed from one TCS to another. Classicism has no comparable explanation of the dynamic change of the landscape, with the introduction of new representations and their relations.
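A purely illustrative sketch of this picture (the one-dimensional potential below is an arbitrary stand-in of my own, not the DC formalism): a TCS is a point driven over a landscape, and when the landscape itself is reshaped mid-course, the trajectory and the resting point of the TCS change accordingly.

```python
# Illustrative sketch only: a total cognitive state as a point driven over a
# changing landscape. The potential below is an arbitrary stand-in for the
# rich, high-dimensional MC landscape.

def gradient(x, tilt):
    """Slope of the 'cognitive bowl' x**4 - 2*x**2 + tilt*x; `tilt` reshapes it."""
    return 4 * x**3 - 4 * x + tilt

x = 0.1                                  # the current TCS, as a single point
for step in range(200):
    tilt = 0.0 if step < 100 else 2.0    # the landscape itself changes midway
    x -= 0.05 * gradient(x, tilt)        # the landscape, not a rule, drives the transition

print(round(x, 2))   # the point has settled into the deeper basin on the negative side
```

With the untilted bowl the point first settles near one basin; once the landscape is reshaped that basin flattens out and the same dynamics carry the point to a different resting state, without any rule-governed transition being consulted.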
The middle level is rich, but cognitive states are realized at it as points. From this perspective we have a single TCS at the top level, the TCS realized as a point at the middle level, and the implementation of the TCS at the lower level. On the other hand, we may zoom in on the middle level as a whole. The structure of the middle level then turns out to be rich, and it corresponds to MC.
LITERATURE
- Davidson, Donald (1978). "What Metaphors Mean." In Sacks, Sheldon (ed.), On Metaphor. Chicago and London: The University of Chicago Press, pp. 29-45.
- Devitt, Michael (1996). Coming to Our Senses. New York: Cambridge University Press.
- Fodor, Jerry (1998). Concepts. Oxford: Oxford University Press.
- Fodor, Jerry and Lepore, Ernest (1992). Holism. Oxford: Blackwell.
- Harnish, Mike (1999). "Fodor's New Representational Theory of Mind: Frege + Turing?" (To be published.)
- Henderson, David and Horgan, Terence (1999). "Iceberg Epistemology." (To be published.)
- Horgan, Terence and Tienson, John (1996). Connectionism and the Philosophy of Psychology. Cambridge, MA: MIT Press.
- Tienson, John (1999). "Morphological Content." (To be published.)