Language, Thought, Logic, and Existence

Richard Brown

The Graduate Center, CUNY

 

 

Must I exist? Common sense suggests not. My parents might never have met, and so I might never have been born. To put this a bit more formally, it seems that there is a possible world in which I do not exist, and it is natural to say that this possible world is what makes it true that I might not have existed. In fact, most of the things that exist seem to do so contingently. Surely the computer on which I wrote this paper, and the paper on which it is now printed, might not have existed. Yet it is well known that in S5 we can prove that any object necessarily exists (Prior 1956). A standard proof is given as (1).

 

(1) Proof that every object necessarily exists in S5 (from (Menzel 2005))

1 x=x                                                               axiom of identity

2 (y) (y≠x) → (x≠x)                                        instance of quantifier axiom

3 (x=x) → ~(y) (y≠x)                                      from 2 by contraposition

4 (x=x) → Ey (y=x)                                        from 3 by quantifier exchange

5 Ey (y=x)                                                       from 1 & 4 by Modus Ponens

6 □Ey (y=x)                                                    from 5 by rule of necessitation

7 (x)□Ey (y=x)                                                from 6 by universal generalization

 

As you can see, the result follows directly from the axiom of identity and an instance of the quantifier axiom, which is a statement of universal instantiation. Briefly, in words: line 2 says that if no object is identical to x, then x (being an object) isn’t identical to x. This is equivalent to saying that if x is self-identical then x exists, and it follows from this and the axiom of identity that x does exist. Since x’s existence is thus a theorem of S5, the rule of necessitation and universal generalization let us conclude that it is necessary and holds universally.

Since S5 is the modal logic that most agree is strong enough to actually serve as a logic of necessity and possibility, this may seem quite jarring. Necessary existence is usually reserved for such lofty beings as numbers and God; who would have thought that my computer and I kept such company? So what are we to do? Some recommend that we keep an open mind and entertain the idea that everything really does exist necessarily (Williamson 2002), but I do not find myself able to have such an open mind. Rather, the existence of this kind of proof seems to me evidence that something has gone wrong with the formulation of S5.[1]
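To see concretely why this clashes with the possible-worlds intuition of the opening paragraph, here is a minimal sketch in Python of the naive picture: a toy, hypothetical two-world model in which one object (‘rb’, standing in for me) is simply missing from the domain of one world. The world names, domains, and evaluation clauses are my own illustration, not Kripke’s semantics or anything from the formal literature.

```python
# A toy, hypothetical two-world model (illustration only): an object that
# exists in the actual world but is absent from the domain of another world.
worlds = {
    "w_actual": {"rb", "computer"},   # the actual world's domain
    "w_other":  {"computer"},         # a world where 'rb' was never born
}

def exists_at(obj, world):
    """True at `world` iff something in its domain is identical to obj, i.e. Ey (y=obj)."""
    return any(y == obj for y in worlds[world])

def necessarily_exists(obj):
    """True iff obj exists at every world (every world counts as accessible, as in S5)."""
    return all(exists_at(obj, w) for w in worlds)

print(necessarily_exists("rb"))        # False: 'rb' is absent from w_other
print(necessarily_exists("computer"))  # True, but only because this toy model stipulates it
```

On this naive reading, □Ey (y=x) simply comes out false for contingent things like me and my computer; the interest of (1) is that the standard proof theory of S5 delivers it anyway.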

Luckily for those who feel as I do, there is Kripke’s well-known solution to this problem (Kripke 1963).[2] Following Quine, he invokes the ‘generality interpretation’ of variables and requires that no free variables occur in the instances of axioms used in a proof (Quine 1940). So (1) comes out invalid, and lines 1 and 2 need to be reformulated as 1’ and 2’,

 

1’ (x) (x=x)

2’ (x) ((y) (y≠x) → (x≠x))

 

which are said to be ‘closures’ of the standard axioms, and from them we cannot get a proof that any particular object necessarily exists. All we can prove is the harmless claim that, necessarily, everything is identical to something, i.e. □(x)Ey (y=x). The proof of that is in the notes if one wants to look at it, but I do not plan on discussing it.[3]

However, all is not yet well, for we can still construct a proof of necessary existence for any given object we like, including myself, my computer, or unicorns, by using singular terms, or ‘non-logical constants’, instead of variables. Consider (2), which shows that we can derive a contradiction from the assumption that it is possible that Kripke does not exist, and so, by reductio, that he must exist.

 

(2) Proof by reductio that Saul Kripke necessarily exists: □Ex (x=SK) (adaptation of a first-order proof from David Rosenthal)


1. ◊ ~Ex (x=SK)                                             assumption
2. ◊(x) (x≠SK)                                                 equivalent to 1.

3. (x)□ (x=x)                                                   modal axiom of identity

4. □ (SK=SK)                                                 Universal Instantiation (UI) of 3.

5. ◊ (SK≠SK)                                                  UI of 2.

6. ~□ (SK=SK)                                               equivalent to 5.

7. □ (SK=SK) & ~□ (SK=SK)                       4. , 6. Conjunction Introduction

8. □Ex (x=SK)                                               1.-7. reductio

 

This proof does not appeal to any axioms containing free variables and so respects the generality interpretation. Now, one may think that the move from 3. to 4. is suspect because 1. entertains the possibility that SK does not exist.[4] But this objection is misguided. UI just says that I can replace a universally bound variable with any constant, and SK is a constant; it appears in the first premise! However, even if one were convinced that this objection was correct, we could reformulate (2) as a version of (1) with constants instead of variables, as in (2’).

 

(2’) Adaptation of (1) with constants

 

1’ (x)□ (x=x)                                                   closed modal axiom of identity

2’ □ (SK=SK)                                                 UI of 1’

3’ □ ((y) (y≠SK) → (SK≠SK))                       instance of closed quantifier axiom

4’ □ ((SK=SK) → ~(y) (y≠SK))                     contraposition

5’ □ (SK=SK) → □Ey (y=SK)                       from 4’ by quantifier exchange and distribution

6’ □Ey (y=SK)                                                from 2’ & 5’ by Modus Ponens

 

So if we are to avoid the implication that I or my computer necessarily exist, we need more than just the generality interpretation. The obvious source of the problem is that SK is a singular term, so Kripke’s next move is to require that there be no singular terms in our formal language. His quantified modal logic includes only variables.

So how, then, do we say that Kripke might not have existed? One option is to adopt Quine’s suggestion that we use Russell’s theory of descriptions, so that when we analyze sentences like ‘Saul Kripke exists’ we get a logical statement free of singular terms. Quine, of course, recommended that we invent a description like ‘the thing that Kripkisizes’, or ‘the Kripkisizer’, so that ‘Kripke exists’ is rendered as (Ex) K(x), where ‘K’ stands for the invented description. This was meant to be a purely technical device for solving the technical problems associated with existence statements about non-existent things. In particular, one gets the feeling that it is only to be used when one knows that the thing in question doesn’t exist, but in principle this kind of device could be used to replace every name in a language. So this strategy presents itself as an obvious way to avoid the embarrassment of arguments like (2) and (2’). But, as Quine makes clear, we need only revert to this strategy if we are unable to find a suitable description to translate the name. The ‘-isizes’ device is available as a last resort, but it would be nice to have a more principled description.
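Before moving on, it is worth seeing how the strategy just described defuses the modal worry (the gloss is mine, using the invented predicate ‘K’ from above). The claim that Kripke might not have existed comes out as a purely general claim:

◊~(Ex) K(x)          (‘it is possible that nothing Kripkisizes’)

Since ‘K’ is a predicate rather than a constant, there is no term with which to instantiate the quantifier axiom, and nothing like (2) or (2’) can get started.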

One view that has gotten what I consider to be a bad rap is what Kent Bach calls Nominal Description Theory (NDT) (Bach 1987; Bach 2002a). NDT says that a name N is semantically equivalent to a description that mentions N, something like ‘is called “N”’ or, as Bach prefers, ‘is the bearer of “N”’. So, for instance, on this view the sentence ‘Saul Kripke likes tea’ would be rendered as (3),

 

(3) The English sentence ‘Saul Kripke likes tea’ is true if and only if

      (Ex) (is called “Saul Kripke”(x) & Likes tea(x))

 

Its truth condition is that there is an object which is called ‘Saul Kripke’ and which likes tea. This view is often dismissed out of hand because many people think it is addressed by Kripke’s remarks in Naming and Necessity about circular theories of reference. But this is not the case, because NDT is a theory about the meaning of names, not about their reference. It is supposed to tell us what a competent speaker needs to know in order to use the word correctly; it is decidedly not a theory about the reference of names, or about how their reference is determined. So NDT presents itself as a way of following through on Quine’s and Kripke’s urge to free our formal language from singular terms, and so from worries about necessary existence.

But how do we reconcile this with the thesis of rigid designation? Indeed, if rigidity is supposed to be a semantic property, then one may think that adopting NDT as a way to get around (2) and (2’) is really admitting that there is no such semantic property. Which object the description in (3) picks out can vary from counterfactual situation to counterfactual situation. The reference of the name does not depend on anything semantic; rather, it depends on pragmatic facts about how the name is used to express a singular thought. This is something like the view that Bach arrives at. He argues that intuitions about the rigidity of names are merely a ‘pragmatic illusion’. I have coined the terms ‘frigidity’ and ‘frigid designator’ as a way of contrasting a view like Bach’s with the standard semantic conception of rigidity. It may then seem that, since the truth-conditions for a sentence like ‘Saul Kripke likes tea’ can vary depending on how it is used,[5] we have evidence that (3) is the right way to capture the sentence’s meaning, that NDT is the correct semantic account of names, and hence that frigidity is to be preferred to rigidity.

But this begs the question about the ‘right’ way to draw the semantic/pragmatic distinction. So Devitt will respond that an utterance of ‘Saul Kripke likes tea’ contains a token of ‘Saul Kripke’ that traces back to a thought about him, and so to the actual guy; that occurrence of ‘Saul Kripke’ will then rigidly designate the actual guy. And we can collect all of these tokens together and call that a type. It will follow that the name is ambiguous in as many ways as there are people, places, and things that bear it, but that is no big deal. So the truth conditions for the sentence change, but only because each use of the sentence has a different kind of ‘Saul Kripke’ in it. If there is a real issue here between rigidity and frigidity, then, it should be possible to formulate it in a way that is neutral between the various ways of drawing the semantic/pragmatic distinction. I think we can do so.

Following Kripke, let us say that a designator is a rigid designator if it designates the same object in any relevant counterfactual situation, and for the moment let us breeze over the distinction between what are sometimes called obstinate and persistent rigid designators (Salmon 1982; Stanley 1997).[6] When one says that rigidity so construed is a semantic property and that names are rigid designators, there are potentially two things one might mean. One might take the semantic task to be that of giving the meaning of, and truth-conditions for, thoughts, as Michael Devitt does (Devitt 1997). For Devitt, meaning is primarily a property of thoughts, and the semantic task is to explain what property they have that allows them to play the role in behavior that they do.[7] On the other hand, one might take the semantic task to be that of giving the meaning of sentences independently of their being used to express any thought. This way of thinking about semantics has it as simply a part of grammar. To illustrate: if I say ‘Saul Kripke likes tea’ talking about my dog, whom I have named ‘Saul Kripke’, and you say it talking about the person Saul Kripke, we both use the same English sentence, though we refer to different objects (Strawson 1950/1985).[8] We do so in the sense that we use something with the same physical structure, but we also use something with a certain syntactic structure, something that has a noun phrase and a verb phrase as part of its structure, like (4),

(4) [S [NP [proper noun, Saul Kripke]], [VP [verb, likes], [NP, tea]]]

 

According to Bach the job of semantics is to provide an interpretation of (4) that explains how it can be used to do the things that people do with it (Bach 1999; Bach 2002).[9]

So to sum up, we have two different and legitimate conceptions of what the semantic task is. Bach’s conception of semantic information is ‘linguistically encoded information’, whereas Devitt’s is ‘properties of thoughts that explain behavior’. I will use ‘P-semantics’ for semantics in the psychological sense, on which we want a theory of the meaning of thoughts, and ‘L-semantics’ for semantics in the linguistic sense, on which we want a theory of the meaning of sentences considered apart from their being used to express any given thought. Three possibilities now present themselves. The first is that L-semantics just is P-semantics, which is to say that the semantics of English just is the semantics of thought.[10] The second is that P-semantics just is L-semantics, which is to say that the semantics of thought just is the semantics of English.[11] Finally, one might want to give separate accounts of each. Let us suppose for the moment that we can distinguish the linguistic meaning a sentence has from the meaning of the thought(s) it may be used to express.

Both kinds of semantics will be interested in sentences and truth-conditions, because when we ask for the truth-conditions of a sentence we could mean one of two things. We could be taking the sentence to represent an utterance, an actual saying or writing of it, and so as an expression of a thought; the truth conditions for the sentence taken this way are really truth conditions for the thought it is being used to express. On the other hand, we could take the sentence as a linguistic type and try to evaluate its truth conditions independently of any thought it may be used to express: what would a speaker of English need to know in order to use the sentence correctly? I can now neutrally formulate the distinction between rigidity and frigidity: frigidity is the claim that there is no such L-semantic property as rigidity. There is no grammatical or syntactic category comprising rigid designators; there are no singular terms in English qua English. When we construct a linguistic theory of the semantics of natural languages (as opposed to a psychological theory of thoughts), we should do it so that it is free of singular terms. What I think (2) and (2’) show is that our L-semantic theory cannot contain rigid designators: when we treat names as singular terms, our best logic goes off the rails.

But then how do we evaluate these singular thoughts if there are no singular terms? This is easy to answer if we take the causal theory of reference as a P-semantic theory. It says that we can have singular thoughts given that the right kinds of causal/historical connections hold between certain thought contents and the world. But we express those thoughts using a language that, as per NDT, does not itself have singular terms. We can then formulate a description that singles out Kripke without any singular terms: we can say that there exists an object that is called ‘Kripke’ and that I am thinking about now. We then explain ‘thinking about’ in terms of a thought (or a proper part of the thought, if one likes) bearing the right kind of causal relation to x. This is symbolized as (5),

 

(5) (E!x) (K(x) & (Ey) (is an occurrent Thought(y) & is Causally related to(x, y)))

 

where ‘(E!x)’, pronounced ‘E-shriek x’, is shorthand for ‘there is a unique object x such that’. So (5) says ‘there is a unique object that is both called “Kripke” and causally/historically related to my occurrent thought in the right way’.[12] Thus, to say that it is possible that Saul Kripke does not exist is to say that it is possible that the object picked out by (5) in the actual world is absent from some possible world(s).
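For concreteness, here is how (5) unpacks if we spell out the ‘E!’ shorthand along the lines of note 13 (the expansion is mine; I write ‘T’ for ‘is an occurrent Thought’ and ‘C’ for ‘is Causally related to’, as in (6) below, and use a fresh variable ‘z’ for the uniqueness clause):

(Ex) ((K(x) & (Ey) (T(y) & C(x, y))) & (z) ((K(z) & (Ey) (T(y) & C(z, y))) → (z=x)))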

So we can see that the thought that Saul Kripke likes tea and the English sentence ‘Saul Kripke likes tea’ will have different truth conditions. The truth conditions for the English sentence are just the conjunctive ones from (3): that there is an object called ‘Saul Kripke’ which likes tea. So ‘Saul Kripke’, the English proper noun, is not an L-semantic rigid designator; it is a description that can be used to refer to many different things. However, the thought that I have about Saul Kripke, the actual guy, will have the truth conditions specified by (5), but with ‘likes tea’ included, as in (6),

 

(6) My thought that Saul Kripke likes tea is true if and only if

(E!x) ((K(x) & (Ey) (T(y) & C(x, y))) & L(x))

 

The description in (6) is a de dicto rigid designator. It picks out Saul Kripke in this world, since I am causally related to Kripke in the right kind of way required by C.[13] We then ‘freeze’ him as the object of interest (via stipulation). So linguistic names are frigid designators: their reference is determined by the thought they are used to express; they themselves do not refer to anything. But because of this feature, namely that their meaning is simply that there is an object that bears the name in question, they can be used to express our singular thoughts. I am successful in communicating my thought if I get you to have a thought that also bears a causal relation to the object picked out by (5).[14] But you understand the sentence if you understand (3).[15]

Of course, all of this is nothing more than an inconvenience unless we encounter circumstances where it becomes important. In normal circumstances we can just stipulate that SK is shorthand for the description in (5), where T is my thought about Kripke right now. In this respect, the idea that there are singular terms in logic is akin to Newtonian mechanics. Newton’s equations work well enough for us to ignore the fact that they are technically incorrect; unless we are dealing with things traveling near the speed of light or trying to describe the interactions of the very small, they are all we will ever need. So there is no reason to change the way we teach first-order logic. We can continue to represent Saul Kripke with SK and sentences like ‘Saul Kripke likes tea’ and ‘Saul Kripke exists’ as L(SK) and (Ex) (x=SK). When the student progresses to the appropriate level of sophistication it becomes necessary to rid the language of singular terms, and L(SK) becomes (6) and (Ex) (x=SK) becomes (5), both of which are horribly more complicated, but that’s life. One might think of it along the lines of Field’s strategy in Science Without Numbers: Kripke shows that we can axiomatize modal logic without constants, but we go on using constants anyway because they are useful. I do not mean to endorse Field’s program; I merely offer it as a way to see why the claim made here is merely an inconvenience.
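To make the correspondence just described explicit (this is simply a restatement of (5) and (6) side by side):

L(SK)             becomes      (E!x) ((K(x) & (Ey) (T(y) & C(x, y))) & L(x))          i.e. (6)

(Ex) (x=SK)    becomes      (E!x) (K(x) & (Ey) (T(y) & C(x, y)))                      i.e. (5)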

So if we want to avoid having to say that anything and everything necessarily exists then we should adopt frigidity. All L-semantic designators are descriptions. We must distinguish our P-semantic theory from our L-semantic theory.


References

Bach, K. (1987). Thought and Reference. Oxford: Oxford University Press.

Bach, K. (1999). "The Semantic-Pragmatic Distinction: What It Is and Why It Matters." In K. Turner (ed.), The Semantic-Pragmatic Interface from Different Points of View. Oxford: Elsevier.

Bach, K. (2002). "Semantic, Pragmatic." In K. Campbell, M. O'Rourke, and D. Shier (eds.), Meaning and Truth. New York: Seven Bridges Press, 284-292.

Bach, K. (2002a). "Giorgione was so Called because of his Name." Philosophical Perspectives 16: 73-103.

Devitt, M. (1997). "Précis of Coming to Our Senses: A Naturalistic Program for Semantic Localism." In E. Villanueva (ed.), Philosophical Issues 8. Atascadero: Ridgeview Publishing Company, 325-349.

Fodor, J. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.

Kripke, S. (1963). "Semantical Considerations on Modal Logic." Acta Philosophica Fennica 16: 83-94.

Menzel, C. (2005). "Actualism." The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/sum2007/entries/actualism/.

Prior, A. N. (1956). "Modality and Quantification in S5." Journal of Symbolic Logic 21: 60-62.

Quine, W. V. O. (1940). Mathematical Logic. Cambridge, MA: Harvard University Press.

Salmon, N. (1982). Reference and Essence. Oxford: Blackwell.

Stanley, J. (1997). "Names and Rigid Designation." In B. Hale and C. Wright (eds.), A Companion to the Philosophy of Language. Oxford: Blackwell, 555-585.

Strawson, P. F. (1950/1985). "On Referring." Reprinted in A. P. Martinich (ed.), The Philosophy of Language. Oxford: Oxford University Press.

Williamson, T. (2002). "Necessary Existents." In A. O'Hear (ed.), Logic, Thought, and Language. Cambridge: Cambridge University Press.

           

 


Notes



[1] Williamson defends an argument that is similar to the proof I have given though it is couched in terms of models instead of proofs. I will not discuss his defense of it here, as I don’t have the space, but I do think it can be answered.

[2] Kripke is concerned with the Barcan and Converse Barcan formulae and not explicitly with the problem of necessary existence, but his strategy works for necessary existence just as well.

[3] Here is the proof (again from (Menzel 2005)):

1′ (x) (x=x)                                                      closure of the axiom of identity
2′ (x) ((y) (y≠x) → (x≠x))                                 closure of the quantifier axiom
3′ (x) ((x=x) → Ey (y=x))                                  from 2′ by contraposition and quantifier exchange
4′ (x) (x=x) → (x)Ey (y=x)                                from 3′ by quantifier distribution rule
5′ (x)Ey (y=x)                                                   from 1′ & 4′ by modus ponens
6′ □(x)Ey (y=x)                                                 from 5′ by rule of necessitation

[4] Thanks to Kent Bach and David Rosenthal for helpful discussion of this objection.

[5] It may be true if said while talking about Saul Kripke, but false if I name my dog ‘Saul Kripke’ and say it then.

[6] The distinction has to do with whether a singular term designates in a world where what it designates does not exist. Defenders of obstinate rigidity say yes; defenders of persistent rigidity say no.

[7] I do not have space in the text to include this, but here is a quote that demonstrates what I have in mind:

In Coming I seek a solution to this problem [i.e. identifying the semantic task] by focusing on the purposes for which we ascribe meanings (or contents) using `that' clauses ("t-clauses") in attitude ascriptions: in particular, the purposes of explaining intentional behavior and of using thoughts and utterances as guides to reality.  I call these purposes "semantic."  I say further that a property plays a "semantic" role if and only if it is a property of the sort specified by t-clauses, and, if it were the case that a token thought had the property, it would be in virtue of this fact that the token can explain the behavior of the thinker or be used as a guide to reality.  We are then in the position to add the following explication to the statement of the basic task: A property is a meaning if and only if it plays a semantic role in that sense.  And the basic task is to explain the nature of meanings in that sense.

[8] I here ignore the question of whether, like Strawson, we say that we make different uses of the sentence.

[9] Here is how Bach characterizes the semantic task:

Semantic information about sentences is part of sentence grammar, and it includes information about expressions whose meanings are relevant to use rather than to truth conditions. Linguistically encoded information can pertain to how the present utterance relates to the previous, to the topic of the present utterance, or to what the speaker is doing. That there are these sorts of linguistically encoded information shows that the business of sentence semantics cannot be confined to giving the proposition it expresses. Sentences can do more than express propositions. Also, as we have seen, there are sentences which do less than express propositions, because they are semantically incomplete.

 

I take the semantics of a sentence to be a projection of its syntax. That is, semantic structure is interpreted syntactic structure. Contents of sentences are determined compositionally; they are a function of the contents of the sentence's constituents and their syntactic relations. This leaves open the possibility that some sentences do not express complete propositions and that some sentences are typically used to convey something more specific than what is predictable from their compositionally determined contents. Also, insofar as sentences are tensed and contain indexicals, their semantic contents are relative to contexts …

[10] Broadly speaking, this is the conception of semantics that P. F. Strawson (Strawson 1950/1985) had, i.e. the meaning of a word is given by instructions on how to use it. It is still popular; Jerry Fodor (Fodor 1998), for instance, says “…English has no semantics. Learning English…[is] learning how to associate its sentences with the corresponding thoughts” (p. 9).

[11] I suppose that people who think this would be people like Sellars, who think that we start with sentences and then work back to thoughts, which are theoretical posits to explain verbal behavior.

[12] See (Bach 1987), especially pp. 17-25, for a nice account of the various relations that will work for C.

[13] If anyone cares, ‘(E!x) P(x)’ is really just short for Ex (P(x) & (y) (P(y) → (y=x))).

[14] Sometimes it matters whether it has the same causal relation, sometimes it doesn’t; which is to say, sometimes it matters whether you think of the object in the same way as I do and sometimes it doesn’t.

[15] It may be that context and mutually shared beliefs can get one to (5) without one’s having a singular thought.