PROGRAM
MONDAY, JUNE 1
Welcome
Andrej Ule, "Sorites - is vagueness really the only reason for the paradox?"
Gabor Forrai
Ferenc Huoranszki, "Vagueness and Reduction"
Marta Ujvari, "Many-criteria predicates and Supervaluation"
Howard Robinson, "Vagueness, Realism, Thought and Language"
Gene Mills, "Soritical Supervenience"

TUESDAY, JUNE 2
Joan Weiner, "Science and Semantics: the Case of Vagueness" James Cargile, "Vagueness and Properties" Laurence Goldstein, "Logic for backsliders: a solution to the sorites paradox" Steven Rieber, "A Defense of Indeterminatism" Greg Ray, "Williamson's Master Argument on Vagueness" Peter van Inwagen, "Why Vagueness is a Mystery" WEDNESDAY, JUNE 3
Miroslava Andjelkovic
Josefine Papst
Ziga Knap
Robert Barnard, "Is Vagueness Non-Projectability"
Urban Kordes

STUDENT SESSIONS
Nastja Markovic, Smiljana Gartner, Tea Logar, Matej Novak, Fani Pecar, Gorazd Andrejc, Jan Bregant (University of Maribor); Andrej Pavesic, Borut Cerkovnik (University of Ljubljana)

THURSDAY, JUNE 4
Rosanna Keefe, "Vagueness and Vague Metalanguages"
U.T. Place, "Vagueness as a mark of dispositional intentionality"
David Sanford, "Why Bother with Many Values?"
Matjaz Potrc, "Vagueness is Robust"
Olli Koistinen & Arto Repo, "Vague Objects and Phenomenal Wholes"
Timothy Williamson, "On the Structure of Higher-Order Vagueness"
Terry Horgan, "The Benign Logical Incoherence of Vagueness"

FRIDAY, JUNE 5
Mark Kaplan, "In Praise of Modest Probabilism"
Richard Grandy, "On Logic and Vagueness"
Ruth Weintraub, "On Sharp Boundaries for Vague Terms"
Kirk Ludwig & Greg Ray, "Is There a Problem about Vagueness?"
Delia Graff, "Phenomenal continua"
Elijah Millgram, "The Truth in Bivalence"

SATURDAY, JUNE 6
Alice Kyburg, "Accommodating Utterances with Referring Vague Descriptions"
Norman Gillespie, "The Limits of Vagueness"
Michael Morreau, "Supervaluation can leave truth-value gaps after all"
Mark Changizi, "Vagueness and Computation"

ABSTRACTS

Robert Barnard (University of Memphis): "Is Vagueness Non-Projectability"
It is obvious that there is some difference between vague predicates and non-vague predicates. I wish to suggest that this difference, at least in part, consists in their non-projectability. This supposition gives rise to a view committed to there being radically underdetermined borderline cases of vague predicates. Such a view, I contend, can explain many features commonly associated with vagueness: e.g. why we cannot fix a single transition point between the positive and negative extension of a vague predicate.

James Cargile: Vagueness and Properties
Some philosophers have argued that vagueness is not a property of properties, and some of these have held that for that reason, properties could not be the meanings of common nouns such as "horse". To evaluate this thesis would require clarity about "property", "vague" and "meaning". What do these terms mean? What properties do they express? Are they vague? Do the answers to those questions depend on what we mean by "property", "vague", "meaning" and "express"?

Mark Changizi (Department of Computer Science, University College Cork): Vagueness and Computation
I present my epistemic theory of vagueness, which claims that vagueness is a phenomenon common to any finite, sufficiently powerful, rational agent. The theory depends on three descriptive hypotheses. The first hypothesis is that humans are as powerful as Turing machines ("sufficiently powerful") and no more powerful ("finite"). The second hypothesis states that a person's interpretation of a natural language predicate R (or 'not R') is determined via some "program in the head", so that an object is in the interpretation of R ('not R') if and only if the program says YES that the object is R ('not R'). The third and final hypothesis states that people allow themselves the richest range of interpretations of which they are capable. These three hypotheses entail the thesis that for "most" natural language predicates R (i) a person's interpretation of R is determined via a program that does not halt on some inputs, (ii) a person's interpretation of 'not R' is determined via a program that does not halt on some inputs, and (iii) a person's interpretations of R and 'not R' are not complements. My characterization of vagueness is that a predicate R is vague if and only if it satisfies (i), (ii) and (iii). To be clear, I do not identify vagueness with undecidability. The notion of a concept within my theory is unique in that it consists of two sets: the interpretation of R and the interpretation of a logically unrelated, but pragmatically related, predicate non-R (this latter interpretation is determined by the program for 'not R'). The borderline region comes immediately from part (iii) of the characterization; it is the region comprising the objects in the interpretations of both R and 'not R', unioned with the region comprising the objects in neither interpretation. Higher-order vagueness comes from the fact that the boundaries of the borderline region are not generally determinable using the programs in the head for R and 'not R'; i.e., it comes from (i) and (ii) of the characterization. Consider the paradigmatic decreasing-heap sorites argument: (a) 1 million grains of sand can make a heap, (b) for any n, if n+1 grains can make a heap, then n can, therefore (c) 1 grain of sand can make a heap. To see how the paradox is safely defused, imagine moving from the definitely-heap region (i.e., where 'heap' applies but 'non-heap' does not) toward the part of the borderline region where neither predicate applies. Upon entering this borderline region there will be some n such that 'heap' applies to n+1 grains but not to n grains; thus (b) is denied, preventing the absurd conclusion (c). However, this does not contradict the borderline vagueness of 'heap', since 'non-heap' does not apply to n either, i.e., n is neither a heap nor a non-heap.
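
[Editorial illustration, not part of the abstract.] The computational picture sketched above can be made concrete with a minimal Python sketch: two hypothetical "programs in the head", one for 'heap' and one for 'non-heap', each of which fails to halt on some inputs; non-halting is simulated with a step budget, and every name and threshold below is invented purely for illustration, not taken from Changizi's theory.

```python
# Illustrative sketch only: a toy model of conditions (i)-(iii) above.
# Non-halting is simulated by a step budget; thresholds are hypothetical.

STEP_BUDGET = 1000

def says_yes(program, x, budget=STEP_BUDGET):
    """Run a possibly non-terminating decision procedure for at most `budget` steps.
    Returns True (YES), False (NO), or None if the program did not halt in time."""
    gen = program(x)
    for _ in range(budget):
        verdict = next(gen)
        if verdict is not None:
            return verdict
    return None

def heap_program(n):
    """Halts with YES for clearly large collections, NO for clearly small ones,
    and loops forever in between (condition (i): does not halt on some inputs)."""
    if n >= 100:
        yield True
    elif n <= 10:
        yield False
    else:
        while True:
            yield None

def non_heap_program(n):
    """A separate, pragmatically related program for 'non-heap' (condition (ii));
    its YES-region is deliberately not the complement of heap_program's."""
    if n <= 5:
        yield True
    elif n >= 50:
        yield False
    else:
        while True:
            yield None

def classify(n):
    in_r = says_yes(heap_program, n)          # is n in the interpretation of 'heap'?
    in_non_r = says_yes(non_heap_program, n)  # is n in the interpretation of 'non-heap'?
    if in_r is True and in_non_r is not True:
        return "heap"
    if in_non_r is True and in_r is not True:
        return "non-heap"
    # Condition (iii): the two interpretations are not complements, so some n
    # fall in both or in neither -- these make up the borderline region.
    return "borderline"

for n in (3, 8, 30, 75, 200):
    print(n, classify(n))
```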

Norman C. Gillespie (George Mason University, School of Law): The Limits of Vagueness
The law uses language to order the world. Much of that language -- in contracts, statutes and constitutions -- is vague, often deliberately so. So vagueness is not simply a problem in logic and metaphysics; it creates practical legal and moral problems as well.
In this paper, I examine some of the practical problems that arise in applying vague legal and moral predicates. I focus on how the law resolves such problems, and analyze two basic approaches that American judges employ in resolving the practical and logical problems they confront in applying vague predicates. On the first approach, which I shall call the correspondence or proportional approach, the legal or moral status of a legally disputed item or activity is determined by its relative place in the extension of the applicable vague predicate P. On this approach, clear instances of P are treated differently from borderline or marginal cases, and the legal treatment of marginal cases reflects their less-than-paradigm status in the extension of P. The second approach that judges use in applying a vague predicate relies upon an interpretation of that predicate to control or determine its application to a particular case. On this approach, which I shall call the interpretive approach, lines between marginally different cases are drawn, typically for the sake of providing guidance to the individuals who must conform their conduct to the legal rules which embody that controlling interpretation. These contrasting approaches to vagueness are not simply practical devices used to resolve legal disputes. They also shed considerable light on the logical and metaphysical issues that are at the center of most philosophical discussions of vagueness. For example, is the legal application of vague predicates inherently incoherent or indeterminate, or essentially contestable? Or do such applications presuppose epistemically indeterminate right (or true) answers to the question whether a borderline case is, or is not, an instance of P? Does the moral or legal status of a marginal instance of P depend upon such line-drawing answers to the question: Is this item (an instance of) P? Or does that status depend, instead, upon a more searching determination of an item's place in the extension of P? These are some of the questions that I shall address after explaining the two basic approaches (summarized above) that judges employ in applying vague legal or moral predicates. The legal treatment of such predicates limits the logical and practical problems created by them, thus revealing the limits, rather than the problematic nature, of vagueness.

Laurence Goldstein: Logic for backsliders: a solution to the sorites paradox
Sorites paradoxes involve observational predicates, so it seems obvious that any satisfactory solution is going to have to say something about the nature of just those predicates, since, if it were not for specifically those predicates, there would be no Sorites paradoxes. That is the first intuition. The second is that if two objects, though different, are observationally indistinguishable in their F-ness, then the judgment that one looks F must have the same truth-value as the judgment that the other looks F. This paper gives flesh to those intuitions and proposes a solution to these paradoxes which accommodates both and is, by normal standards, simple, straightforward and incontrovertible. In judging (say) the colours of a series of colour patches, each visually indistinguishable from its predecessor, there is, pace epistemicism, no cut-off point. Even if there were one for 'red', as the colour of the patches creeps imperceptibly towards orange, there could not be one for 'looks red'. Any satisfactory solution to the Sorites must handle such tough variants.
What happens in actual cases is that people flip-flop in their judgments, just as they do with the duck-rabbit. This is not, of course, irrational behavior; it is human behavior and is easy enough to explain biologically. Judgments about a single object change from one context to the next, even though the contexts may be just an instant apart. If people are inconstant in their judgments about a single object, then of course they are going to be inconstant in their judgments about two perceptually indiscriminable objects.

Delia Graff: An Account of the Apparent Boundarylessness of Vague Predicates
The most well-known theories of vagueness that account for borderline cases by rejecting bivalence seem wrong, or at least incomplete, because they do not provide an account of another problematic feature of vague predicates, namely, our inability to discern boundaries (of any kind) for them. The most well-known theory of vagueness that does account for this latter problem -- Williamson's epistemicism -- seems unpalatable to many precisely because it does not reject bivalence. There is a small tradition, however, of accounting for the apparent boundarylessness of vague predicates by appeal to their context-dependence. I will add to this tradition by arguing that the context-sensitivity of vague predicates runs deep -- that it is never completely eliminable by explicit relativization to contexts (as in 'tall for a basketball-player', which, unlike 'tall', does not vary in intension merely as the class of basketball-players becomes more or less salient). I will present and defend an account of the "deep" context-dependence of vague predicates, and explain how by appeal to this account one can account for their apparent boundarylessness in a satisfying way. I will argue that the account sketched on the one hand makes the acceptance of bivalence more palatable, but on the other hand is also compatible with the rejection of bivalence, and so could be combined with any of the various semantic-indeterminacy theories so as to yield a more complete account of vagueness.

Richard Grandy: On Logic and Vagueness
I will present a logic of vagueness which attempts to combine the advantages of both supervaluationist and many-valued approaches, while avoiding their disadvantages.

Terry Horgan (University of Memphis): The Benign Logical Incoherence of Vagueness
Transvaluationism, the approach to vagueness I advocate (Horgan 1995, 1998), makes two fundamental claims. First, vagueness is logically incoherent in a certain way: vague discourse and vague thought-content are governed by semantic standards that are mutually unsatisfiable. Second, vagueness is viable and legitimate nonetheless; its logical incoherence is benign rather than malevolent. In this paper I elaborate on the kind of logical incoherence that I claim is inherent in vague language and thought. I distinguish the incoherence of vagueness from a more familiar, highly pernicious, kind of logical incoherence, and I explain why and how the incoherence of vagueness is benign and beneficial, rather than noxious and debilitating.
Horgan, T., 1995: "Transvaluationism: A Dionysian Approach to Vagueness," Southern Journal of Philosophy 33, Spindel Supplement on Vagueness, 97-126.
Horgan, T., 1998: "The Transvaluationist Conception of Vagueness," The Monist 81, Vagueness, 316-33.

Ferenc Huoranszki: Vagueness and Reduction
For any reductive analysis of intentional content the following principle must hold: the conditions of identification of a concept must be the same as the conditions which determine whether the concept applies to a particular token. If reduction is the aim, the principle should be satisfied, since every reductive analysis (1) attempts to explain intentional content in nomic-causal terms and (2) interprets causality as a relation into which events/objects enter by virtue of their properties. It is argued that if a concept is vague, then the above principle does not apply to it. Since vagueness is pervasive, intentional content is not to be analysed in causal-nomic terms.

Mark Kaplan: In Praise of Modest Probabilism
Imagine a creature that has perfect logical and mathematical acumen. Imagine that, for any set of propositions we might choose (where that set is closed under truth-functional operations), this creature has a degree-of-confidence assignment: that is, she has, for each proposition in the set, a precise, real-valued degree of confidence in the truth of that proposition. Bayesians hold that it is a condition on the rationality of such a creature that her degree-of-confidence assignment satisfy the Kolmogorov axioms of probability. We, however, are not creatures of this sort. Our states of opinion are vague and incompletely formed, our logical and mathematical acumen severely limited. Thus many have found it hard to see how the Bayesian result (even if correct) could possibly tell us anything about the rationality of actual human beings. My purpose will be to show how it can: how we can fashion a view that (without appeal to idealization or false precision) offers a substantive, but credible, account of how the Bayesian result bears on the rationality of the vague and partially formed opinions we actually harbor.
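
[Editorial illustration, not part of the abstract.] As a small aside on the Bayesian condition mentioned above, the following Python sketch shows one way to check whether a toy degree-of-confidence assignment over a truth-functionally closed set of propositions (modelled here as subsets of a finite set of "worlds") satisfies the Kolmogorov axioms; the worlds, numbers, and function names are all hypothetical.

```python
# Illustrative sketch: test a toy credence assignment against the Kolmogorov axioms.
from itertools import chain, combinations

WORLDS = frozenset({"w1", "w2", "w3"})

def all_propositions(worlds):
    """Every subset of worlds: an algebra closed under the truth-functional operations."""
    ws = list(worlds)
    subsets = chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))
    return [frozenset(s) for s in subsets]

def satisfies_kolmogorov(credence, worlds, tol=1e-9):
    props = all_propositions(worlds)
    # Non-negativity.
    if any(credence[p] < -tol for p in props):
        return False
    # Normalization: the tautology (the whole set of worlds) gets credence 1.
    if abs(credence[frozenset(worlds)] - 1.0) > tol:
        return False
    # Finite additivity: credences of disjoint propositions add.
    for a in props:
        for b in props:
            if a.isdisjoint(b) and abs(credence[a | b] - (credence[a] + credence[b])) > tol:
                return False
    return True

# A coherent assignment, built from a point-wise distribution over worlds.
point = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
coherent = {p: sum(point[w] for w in p) for p in all_propositions(WORLDS)}
print(satisfies_kolmogorov(coherent, WORLDS))    # True

# An incoherent one: overconfident in a disjunction relative to its disjuncts.
incoherent = dict(coherent)
incoherent[frozenset({"w1", "w2"})] = 0.9
print(satisfies_kolmogorov(incoherent, WORLDS))  # False
```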

Rosanna Keefe: Vagueness and Vague Metalanguages
I begin my paper by defending the (now familiar) claim that, if a theory of vagueness is adequately to capture the phenomena of vagueness and higher-order vagueness, it needs to be framed in a metalanguage that is itself vague. I then examine whether certain theories can successfully be maintained while admitting a vague metalanguage. I argue that many-valued theories fail this test, and in particular I reject Tye's three-valued theory. Supervaluationism, I claim, fares much better. I argue that admitting the vagueness of the key expressions of the theory undermines neither the account nor the motivations for adopting it. And I show that using a vague metalanguage in the treatment of vagueness does not make the theory trivial, circular or insufficient. Some further consequences of the approach are explored.

Olli Koistinen & Arto Repo: Vague Objects and Phenomenal Wholes
The idea of precise spatial or temporal boundaries, or of precise criteria of identity, turns out to be in conflict with the indefiniteness of such everyday objects as clouds, mountains, or heaps of stones; indeed this indefiniteness appears to characterize all macroscopic material objects. Vagueness cannot even be avoided when questions about the composition of objects from their parts are asked. One way to try to deal with these phenomena of vagueness is to locate the vagueness in language or representations. Another way is to accept objective vagueness: there really are vague objects in our world, and there is nothing wrong with that. We will consider some arguments from Leibniz and Kant in which a sort of vagueness of ordinary material objects is argued for, but in which the conclusion is not the acceptance of vagueness in reality; on the contrary, Leibniz and Kant use the indeterminate nature of the world of macroscopic extended things to argue for their phenomenal character. The basic intuition accepted by both of them is that what is truly real cannot be vague or indeterminate. The unity or wholeness of composites - the idea that they are not only pluralities of things but something constituted by those pluralities - is dependent on a mind which perceives the many as one. We try to reconstruct a Leibnizian or Kantian way to answer some problems connected to the vagueness of ordinary things, e.g. Peter Unger's problem of the many.

Alice Kyburg (University of Wisconsin Oshkosh): Accommodating Utterances with Referring Vague Descriptions
The contextuality of precisifications can be invoked to account for the possibilities of reference using vague definite descriptions. A speaker can refer to a borderline bald man who has less hair than the rest with the description "the bald man". In referring in this way, he also seems to stipulate a more precise usage for "bald man". With no objection from the hearer, common ground is rewritten to accommodate this new usage. I will illustrate this process of accommodation with several examples. I will also sketch a model of the updating of common ground which supposes that the 'appropriate precisification', relative to which the sentences of common ground are interpreted, changes as dialog proceeds.

Kirk Ludwig and Greg Ray: Is There a Problem about Vagueness?
In this paper, we give a proof that semantically vague terms neither apply nor fail to apply to anything, and that consequently it is a mistake to diagnose sorites arguments by attempting to locate in them an unsound premise. Sorites arguments are not sound, but not unsound either. We argue furthermore that semantic vagueness is neither surprising, nor threatening, and provides us with no reason to suppose that the logic of natural languages is not classical.

Elijah Millgram: The Truth in Bivalence
Bivalence is normally managed by reengineering the items you are going to be thinking about. Engineering is usually an expensive proposition, so we need to ask what benefits can make bivalence worth the costs. It turns out that there is a general way to characterize those benefits: because bivalence makes possible deductive as opposed to defeasible reasoning, engineering bivalence allows reasoning that can be made completely explicit, and that does not require good judgment in order to be successful.

Eugene Mills: Soritical Supervenience
Epistemicism's unquestioned theoretical virtues seem, to most philosophers, outweighed by its apparent craziness. I argue that what underlies the appearance of craziness is a natural but mistaken view about the sorts of properties on which the concepts expressed by vague predicates supervene. Once the mistake is exposed, epistemicism no longer seems crazy.

Michael Morreau: Supervaluation can leave truth-value gaps after all
Among other good things, supervaluation is supposed to allow vague sentences to go without truth values. But Jerry Fodor and Ernest LePore have recently argued that it cannot allow this - not if it also validates certain conceptual truths. The main point that I wish to make in this talk is that Fodor and LePore are quite mistaken. Supervaluation can leave truth-value gaps while validating the conceptual truths that they have in mind.

Ullin T. Place (The University of Leeds): Vagueness as a mark of dispositional intentionality
We owe the suggestion that vagueness (or indeterminacy) is a mark of intentionality to Elizabeth Anscombe (1965). It is one of three distinguishing marks of intentionality which she discusses, the other two being that it is only under a particular description that something is the object of thought or intention, and that the object of thought or intention may not in fact exist or come about. Around 1968, in an unpublished paper presented to the Department of Philosophy, University of Sydney, John Burnheim argued that 'physical' dispositions, meaning by that the dispositional properties of physical objects, satisfy all three marks of intentionality proposed by Anscombe. In their 1986 paper 'Intentionality and the non-psychological', Martin and Pfeifer extended Burnheim's thesis to a further two marks of intentionality: Chisholm's (1957) permissible falsity of propositional attitudes and Searle's (1983) re-instatement of directedness towards an object, which, though part of Brentano's original (1973/1995) account, had dropped out when Brentano's inexistence-of-the-intentional-object criterion was "linguistified" by Chisholm (1957). Assuming, as Burnheim had done before them, that intentionality, whatever it is, is the mark of the mental, Martin and Pfeifer concluded that, insofar as all the accepted marks of intentionality are satisfied by 'physical' dispositions, these traditionally accepted marks of intentionality cannot be what they purport to be. More recently, the writer (Place 1996) has argued that, since 'intentionality' is a term of art, it means whatever the accepted marks of intentionality make it mean, in which case, if 'physical' dispositions satisfy those marks, they make intentionality the mark of the dispositional rather than of the mental. If, however, we draw the distinction originally proposed by Kneale (1968) between T-intenTionality, which is a property of mental and other (dispositional) states, and S-intenSionality, which is a property of linguistic expressions (chiefly the grammatical objects of psychological verbs and verbs of utterance), and distribute the accepted marks of intentionality between the two, it turns out that only those that are marks of a T-intenTional state (of which vagueness is one) are marks of a disposition. S-intenSional locutions are quotations. The vagueness of a disposition arises from the fact that it consists in being 'directed' towards a range of possible future events, any one of which, were it to occur (and none need occur for the disposition to exist), would constitute a manifestation of the disposition in question.

Matjaz Potrc: Vagueness is Robust
Boundarylessness is widely recognized as characteristic of vagueness. But boundarylessness brings robustness with it. So vagueness is robust. Horgan claims that transvaluationism is the only alternative to epistemicism. And transvaluationism is determined by robustness. So all alternatives to epistemicism must be robust.
Epistemicism, meanwhile, as its name suggests, harbors epistemic robustness.

Greg Ray: "Williamson's Master Argument on Vagueness"
Timothy Williamson holds that there is a sharp boundary between the things that fall under "vague" predicates, like 'bald' and 'heap', and those that do not, and that our inability to settle on such boundaries non-arbitrarily is just an artifact of our ignorance. Certainly the most novel and interesting aspect of Williamson's defense of this view is his idea that our ignorance of the borderlines for vague terms "is just what independently justified epistemic principles would lead one to expect". Williamson constructs a master argument which takes as its starting point certain necessary conditions on knowledge and derives the conclusion that, if some number, n, were the least number of grains which make a heap, no one could possibly know that it was. I give a careful formulation of Williamson's argument, and argue that it does not work.

Steven Rieber (Georgia State University): A Defense of Indeterminatism
My goal is to defend the indeterminist approach to vagueness, according to which a borderline vague utterance is neither true nor false. Indeterminism appears to contradict bivalence and the disquotational schema for truth. I agree that indeterminism compels us to modify each of these principles. Kit Fine has defended indeterminism by claiming that ordinary ambiguous sentences are neither true nor false when one disambiguation is true and the other is false. But even if Fine is right about sentences, his point does not seem to generalize to utterances. What the indeterminist needs -- and what ordinary ambiguity does not provide -- is an ambiguous utterance where what is being said is indeterminate between two different propositions. I will show that such cases exist. These cases imply that the modifications that indeterminism makes to bivalence and the disquotational schema are required independently of indeterminism, in fact independently of vagueness.

Howard Robinson: Vagueness, Realism, Thought and Language
Problems associated with vagueness and sorites are usually tackled either by devising a deviant logic or, more recently, by adopting an epistemic approach. Taking the problems associated with these approaches for granted, I argue that there is another route. Starting from a remark of Timothy Williamson's, that the sorites problem arises because philosophers 'take an ordinary language as a model for what is to be understood but a logically perfect one as a model of what it is to understand', I argue that an excessively formal conception of the unity of natural language is what creates these problems and paradoxes. If one takes a referentially and not syntactically driven approach to the different 'ontologies' and 'levels' implicit in natural language, then both sorites and problems with classical logic can be overcome. The consequences are that one must treat 'ontologically non-basic' discourses in a less than fully realist manner, and that language rests for its unity on skills more akin to perceptual judgements than to formal reasoning.

David H. Sanford (Duke University): Why Bother with Many Values?
One problem for a many-valued semantics for vagueness is that while attempting to escape precision, it introduces even greater precision. (Deciding between one of two values is difficult, and deciding between one of many values is more difficult.) So why not scrap the many-valued approach? Because it helps provide a semantics for a determinacy operator and thus for a borderline operator. And why is such a determinacy operator worth the bother? Because it shows the coherence of higher-order borderline cases. And because it helps reveal logical relations between predicates that are difficult to express otherwise. If statements of each of the forms (Ex)(Fx & Gx), (Ex)(Fx & ~Gx), (Ex)(~Fx & Gx), (Ex)(~Fx & ~Gx) are possibly true, this does not settle the question whether statements of the following forms are also possibly true: (Ex)(BFx & BGx), (Ex)(BFx & B~Gx), (Ex)(B~Fx & BGx), (Ex)(B~Fx & B~Gx). ('B' is a borderline operator.) Determinacy helps to formulate additional requirements of independence. If we accept a many-valued semantics, then, how should we deal with the problem of unwanted precision? Supervaluations are helpful. They need not be restricted to two-valued semantics. Supervaluations and many-valued semantics combined have strengths that neither has separately.

Marta Ujvari: Many-criteria predicates and Supervaluation
Vagueness talk creates the illusion that the monadic predicates worth considering are simple in nature, i.e. that their application depends on the fulfillment of one single criterion. Such predicates admit the total ordering of the objects falling under them and are Sorites-susceptible. By contrast, I suggest considering the vagueness of many-criteria predicates, which involve a cluster of associated properties and thus have a 'number of independent conditions of application' (E. W. Alston). I show that these predicates are not Sorites-susceptible because the n-tuples of criteria with varying values do not admit total ordering - unless members of the tuples are weighted. But there is no reason to suppose that actual language use proceeds by a consensus about weighting. This feature is not a sign of our ignorance: there is nothing more to know about the particulars; rather, we have to negotiate about the semantic decision. Many-criteria predicates are natural candidates for the method of supervaluation because semantic assessment is an indispensable part of their conditions of application. Tolerance is at work with these predicates, though not with respect to a scale but with respect to multiple realization. Thus what is missed by critics of supervaluationism, i.e. tolerance, has its proper role with many-criteria predicates.

Andrej Ule: Sorites - is vagueness really the only reason for the paradox?
The basic problem of vagueness - the sorites paradox - is sketched and a solution proposed. I will claim that the paradoxical result of sorites arguments indicates that we have to consider different language systems or language games which produce the paradox when we combine them without sufficient care. I will propose solving the problem by regimentation, changing the logical structure of terms in the premises of the soritical inference.

Peter van Inwagen: Why Vagueness is a Mystery
This paper presents two reasons for regarding vagueness as a mystery. First, the only coherent account of vagueness we have is this: vagueness is due to predicates having borderline cases. But not all cases of vagueness are cases of predicates having borderline cases. Secondly, even cases of vagueness that are due to predicates having borderline cases can yield paradoxical results: for example, that there exists a non-empty set of real numbers that has a lower bound but no greatest lower bound.

Joan Weiner: Science and Semantics: the Case of Vagueness
In a recent criticism of supervaluationism, Fodor and LePore suggest that surveys of precisifications of vague terms tell us little, if anything, about the meanings of these terms. For it is a constraint on any account of the meaning of vague terms that it preserve conceptual truths; and one of the conceptual truths about vague terms is that they are vague. A precisified version of a vague term, they claim, is simply a new word -- not an English word, but only a homophonic expression. There is, however, something odd about this view. 'Obesity' is a vague term -- there is no sharp demarcation between those who are obese and those who are not. Yet any scientific study of obesity begins with a precisification, a definition that provides a sharp demarcation. If we want to account for the contribution 'obesity' makes to the truth values of sentences in which it appears, and if we do not intend to provide a critique of perfectly sound scientific practice, then we must recognize that our understanding of the meaning of this term is intimately connected with an understanding of its acceptable precisifications. To recognize this, however, is not to provide a response for the supervaluationist. For to take the supervaluationist strategy seriously would just require a different reform in perfectly sound scientific practice. In this paper I will suggest a different strategy for describing the meanings of vague terms -- a strategy that uses as a guide the rules constraining scientific investigation of such claims as that obesity increases the risk of heart disease.

Ruth Weintraub: On Sharp Boundaries for Vague Terms
The postulation by the "epistemic" theory of vagueness of a sharp cut-off point between heaps and non-heaps has made it seem incredible. My aim is to show that an objection of a similar kind can be levelled against most theories of vagueness. The only one which avoids it - the so-called "nihilistic" theory - is untenable. The objection is, anyway, less compelling than it initially seems. However, even when this obstacle is removed, the epistemic theory is not yet vindicated.

Timothy Williamson (University of Edinburgh): On the Structure of Higher-Order Vagueness
Higher-order vagueness is usually assumed to fall into a hierarchy of second-order vagueness, third-order vagueness, and so on through all subsequent natural numbers. Rigorous attempts to define the hierarchy are rare. Such a definition will be offered within a formal semantic framework common to epistemic and supervaluationist accounts. It will be shown that, on some natural assumptions, second-order vagueness implies vagueness of all orders, and that higher-order vagueness in a truth-functional compound does not entail higher-order vagueness in any of its constituents. Although these results may appear strange, it will be argued that we do not have sufficient understanding of higher-order vagueness to reject them.