Philosophical Analysis
June 5th-9th, 2000
 

MONDAY, JUNE 5

10:45 Welcome
11:00-12:00 Matjaz Potrc, "Paradox of Analysis Compatibilized"
2:00-3:00  Michael Beaney, "Conceptions of Philosophical Analysis"
3:30-4:30  Paul Bloomfield, "Intuitions and the Limits of Analysis"
5:00-6:00 Volker Peckhaus, "Regressive Analysis"

TUESDAY, JUNE 6

9:30-10:30 Danilo Suster, "Postanalytic Metaphilosophy - the Case of Freedom"
11:00-12:00 Miroslava Andjelkovic, "On Analogy"
2:00-3:00 Friderik Klampfer, "What good is philosophical analysis for: the case of moral concepts"
4:00-5:00 Mylan Engel, "Ethics without a Net: How to do Applied Ethics without Moral Theory"
5:00-6:00 Paul Weirich, "Analysis in Decision Theory"


WEDNESDAY, JUNE 7

9:30-11:00 Student sessions (Gorazd Brne, Vojko Strahovnik, Uroš Rošker - University of Ljubljana)
11:15-12:15 Nenad Miscevic, "Conceivability and Conceptual Change"

THURSDAY, JUNE 8

9:30-10:30 Eugene Mills, "Analysis and Understanding"
11:00-12:00 John Biro, "The meta-linguistic analysis of puzzles about reference"
2:00-3:00 Olga Markic, "Emergent properties and downward causation"
3:30-4:30 Gene Witmer, "Conceptual Analysis, Circularity, and the Commitments of Physicalism"
5:00-6:00 Mark Lance, "The Word Made Flesh: toward a neo-Sellarsian view of concepts and their analysis"
***
8:00 Dinner

FRIDAY, JUNE 9

9:30-10:30 Marina Sbisa, "J.L.Austin's philosophical analysis and its implications"
11:00-12:00 David Sanford, "McTaggart and Analytic Philosophy"
2:00-3:00 Marta Ujvari, "Events as Tropes and Tropes of Substances"
3:30-4:30  Kirk Ludwig, "A conservative modal semantics with applications to de re necessities and arguments for coincident entities"
5:00-6:00 John Divers, "The Analytic Limit of Genuine Modal Realism"


ABSTRACTS


Miroslava Andjelkovic, University of Belgrade
On Analogy

Analogical reasoning is often expressed metaphorically. A recent example of this practice may be found in Dretske's book Naturalizing the Mind, where much of the illustration of the representational thesis is given by analogies between instruments and human beings. In this paper I will give a general account of analogical reasoning, analyze the metaphorical language used to express it, and apply it to Dretske's reasoning. In an analogy, two analogons are related in a way that supports our expectation that some of their features are similar. The analogons are distinguished according to the order of understanding, and the rules for expressing this difference in metaphorical language are given together with relevant rules of inference. Application of these rules to Dretske's analogies shows that they fail to shed any light on their subject, because the direction of illumination goes from human minds to instruments and then, oddly, back again.

Michael Beaney, Institut für Philosophie, Universität Erlangen-Nürnberg, Germany

Conceptions of Philosophical Analysis

Over the last few years, within analytic philosophy as a whole, there has developed a wider concern with methodological questions, partly as a result of the increasing interest in the foundations--both historical and philosophical--of analytic philosophy, and partly due to the resurgence of metaphysics in reaction to the positivism that dominated major strands in the early analytic movement. In this paper I elucidate some of the key conceptions of analysis that arose during the formative years of analytic philosophy, focusing, in particular, on the debate over the nature of analysis in the early 1930s, within what was called at the time the 'Cambridge School of Analysis', and the development of Carnap's conception(s) of logical analysis during his critical phase when he was a central figure in the Vienna Circle. These conceptions are no mere historical curiosities; the tensions between them are present in many forms in philosophy today; and a careful elucidation of them can throw much light on contemporary practice.

John Biro, University of Florida, USA
The meta-linguistic analysis of puzzles about reference

 

Paul Bloomfield, University of Connecticut, USA
Intuitions and the Limits of Analysis

Figuring out the proper role of intuitions in philosophical analysis is difficult business at best; this is especially so since some of our most trusted intuitions have turned out to be false. Still, in other cases, since we cannot imagine our intuitions being wrong, we steadfastly insist that our theorizing comply. I treat two cases closely. One concerns ethical internalism, where we find that attention to empirical facts leads to the conclusion that our intuitions are wrong. The other concerns the well-accepted reduction of thermodynamics to statistical mechanics. Here we find intuitions about the irreversibility of, for example, biological processes, which cannot be accounted for by the reducing theory. Thus our theorizing ought to comply. A meta-philosophical problem-solving strategy regarding the use of intuitions is briefly suggested.
 

John Divers, University of Leeds, England

The Analytic Limit of Genuine Modal Realism
(This paper is jointly written by John Divers and Joseph Melia)

David Lewis claims that his genuine realist theory of modality offers a nonmodal and accurate analysis of the family of modal concepts. In this paper we consider what it would be for a genuine realist analysis of possibility to be (objectionably) modal. In that light, we find that previous attempts to defeat Lewis's claim fail. However, we argue (in a broadly model-theoretic fashion) that the Lewisian analysis of possibility is incomplete in respect of alien possibility, and irremediably so insofar as it is nonmodal. Alien possibility, therefore, sets the analytic limit of genuine modal realism and stands beyond that limit.
 
 

Mylan Engel, Northern Illinois University, USA
Ethics without a Net: How to do Applied Ethics without Moral Theory

The paper challenges the standard methodology used in applied ethics today. That methodology seeks to analyze and solve moral problems by appeal to controversial moral theories. The main problem with the standard approach is that, whatever theory one appeals to, most philosophers reject that theory, and hence they will simply reject any argumentation which essentially appeals to that theory. I propose and defend an alternative method of analyzing and solving moral problems, the conclusions of which are not so easy to dismiss.
 

Friderik Klampfer, University of Maribor, Slovenia

What Good Is Philosophical Analysis For? The Case of Moral Concepts

George E. Moore famously argued, in his seminal work Principia Ethica, that 'good', a fundamental ethical concept (or property), is unanalysable and hence indefinable, and that every attempt at defining it in terms of some other concept (or property), be it natural or metaphysical, commits the so-called "naturalistic fallacy". On the other hand, Moore repeatedly emphasized the crucial role of a correct analysis of moral terms in moral reasoning. Without an account of what it means to say that something is intrinsically good, or that it ought to exist for its own sake, he said, we can never know what counts as evidence (or reason or argument) for or against ethical propositions, and thus whether to accept them or not.

Most contemporary moral philosophers share neither Moore's high expectations nor his scepticism with regard to the correct analysis of fundamental moral terms. The aim of the paper is twofold. Firstly, to critically assess, upon their reconstruction, Moore's reasons for the claim that 'good' is unanalysable and indefinable. And secondly, to determine, by drawing upon some recent, less ambitious accounts of the nature of analysis, the proper place of an analysis of moral terms in ethical theory.

In the course of rejecting Moore's scepticism, I briefly consider some recent proposals for the analyses of moral terms: Jackson's moral functionalism, Boyd's and Brink's metaphysical (synthetic) naturalism, Smith's ethical rationalism, and dispositional ethical realism. Leaving the issue of their accuracy aside, I conclude that none of them is vulnerable to the Moorean type of objections.

It is one thing to show that we can provide a plausible analysis of the meaning of moral words, or the nature of moral properties, or the content of moral concepts and judgments (and thereby prove Moore wrong), but quite another to show that it is worth pursuing. It would be naive to expect (as Moore probably did) a correct analysis of moral terms to help us discover the fundamental principles of ethical reasoning or even settle substantive moral disputes. In the concluding section, I therefore argue for a less ambitious thesis: in order to make sense of, and explain, certain features of our commonsensical view of morality (as embedded in moral discourse, feelings and agency), a correct analysis of moral terms, facts and judgments is indispensable. There is, in such an analysis, enough potential for explanation, as Smith's analysis of moral rightness in terms of hypothetical action-desires clearly shows, as long as one is willing to give up a reconstructive account of the analysis of moral terms, and adopt a revisionary one instead.

 

Mark Lance, Georgetown University, USA
The Word Made Flesh: toward a neo-Sellarsian view of concepts and their analysis

A "left-wing neo-Sellarsian" about concepts maintains three things:

1. (Following Lance and O'Leary-Hawthorne) Meaning claims are normative claims and not descriptions of anything at all. To say what a word means is to say how it ought to be used, what rules ought to be followed in making use of it within the game of giving and asking for reasons.

2. (Following Sellars) Concepts are rule-governed in the sense that "a rule is an embodied generalization which, to speak loosely but suggestively, tends to make itself true. Better, it tends to inhibit the occurrence of such events as would falsify it." That is, part of the explanation of particular linguistic behavior must be that this behavior tends to conform to rules definitive of the concepts expressed.

3. (Following Heidegger, Dreyfus, and Brandom) Understanding is fundamentally practical, involved skill. In the usual case, we know how to do something without any awareness of rules for doing it at all, whether implicit or explicit. Further, even explicit knowledge of a rule or other assertion is to be understood as a skillful knowing one's way around the game of giving and asking for reasons, which is itself not a matter of following rules.

There are tensions among these three claims. It is the goal of this paper to resolve them. Doing so exhibits some of the power of the neo-Sellarsian approach to conceptual understanding, and recasts the purpose of philosophical analysis of concepts.

Kirk Ludwig, University of Florida, USA
A conservative modal semantics with applications to de re necessities and arguments for coincident entities

In this paper, I sketch an approach to giving a semantics for languages containing modal operators which is ontologically and epistemically conservative in the following sense. First, it introduces no abstract entities itself in the machinery required to give the recursive semantics: no possible worlds or states of affairs, no possible entities, nor anything else as finely individuated as intensions. Second, it exhibits all truths about what's necessary and possible as conceptual truths, our knowledge of which rests on our grasp of the concepts used to express them. Our knowledge of conceptual truths in turn is grounded in our capacities for thinking thoughts involving those concepts. The semantics accommodates both de dicto and de re modal claims, including claims in which a quantifier taking wide scope over a modal operator controls a variable inside the scope of that modal operator. I will develop the account for a fragment of a mathematical language with an eye to a certain interesting application of the account to some questions in metaphysics, namely, how we can make sense of de re necessities which are grounded in conceptual truths. I will also apply the account to another problem in metaphysics, namely, the appeal to the assumption that there are objects which differ only in their modal properties to establish the existence of wholly coincident yet distinct objects, such as a lump of clay and a statue which come into and go out of existence at the same time. If the account presented here is correct, we can make sense of essentialist intuitions and true de re modal claims without admitting the existence of modal properties, which undercuts this familiar argument for wholly coincident distinct objects.

 

Olga Markic, University of Ljubljana, Slovenia
Emergent Properties and Downward Causation

One of the principal motives to employ emergentist theory is to provide a solution to the mind-body problem that takes into account our common-sense belief that mental properties have causal powers and at the same time escapes the difficulties of interactionist dualism. In this paper I discuss six main features of traditional emergentism (S. Alexander, C. L. Morgan, C. D. Broad). I point out that their view, that mental properties are irreducible emergent properties with causal powers of their own, leads to downward causation and thus violates the principle of the causal closure of the physical domain. I then discuss a rival position, nonreductive physicalism, and argue that it is an unstable position which ends up either as reductive physicalism or as emergentism. I conclude that if there is no way to reduce mental properties, we have to accept downward causation and rethink the interactionist position.

 

Eugene Mills
Analysis and Understanding

The paradox of analysis concludes that the truth of a conceptual analysis entails its triviality. This conclusion suggests in turn that the search for analyses is unimportant but that its success is practically guaranteed. If this suggestion is correct, it reflects badly on those philosophers from Plato on who have devoted so much of their energy to analysis, and with so little success. I argue, first, that sentences providing analyses are often used with an implicitly meta-linguistic content, and where such uses are at issue the paradox of analysis can be blocked. I also grant, however, that such sentences have other important uses that are in no way meta-linguistic and that the paradox of analysis is sound where these uses are in view. Distinguishing triviality per se from "schematic triviality," I diagnose and explain away the appearance of unsoundness. I then argue that accepting the soundness of the paradox of analysis does not commit us to the intellectual sterility of conceptual analysis. The process of analysis can be hard and of uncertain outcome even though its result, when successful, is a triviality, and it can enrich understanding whether it succeeds or fails.

 

Nenad Miscevic
Conceivability and Conceptual Change

The paper argues that the kinematics of conceptual change can be rational without being dictated a priori by the concepts that figure within it, and criticizes Chalmers, Jackson and their followers, who have put forward the idea that our concepts and their intensions dictate a priori our rational reactions to empirical discoveries. Against them, it shows that naive commonsense concepts offer poor guidance to empirical inquiry, which needs more refined and mature ones. Such concepts are themselves the products of a great deal of streamlining that takes place under the impact of empirical discoveries and empirical theory building. Conceptual truths incorporated into such concepts are often themselves empirically founded. Therefore, knowledge of such truths is to a significant extent a posteriori knowledge. More centrally, the networks that determine such concepts, and thereby the range of conceivability associated with them, are themselves liable to dramatic change under empirical impact. The direction of change is often determined by two extra-conceptual factors: first, by the deep causal structure of the domain initially delineated by the concept, and second, by the general goals and norms of inquiry, most prominently by the goal of capturing such a structure and by the norms of empirical inquiry into causal connections. The constraints set by these two factors are sufficient to make the kinematics rational, without subordinating it to the a priori dictates of the initial concepts.

 

Volker Peckhaus, Universität Erlangen-Nürnberg, Germany
On Regressive Analysis

In 1907 Bertrand Russell gave a paper on "The Regressive Method of Discovering the Premises of Mathematics", pointing out that its object was "to explain in what sense a comparatively obscure and difficult proposition may be said to be a premise for a comparatively obvious proposition." He wanted to consider furthermore "how premises in this sense may be discovered, and to emphasize the close analogy between the methods of pure mathematics and the methods of the science of observation". Russell shows a scepticism towards this method of regressive analysis which seems to be typical of classical philosophy of science. The paper gives a historical survey of regressive analysis, from Pappus' definition of analysis and synthesis up to David Hilbert's definition of the axiomatic method as a procedure to set up axiomatic systems.
Hilbert's example shows best that the regressive analytical method is a method of discovery. As such, regressive analysis is not completely logically determined, but has elements of contingency, creativity and intuition.

Matjaz Potrc, University of Ljubljana, Slovenia
Paradox of Analysis Compatibilized

The paradox of analysis envisions two possibilities: analysis either succeeds or it doesn't. If it succeeds, though, it is trivial; but if it doesn't, it is simply false. So in neither case is analysis feasible. This result of the paradox of analysis is in patent discord with the success of analysis in our everyday practices and in philosophy. The secret of this success is that those kinds of analysis do not apply the high-standard criteria employed in the formulation of the paradox. The explanation of this puzzle argues for compatibilism between the high-grade and the lower-grade requirements put on analysis. While high-grade criteria come naturally to a detached philosophical approach, they do not work well with everyday practices. Some explanation of the cognitive roots of the tendency to embrace the higher criteria, and of their decisive role in establishing the paradox, is attempted.

David H. Sanford, Duke University, USA
McTaggart and Analytic Philosophy

McTaggart did more than argue that Time is unreal. I sketch a broader picture of his philosophy that mentions his three Hegelian books plus Some Dogmas of Religion, The Nature of Existence, and Philosophical Studies. I also present an historical narrative about McTaggart's influence on Russell and Moore. Through these two pupils and later colleagues, McTaggart's precision of thought and skill at rigorous argument set standards for analytic philosophy.

 

Marina Sbisa, University of Trieste, Italy
J.L.Austin's philosophical analysis and its implications

The paper will propose an interpretation of J. L. Austin's philosophical method (considered as a sample of philosophical analysis). Some objections to it will be discussed, among which are charges of conservatism and of philosophical irrelevance. It will be claimed that such charges are mainly due to misunderstanding, but that a full defence of Austin's methodological proposals must take into account the fact that they are themselves philosophically loaded (their non-conservatism relies on the availability of a distinction between a language and the assertions which can be formulated by means of it, and their philosophical relevance is linked with a conception of philosophy as not merely concerned with clarification, but as a descriptive enterprise yielding informative contents). The philosophical implications of Austin's method have never been explored thoroughly, and will be better understood once the label 'linguistic phenomenology', which he himself proposed for it, is taken seriously and a comparison with Husserl's phenomenology is attempted. Such a comparison makes it clearer which role Austin intended to assign to the investigation of language, and why. It also shows that it is not by chance that Austin, like Husserl and unlike Wittgenstein, was willing to grant informative value to philosophy.

Danilo Suster, University of Maribor, Slovenia
Postanalytic Metaphilosophy - the Case of Freedom

Discussions of free will (freedom, free action) have lately become discussions about meta-philosophical issues: the nature of argumentation and the role of conceptual analysis in general. Several proposals are examined, both those against conceptual analysis of freedom (subjectivism, scientism, pessimism, Pyrrhonism) and those in favor of it (traditional apriorism, paradigm-case analyses, new apriorism, postanalytic metaphilosophy). Terry Horgan (with D. Henderson: "What Is A Priori, and What Is It Good For?" and with G. Graham: "Southern Fundamentalism and the End of Philosophy") argues for a new general metaphilosophical position called postanalytic metaphilosophy. I raise some critical points connected with the prime example of postanalytic metaphilosophy, the philosophical analysis of freedom (free action). I question the distinction between opulent and austere construals of philosophical concepts. I also argue against contextualism, the doctrine that the question whether someone enjoys free will amounts to a different question depending on which perspective is contextually indicated.

 

Marta Ujvari, Budapest University of Economics, Hungary
Events as Tropes and Tropes of Substances

The view that events are tropes goes back to well-known historical antecedents such as Locke's 'modes' and Leibniz's 'individual accidents'. As understood currently, tropes are individuals, but they need not be accidental. For example, the bundle-of-tropes view of substance requires essential tropes to constitute the individual nature of substance. According to the standard view put forward by Keith Campbell, tropes are individual property instances or abstract particulars. However, recently Peter Simons has characterised tropes as 'concrete dependent particulars'. In my view the concreteness of tropes is not tenable, for various reasons; I shall elaborate this.

Now, given that tropes play a role with substances and also that events are tropes, a series of questions suggests itself. What account of substance is the best companion, to ensure coherence, to the trope account of events? Is there a fit between, say, the bundle-of-tropes view of substance and the trope view of events? Or, perhaps, does an unanalysed notion of substance fare better with events? The ambiguous ontological status of tropes manifests itself differently with substances and events: substance theories try to reduce the insubstantiality of tropes, while event theories can cope with it.

I have two aims here. One is to show that the trope account is sound, despite all the problems in the details. Its merit is salient against the background of pure universalist and pure particularist alternatives. The other is to point out that an indeterminacy in the modal status of tropes remains unresolved on the trope accounts. More precisely, the modal indeterminacy of the tropes of substance could be resolved, as Simons's theory shows, only at the price of postulating a static distribution of essential and contingent tropes between the hard core and the outer fringe.

Paul Weirich, University of Missouri, USA
Analysis in Decision Theory

Philosophical analysis takes various forms. It may be conceptual or linguistic, for instance. What unites the various forms of philosophical analysis is their a priori nature and their application to philosophical topics. Being a priori distinguishes philosophical analysis from empirical analyses, such as chemical analysis. Being about philosophical topics distinguishes philosophical analysis from other types of a priori analysis, such as mathematical analysis.

Since philosophical analysis need not involve a particular type of analysandum and analysans, it allows for great variety. It may break down a concept or linguistic expression into more fundamental concepts or expressions. But it may also break down a theory into fundamental axioms, or break down a quantity into its fundamental constituents. Thus an axiomatic account of deductive inference counts as a philosophical analysis, as does a utilitarian analysis of collective utility in terms of utilities for individuals.

My conception of philosophical analysis may be broader than the conception of twentieth-century analytic philosophers such as Russell. They may have taken philosophical analysis as a version of philosophical definition, that is, the construction of precise definitions of important philosophical concepts or terms, definitions of the sort Plato sought for knowledge and justice. But this narrow interpretation of philosophical analysis excludes much prominent analytic work in philosophy, such as the syntactic analysis of semantic concepts in first-order predicate logic. I extend the boundaries of philosophical analysis to include such triumphs of a priori analysis.

This paper reflects on the fruitfulness of philosophical analysis in decision theory, in particular, normative decision theory, that is, the theory of rational choice and ancillary matters such as rational degrees of belief and desire. It evaluates attempts to define key concepts, but also evaluates attempts to reduce theories to axioms and attempts to reduce quantities to their basic constituents.

One of contemporary decision theory's main projects is defining subjective probabilities and utilities in terms of preferences. A definition along these lines offers a reduction of the quantitative to the comparative, a significant gain in clarity. Representation theorems by Ramsey, Savage, Jeffrey, and others are part of this definitional enterprise. Their theorems show that, given satisfaction of certain axioms of preference and the choice of a unit and a zero point for utilities, there are unique probability and utility functions that make expected utilities align with preferences. Probability and utility may then be defined respectively as those functions.
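In symbols, the structure of such a representation theorem can be sketched as follows (this schematic formulation is mine, not quoted from the paper; it follows the familiar Savage-style pattern, with acts f, g mapping states to outcomes):

```latex
% If the preference relation \succsim over acts satisfies the axioms, then
% there exist a unique probability P over states S and a utility u over
% outcomes, unique up to positive affine transformation u' = a u + b (a > 0),
% such that for all acts f, g:
f \succsim g
\quad\Longleftrightarrow\quad
\sum_{s \in S} P(s)\, u\bigl(f(s)\bigr) \;\geq\; \sum_{s \in S} P(s)\, u\bigl(g(s)\bigr)
```

Fixing a unit and a zero point for utility, as the abstract notes, eliminates the remaining affine freedom, so probability and utility can then be read off uniquely from preferences.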

However, this project in conceptual analysis cannot offer decision theory the concepts of probability and utility it needs. Using preferences to define probability and utility makes preferences follow expected utilities by definition, and so strips probability and utility of their power to explain preferences.

Joyce (1999) holds that representation theorems use local constraints on preferences, such as transitivity, to justify the global constraint of being representable by expected utilities. This view supports only a weakened version of the traditional expected utility principle, however. Representability does not require the existence of probabilities and utilities, or, if they exist, expected utilities' governance of preferences. Decision theory needs a stronger version of the expected utility principle, one that guides preferences with expected utilities.

I take probability and utility as primitive concepts of decision theory, and take representation theorems to express methods of measuring probabilities and utilities, under the assumption that preferences follow expected utilities. This view preserves the expected utility principle's strength and probability's and utility's explanatory power.

Although probability and utility are conceptually primitive, they may be carefully introduced to make them clear. The appropriate method of introduction is not explicit definition, but what Horwich (1998) calls 'implicit definition.' An implicit definition does not present necessary and sufficient conditions. It does not involve conceptual analysis. Rather, it makes claims about probability and utility that enable people to acquire the concepts of probability and utility. Ramsey (1931) and Davidson (1984) attempt to reduce implicit definition to explicit definition by attending to what the introductory claims logically entail about the concepts they involve. But I argue that the process of implicit definition depends on human psychology as much as on entailments. This psychological dependency thwarts its reduction to explicit definition.

The implicit definition of probability and utility appeals to the theory of probability, including the theory of expected utility. I take probability and utility to be introduced by that theory, although I do not define them to be whatever they must be to make the theory true. I dispense with explicit definition. One advantageous consequence is that probability theory may be revised without revising the concepts of probability and utility it employs. This allows for continuity in the theory's development.

The main use of philosophical analysis in decision theory is, then, the a priori analysis of normative principles and their constituents. For example, bargaining theory axiomatically characterizes the rational outcome of a bargaining problem. Nash uses four axioms constraining solutions, such as Pareto optimality, to characterize his solution to bargaining problems. That characterization is an a priori analysis of rationality in bargaining, and so is a philosophical analysis. Although Nash's solution is controversial, its axiomatic characterization is an unchallenged accomplishment of analytic philosophy.
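For readers unfamiliar with the result alluded to here: Nash's four axioms (Pareto optimality, symmetry, invariance under positive affine rescalings of utility, and independence of irrelevant alternatives) jointly single out the solution that maximizes the product of the bargainers' gains over the disagreement point. Schematically (a standard formulation, not quoted from the paper):

```latex
% Nash solution for a two-person bargaining problem with feasible set F
% and disagreement point d = (d_1, d_2):
(u_1^{*}, u_2^{*}) \;=\;
\operatorname*{arg\,max}_{\substack{(u_1, u_2) \in F \\ u_1 \geq d_1,\; u_2 \geq d_2}}
\;(u_1 - d_1)(u_2 - d_2)
```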

Another piece of philosophical analysis in decision theory is the famous expected utility principle, refined lately in causal decision theory. It specifies the rational degree of desire to perform an action, given a rational assignment of probabilities and utilities to the action's possible outcomes. Being an a priori analysis of rational desire, it qualifies as a philosophical analysis.
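In one standard formulation (mine, not the paper's), the principle equates the rational degree of desire for an act A with its expected utility; causal decision theory, in Lewis's version, computes it over a partition of dependency hypotheses K so that the weights track causal rather than merely evidential relevance:

```latex
% Expected utility of act A, with {K} a partition of dependency hypotheses:
U(A) \;=\; \sum_{K} P(K)\, u(A \wedge K)
```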

My conclusion is that philosophical analysis in decision theory is more fruitful when it aims at an axiomatic characterization of rationality, or an analysis of rational utility assignments, than when it aims at an analysis of a concept. Philosophical analysis evolves to fit a niche where it may thrive. In decision theory it thrives not as conceptual analysis but as a priori analysis of principles of rationality and their constituents.

Select Bibliography

Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Oxford University Press.

Horwich, P. 1998. Meaning. Oxford: Oxford University Press.

Joyce, J. 1999. The Foundations of Causal Decision Theory. New York: Cambridge University Press.

Ramsey, F. 1931. The Foundations of Mathematics. R. Braithwaite, ed. New York: Harcourt.

Gene Witmer, University of Florida, USA
Conceptual Analysis, Circularity, and the Commitments of Physicalism

Frank Jackson has recently defended the claim that if physicalism is true, then we ought to be able to deduce, a priori, from a purely physical description of the world, its true psychological description. His argument depends on a rehabilitation of the traditional view that we can know a priori which truths are strictly necessary. I resist his conclusion, not by attacking that traditional view, but by showing how one can accept it and still resist the "a priori deducibility" thesis. In so doing I show how circular conceptual analyses can be made to do the work a physicalist needs them to do.