Tuesday, August 3, 2010

Re: Austere Realism

So I think I have a substantially clearer picture of H&P's theory in Austere Realism. But I've run into another bit of confusion, one even further outside my comfort zone than the last. I haven't read Dan Korman's review yet, so I'm not sure whether he addresses it. At any rate.

Before, I was confused about the semantic standards governing Indirect Correspondence. Claims made in a Direct Correspondence context are true when their ontic claims match the ontology of the world. IC claims are true in virtue of the way the world is, but they are not made true in any thoroughly systematic way. That is, there are no exceptionless rules that a sentence must satisfy in order to be true.

This is best understood by contrasting it with two competing views. The first is that claims like 'the sun moved behind the trees' and 'Israel invaded the Gaza Strip' are, strictly speaking, false. The view would then have to explain why such sentences differ from absolutely false sentences like 'the grass is blue'. Call this the Error Theory of Everyday Correspondence: much of what is said day-to-day is simply false.

The second view is the paraphrase strategy. True sentences that don't directly reflect ontology (i.e., whose ontic claims do not mirror ontology) are true in virtue of some paraphrase. 'The sun moved behind the trees' is true because it is a paraphrase of some complex conjunction of astronomical facts.

H&P's Indirect Correspondence theory is NOT either of these views. They hold that IC sentences are true in virtue of the way the world is, and that such sentences might not be paraphrase-able into anything governed by DC standards. The problem now is that we're left with a non-systematic way of evaluating sentences: our semantics cannot be given in "rule" form, and that's weird.

(An upshot of this, though, is that IC seems to provide a clean answer to problems of reference re: fiction. Claims about Sherlock Holmes reflect something about the world even if 'Sherlock Holmes' does not refer to any actual thing.)

They defend this by arguing that the mind also does not operate under systematic, exceptionless rules. That is (if I'm getting the terminology right), they deny computational cognitive science. The reasoning is absolutely beyond me; it runs on mathematical models that map possible thought processes(?).

Now, it seems to me that there are two serious objections to this view that are independent of arguments in cognitive science. The first is that, presumably, other organisms do operate under some form of computational, "rule-governed" cognition. Worms, or some similarly neurologically simple organism, might be an example. Given that humans evolved from some version of such an organism, how did we come to switch from computational to noncomputational cognition?

Second, if H&P are going to hold themselves to the claim that human cognition really is noncomputational, then it must be evolutionarily so. That is, humans (or an ancestor of humans) evolved into such a cognitive state. This means that noncomputational cognition is either (1) evolutionarily neutral, or (2) evolutionarily beneficial. [The actual picture is more complicated, but the simplified version should get the point across.]

If (1), then noncomputational cognition developed through some sort of genetic drift. This would imply that computationally-thinking humans should be possible. That seems odd, and I suspect a formal argument can be made out of it.

If (2), then H&P and similar noncomputationalists stand in need of an explanation. Why is it evolutionarily beneficial? And why is it (presumably) not beneficial for other organisms?
