
PROBABILITY AND GOODNESS-FOR-TRUTH

Suppose you are interested in assessing beliefs solely by the aim for (responsible) truth

(vs. pragmatic happiness, morality, etc.).

Suppose this leads you to favor tautologies and avoid self-contradictions.

Ok, but that’s not much.

Suppose it also leads you to favor simplicity/plainness/unity.

One theory (bunch of beliefs) is plainer than another if it has fewer basic “planks”,

if it “zips” down to fewer basic beliefs (that, e.g., entail the whole bunch).

Anyway, that might not seem like much either.

What about probability of truth? That can seem like a purely truth-based target.

On the most relevant sense of “p is probable/probably true”,

this means the same as “p is plausible/plausibly true”,

which means “bel(p) is applause-ible for truth”,

or “bel(p) is good for truth”

or, for short, “bel(p) is goodtruth”.

POSSIBILITY AND PROBABILITY

How can we get premises about probability, about degree of goodnesstruth?

Here are four premises that seem to be made true by the basic structure of a responsible search for truth:

(Cg) It’s goodtruth to close off (reject) a candidate belief(p) if it’s outscored by a rival. Or, equivalently:

(Lb) It’s badtruth to leave open (not reject) a candidate belief(p) if it’s outscored by a rival.

(Cb) It’s badtruth to close off (reject) a candidate belief(p) if it’s not outscored by a rival. Or, equivalently:

(Lg) It’s goodtruth to leave open (not reject) a candidate belief(p) if it’s not outscored by a rival.

For the badness/goodness to be strictly based on the truth-aim, interpret (Cg)-(Lg) by the following rules:

A candidate belief should be about a possibility, so that the truth-aim is what makes (Cb) true.

Two rivals should be about incompatible possibilities, so that the truth-aim is what makes (Cg)-(Lg) true.

The set of rivals should be exhaustive, so that the truth-aim is what makes (Lg) true.

Probability = goodnesstruth derived from leaving open possibilities from incompatible and exhaustive sets.

So suppose there is a family of incompatible and exhaustive possibilities; exactly one is true: die roll = 1/2/3/4/5/6.

Minimal probability—belief that leaves open no possibility (or closes off each): belief(die roll = 7)

Maximal probability—belief that leaves open each possibility (or closes off none): belief(die roll < 7)

Intermediate probability—belief that leaves open some possibilities, closes off other possibilities:

belief(die roll = 2), belief(die roll even), belief(die roll odd), belief(die roll = 1 or 2 or 5 or 6), etc.

IF each possibility is equally good to leave open (or equally bad to close off), THEN we can quantify …

Probability(p) = number of possibilities left open by p / total number of possibilities.
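To make the quantification concrete, here is a minimal Python sketch (the helper name indifference_prob is mine, not from the text); it reproduces the die-roll examples above:

def indifference_prob(possibilities, leaves_open):
    # prob(p) = (# possibilities left open by belief(p)) / (total # possibilities),
    # assuming each possibility in the incompatible & exhaustive family
    # is equally good to leave open.
    return sum(1 for w in possibilities if leaves_open(w)) / len(possibilities)

die = [1, 2, 3, 4, 5, 6]
print(indifference_prob(die, lambda r: r == 7))             # 0.0  minimal: leaves none open
print(indifference_prob(die, lambda r: r < 7))              # 1.0  maximal: leaves all open
print(indifference_prob(die, lambda r: r % 2 == 0))         # 0.5  die roll even
print(indifference_prob(die, lambda r: r in (1, 2, 5, 6)))  # 0.666...  roll = 1 or 2 or 5 or 6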

But WHY BELIEVE that each possibility is equally good to leave open (or equally bad to close off)?

Because belief(they’re equally good to leave open) is plainer than, e.g., belief(they’re unequally good to leave open).

WHICH POSSIBILITIES ARE EQUIPROBABLE?

Indifference Rule: If each of some (incompatible & exhaustive) possibilities is equally good to leave open,

prob(p) = number of the possibilities left open by p ÷ total number of the possibilities.

Die Roll: Plainness favors “each of (roll=1; …; roll=6) is equiprobable”. [so prob(roll is 1) = 1/6]

How do we know how to divvy up the possibilities?

Why not “each of (roll=1; roll = non-1) is equiprobable”? [so prob(roll is 1) = 1/2, not 1/6]

Because “each face is equiprobable” is plainer than “some faces are more probable than others”

and plainer than “each schmace is equiprobable” (where “schmace” = face 1, or non-face-1).

Ditto against “each of (roll=1 on weekend; roll=1 on weekday; roll=2; …; roll=6) is equiprobable”. [so 2/7, not 1/6]

Pair of Coin Tosses: Which is equiprobable: each of (HH; HT; TH; TT), or each of (2H; 1H1T; 2T)?

“each toss-pair is equiprobable” is plainer than “each toss-pair-total is equiprobable”,

but “each toss is equiprobable” is plainer than either of those ... and favors (HH; HT; TH; TT).
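A quick sketch of how the two candidate families disagree, computing prob(two heads) under each (the code is illustrative only; it presupposes nothing beyond counting):

from fractions import Fraction
from itertools import product

# "each toss-pair is equiprobable": the family (HH; HT; TH; TT)
pairs = list(product("HT", repeat=2))
print(Fraction(sum(1 for p in pairs if p == ("H", "H")), len(pairs)))  # 1/4

# "each toss-pair-total is equiprobable": the family (2H; 1H1T; 2T)
totals = ["2H", "1H1T", "2T"]
print(Fraction(1, len(totals)))  # 1/3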

What if there are an infinite number of possibilities? Then prob(p) = x ÷ ∞, which may be undefined or zero.

Try dividing the possibilities into a finite number of “matching” subsets—matching according to one’s concepts.

Picking random positive integer n: try “each of (N is even; N is odd) is equiprobable”.

Random point d in circle: try “each of (d in noon-1 slice; d in 1-2 slice; …; d in 11-noon slice) is equiprobable”.

But these are plainer: “each of (N=1;N=2;…) is equiprobable”, “each of (d=point1; d=point2; …) is equiprobable”

Yes, but these are compatible with even/odd or hour-slice equiprobability—not rivals to them.

Random cube factory: makes cubes randomly, with face areas between 1 and 9 in².

Try “each of (face area is 1-2; face area is 2-3; …; face area is 8-9) is equiprobable”. [so prob(area is 1-4) = 3/8]

Why not “each of (edge length is 1-2 in; edge length is 2-3 in) is equiprobable”? [so prob(area is 1-4) = 1/2]

Why not “each of (volume is 1-2 in³; …; volume is 26-27 in³) is equiprobable”? [so prob(area is 1-4) = 7/26]
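Before weighing these, a sketch that reproduces the three answers by cutting each quantity into unit-width “equiprobable” bins (the helper name and the midpoint test are my implementation choices):

from fractions import Fraction

def prob_area_1_to_4(lo, hi, to_area):
    # Unit bins on [lo, hi]; count the bins whose midpoint maps to a face area in [1, 4].
    bins = [(a, a + 1) for a in range(lo, hi)]
    hits = sum(1 for (a, b) in bins if 1 <= to_area((a + b) / 2) <= 4)
    return Fraction(hits, len(bins))

print(prob_area_1_to_4(1, 9, lambda area: area))            # 3/8   area bins 1-2, ..., 8-9
print(prob_area_1_to_4(1, 3, lambda edge: edge ** 2))       # 1/2   edge bins 1-2, 2-3
print(prob_area_1_to_4(1, 27, lambda vol: vol ** (2 / 3)))  # 7/26  volume bins 1-2, ..., 26-27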

Plainness favors length over area/volume? Only if length is more basic than area/volume, and it’s not clear that it is.

Plainness favors whichever is in premises? Maybe; that would at least explain our reactions to the cases.

Plainness favors none? Then probability is not assignable by Indifference Rule, which is fine (& not paradoxical).

UNEQUAL PROBABILITIES AND DISTRIBUTIONS

Indifference Rule: If each of some (incompatible & exhaustive) possibilities is equally good to leave open,

prob(p) = number of the possibilities left open by p ÷ total number of the possibilities.

Loaded die: if there is general evidence of bias, i.e., against “each of (roll=1; …; roll=6) is equiprobable”, even if none of (roll=1; …; roll=6) outscores any other, no probabilities are assignable w/o further evidence.

One relevant source of evidence is about statistical distributions. A sample S of Fs divides Fs into three “S-chunks”: examined Fs (in S), unexamined Fs (not in S), and all Fs.

Suppose n% of the examined Fs (in sample S) are G. Given only this belief, “all S-chunks of Fs are n% G” provides a plainer theory than

“Some S-chunks are n% G, but some S-chunks aren’t”.

(But not always, if any other available premises indicate the sample S is biased/insufficient/etc.)

And this plainer theory entails that n% of the unexamined Fs (not in S) are G. [so n% of all Fs, too]

In this way you could have evidence, say, that 3/8 of all actual rolls of the loaded die are rolls of 1.

How could you use this evidence to reach the probability of a single roll?

Suppose m of the n Fs are Gs. Suppose o is an F, so (o=F1; ...; o=Fn) are the possibilities.

By default, “(o=F1; …; o=Fn) are equally goodtruth to leave open” is a plainer theory than

“(o=F1; …; o=Fn) are unequally goodtruth to leave open”.

[Because “all are x” is plainer than “some are x” and “some are not x”.]

[But not always, if any of (o=F1; …; o=Fn) outscores any other (so o isn’t “random”).]

And this plainer theory entails that belief(o is G) leaves open m of the n equally goodtruth possibilities.

Which (by the Indifference Rule) entails prob(o is G) = m/n.

So given that 3/8 of all actual rolls of the loaded die are rolls of 1, prob(o is an actual roll of 1) = 3/8.

And for the other possibilities, plainness favors “each of (roll=2; …; roll=6) is equiprobable”. [so each has prob 1/8]

Distribution Rule (overrides Indifference Rule), GIVEN [m of the n Fs are G]:

IF [o is an F] and [for any 2 Fs, prob(o=F1) = prob(o=F2)], THEN [prob(o is G) = (# G Fs) ÷ (# Fs) = m/n].
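A worked sketch of the Distribution Rule on the loaded-die numbers above (n_rolls is a hypothetical total, chosen only so the ratios come out exact):

from fractions import Fraction

n_rolls = 8000                  # hypothetical number of actual rolls (the Fs)
rolls_of_1 = 3 * n_rolls // 8   # given: 3/8 of all actual rolls are rolls of 1 (the Gs)

prob_roll_1 = Fraction(rolls_of_1, n_rolls)  # Distribution Rule: prob(o is G) = m/n
print(prob_roll_1)                           # 3/8

# Remaining probability split plainly over faces 2-6 ("each equiprobable"):
print((1 - prob_roll_1) / 5)                 # 1/8 for each of roll=2, ..., roll=6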

DISTRIBUTIONS AND PROPENSITIES

Indifference Rule: If each of some (incompatible & exhaustive) possibilities is equally good to leave open,

prob(p) = number of the possibilities left open by p ÷ total number of the possibilities.

Distribution Rule (overrides Indifference Rule), GIVEN [m of the n Fs are G]:

IF [o is an F] and [for any 2 Fs, prob(o=F1) = prob(o=F2)], THEN [prob(o is G) = (# G Fs) ÷ (# Fs) = m/n].

But the Distribution Rule only works given the actual statistical distribution of G among all the Fs.

If every actual F has been examined for G-ness, so o is selected from the sampled Fs, this Rule applies.

Other more limited samples seem biased/insufficient for extrapolation of probability to all Fs.

E.g., Distribution Rule limits probabilities to “grain” determined by the exact number of Fs: (0/n, 1/n, 2/n, 3/n, …).

As soon as even one more F is added, the possible grains change (0/(n+1), 1/(n+1), 2/(n+1), 3/(n+1), …).

Also, probabilities—posited in physics or math—may have irrational values (like 1/π or 1/√2), not equal to any m/n.

Start with actual statistical evidence … m/n of examined Fs are G. Instead of extrapolating that m/n of all Fs are G,

Suppose something causes m/n of examined Fs to be G. {by plainness, if common cause for multiple samples}

Suppose this something causes the fraction of examined Fs that are G, as examined Fs increase,

to approach a (possibly irrational) limit of i (0<=i<=1),

and (given grain limitations set by the number of examined Fs) m/n is the ratio (0/n, 1/n, 2/n, …) closest to i.

Suppose this causation operates locally, case by case: {by plainness, if similar causes act locally}.

So, there is a property each F has, whose instances together cause the ratio of G Fs to approach i as Fs increase.

Then by definition, call this property of each F its “propensity to be G” or “G-propensity”, with a “strength” of i.

Propensities cause distributions: (each F’s G-propensity = 1) → (all Fs are G); (propensity = 0) → (no Fs are G);

(propensity = i) → fraction of G Fs approaches i as Fs increase. So propensities cause probabilities …

Propensity Rule (overrides Indifference Rule, but overridden by Distribution Rule):

IF [o is an F] and [for any 2 Fs, prob(o=F1) = prob(o=F2)], THEN [prob(o is G) = each F’s G-propensity].

Further complications affect how single G-propensities work in the absence of other G-propensities,

and how multiple G-propensities combine.
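A minimal Monte Carlo sketch of the propensity picture. Treating a G-propensity of strength i as an independent chance i per case is my modeling assumption, and i = 1/π is just an illustrative irrational value. The observed ratio m/n is confined to the grain (0/n, 1/n, …) yet approaches i as the examined Fs increase:

import math, random

random.seed(0)
i = 1 / math.pi  # irrational propensity strength (illustrative)

for n in (10, 100, 1000, 10000, 100000):
    # Simulate n examined Fs, each independently G with probability i
    # (the local, case-by-case causation posited above).
    m = sum(random.random() < i for _ in range(n))
    print(f"n={n:6d}  m/n={m/n:.5f}  |m/n - i|={abs(m/n - i):.5f}")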

LOTTERY PARADOX

(D2) P; Q ⊢ P&Q

(I2) most of the Fs are Gs ~> (it’s good to believe) an arbitrarily selected unexamined F is G

(I2a) most lottery tickets lose ~> this (arbitrarily selected) unexamined one loses.

So (in an unfinished lottery) we should believe for each ticket that it will lose.

By (D2), we can deduce that all tickets will lose.

Option 1: Never draw deductions as in rule (D2). (But seems ok to conjoin until you get to half the tickets.)

Option 2: Stick to probability beliefs - only believe that p has probability x; never just that p.

(But can’t you outright believe what your own name is? And don’t you sometimes disbelieve things that you believe are probable? And can’t you—or a child—believe w/o using a concept of probability?)

Option 3: Try IBB/IBE competition instead of formal inductive/deductive rules:

Suppose: Pt: there are n tickets. P2: no 2 will win. Pm1: ticket #1 may win, …, Pmn: ticket #n may win.

Pb: Each of Pm1-Pmn would be (equally) bad to close off. {From lottery fairness, or plainness by default.}

Believing ticket #k will lose closes off Pmk, so is bad. But believing ticket #k will win closes off the n-1 other Pm’s, so is even worse.

So add beliefs: L1: ticket #1 will lose, …, Ln: ticket #n will lose.

Believing no ticket will win (i.e., “ticket #1 & … & ticket #n will lose”) closes off all n of the Pm’s. But believing some ticket will win closes off none of Pm1-Pmn.

So keep believing of each that it won’t win, but don’t use conjunction!

You should contradict yourself: believing of each ticket that it will lose, while believing that some will win.

But isn’t everything deducible from contradictory premises?

Yes! That’s partly why you should stop deducing things! Instead, use IBB/IBE!

E.g., sometimes draw deductions as in rule (D2), but never because of rule (D2).

(E.g., “#i & #j & #k will lose” closes off 3 Pm’s, but “#i or #j or #k will win” closes off n-3 Pm’s. If n>6, ok.)
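A toy tally of this Option 3 scoring, assuming n=100 tickets (closed_off is my name for “how many of Pm1-Pmn a belief closes off”):

def closed_off(n, losers=(), winners=()):
    # "ticket #j will lose" closes off Pm_j; "one of these will win" closes off
    # Pm_j for every ticket j NOT in the disjunction of winners.
    closed = set(losers)
    if winners:
        closed |= set(range(1, n + 1)) - set(winners)
    return len(closed)

n = 100
print(closed_off(n, losers=[7]))               # 1:   "ticket #7 will lose"
print(closed_off(n, winners=[7]))              # 99:  "ticket #7 will win" -- far worse
print(closed_off(n, losers=[1, 2, 3]))         # 3:   "#1 & #2 & #3 will lose"
print(closed_off(n, winners=[1, 2, 3]))        # 97:  "#1 or #2 or #3 will win"
print(closed_off(n, losers=range(1, n + 1)))   # 100: "no ticket will win"
print(closed_off(n, winners=range(1, n + 1)))  # 0:   "some ticket will win"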

CREDENCE w/o CONCEPT OF PROBABILITY

Why can’t we reduce credence(p)=x to bel(prob(p)=x) ?

Because credence(p)=x doesn’t involve a concept of probability.

Credences work like beliefs about probabilities, but automatically, and less reflectively.

Similar pairs:

doubt (p) w/o concept of p’s being false

hypothesize(p) w/o concept of p’s being possible

value(p) w/o concept of p’s being valuable

memory(p) w/o concept of p’s being past

expect (p) w/o concept of p’s being future

fear(p) w/o concept of p’s being dangerous

amusement(p) w/o concept of p’s being funny

surprise(p) w/o concept of p’s being surprising

outscore(p,q) w/o concept of p’s outscoring q

These are all states that act like bel(p is …).

They are evaluated (/regulated) in the same ways.

CREDENCE AND ESTIMATION

Why can’t we reduce credence(p)=x to bel(obj-prob(p)=x) ?

Because credence(p)=x is an estimate, not a guess, about objective probability (/truth).

Find a woman on earth. Guess (exactly) how many children she will have. 0? 1? 2? 3? …

Now estimate (closeness counts): 2.47.
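A sketch of the guess/estimate contrast on a made-up distribution of outcomes (the numbers are purely illustrative): the best exact guess is the most probable value, while the closeness-counts estimate is the mean, which may be a value no individual case exactly matches:

# Hypothetical distribution over number of children, for illustration only.
dist = {0: 0.20, 1: 0.18, 2: 0.30, 3: 0.17, 4: 0.10, 5: 0.05}

best_guess = max(dist, key=dist.get)            # maximizes prob of an exact hit (the mode)
estimate = sum(k * p for k, p in dist.items())  # minimizes expected squared error (the mean)

print(best_guess)  # 2     -- "a miss is as good as a mile"
print(estimate)    # 1.94  -- closeness counts; no one has exactly 1.94 children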

A coin is either double-headed or double-tailed, but you have no clue which.

What is the objective probability of heads? Either 1 or 0.

This is the degree of goodness of ruling out tails given all historical facts.

What is (/should be) your subjective probability of heads? ½.

This is (/should be) the degree of goodness of ruling out tails given your historical beliefs.

bel(p has subjective prob x) = bel(my /best/ estimate of p’s objective prob /truth/ is x)

Joyce: For full beliefs, “a miss is as good as a mile”. For credences, “closeness counts”.

But closeness counts for full beliefs, too … when these are beliefs about best estimates.

Also when beliefs have impacts on other beliefs.

Copernicus: circular orbit; Kepler: elliptical orbit

Kepler believes many more other truths (e.g., distance from earth to sun varies).

CONSERVATISM AND CONDITIONING

Quantitative probability assignments can be justified by a core of qualitative theoretical virtues.

Bayes Theorem (synchronic):

Conditional Probability: prob(H/D) = prob(H&D) / prob(D), so pr(H/D) * pr(D) = pr(H&D) = pr(D&H) = pr(D/H) * pr(H).

So pr(H/D) = pr(D/H) * pr(H) / pr(D)

Bayes Rule (diachronic):

Conditioning: After learning D for certain, new-prob(H) = old-prob(H/D). So new-pr(H) = old-pr(D/H) * old-pr(H) / old-pr(D).

Conditioning just masquerades as a theory of rational inference. You can infer any old H from any old D as long as you first form a high-enough prob(H/D). Probability theory just warns you not to contradict yourself when doing so.
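The two rules in a few lines of Python (all the probability values are hypothetical):

old_pr_H = 0.30          # old-prob(H)
old_pr_D_given_H = 0.80  # old-prob(D/H)
old_pr_D = 0.40          # old-prob(D)

# Synchronic (Bayes Theorem): pr(H/D) = pr(D/H) * pr(H) / pr(D)
old_pr_H_given_D = old_pr_D_given_H * old_pr_H / old_pr_D
print(old_pr_H_given_D)  # 0.6

# Diachronic (Conditioning): after learning D for certain, copy the old conditional.
new_pr_H = old_pr_H_given_D
print(new_pr_H)          # 0.6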

Maybe synchronic contradiction is bad, but what’s bad about diachronic contradiction?

Conditioning rests on a principle of conservatism in traditional qualitative method.

But conservatism may not be a virtue at all. (How is it different from sheer dogmatism?)

If it is, it’s usually treated as the lowest-ranked traditional virtue, applying only as a tie-breaker when there are no other virtues in play.

E.g., suppose one starts out with a very low old-prob(D), and a certain old-prob(H/D).

Then one discovers D.

Why shouldn’t one reason as follows, instead of copying old-prob(H/D) to new-prob(H)?

Holy crap, D happened? I guess my old-prob(D) was way too low. So I guess lots of my old-prob(X)s were way off.

I should rethink prob(H/D), and not just conserve my old one.

PLAINNESS AND FRUGALITY

If we combine Plainness with any other credit, such as Conservatism (or Testability, etc.), we get a less plain method-belief, Conservatism&PlainnessMethod, so PlainnessMethod would continue to support itself over those rivals. But Conservatism&PlainnessMethod would undermine itself, because it has a plainer rival, PlainnessMethod. (These are rivals because they each claim to be the complete method. Plainness is a Jealous and Angry Credit and tolerates no rivals or companions.)

Perhaps PlainnessMethod is the only known example of a self-supporting method that is in the running for being complete and fundamental. We should search some more, but pending the discovery of another one, Renée has no other hope of achieving her aims than adopting PlainnessMethod.

But try: Frugality&PlainnessMethod. Is this less plain than PlainnessMethod?

Not if Frugality and Plainness can be reduced to a more basic belief.

They can: Aspect-of-Meaning-with-Lower-Bound-but-no-Upper-BoundMethod.

The various Convergence-to-nMethod beliefs don’t self-support because they are all equally plain. But are they all equally (plain and) frugal?

Yes: they each posit one (evaluative) fact, and don’t posit any descriptive facts.

Plainness seems to outrank Frugality, because:

Renée starts with no beliefs about how many facts (things/kinds) are actually posited by theory T. Do parts count? Prerequisites? Heraclitus: never the same river twice?

Multiple basic beliefs are treated in thought as if they may posit different facts (things/kinds). They can in fact converge on positing the same facts (things/kinds), but if so this is lucky. Reducing structure reduces the number of facts (things/kinds) one “risks” positing.

CANDIDATE BELIEFS AS DATA

Where can Renée (or you) get premises from, without merely lucky prejudice?

Renée can only form beliefs about some necessities and possibilities, and “I think p” beliefs about these beliefs.

But where p is contingent (and not itself of the form “I think q”),

Renée doesn’t initially form the belief that p and so doesn’t initially form the belief that she believes p.

However, she may without prejudice initially form the candidate-belief(p),

and so the genuine belief that she has a candidate-belief(p). [aka, that she has an “appearance” as of p]

Ultimately, beliefs about candidate-beliefs are her only source of (contingent) “data” or premises.

She can also keep track of which candidates she forms in which order, e.g., which she gets “spontaneously”,

versus which others she gets because she aims to generate competitors to the spontaneous ones.

Her key hope for going beyond this data is to find distinctive groups (patterns) of such data

that can be unified/explained by theoretical candidate beliefs … candidates she can accept via plainness.

If she generates all possible candidate beliefs at once, or generates each deliberately, on purpose,

then perhaps she will not be able to find such groups (patterns) to explain.

But if some of her generation processes are partial and spontaneous instead, perhaps she will.

She hopes to explain why she most frequently or spontaneously considers those candidate beliefs vs. others,

why her spontaneous candidate beliefs are similar and different in such-and-such ways.

Maybe her spontaneous patterns of candidate beliefs are products of reliable perception,

and maybe they’re products of unreliable seances or Tarot or dreams or demonic meddling …

May the best (e.g., most plain) package of hypotheses and data win.

Similarly for you, the safest premises are your beliefs about your candidate-beliefs, not random other beliefs,

e.g., “I have a hunch that p” or “p seems/appears true” instead of “p” or “I believe p”.

But if you could find some (responsibly acceptable) symptom of truth for old beliefs, not only from scratch,

maybe you could use this as a shortcut or a way to work backwards towards scratch.

FROM CONSISTENCY TO TRUTH?

Let B be some set of old/existing beliefs (e.g., that you are considering using as premises).

Consider these competing beliefs about the set B:

(TB) Beliefs in B are all true.

(FB) Beliefs in B are not all true.

Now suppose that there are no contradictions in B (they are “consistent” or “coherent” in one sense).

Then by default, adding (TB) is simpler (plainer) than adding (FB), because only (TB) explains the consistency.

If they’re all true, they can’t contradict one another. If some are false, they could.

However there are other rivals to (TB) that would explain the consistency. E.g.,

(OB) Beliefs in B never overlap in topic. (So there’s no chance for them to contradict.)

[This is compatible with (TB), so the real rivals are (OB)&not-(TB) vs. (TB)&not-(OB), but set this aside.]

However if one wanted to be in a position to argue for (TB) one could make sure to pick B so that it contains beliefs that do overlap in topic, that answer the same questions. If these agree rather than contradict, they will be “confirmations”. Still, consider

(PB) When beliefs in B overlap in topic, they are always formed by precedent.

In other words, once a question is answered (a belief is formed), repeats of the same question are settled by automatically repeating the old belief (they are not “independently formed”).

If one wanted to argue for (TB) over (PB) based on B’s consistency, one could:

Shun resting on precedent: Aim never to form beliefs by precedent (= by entailment, contradiction, conservatism).

Focus on appearances: Limit B to spontaneous candidate-beliefs (appearances, hunches), instead of beliefs.

Then aim never to form spontaneous candidate-beliefs by precedent, and proceed as above.

Allow routine sources of inconsistencies: Beliefs formed in dreams and by surprise routinely ignore precedent and contradict old beliefs. If these are allowed but still don’t contradict, that works against (PB).

Leave no time for consistency tests: If one’s new beliefs are formed just as quickly/easily even when the number of old beliefs increases, this is a sign that one is not (say, unconsciously) checking for consistency.

Favor widespread belief: Pick B so it contains many (candidate-)beliefs from many different minds. Since (PB) would need to posit many connections among these minds, (TB) can win on simplicity (parsimony).
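A toy Monte Carlo of the consistency-to-truth argument, on a model of my own devising that builds in the strategies above (overlap in topic, independent formation, no precedent-following): K questions are each answered twice; staying consistent is rare unless the answers track truth, and given consistency, truth becomes likely:

import random
random.seed(1)

K, TRIALS = 5, 100_000  # 5 overlapping questions, each answered twice, per trial

def run(p_correct):
    consistent = consistent_and_all_true = 0
    for _ in range(TRIALS):
        truth = [random.randint(0, 1) for _ in range(K)]
        # two independently formed answer sets (no precedent-following)
        a = [t if random.random() < p_correct else 1 - t for t in truth]
        b = [t if random.random() < p_correct else 1 - t for t in truth]
        if a == b:  # no contradictions between the repeats
            consistent += 1
            if a == truth:
                consistent_and_all_true += 1
    return consistent / TRIALS, consistent_and_all_true / max(consistent, 1)

print(run(0.5))   # pure guessing: consistency rare (~3%), and rarely means all-true
print(run(0.95))  # truth-tracking: consistency common (~60%), and usually means all-true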

WHAT ARE “PLAUSIBLE” PREMISES?

So in an argument, you may wish to pick as premises old/existing beliefs B = {B1, …, Bn, …} that have the following features, which implicitly enable one to argue (/guess) from the consistency of the premises to their truth.

Crucial Virtue for Premises—based on logic alone:

Consistent: There are no (explicit, nonmomentary) contradictions within B. [Vice: Inconsistent.]

Major Virtues for Premises—based on securing at least some gain in simplicity or responsibility:

Fair: Bn is potential common ground between the competitors (doesn’t entail/contradict them). [Vice: Unfair]

Innocent: Challenges to Bn are rebutted. [Vice: Suspect—challenges ignored.]

Useful Virtue for Premises—based on maximizing simplicity or responsibility:

Hedged: Bn is about spontaneous candidate-beliefs (appearances, hunches, seemings),

or is expressed in similarly hedged terms (seems/appears/is-a-hunch). [Vice: Speculative.]

Normal Virtues for Premises—based on generating consistency data:

Candid: Bn is picked “randomly” from relevant old beliefs, w/o an eye to avoiding tension in B.

[Vice: Cherry-picked—old beliefs contain relevant tensions “conveniently” ignored in B.]

Rechecked: B is repeatedly tested, so is overlapping in topic (or picked candidly from old beliefs that are).

[Vice: Unchecked, Once-checked]

Minor Virtues for Premises—based on blocking the competing explanation that one is using precedent:

Widespread: Bn is accepted in many different minds (that don’t collude). [Vice: Idiosyncratic, clique-y.]

Earned: Bn was formed w/o aim for precedent (entailment, contradiction, conservatism). [Vice: Unearned]

Forceful: Bn was formed surprisingly or w/o expectation (or picked candidly from old beliefs that were).

[Vice: Forceless—Bn would not have been formed had it been surprising or unexpected.]

Spontaneous: Bn would have been formed as fast/easily no matter how many old beliefs one had.

[Vice: Deliberate—Bn is consciously inferred vs. reflexive]

Bare Virtues for Premises—based on strengthening the path from consistency to truth:

Resilient: Bn recurs at distant times (when forgetfulness usually sets in). [Vice: Brittle—Bn wouldn’t recur]

Abundant: B is large in number (or picked candidly from a large number of old beliefs). [Vice: Sparse]

A STRATEGY FOR JUSTIFYING LOGIC

We use basic logic (e.g., modus ponens) in every rational argument, so it seems that no noncircular rational argument could be given that basic logic is good to use. So what justifies basic logic over imaginable alternatives?

An ideal search for truth maximizes the amount of truth reached, minimizes the amount of falsity reached, maximizes the amount of responsibility you have for reaching truth, and minimizes the amount of good luck you take advantage of in the process.

You need some good luck, e.g., you need to exist, and be capable of beliefs. What makes believing “I think, I am” special is not that you have some source of self-knowledge, but that this is justified as a working assumption in conducting a search for truth.

Another piece of good luck you need is to have some basic built-in ways of changing your beliefs. (“Basic” = not dependent on further reasoning.)

So suppose you have a basic way M of reaching B2(new belief) from B1 (old belief). If B2 adds something substantive to B1, then you’re just lucky if B2 is true. But if B2 is a (simple) entailment of B1, then you’re taking advantage of no such luck.

And if (the nature of meaning is such that) your basic use of M is what makes B2 be an entailment of B1, then your responsibility is increased (you are responsible for B2’s truth by being responsible for B2’s meaning). So M’s being a logical deduction would enable you to maximize responsibility and minimize luck.

In similar fashion I claim that what marks off the elements of basic logic is that reasoning in accordance with them is (part of) the minimum needed to have any possibility of maximizing responsible truth.

NON-EPISTEMIC ACHIEVEMENTS

Renée might reach the truth (the whole truth, and nothing but the truth) without satisfying her stricter aim of finding the truth versus being given it.

It would be no proud achievement for Renée to “find” truth only because a guardian angel miraculously interferes with her mind, implanting true beliefs and preventing her from accepting or from generating false candidate beliefs. Similarly, any search method stabilizes on truth if on Judgment Day God reveals all—or better, compels belief in all.

Renée might even have full control over her mind as she uses it to reach truth, without satisfying her aim. If the benign angel ensures truth for Renée’s beliefs by changing reality to fit them, Renée would still not be finding truth versus being given it.

Even if Renée has her own angelic powers over reality, this might not satisfy her. Maybe she is so driven by her aim of reaching truth that she laboriously reshapes the world to make her beliefs true. Or maybe accepting candidate beliefs causes them to be true directly, without the intervention of desires or angelic labor. In all these cases although the true beliefs might be proud achievements of hers, they would not be proud epistemic achievements.

What makes a belief an “epistemic” achievement, for Renée? At a minimum, she must control its existence, and its truth must be constituted by something existing independently of it.

But not even this is enough. Suppose Renée is a wishful thinker—believing things simply because she wishes them to be true—but also suppose that her wish is reality’s command, as in Genesis. Then she will reach truth, because her own wishes will be a common cause of both her beliefs and what her beliefs are true of. But she still will not be finding truth, versus creating it.

EPISTEMIC ACHIEVEMENTS

It might be tempting to exclude such common-cause relations by requiring that her beliefs depend on what they are true of, rather than vice versa. But this is too strict. For example, Renée aims to have a true belief today about what will happen tomorrow, but she doesn’t aim for the existence of her belief today somehow to be dependent on what will happen tomorrow. A belief that is an epistemic achievement need not depend on what makes it true. The missing requirement lies elsewhere.

 Consider two other ways for her to reach truth, that seem closer to being epistemic achievements of the sort she’s aiming at. Renée might have a faculty of perceiving reality and perceiving it whole, and this faculty might compel her trust. Or the success of her method might otherwise be due to the painstakingness or the scrupulousness of her innate or inculcated belief-forming endowment.

She would deserve some credit for reaching true beliefs, since such endowments are part of her. But still there is a similarity to the guardian angel; she is merely epistemically lucky to have such splendid help. Same in the wishful-thinking case; she is merely epistemically lucky that the world conforms to her wishes.

Minimally, Renée wants to find truth by conducting her reason—i.e., her epistemic or theoretical or belief-ish reason. She aims to have her stock of beliefs after each step be dependent on her stock of beliefs at or before that step, and all in the service of the pure aim of searching for truth. If any true belief depends only on beliefs of hers, it might be a full-fledged proud epistemic achievement of hers. But if a true belief depends on anything other than her beliefs, her epistemic credit is dissipated. If a true belief is independent of her other beliefs, she believes blindly, with exactly the prejudice she aims to avoid.

SPONTANEOUS PROCESSES

One way for Renée to avoid prejudice—and the only one that occurs to me—is for her to avoid prejudging. Any judgment (belief, theory, intuition) she starts with threatens to violate her ground rules of proceeding step by step and without prejudice.

So she will not be able to reach beliefs at all, whether true or false, unless she has some processes for producing beliefs that are not a result of prior beliefs. Call these processes and the beliefs they produce “spontaneous”.

How can Renée accept spontaneous beliefs without prejudice? Maybe her spontaneous beliefs are products of reliable “perception” or some ability to fathom necessary truths, but maybe they’re products of unreliable seances or Tarot or dreams or demonic meddling. A method that automatically accepts all spontaneous beliefs seems prejudiced, as does a method that automatically discriminates some from others for acceptance, but a method that automatically rejects all of them prevents Renée from reaching any beliefs at all.

The formation of Renée’s spontaneous beliefs is out of her cognitive control (at a time). They are prejudiced if their content covers things beyond what she can control.

The only thing in her control is what she does with them, how she uses them—or at least part of that. So the only way she can exercise control over whether they are prejudiced is for their content to be controlled by the part of their use that she controls.

So one workaround—and the only one that occurs to me—is the idea that Renée can accept beliefs for various purposes. She should accept all spontaneous beliefs, but only for very limited purposes, using only their existence or their form in reasoning, and nothing further which might be false or unreliable about their content. She should treat such a belief as being the belief that it exists (with this form).

BASIC COGNITIVE PROCESSES

In addition to spontaneous processes Renée needs to have processes according to which beliefs after a step depend on beliefs at or before the step. Let’s call these “cognitive processes”. A functioning cognitive process consists of three parts: preexisting “input” beliefs (maybe degreed) that trigger the process, not necessarily preexisting “output” beliefs (maybe degreed) that are produced or maintained, and the preexisting “channel” mechanisms by which the inputs produce the outputs.

 The channels of some possible cognitive processes are partly or wholly constituted by reasoning—by the operation of “smaller” cognitive processes or further beliefs (beyond the channel’s inputs). For example, your beliefs about the weather forecast may cause your beliefs about the weather via a channel consisting partly of intermediate reasoning about other weather forecasts.

But at each step in Renée’s search there must be some cognitive processes whose channels are not even partly constituted by reasoning. Since reasoning from beliefs is constituted by the operation of cognitive processes, no reasoning would take place if all the cognitive processes at a given step are constituted by reasoning from beliefs.

So there must be “basic” cognitive processes in addition to any “derived” ones. These are also out of Renée’s cognitive control (at a time).

It might be that each basic (as well as derived) cognitive process is subject to having its channel modified over time by reasoning. This would require additional processes by which beliefs can reshape basic processes. I’ll suppose that Renée’s basic processes at each step are cognitively impenetrable over time. This makes her reasoning much easier to study—her basic processes remain the same, or if they change this is not controlled by reasoning. But it is also a realistic assumption for us (and our robots).

MINIMAL BASIC PROCESSES

Since Renée can’t modify the (current) channels of her basic processes by reasoning, she is forced to live with them (at each step). So mightn’t these basic processes constitute prejudice on her part? They would if they inappropriately relate input and output beliefs that should not be related.

So the only way for them not to be prejudiced is for them to constrain the content of the beliefs they relate. If B1 causes B2 by a basic process, this has to shape the contents of B1 and B2 so that [necessarily,] B2 is true if B1 is.

So the processes themselves have to be given formal rather than contentful construals.

Which basic processes does she have? You might guess that she has basic processes akin to our own basic processes, and you might guess that ours are related to all or some of what we call “logic”, to memory storage and recall, to evolutionarily-based epistemic biases, etc.

On the other hand, Renée is a particularly hardy cognizer, perhaps even an epistemic mutant, and to avoid prejudice, it’s best if as little as possible is packed initially into her basic cognitive processes.

So let’s suppose Renée has a minimum package of cognitive processes, the minimum necessary to have even a minimal chance at satisfying her epistemic aims.

We’ll start her off only with what is necessary to maximize responsible truth in some possible world.

TRANSITION PROCESSES

A “transition” process is one in which the input produces a different (degree of) belief as output. Since null-input processes do not create beliefs by reasoning, and since null-transition processes do not create new beliefs, Renée cannot get new beliefs from reasoning unless she has basic transition processes.

What does this mean for meaning? B1 has to mean what B2 does (or maybe “more”).

Since Renée aims to stabilize on true beliefs, she needs some processes in order to stabilize. Stabilizing a search means continuing to accept what passes the tests, continuing to be poised to use them, and not ceasing to accept them (say, by dying).

What sort of process could continue a belief, without changing its content? Renée needs what I will call “null-transition” processes, which relate pairs of beliefs in such a way that either plays output to the other’s input. If B1 is input and B2 is a candidate belief, B2 is output (with the same degree as B1). If B2 is input and B1 is a candidate belief, B1 is output (with the same degree as B2).

For a process to be a way of stabilizing, it must not itself involve reasoning, so it must be a basic process. Also, to avoid prejudice Renée must have available a way to stabilize on any candidate belief that passes her tests.

 The only way for null-transition processes not to constitute prejudice on Renée’s part is if B1 and B2 have the same content.

Her search would not be satisfied by accepting falsehoods as well as truths. But she’d accept every candidate belief unless she has processes according to which some beliefs (or null inputs) cause the absence (or reduced degree) of others. This is prejudiced unless B1 and B2 have contrary contents.

REASONING TO NEW METHODS

Suppose B1 (about p1) does not lead to B2 (about p2) by any chain of basic processes. The only way for Renée to reach a new method of reasoning according to which B1 does lead to B2, and to reach it by reasoning (i.e., by a new belief Bnew), is if she has a basic process that takes B1 and the new belief Bnew and produces B2.

To avoid prejudice, she must leave room for such a new belief to relate any input and output beliefs. (Or the exclusion of output beliefs.)

Does this presuppose modus ponens?

Nothing yet requires Bnew to be the belief that if p1 then p2.

Maybe it could instead be the belief that I (actually) exist and if p1 then p2. Or the belief that if p1 then I (actually) exist and p2.

But maybe these beliefs would not maximize Renée’s responsibility; she would be relying (not only on the luck of her existing, which she needs anyway, but also) on the luck of the extra clause (about her existing) in Bnew incurring no extra risks of falsehood.

Maybe if p1 then p2 is the minimally lucky (minimally prejudiced) interpretation of Bnew?

Or maybe it is more prejudiced than: it’s good as a means to responsible truth for bel(p1) to cause bel(p2).

(Not “bel(p1) does cause bel(p2)”, because then Bnew could never itself be added as an epistemic achievement.)

Could they be the same content? It seems that one could bel(if p1 then p2) without having concepts of beliefs, goodness, truth, etc. But (as for degrees-of-belief vs. probability beliefs) we could throw hyphens into the content, or settle for the idea that the rules for conditional beliefs reduce to the rules for good-means-to-truth beliefs.

UPSHOT

To maximize responsibility, we (like Renée) need a basic way to be able to reason to new methods, potentially linking any two beliefs (about p and q) by a linking-belief.

This should mean: “it’s good-for-truth for bel(p) to cause bel(q)” ~ “if p then q”.

This justifies our reasoning as follows: p + (if p then q) ⊢ q.

Similarly, we need a basic way to have a belief (about p) exclude other beliefs, by applying a concept to them.

This should mean: “it’s good-for-truth for bel(p) to cause nonbel(q)” ~ “if p then not-q”.

TRUE CONTRADICTIONS?

Is it certain that contradictions can’t be true? Quantum Mechanics

“Even something as fundamental as the principle of noncontradiction can be open to questioning and revision. …

“If time is continuous … there is no very next instant after any given one.

An object that changes … from red to black

either will have a last instant when it is red but no first instant when it is nonred,

or it will have a first instant of being nonred but no last instant of being red. …

[ R …R …R …R …B …B …B

R… R… R… B… B… B… B ]

“Yet there seems to be no special reason to describe the change in one of these ways rather than the other. …

Therefore [to avoid such arbitrariness] it might be suggested that … there is both a last instant of one state and a first instant of the other.

There would be an instant when the object is both red & black, that is, both red & not red.

And in the case of an object that ceases to exist, there would be an instant when the object both exists and does not exist.

“These contradictions hold only for an instant, though; it is no wonder that we do not notice them. … If we accepted such delimited [instantaneous] and motivated exceptions to the principle of noncontradiction, Logic would not crumble, Reason would not totter.” (Robert Nozick, Invariances, pp. 303-4)

IDEAL CONTRADICTIONS?

Walt Whitman, Song of Myself (abridged)

The Wolverine sets traps on the creek that helps fill the Huron; (line 280)

And such as it is to be of these, more or less, I am. (line 321)

I am … a Hoosier, Badger, Buckeye; (line 330)

Of every hue and caste am I, of every rank and religion; (line 338)

Washes and razors for foofoos—for me freckles and a bristling beard. (line 458)

What blurt is this about virtue and about vice? (line 469)

Evil propels me, and reform of evil propels me—I stand indifferent; (line 470)

Walt Whitman am I, a Kosmos, of mighty Manhattan the son, (line 492)

Do I contradict myself? (line 1321)

Very well then I contradict myself; (line 1322)

(I am large, I contain multitudes.) (line 1323)

IDEAL CONTRADICTIONS?

“Do I contradict myself?

Very well then I contradict myself,

(I am large, I contain multitudes.)”

Could you persuade WW that (extended, more-than-instantaneous) contradictions aren’t ok, even though he aims ideally to believe contradictions and doubt tautologies?

Try this … WW believes that contradictions are ok … and so believes that contradictions are ok and that contradictions are not ok …

and so (among other things) admits that contradictions are not ok.

So you win. If he says some other stuff on his own time, who cares?

Reason for caring … If contradictions are not ok, then contradictory claims are competitors (rivals), and you really want “contradictions are not ok” to outcompete the rival “contradictions are ok” … but WW accepts both, symmetrically.

[This is one of the reasons for requiring CDs to be competitions; the aim is simultaneously to accept some belief and to avoid accepting another.]

(One asymmetry: WW can’t similarly get you to blurt out that contradictions are ok.)

PRAGMATIC VALUE

Pragmatism (about value of beliefs/assertions): One should form beliefs that help reach general targets (/goals), with no special role for any belief-specific target such as truth.

Egopragmatism: … one’s general targets …

Sociopragmatism: … society’s/human targets …

Theopragmatism: … God’s/Great Leader’s targets …

Ethicopragmatism: … ethical targets …

Evolution vs. Truth: Paul Churchland, Richard Rorty

Paul Churchland on Pragmatic Method

Pat Churchland: “Boiled down to essentials, a nervous system enables the organism to succeed in the four F’s: feeding, fleeing, fighting, and [fornicating]. The principle chore of nervous systems is to get the body parts where they should be in order that the organism may survive. … Improvements in sensorimotor control confer an evolutionary advantage: a fancier style of representing is advantageous so long as it is geared to the organism’s way of life and enhances the organism’s chances of survival. Truth, whatever that is, definitely takes the hindmost.” (Journal of Philosophy, 1987)

TRUE HAPPINESS

Repeat … Pragmatism (about value of beliefs/assertions): One should form beliefs that help reach general targets (/goals), with no special role for any belief-specific target such as truth.

Objection: “Pragmatists need true ‘means-end’ beliefs (about which beliefs help reach the general targets other than truth). So truth is a target even for pragmatists.”

Responses from the pragmatist:

(1) “We’re offering a standard (for being a good belief) rather than a procedure (for getting good beliefs). You could meet our standard by luck or by being assisted rather than by (true) means-end beliefs, for all we care.”

(2) “Even if we did need means-end beliefs, why do we need to favor ‘true ones’ rather than ‘ones that (indirectly) help reach the general targets’?”

ACT VS RULE PRAGMATISM

Suppose truth has instrumental value, as a means to other targets.

Does it also have intrinsic value?

If not, shouldn’t we score beliefs not by truth but by how well they lead to the other targets?

Repeat … Pragmatism (about value of beliefs/assertions): One should form beliefs that help reach general targets (/goals), with no special role for any belief-specific target such as truth.

Compare “What’s a good belief?” with “What’s a good joke?” …

“Joke Pragmatism”: One should form jokes that help reach general targets, with no special role for any joke-specific target such as humor.

“Screwy Pragmatism”: One should form screwdrivers that help reach general targets, with no special role for any screwdriver-specific target such as driving screws.

Why are such limited evaluations valuable (those based not on overall value but on specific targets like truth, humor, how well something drives screws, etc.)?

ACT VS RULE PRAGMATISM

Act (/Case-by-case) Pragmatism: One should form a belief if one’s doing so would help reach general targets in one’s particular circumstances.

Rule (/Principle) Pragmatism: One should form a belief if everyone’s doing so (together) would (on balance) help reach general targets, despite varied circumstances.

Some pragmatic reasons to adopt rule “believe/assert what’s true”:

(1) Generality: Truth tends to help reach any old target.

(2) Robustness: Ability to convince others who don’t share targets.

(3) Predictability: Ability to secure general trust, given the expectation that one’s targets shift over time.

(4) Practice: Avoiding truth in rare falsehood-pays cases might make it harder to reach targets in truth-pays cases.

(5) Holism: False beliefs tend to weigh against other true ones.

(6) Efficiency: Not investigating/calculating effects on general targets can save time/effort.

IGNORANCE AND FALSITY

If truth is a target for belief, is complete truth a target/ideal, or is there some limit?

Is avoiding all falsehood a target/ideal?

Are there possible truths you wouldn’t want to believe, even if they were true?

Possible falsehoods you would want to believe, even if they were false?

Why would you want (not) to believe them?

Ideally, would you want to have these wants?

CORRESPONDENCE AND COHERENCE TRUTH

Truth = Correspondence: John Searle Ayn Rand

A belief describes some thing(s) S as being some way(s) W,

and if S is W (aka, if it is a fact that S is W),

the belief corresponds to S’s being W (aka, to the fact that S is W).

Spot the Flub: Arthur Fine Narrator

The claim that Truth = Correspondence does not assume (or deny) that things or facts or truth or correspondence are “out there” (in the alleged “external” physical world vs the alleged “internal” mental world).

Nor does the claim assume (or deny) that anyone ever “discovers” anything, vs getting everything incorrect or merely irrationally guessing.

It isn’t a deep (metaphysical) claim about what reality is made of, or a deep (epistemological/psychological) claim about what we can know.

It’s just a (semantic/linguistic) claim about the meaning/use of the English noise “true”, or some similar family of noises and associated ideas.

Truth = Coherence: Richard Rorty

Typically, a belief coheres with another belief if (negatively) they are not inconsistent,

and (positively) if one of them explains or makes sense of the other.

VALUE OF CORRESPONDENCE VS COHERENCE

Let’s not start by asking about the meaning of the noise “truth”.

It’s more important to ask whether correspondence and coherence are valuable.

Kinds of targets: (1) ethically good things; (2) things believed to be good; (3) things desired; (4) natural or artificial functions; (5) things that promote (1)-(4).

Hard to argue that correspondence or coherence meet (1).

Their value seems not to depend on (2) or (3)—kids or beasts without concepts of correspondence or coherence can’t believe them good or want them.

Try (4) S has a (naturally or artificially) designed function to reach T. [E.g., in the simplest case, S was copied from certain prior S-like things that reproduced, despite obstacles, because they brought about (past analogs of) T]

Ask: How are beliefs used with systematic payoff in reproducing/proliferating beliefs/believers?

Beliefs are typically used like maps are used:

We use maps to alter our motions, given fixed destinations.

We use beliefs to alter our behaviors, given fixed desires.

When things go according to plan, beliefs [like maps] get us to our desired ends [destinations] because beliefs [maps] correspond to events [places], not because beliefs [maps] cohere with other beliefs [maps].

We mainly use beliefs as if they correspond, not (what would this even mean?) as if they cohere together.

Correspondence (to a coherent world) may explain value of coherence (of beliefs), but not v.v.

Correspondence, unlike coherence, seems clearly instrumentally valuable.

FEASIBILITY OF CORRESPONDENCE VS COHERENCE

Rorty’s rhetorical question: “What more do you want [than coherence]?”

Resonates not because of doubt that correspondence would (also) be valuable,

but because of doubt that correspondence is easily available (though it would be valuable).

Correspondence, unlike coherence, seems hard, at least for questions about the “external” world.

Coherence may seem easier to control or use or “check from the inside”.

Coherence seems equally easy/hard for both questions about the external world and questions about the internal world.

But for some questions about the “internal” world, correspondence is easier.

Belief-belief correspondence vs. belief-belief coherence: Here’s a fact about S: S believes that p.

What belief of S would correspond to that fact?

S’s belief that S believes that p. Easy!

Unlike … What belief of S would cohere with S’s belief (that p)??

Kinds of coherence: Two beliefs don’t contradict each other?

Two beliefs are loosely similar?

One belief helps confirm another (how? And confirm what? That it corresponds)? …

Correspondence, unlike coherence, seems hard for the external world,

easier for the internal world (coherence seems equally easy/hard for both).

In this way correspondence rather than coherence is a better fit for truth.

CONTRIBUTIONS TO CORRESPONDENCE

Limitations of Correspondence: DM Mellor Hilary Putnam

(Again, let’s first ask about what’s valuable, not about the meaning of the noise “truth”.)

Some beliefs [unlike maps] have only uses for which correspondence is completely useless.

[We use logical symbols/numerals/ethical terms in operating on beliefs, in counting coins, in getting others to do what we want. These beliefs are tools, but they can work flawlessly even without number-objects, logic-events, ethical-properties. So they aren’t even aimed at corresponding (e.g., don’t have a function to correspond).]

But some of those beliefs seem valuable. [if p, p; 2+2=4; torturing babies to death just for fun is bad]

Instead of just “Correspondence is valuable” or “Coherence is valuable”, try:

For maplike beliefs, correspondence is valuable, and for nonmaplike beliefs, coherence is valuable.

On the other hand: nonmappy uses of beliefs seem parasitic on mappy uses. We use logical/math/ethics beliefs mainly to evaluate or alter ordinary mappy beliefs (or the things they map onto).

[Nonmappy beliefs typically work like map legends/rules.]

A system of only nonmappy beliefs would be a mere “glass bead game”, like a legend with no maps. Which may not even be possible: what would make them more belief-like than hammers/yawns/kisses/desires/skills/etc.? [If not for the maps, the marks on the paper edge wouldn’t be a legend.]

So try: For maplike beliefs, correspondence is valuable, and for nonmaplike beliefs, (purely positive) contribution to correspondence (of maplike beliefs) is valuable.

Or more briefly: For any belief, (purely positive) contribution to correspondence is valuable.

PRAGMATIST TRUTH

Truth = Usefulness: William James

Robert Nozick on pragmatism, correspondence, coherence, and relativism: “[O]ur original interest in truth was instrumentally based. … Truth [is] something that is useful for a very wide range of purposes, almost all. … Was William James right, then, in saying the truth is what[ever] works? We might see James as depicting the value of truth, not its nature. Rather than hold that truth simply is ‘serviceability’, we can construe truth as that property, whatever it is, that [predominantly and systematically] underlies and explains serviceability. If one property [does so for] different subject matters, [it] will have to be very general. … The various theories of truth—correspondence, coherence, etc.—then would be … about [what] underlies and explains serviceability. … It might be distinct properties that make statements serviceable to members of different groups.” [“Truth and Relativism” in Invariances, pp. 44-55]

EPISTEMIC HAPPINESS

Suppose truth (whole truth, and avoiding falsity) is a target for beliefs (/assertions) – intrinsic or instrumental.

Then ideally one would have a desire for truth,

e.g., ideally one would be happy at (apparently) achieving truth, and would be happy at (apparently) seeking truth (versus mere expedience, pragmatism, etc.).

Call this “epistemic happiness”.

Here are some beliefs that (if adopted by default, at the start, in the absence of overriding considerations of logic/probability/simplicity) would increase epistemic happiness:

Excluded middle: A candidate belief(p) is either true or false. If you assume it’s neither, it’s harder to take yourself to seek/achieve truth about p.

Excluded opposites: A candidate belief(p) is not both true and false. If you assume it’s both, it’s harder to take yourself to seek/achieve avoiding falsity about p.

Possible falsity: One can have beliefs that are false. If one assumes one can’t, it’s harder to feel pride or relief (and so, happiness) at avoiding falsity.

Possible ignorance: One can believe a mere part of the whole truth. If one assumes one can’t (i.e., it’s all or nothing), it’s harder to feel pride or relief at reaching the whole truth rather than merely a part.

TRUE VS. TRUEFER RELATIVISM

A candidate belief(p) is either true or false. [Excluded middle]

A candidate belief(p) is not both true and false. [Excluded opposites]

One can have beliefs that are false. [Possible falsity]

One can believe a mere part of the whole truth. [Possible ignorance]

Instead of “true/false”, why not “true/false-for-(believer)-S”, e.g., “the belief that p is true-for-me but false-for-you”?

What does “p is true-for-S” mean? S believes p?

What does “p is false-for-S” mean? S believes not-p?

Oddities about truefer and falsefer:

[No possible ignorance-fer-S] S can’t believe less than the whole truth-for-S.

[No possible falsity-fer-S] S can’t have beliefs that are not true-for-S.

[No excluded opposites-fer-S] S can have contradictory beliefs (while drunk, distracted, w/o realizing, etc.). So even if p is true-for-S, not-p can also be true-for-S [S might believe both], so p can also be false-for-S.

[No excluded middle-fer-S] For most p, S has no belief about p, & no concept of p at all. So normally, even if p is not true-for-S, not-p is also not true-for-S [S might believe neither], so p is also not false-for-S.

[Also: p = it’s true that p = it’s true that it’s true that p = etc. But S bels p ≠ S bels (S bels p) ≠ etc.]

Moral: Quit saying “truefer” and “falsefer”. Quit it. Figure out whether you mean “believed”/“unbelieved” or “true”/“false”. And say that instead.

PERFECT-BELIEVER RELATIVISM

Even if truth isn’t just truthfer me or truthfer you,

could truth = truthfer the majority of believers?

or truthfer the experts?

or truthfer some (merely imaginary, but possible) perfect (ideally rational) believers

in perfect conditions?

Truth = Consensus: Richard Rorty Jon Stewart

PBR: x is true = x would be believed by perfect believers (in perfect conditions).

Rival: truth is something independent of, or more basic than, perfect belief.

Maybe PBR is simpler or more testable than its rival, or posits fewer mysterious things …

But let b = A perfect believer exists (and is in perfect conditions).

b would be believed by perfect believers in perfect conditions …

so if PBR is true, b is true! [IS, not WOULD BE]

If PBR, a perfect believer exists [now, somewhere].

So PBR is surprisingly more complex than its rival.

Try instead:

PBR2: x is true = x would be believed by perfect believers but would not be due to them.

But then b is impossible, guaranteed false no matter what. Because b would have to be due to the perfect believers.

And if the perfect believers are impossible, it seems there’s nothing they “would” believe.