In a number of parallel universes, some of the permutations of me have had the privilege of meeting repeatedly with various versions of Richard Feldman, Alvin Goldman, Robert Nozick, Alvin Plantinga, Dr. Jefferey Watson and others at a party in honor of Edmund Gettier hosted by Laurence BonJour and Susan Haack. Feldman had arrived with his friends A.J. Ayer, William Alston, Michael Clark and Keith Lehrer. Typically, the conversation goes something like this …
AJA: The standard view of knowledge is having the right to be sure. Tonight, I would like to earn that right by assuring you that this is so. If p is true, S is sure that p is true, and S has the right to be sure that p is true, then we can all agree that S has knowledge of p.
EG: Unfortunately, I can think of cases where that is not true. Think of Smith and Jones. The day they met, they were interviewing for the same job. Smith was sure Jones would get the job, as Jones bragged of knowing the owner, giving ten convincing reasons, one for each coin he had in his pocket, so Smith concluded that the man who got the job would surely have ten coins in his pocket. This turned out to be true, but not because Jones got the job. To Smith’s surprise, Smith landed the job himself, not realizing he had ten coins in his own pocket. p was thus true, S was sure that p was true, and S had the right to be sure that p was true, but S didn’t have knowledge of p. The same applied the time that Smith and Jones worked together and Jones kept bragging about his Ford. Smith quipped, “Jones owns a Ford or Brown is in Barcelona.” Sure enough, Brown was in Barcelona, but Jones didn’t own a Ford. It was just a rental.

MC: S didn’t have the right to be sure that p in either of those cases, but he would have, so long as all of S’s grounds for believing p were true. So just add this as a fourth condition and S knows that p.
RF: Just the explicit grounds, or a whole chain of grounds? What if one ground in a chain of grounds is false? What if some attain greater certainty? Do those with less certainty negate the grounds with greater certainty? Work on that.
KL: As I was discussing with my friend Paxson, what you really need for S to know that p is a no-defeaters clause as your fourth condition: there is no true proposition t such that, if S were justified in believing t, then S would not be justified in believing p. (Feldman 34)
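Set out schematically, the no-defeater analysis just described amounts to the following (a sketch in notation of my own; the four-clause layout is an assumption, not a quotation from anyone at the party):

```latex
% Defeasibility analysis (Lehrer & Paxson), sketched:
% S knows that p if and only if
\begin{align*}
&\text{(1) } p \text{ is true;} \\
&\text{(2) } S \text{ believes that } p; \\
&\text{(3) } S \text{ is justified in believing that } p; \\
&\text{(4) there is no true proposition } t \text{ such that, were } S \\
&\qquad \text{justified in believing } t,\ S \text{ would not be justified in believing } p.
\end{align*}
```

Clause (4) is what rules out the coin case and the Ford-or-Barcelona disjunction: in each, there is a truth (Jones didn’t get the job; the Ford was rented) that would undermine S’s justification.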
RF: Nice try, Keith. But don’t you remember the radio that Smith knew was off but was playing “Girl, You’ll Be a Woman Soon”? There was also that Tom and Tim Grabit case and their lying mother. Sight gets defeated by lies. My own more modest proposal is to add that S’s justification for p does not essentially depend on any falsehood. Admittedly, this isn’t completely clear, but it’s clearer than Clark’s no-false-grounds idea, which I rather like otherwise. We have knowledge so long as each premise is true. And there is always some epistemically basic belief that every piece of knowledge ultimately rests on.
LB: Either something shows you evidence for its truth somehow or it doesn’t. There are no epistemically basic beliefs for things to rest on. Evidence fits together coherently, like a web, with everything else we know.
KL: I’m with Laurence. You say that “a basic statement must be self-justified and must not be justified by any non-basic belief. Second, a basic belief must either be irrefutable or, if refutable at all, it must only be refutable by other basic beliefs. Third, beliefs must be such that all other beliefs that are justified or refuted are justified or refuted by basic beliefs.” (Lehrer, Keith. Knowledge. Oxford: Clarendon Press, 1974, pp. 76–77; Huemer 408) And our friend Fred Will would add words like “infallible,” “indubitable” and “incorrigible” to this (Huemer 402). One’s senses may be deceived in various ways.
WA: Maybe Descartes wanted that level of certainty, and that would be ideal, but I just want justification to be sufficient for belief. Mediately justified beliefs trace back along branches to immediately justified beliefs. If not, then the premises of a belief are unsound. Looped and infinite chains, or chains that terminate in unjustified beliefs, would fail to constitute sufficient grounds for belief.
RF: I think we can all agree that a priori knowledge is too limited to be practical, but we certainly need evidence for justification.
SH: How about foundherentism?
AG: Nothing wrong with evidence, but if you want to truly satisfy Gettier here, S truly has knowledge if and only if the fact of p is causally connected in an appropriate way with S’s believing p. Here’s another case. Let’s say Gerald falls down the steps and hits his head, giving him amnesia and an assortment of strange beliefs, none of which are true, but included in that random set of beliefs is the notion that he has just fallen down the steps. His belief is justified because he has the memory, and it is true. But the belief was not causally connected in the right way. Therefore, it would not be knowledge. Edmund would be honored. The same holds true if there is a more complex causal chain. Now if you see a tree in front of you, the cause is your eyesight. Or I might remember a tree, so the cause of my belief that there was a tree would be my memory. Or there might be a more complex causal chain, such as Smith seeing sawdust and wood chips where there once was a tree, remembering the tree and a notice he saw from the city saying they would cut it down. You might see these as evidence for belief, but they are also causes of belief. How I come to believe matters more than why.
RF: Well and true, but how do you deal with generalizations? How, for instance, would you know that all men are mortal, if you have not seen every man, past, present, or future, to cause such a belief? Also, what if you lack some information in a causal chain? Suppose Edgar believes Allan Poe is dead: he knew Allan had taken a fatal dose of poison for which there is no antidote, and enough time had passed that he believed Allan was dead. But Allan actually died of a heart attack brought on by worry, not of the poison. Edgar would be wrong about the causal chain in Allan’s death even though he was right that Allan was dead. He would then be justified in believing Allan was dead, and it would be true, but he would not possess an appropriate causal connection.
AG: True. You would call it knowledge. I wouldn’t, unless the instances of the generalization are themselves causally connected – there is something to be said for that. Or perhaps your standard of what constitutes knowledge is lower than mine.
RF: Well then, consider the twins, Trudy and Judy, whom Smith met. Judy comes to him one day and he believes it’s Judy, even though he knows about Judy’s twin sister, Trudy. Without good evidence, Smith assumes Judy is talking to him, when it could have been Trudy. You would say Judy caused the belief. It would be true. It would not be justified.
AG: I agree, it would not be justified. I thought about this problem for over a decade and realized what was needed was a reliable process of belief formation. Just seeing someone and being rash about it would not count as a reliable process of belief formation. ‘If S’s belief in p at t results from a belief-independent process that is reliable, then S’s belief in p at t is justified’; and ‘if S’s belief in p at t results from a belief-dependent process that is conditionally reliable, and the beliefs the process operates on are themselves justified, then S’s belief in p at t is justified.’ (Feldman 95) This, by the way, is why sensory experience is a justified basis for belief – it is a highly reliable process. Calling it an epistemically basic belief is unnecessary.
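Goldman’s two clauses can be compressed into a sketch, writing B(S, p, t) for “S believes p at t” and J(S, p, t) for “S is justified in believing p at t” (the shorthand is mine, not Feldman’s or Goldman’s):

```latex
% Process reliabilism, sketched as a base clause and a recursive clause:
\begin{align*}
&\text{Base: } B(S,p,t) \text{ results from a reliable belief-independent process} \\
&\qquad \Rightarrow\; J(S,p,t). \\
&\text{Recursive: } B(S,p,t) \text{ results from a conditionally reliable} \\
&\qquad \text{belief-dependent process, and each input belief is justified} \\
&\qquad \Rightarrow\; J(S,p,t).
\end{align*}
```

Perception would fall under the base clause; inference and memory, which operate on prior beliefs, under the recursive one.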
RF: If you don’t have a body but are really a brain in a vat causing all sorts of beliefs, then what process applies? Or what if you only look at a broken clock at the right time by pure coincidence every time you look at it, unaware that it is broken?
LB: I concur with Richard on this one. Consider Norman, the clairvoyant. He was always right. Suppose one day Norman believes the president is in New York City for no reason other than a hunch obtained by his clairvoyance, and he’s right. That belief would not be justified. I’ll admit it would be a reliable process, but it would fail to cohere with any evidence Norman would otherwise have.
RF: Yes, evidence. You’ll need to spell out the process better, Alvin.
Just then David Hume enters the room.
DH (in a Scottish accent): The clairvoyant’s process is mere numerical inference. It only predicts the past. Backgammon, anyone?
AG: No thanks, Hume. Well, I have made some distinctions, like the difference between a hasty scan and a detailed observation, or the qualitative difference between seeing nearby objects and distant ones – process types.
RF: Not good enough. Each category still gets treated as though every token example has the same reliability as a process.
JW: I don’t think Feldman gets it. These types need to be general to be all-embracing.
AG: We might say, “if S’s belief in p at t results from a belief-independent process token whose relevant type is reliable, then S’s belief in p at t is justified.” (Feldman 98) How’s that?
RF: Consider an umpire at a baseball game. Some calls are easy. Others are tough. The process is the same.
JC: No, it’s not. An umpire making a tough call scrutinizes the play far more closely than he would an easy one. That scrutiny is another process type.
RF: You people just think up examples to give you the results that you want. There’s no general theory here. This violates the Same Evidence Principle. Evaluation supervenes on evidence.
RN: I appreciate that you strive for high and consistent standards, Richard, but I have to agree that causal chains might improve on reasons alone for justification. Method certainly matters. So does process. And what you want is not just any process type, but something more reliably reliable. The only way to get this is through a process that actually tracks the truth. I’ll admit you do need a good method, but that method also has to be used in the right way. You ought to ask yourself: if things had been different, would you still have known? You need to track counterfactuals. S knows p only if S believes p, p is true, S used method M to form the belief in p, and, when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p. Do this and you can’t go wrong.
RN: Consider the broken clock you mentioned, looked at only at lucky times: the believer got it right but didn’t know. Why? Because the cause was right but the evidence was unjustified. The premise, you would argue, was that the clock worked, but it didn’t. So for you, as a foundationalist, there would be no knowledge, and Alvin’s causal theory fails too, as you said. But had the observer tracked the truth using the clock method, he would have learned within seconds that the clock was broken. He would then have found a different method more suitable for determining the correct time, or simply confessed he didn’t know what time it was and been correct in that belief instead. There are many examples like this – it could be a broken thermometer instead. Knowers are truth trackers.
RF: Well, that would solve Edmund’s cases just as neatly as Alvin’s solution would.
RN: Indeed, and there are many other such cases of lucky knowledge. For instance, Ms. Black, working in her office, gets up to stretch – she looks out the window – and just happens to see a mugging on the street and becomes a witness. Her method is luck. What kind of a reliable process is that? In fact, she has no method. Yet she certainly saw. And seeing was her process. But she didn’t track the truth by looking for it over time. That’s why I said, “S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p.” At that time, her method was seeing during a timely stretch, and it wasn’t even what she intended to do. One might suppose she tracked the truth over the time she needed to – just that moment. But this is still not enough for knowledge, because what if things had been different? What if she had stretched at any other time? Then she would not have known. And I say that if you would not have known, then you aren’t tracking the truth. I raise the standard of what knowledge ought to mean in this way. I say this because truth matters. In many cases our lives may depend on it!
JC: I agree, but I’m not sure I understand. You are introducing counterfactuals in saying that something is not knowledge unless S can say that, had things been different, such and such would be true – and of course S would have to be right about that. Do you mean that S should be able to know both the truth and the falsity under any condition?
Nozick goes to the chalk board.
RN: Yes. However, I would temper this by distance. Here we are talking about the responsibility toward truth that human beings ought to consider. So I’ll offer a third and fourth condition for knowledge as follows. (3) not-p → not-(S believes that p). This means that if p weren’t true, S wouldn’t believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that if p were true, S would believe that p and not disbelieve that p. This is actually a step up from a fourth condition I had previously expressed, that if p were true, S would believe it – (iv) p → S believes that p. But, as I said, realizing that we, as human beings, are quite limited in our methods and knowledge, for a realistic aim at what one would use for saying that someone knew something, at least given a certain method, this would be the responsible way to treat whether one knows something or not.
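Nozick’s chalkboard conditions, relative to a method M, can be sketched with the subjunctive conditional (written □→, read “if … were the case, … would be the case”); the labels “variation” and “adherence” are standard glosses, not part of the dialogue:

```latex
% Truth-tracking analysis (Nozick), relative to method M:
\begin{align*}
&\text{(1) } p \text{ is true;} \\
&\text{(2) } S \text{ believes, via } M, \text{ that } p; \\
&\text{(3) } \neg p \;\Box\!\rightarrow\; \neg(S \text{ believes, via } M, \text{ that } p); \qquad \text{(variation)} \\
&\text{(4) } p \;\Box\!\rightarrow\; S \text{ believes, via } M, \text{ that } p. \qquad \text{(adherence)}
\end{align*}
```

The broken-clock believer fails (3): had the time been different, he would still have believed what the stuck hands showed.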
JC: I’m confused. Can you give me an example of what you mean when you say, “when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p”?
RN: Certainly. Consider the inattentive security guard who plays Sudoku all night long instead of attentively watching the monitors in his store. He gets lucky and catches a thief out of the corner of his eye as he thinks about something completely different. His assigned method is to watch. And indeed, he sees. But he is not tracking the truth by that method. Therefore, even though it could be said that he has knowledge of the thief, the standard of knowledge I am referring to has not been met. He was derelict in his duty, which was to track the truth by the method of watching the store monitors. Had he looked up at some other time, he would not have believed p or known p was true. Had p been untrue, he would not have known whether p were true or not either way. This is not a responsible way of knowing things. Tracking the truth is a responsibility. Using reliable processes for doing so goes along with this responsibility.
RF: I used that very same example to show why tracking was not necessary for knowledge.
JC: Clearly, the difference is in what the standards are for the term.
Just then Saul Kripke walks in.
SK: I heard what you were saying, Nozick. Your truth-tracking theory is hocus-pocus and I’ll prove it! You’ve heard about that town with fake barns, right? The town replaced its old barns and left a few standing, but ran out of red paint, so it put up a bunch of white barn facades to please the tourists. Smith drives through the town, sees a red barn and deduces that he sees a red barn. Now if Smith had tracked the truth, he would have known that all the white barns were fake. As it stands, he got lucky and properly identified a real barn, a red one, but since he didn’t track the truth, all he really knows is that he saw a red barn. He doesn’t know that he saw a barn, because he wasn’t tracking the truth. See the problem?
JC: I’m not sure.
JW: According to truth-tracking theory, he knows he saw a red barn, James, but he does not know he saw a barn.
JC: Goodness. I can see that!
JW: Nozick’s theory is “half right.” … “Truth-Tracking is really getting at the counterfactual: would S have believed p if not-p? Objections are to the condition that S would have believed not-p if p. So a revised Truth-Tracking theory would be a causal theory.” (Dr. Watson, Unit 4 Video Lectures, 4.3 The Truth-Tracking Theory)
JC: Truth-tracking is confusing, Doc. What does he mean by “tempered by distance”?
JW: He’s talking about how far-fetched the alternate world of possibilities might be. It’s “in the neighborhood” if it’s relevant to tracking the truth about something specific using a specific method. He doesn’t mean every possibility, only the stuff directly related to tracking the facts.
JC: Oh. So he’s not saying we have to be omniscient to have knowledge, then? I’m not so sure I agree with that. As long as we’re after high standards here, I think maybe we do. Look, there’s Keith DeRose. What do you think about standards for the word “knowledge,” Keith?
KD: I think it depends on context, James. Nozick here is talking about standard everyday knowledge and responsibility. Our standard of what we consider “knowledge” can change from moment to moment. Recall the time that Smith and his wife had to deposit some checks at the bank on a Friday night and the line was too long, so Smith suggested waiting till Saturday, since he knew the bank was open till noon on Saturdays. His wife had doubts about the wisdom of that, so she asked if he really knew that. So he says, “Sure I know it. I just deposited a check there two Saturdays ago.” But as it stands, she had a particularly large check to deposit and a bill due early in the week, so it was very important to get that check deposited by Saturday. So she informs him of all this and says, “Do you know for sure?” What really is the difference between knowing and knowing for sure, James?
JC: Ask Nozick here. I think he’d want Smith to track the truth by checking the web site or stopping in.
KD: That’s right. If Smith was talking to Robert, he might not have said he knew the first time around, but in his routine, his memory was reliable enough, and the odds of the bank changing their rules weren’t all that great. Neither Smith nor his wife had seen any announcements in the news lately about banks closing on Saturdays in the area.
AG: Depends on subject factors and attributor factors.
JC: What? Do you have to have something to say about everything Alvin?
KD: He’s talking about relevant alternatives theory, James. Not all of these folks are invariantists. Some are contextualists, like me. The key is the attributor factor. Smith would attribute knowledge to the idea that the bank was ordinarily open Saturdays, but when the circumstances changed, the content behind the word “knowledge” was different. The character of the word “knowledge” may stay the same in all circumstances, but the content can change.
AG: Linguistic and psychological context are also very important, James. They are attributor factors. If you’re in a class with Descartes talking about an evil demon fooling you, the attribution of the word “knowledge” is affected. (Huemer 495)
KD: Precisely my point. When Smith’s wife puts pressure on him, he’s not saying he didn’t know before, he’s addressing a higher expectation.
RF: That’s just pragmatism. It’s not epistemic responsibility. Smith would have been wrong to deny he knew it the first time. His memory was sufficient. You’re throwing in the “Get the Evidence Principle” (Feldman 47). You can never have enough information. The evidence you have at a given time is a fair basis for whether you can believe something, and if it’s true, it’s knowledge. Simple as that. The Get the Evidence Principle becomes irrational to satisfy. This shouldn’t be confused with the fact that even though it’s highly improbable that I’m having a heart attack when I have chest pains after eating buffalo wings, I might go get my heart checked myself. Action and knowledge aren’t the same thing. Uncertainty does not mean lack of knowledge either. Truth, on the other hand, would affect the status of knowledge. If Smith were wrong, he wouldn’t have known. Knowledge, as the word ought to be used, does not require certainty. It just needs to be reasonable. Smith’s memory was reasonable. His belief was justified.
RN: Did you say “epistemic responsibility,” Richard? Where is the responsibility in not tracking the truth? Checking your heart was exactly that!
JC: Professor Feldman has a point. I can see going to the doctor just in case. Even when things are improbable, it depends on what’s at stake. Business people use an expected utility formula. I’m a probabilist myself. But how sure do you really need to be to track the truth? We’re just human beings. How often do you have to go to the web site to see if the bank is open? Every ten minutes? How would anyone know to check whether barns were really facades? Who would care to do that but the locals in a town? And what if Smith’s wife knew her husband’s memory was unreliable from the onset of Alzheimer’s? And all that aside, who can really track the truth but God?
AP: If I may interject here … We are limited by proper function. The human brain was not designed with such great capacity to know all that might have happened. Proper function is a reliable process for getting at the facts that respects epistemic virtue and responsibility. We are all here because truth matters, but the various organs have different functions – the heart pumps blood, the liver cleanses it, and so on. We have many instincts, and we don’t track the truth nearly as much as we ought. Any appropriate method would do, but we need to start with the knowledge of our own need for epistemic virtue.
RF: You must have spoken to my friend W.K. Clifford. He says it’s always wrong everywhere to “believe with insufficient evidence.” (Feldman 44)
AP: Quite. And not just evidence. There are many types of epistemic responsibilities we have, concerning which virtue is often lacking. We can do better, objectively, subjectively, in what we believe and especially in our general “disposition to have coherent beliefs.” We won’t evaluate evidence well without a proper disposition towards evidence. When do we know our evidence is adequate? What of our faculties? Are they themselves reliable? What is our “epistemic goal”? I have lots of beliefs and goals – not all of them are epistemic. “There are a thousand other epistemic virtues” besides these for determining whether a belief has warrant.
RF: Could you reduce this all into something precise for us?
AP: Surely. “A belief has warrant for me only if (1) it has been produced in me by cognitive faculties that are working properly (functioning as they ought to, subject to no cognitive dysfunction) in a cognitive environment that is appropriate for my kinds of cognitive faculties, (2) the segment of the design plan governing the production of the belief is aimed at the production of true beliefs, and (3) there is a high statistical probability that a belief produced under those conditions will be true.” (Plantinga, Alvin. Warrant and Proper Function (New York: Oxford University Press, 1993), p. 59) (Feldman 100)
JC: So … S knows or has warrant for believing that p if (but not only if) you aren’t cognitively impaired – by drinking, dreaming, hallucinating as a brain in a vat, or suffering from dementia – and you are in the right environment. Can you explain what a cognitive environment is?
AP: Well, your brain wasn’t meant to concentrate on an important matter when you are being distracted. If you were in a tub of worms and scorpions, or high up in snowy mountains being chased by a yeti, you might not be able to score well on a test in philosophy. Your cognitive functions might work just fine, but your environment would not be conducive to their optimal operation. Your cognition might be just fine for the environment it was designed for. You might have just passed your exams at MIT, but if suddenly you were transported to a planet in the Alpha Centauri system where invisible elephants sent cosmic signals into your brain making you believe there was a trumpet playing, your belief would not be warranted. And even if there were indeed a trumpet playing, say a silent one in a nearby phone booth, your belief might be true, but it wouldn’t be warranted. Would it?
JC: I suppose not. That would seem more like belief than knowledge. Right Professor Goldman? And what is this “segment of the design plan?”
AP: Well, your brain is designed with many functions – such as interpreting what you see, or signaling your finger to move, or giving you input as to what you’d like to eat – and such sensory knowledge functions for its purpose, but if you are tasked with determining the truth about a proposition, it might not be any of those segments of your cognitive functions that would be needed for determining that truth. It would be the segment that governs the production of the belief. And specifically, it would be that which aims at the truth. You, for instance, might believe that Jesus rose from the dead. Aiming at the truth without bias would require a level of objectivity you might not possess. You do, however, possess the capacity to be objective. You can, in fact, overcome biases and predispositions. So you might ask questions like whether an empty tomb necessarily implied a resurrection, whether a report that a tomb was empty was reliable or truly given on the third day, how consistent the reports are, or whether various details were added to a story later. If you were biased, you might choose not to investigate for yourself. If you used that segment of your cognitive design plan that governed discernment of true beliefs with a high statistical reliability, your design plan would be segmented properly for the task. A belief produced under those conditions is certainly warranted. As long as there was no cognitive dysfunction, you might be capable of knowledge.
JC: Can you give me an example?
to be continued …