In a number of parallel Universes, some of the permutations of me have had the privilege of meeting repeatedly with various versions of Richard Feldman, Alvin Goldman, Robert Nozick, Alvin Plantinga, Dr. Jeffrey Watson, and others at a party in honor of Edmund Gettier hosted by Laurence Bonjour and Susan Haack. Feldman had arrived with his friends A.J. Ayer, William Alston, Michael Clark, and Keith Lehrer. Typically, the conversation goes something like this …
AJA: The standard view of knowledge is having the right to be sure. Tonight, I would like you to earn me that right by assuring me that this is so. If p is true, and S is sure that p is true, and S has the right to be sure that p is true, then we can all agree that S has knowledge of p.
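Ayer's analysis can be set out schematically. This is my own rendering of his three conditions, not Ayer's notation:

```latex
% Ayer's "right to be sure" analysis of knowledge (schematic rendering)
S \text{ knows that } p \iff
\begin{cases}
(1) & p \text{ is true} \\
(2) & S \text{ is sure that } p \\
(3) & S \text{ has the right to be sure that } p
\end{cases}
```

Gettier's cases below target the claim that (1)–(3) are jointly sufficient.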
EG: Unfortunately, I can think of cases where that is not true. Think of Smith and Jones. The day they met, they were interviewing for the same job. Smith was sure Jones would get the job, as Jones bragged of knowing the owner, giving ten convincing reasons, one for each coin he had in his pocket, so Smith concluded the man who got the job would surely have ten coins in his pocket. This turned out to be true, but not because Jones got the job. To Smith’s surprise, Smith landed the job himself and hadn’t realized he had ten coins in his own pocket. p was thus true, S was sure that p was true, and S had the right to be sure that p was true, but S didn’t have knowledge of p. The same applied the time that Smith and Jones worked together and Jones kept bragging about his Ford. Smith quipped, “Jones owns a Ford or Brown is in Barcelona.” Sure enough, Brown was in Barcelona, but Jones didn’t own a Ford. It was just a rental.
MC: S didn’t have the right to be sure that p in either of those cases, but he would have so long as all of S’s grounds for believing p were true. So just add this as a fourth condition and S knows that p.
RF: Just the explicit grounds, or a whole chain of grounds? What if one ground in a chain of grounds is false? What if some attain greater certainty? Do those with less certainty negate the grounds with greater certainty? Work on that.
KL: As I was discussing with my friend Paxson, what you really need for S to know that p is a no-defeater fourth condition: there is no true proposition t such that, if S were justified in believing t, then S would not be justified in believing p. (Feldman 34)
RF: Nice try, Keith. But don’t you remember the radio that Smith knew was off but that was playing “Girl, You’ll Be a Woman Soon”? There was also that Tom and Tim Grabit case and their lying mother. Sight gets defeated by lies. My own more modest proposal is to add that S’s justification for p does not essentially depend on any falsehood. Admittedly, this isn’t completely clear, but it’s clearer than Clark’s no-false-grounds idea, which I rather like otherwise. We have knowledge so long as each premise is true. And there is always some epistemically basic belief that every piece of knowledge ultimately rests on.
LB: Either something shows you evidence for its truth somehow or it doesn’t. There are no epistemically basic beliefs that things rest on. Evidence fits together coherently, like a web, with everything we know.
KL: I’m with Laurence. You say that “a basic statement must be self-justified and must not be justified by any non-basic belief. Second, a basic belief must either be irrefutable or, if refutable at all, it must only be refutable by other basic beliefs. Third, beliefs must be such that all other beliefs that are justified or refuted are justified or refuted by basic beliefs.” (Lehrer, Keith. Knowledge. Oxford: Clarendon Press, 1974, pp. 76-77) (Huemer 408). And our friend, Fred Will, would add words like “infallible,” “indubitable” and “incorrigible” to this (Huemer 402). One’s senses may be deceived in various ways.
WA: Maybe Descartes wanted that level of certainty, and that would be ideal, but I just want justification to be sufficient for belief. Mediately justified beliefs lead back to immediately justified beliefs along branches. If not, then the premises of a belief are unsound. Looped and infinite chains, or chains that terminate in unjustified beliefs, would fail to constitute sufficient grounds for belief.
RF: I think we can all agree that a priori knowledge is too limited to be practical but we certainly need evidence for justification.
SH: How about foundherentism?
AG: Nothing wrong with evidence, but if you want to truly satisfy Gettier here, S truly has knowledge if and only if the fact of p is causally connected in an appropriate way with S’s believing p. Here’s another case. Let’s say Gerald falls down the steps and hits his head, giving him amnesia and an assortment of strange beliefs, none of which are true, but included in that random set of beliefs is the notion that he has just fallen down the steps. His belief is justified because he has the memory, and it is true. But the belief was not causally connected in the right way. Therefore, it would not be knowledge. Edmund would be honored. The same holds true if there is a more complex causal chain. Now if you see a tree in front of you, the cause of your belief is the tree, by way of your eyesight. Or I might remember a tree, so the cause of my belief that there was a tree would be my memory. Or there might be a more complex causal chain, such as Smith seeing sawdust and wood chips where there once was a tree, remembering the tree and a notice he saw from the city saying they would cut it down. You might see these as evidence for belief, but they are also causes of belief. How I come to believe matters more than why.
RF: Well and true, but how do you deal with generalizations? How, for instance, would you know that all men are mortal if you have not seen every man, past, present, or future, to cause such a belief? Also, what if you lack some information in a causal chain? Suppose Edgar knew Allan Poe had taken a fatal dose of poison for which there is no antidote, and after some time passed he believed Allan was dead. If Allan actually died of a heart attack from worry rather than from the poison, Edgar would be wrong about the causal chain in Allan’s death even if he was right that Allan was dead. He would then be justified in believing Allan was dead, and it would be true, but he would not possess an appropriate causal connection.
AG: True, you would call it knowledge. I wouldn’t, unless I considered the instances behind the generalization to be still causally connected – there is something to be said for that. Or perhaps your standard of what constitutes knowledge is lower than mine.
RF: Well then, consider the twins Trudy and Judy, whom Smith met. Judy comes to him one day, and he believes it’s Judy even though he knows about Judy’s twin sister Trudy. Without good evidence, Smith assumes Judy is talking to him, when it could have been Trudy. You would say Judy caused the belief. It would be true. It would not be justified.
AG: I agree, it would not be justified. I thought about this problem for over a decade and realized what was needed was a reliable process of belief formation. Just seeing someone and being rash about it would not count as a reliable process of belief formation. ‘If S’s belief in p at t results from a belief-independent process that is reliable, then S’s belief in p at t is justified’; and if S’s belief in p at t results from a belief-dependent process that is conditionally reliable, and the beliefs the process operates on are themselves justified, then it is likewise justified (Feldman 95). This, by the way, is why sensory experience justifies belief – it is a highly reliable process. Calling it an epistemically basic belief is unnecessary.
RF: If you don’t have a body but are really a brain in a vat causing all sorts of beliefs, then what process applies? Or what if you only look at a broken clock at the right time by pure coincidence every time you look at it, unaware that it is broken?
LB: I concur with Richard on this one. Consider Norman, the clairvoyant. He was always right. Suppose one day Norman believes the president is in New York City for no reason other than a hunch obtained by his clairvoyance, and he’s right. That belief would not be justified. I’ll admit it would be a reliable process, but it would fail to cohere with any evidence Norman would otherwise have.
RF: Yes, evidence. You’ll need to spell out the process better, Alvin.
Just then David Hume enters the room.
DH: The clairvoyant’s process is mere numerical inference. It only predicts the past. Backgammon, anyone? (in a Scottish accent)
AG: No thanks, Hume. Well, I have made some distinctions, like the difference between a hasty scan and a detailed observation, or the qualitative difference between seeing nearby objects and distant ones – process types.
RF: Not good enough. Each category still gets treated as though every token example has the same reliability as a process.
JW: I don’t think Feldman gets it. These types need to be general to be all-embracing.
AG: We might say, “if S’s belief in p at t results from a belief-independent process token whose relevant type is reliable, then S’s belief in p at t is justified.” (Feldman 98) How’s that?
RF: Consider an umpire at a baseball game. Some calls are easy. Others are tough. The process is the same.
JC: No, it’s not. An umpire deliberating over a tough call scrutinizes more than he does over an easy one. That’s another process type.
RF: You people just think up examples to give you the results that you want. There’s no general theory here. This violates the Same Evidence Principle. Evaluation supervenes on evidence.
RN: I appreciate that you strive for high and consistent standards, Richard, but I have to agree that causal chains might improve over reasons alone for justification. Method certainly matters. So does process. And what you want is not just any process type, but something more reliably reliable. The only way to do this would be through a process that actually tracks the truth. I’ll admit you do need a good method, but that method also has to be used in the right way. You ought to be asking yourself: if things had been different, would you still have known? You need to track counterfactuals. S only knows p if S believes p, p is true, S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p. Do this and you can’t go wrong.
RF: Huh?
RN: Consider the broken clock you mentioned, which you looked at only at lucky times, getting it right, but not knowing. Why? Because the cause was right, but the evidence was unjustified. The premise, you would argue, was that the clock worked, but it didn’t. So for you, as a foundationalist, there would be no knowledge, and Alvin’s causal theory fails too, as you said. But had the clock-watcher tracked the truth using the clock method, he would have learned within seconds that the clock was broken. He would then have found a different method more suitable for determining the correct time, or simply confessed he didn’t know what time it was and been correct in that belief instead. There are many examples like this – it could be a broken thermometer instead. Knowers are truth trackers.
RF: Well, that would solve Edmund’s cases just as neatly as Alvin’s solution would.
RN: Indeed, and there are many other such cases of lucky knowledge. For instance, Ms. Black, working in her office, gets up to stretch, looks out the window, and just happens to see a mugging on the street, becoming a witness. Her method is luck. What kind of a reliable process is that? In fact, she has no method. Yet she certainly saw, and seeing was her process. She didn’t track the truth by looking for it over time. That’s why I said, “S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p.” At that moment, her method was seeing from a timely stretch, but it wasn’t even what she intended to do. One might suppose she tracked the truth over the time she needed to – just that moment. But this is still not enough for knowledge, because what if things had been different? What if she had stretched at any other time? Then she would not have known. And I say that if you would not have known, then you aren’t tracking the truth. I raise the standard of what knowledge ought to mean in this way. I say this because truth matters. In many cases our lives may depend on it!
JC: I agree, but I’m not sure I understand. You are introducing counterfactuals in saying that S does not have knowledge unless, had things been different, S’s belief would still have matched the truth, and of course S would have to be right about that. Do you mean that S should be able to know both the truth and the falsity of p under any condition?
Nozick goes to the chalk board.
RN: Yes. However, I would temper this by distance. Here we are talking about the responsibility toward truth that human beings ought to consider. So I’ll offer a third and fourth condition for knowledge as follows. (3) not-p → not-(S believes that p). This means that if p weren’t true, S wouldn’t believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that if p were true, S would believe that p and would not disbelieve that p. This is actually a step up from a fourth condition I previously had expressed, that if p were true, S would believe it – (iv) p → S believes that p. But, as I said, we human beings are quite limited in our methods and knowledge, so for a realistic account of what one would use to say that someone knew something, at least given a certain method, this is the responsible way to treat whether one knows something or not.
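Putting Nozick’s full analysis together: conditions (1) and (2) are the usual truth and belief conditions, and I use the standard box-arrow for the subjunctive conditional where the dialogue writes a plain arrow; B(S, p) abbreviates “S believes that p”:

```latex
% Nozick's tracking conditions for "S knows that p"
\begin{aligned}
&(1)\quad p \text{ is true} \\
&(2)\quad B(S,p) \\
&(3)\quad \neg p \;\Box\!\!\rightarrow\; \neg B(S,p)
   && \text{(sensitivity: had } p \text{ been false, } S \text{ would not have believed it)} \\
&(4)\quad p \;\Box\!\!\rightarrow\; \bigl(B(S,p) \wedge \neg B(S,\neg p)\bigr)
   && \text{(adherence: had } p \text{ been true, } S \text{ would have believed it)}
\end{aligned}
```

The broken-clock and security-guard cases below are failures of (3) and (4): the belief happens to be true, but it would not have varied with the truth.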
JC: I’m confused. Can you give me an example of what you mean when you say, “when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p”?
RN: Certainly. Consider the inattentive security guard who plays Sudoku all night long instead of attentively watching the monitors in his store. He gets lucky and catches a thief out of the corner of his eye while thinking about something completely different. His assigned method is to watch. And indeed, he sees. But he is not tracking the truth by that method. Therefore, even though it could be said that he has knowledge of the thief, the standard of knowledge I am referring to has not been met. He was derelict in his duty, which was to track the truth by the method of watching the store monitors. Had he looked up at some other time, he would not have believed p or known p was true. Had p been untrue, he would not have known whether p was true either way. This is not a responsible way of knowing things. Tracking the truth is a responsibility. Using reliable processes for doing so goes along with this responsibility.
RF: I used that very same example to show why tracking was not necessary for knowledge.
JC: Clearly, the difference is in what the standards are for the term.
Just then Saul Kripke walks in.
SK: I heard what you were saying, Nozick. Your truth-tracking theory is hocus-pocus and I’ll prove it! You’ve heard about that town with fake barns, right? The town replaced its old barns and left a few standing, but it ran out of red paint, so it put up a bunch of white barn facades to please the tourists. Smith drives through the town, sees a red barn, and concludes that he sees a red barn. Now if Smith had tracked the truth, he would have known that all the white barns were fake. As it stands, he got lucky and properly identified a real barn, a red one, but since he didn’t track it, all he really knows is that he saw a red barn. He doesn’t know that he saw a barn, because he wasn’t tracking the truth. See the problem?
JC: I’m not sure.
JW: According to truth-tracking theory, he knew he saw a red barn, James, but he did not know he saw a barn.
JC: Goodness. I can see that!
JW: Nozick’s theory is “half right.” … “Truth-tracking is really getting at the counterfactual: would S have believed p if not-p? Objections are to the condition that S would have believed not-p if p. So a revised truth-tracking theory would be a causal theory.” (Dr. Watson, Unit 4 Video Lectures, 4.3 The Truth-Tracking Theory)
JC: Truth-tracking is confusing, Doc. What does he mean by “tempered by distance”?
JW: He’s talking about how far-fetched the alternate world of possibilities might be. It’s “in the neighborhood” if it’s relevant to tracking the truth about something specific using a specific method. He doesn’t mean every possibility, only the stuff directly related to tracking the facts.
JC: Oh. So he’s not saying we have to be omniscient to have knowledge, then? I’m not so sure I agree with that. As long as we’re after high standards here, I think maybe we do. Look, there’s Keith DeRose. What do you think about standards for the word “knowledge,” Keith?
KD: I think it depends on context, James. Nozick here is talking about standard everyday knowledge and responsibility. Our standard of what we consider “knowledge” can change from moment to moment. Recall the time that Smith and his wife had to deposit some checks at the bank on a Friday night, and the line was too long, so Smith suggested waiting till Saturday, since he knew the bank was open till noon on Saturdays. His wife had doubts about the wisdom of that, so she asked if he really knew it. So he says, “Sure I know it. I just deposited a check there two Saturdays ago.” But as it stands, she had a particularly large check to deposit and a bill due early in the week, so it was very important to get that check deposited by Saturday. So she informs him of all this and says, “Do you know for sure?” What really is the difference between knowing and knowing for sure, James?
JC: Ask Nozick here. I think he’d want Smith to track the truth by checking the web site or stopping in.
KD: That’s right. If Smith was talking to Robert, he might not have said he knew the first time around, but in his routine, his memory was reliable enough, and the odds of the bank changing their rules weren’t all that great. Neither Smith nor his wife had seen any announcements in the news lately about banks closing on Saturdays in the area.
AG: Depends on subject factors and attributor factors.
JC: What? Do you have to have something to say about everything Alvin?
KD: He’s talking about relevant alternatives theory, James. Not all of its proponents are invariantists. Some are contextualists, like me. The key is the attributor factor. Smith would attribute knowledge to the idea that the bank was ordinarily open Saturdays, but when the circumstances changed, the content behind the word “knowledge” was different. The character of the word “knowledge” may stay the same in all circumstances, but the content can change.
AG: Linguistic and psychological context are also very important, James. They are attributor factors. If you’re in a class with Descartes talking about an evil demon fooling you, the attribution of the word “knowledge” is affected. (Huemer 495)
KD: Precisely my point. When Smith’s wife puts pressure on him, he’s not saying he didn’t know before, he’s addressing a higher expectation.
RF: That’s just pragmatism. It’s not epistemic responsibility. Smith was wrong to deny he knew it the first time. His memory was sufficient. You’re throwing in the “Get the Evidence Principle” (Feldman 47). You can never have enough information. The evidence you have at a given time is a fair basis for whether you can believe something, and if it’s true, it’s knowledge. Simple as that. The Get the Evidence Principle becomes irrational to satisfy. This shouldn’t be confused with the fact that even though it’s highly improbable that I’m having a heart attack when I have chest pains after eating buffalo wings, I might still go get my heart checked. Action and knowledge aren’t the same thing. Uncertainty does not mean lack of knowledge either. Truth, on the other hand, would affect the status of knowledge. If Smith was wrong, he wouldn’t have known. Knowledge, as the word ought to be used, does not require certainty. It just needs to be reasonable. Smith’s memory was reasonable. His belief was justified.
RN: Did you say “epistemic responsibility,” Richard? Where is the responsibility in not tracking the truth? Checking your heart was exactly that!
JC: Professor Feldman has a point. I can see going to the doctor just in case. Even when things are improbable, it depends on what’s at stake. Business people use an expected utility formula. I’m a probabilist myself. But how sure do you really need to be to track the truth? We’re just human beings. How often do you have to go to the website to see if the bank is open? Every ten minutes? How would anyone know to check whether barns were really facades? Who would care to do that but the locals in a town? And what if Smith’s wife knew her husband’s memory was unreliable from an onset of Alzheimer’s? And all that aside, who can really track the truth but God?
AP: If I may interject here … We are limited by proper function. The human brain was not designed with such great capacity as to know all that might have happened. Proper function is a reliable process for getting at the facts that respects epistemic virtue and responsibility. We are all here because truth matters, but the various organs have different functions – the heart pumps blood, the liver cleanses it, and so on. We have many instincts, and we don’t track the truth nearly as much as we ought. Any appropriate method would do, but we need to start with the knowledge of our own need for epistemic virtue.
RF: You must have spoken to my friend W.K. Clifford. He says it’s always wrong everywhere to “believe with insufficient evidence.” (Feldman 44)
AP: Quite. And not just evidence. There are many types of epistemic responsibilities we have, concerning which virtue is often lacking. We can do better, objectively, subjectively, in what we believe and especially in our general “disposition to have coherent beliefs.” We won’t evaluate evidence well without a proper disposition towards evidence. When do we know our evidence is adequate? What of our faculties? Are they themselves reliable? What is our “epistemic goal”? I have lots of beliefs and goals – not all of them are epistemic. “There are a thousand other epistemic virtues” besides these for determining whether a belief has warrant.
RF: Could you reduce this all into something precise for us?
AP: Surely. “A belief has warrant for me only if (1) it has been produced in me by cognitive faculties that are working properly (functioning as they ought to, subject to no cognitive dysfunction) in a cognitive environment that is appropriate for my kinds of cognitive faculties, (2) the segment of the design plan governing the production of the belief is aimed at the production of true beliefs, and (3) there is a high statistical probability that a belief produced under those conditions will be true.” (Plantinga, Alvin. Warrant and Proper Function (New York: Oxford University Press, 1993), p. 59) (Feldman 100)
JC: So … S knows or has warrant for believing that p if (but not only if) you aren’t cognitively impaired – by drinking, dreaming, hallucinating as a brain in a vat, or suffering from dementia – and you are in the right environment. Can you explain what a cognitive environment is?
AP: Well, your brain wasn’t meant to concentrate on an important matter while you are being distracted. If you were in a tub of worms and scorpions, or high up in snowy mountains being chased by a yeti, you might not be able to score well on a test in philosophy. Your cognitive functions might work just fine, but your environment would not be conducive to their optimal operation. Your cognition might be just fine for the environment it was designed for. You might have just passed your exams at MIT, but if suddenly you were transported to a planet in Alpha Centauri where invisible elephants sent cosmic signals into your brain making you believe there was a trumpet playing, your belief would not be warranted. And even if there were indeed a trumpet playing, say a silent one in a nearby phone booth, your belief might be true, but it wouldn’t be warranted. Would it?
JC: I suppose not. That would seem more like belief than knowledge. Right Professor Goldman? And what is this “segment of the design plan?”
AP: Well, your brain is designed with many functions – such as interpreting what you see, or signaling your finger to move, or giving you input as to what you’d like to eat – and such sensory knowledge functions for its purpose, but if you are tasked with determining the truth about a proposition, it might not be any of those segments of your cognitive functions that would be needed for determining that truth. It would be the segment that governs the production of the belief. And specifically, it would be the one that aims at the truth. You, for instance, might believe that Jesus rose from the dead. Aiming at the truth without bias would require a level of objectivity you might not possess. You do, however, possess the capacity to be objective. You can, in fact, overcome biases and predispositions. So you might be asking questions like whether an empty tomb necessarily implied a resurrection, whether a report that a tomb was empty was reliable or truly given on the third day, how consistent the reports are, or whether various details were added to a story later. If you were biased, you might choose not to investigate for yourself. If you used that segment of your cognitive design plan that governed the discernment of true beliefs with high statistical reliability, your design plan would be properly segmented for the task. A belief produced under those conditions is certainly warranted. As long as there was no cognitive dysfunction, you might be capable of knowledge.
All I wanted to do was start the Pamalogy Society, I tell ya! Next thing ya know, I’m enrolled at Arizona State University taking philosophy courses, along with courses in organizational leadership and interdisciplinary studies. It follows that I’ll be writing some academic blogs in the coming years. And guess who they’ll be for? Me. I’m writing them to myself! Writing helps me remind myself of what I’ve learned. Why not put my notes online, where I can access them on other devices later? So this one is on an epistemological system that I’ve come up with as an untested solution to the still-raging debate over the theory of knowledge. I call it “Contextualized SuperFoundherentism.” Catchy title, huh?
Background
Richard Feldman asked three questions based on the following two assertions of foundationalism:
F1. There are justified basic beliefs.
F2. All justified nonbasic beliefs are justified in virtue of their relation to justified basic beliefs.
QF1. What are the kinds of things our justified basic beliefs are about?
QF2. How are these basic beliefs justified? If they are not justified by other beliefs, how do they get justified?
QF3. What sort of connection must a nonbasic belief have to basic beliefs in order to be justified?
(Feldman 52)
After some review of Cartesian Foundationalism and Coherentism, and then offering his own more “Modest Foundationalism,” he answers these questions with the following:
“MF1. Basic beliefs are spontaneously formed beliefs. Typically, beliefs about the external world, including beliefs about the kinds of objects experienced or their sensory qualities, are justified and basic. Beliefs about mental states can also be justified and basic.
MF2b. A spontaneously formed belief is justified provided it is a proper response to experiences and it is not defeated by other evidence the believer has.
MF3. Nonbasic beliefs are justified when they are supported by strong inductive inferences – including enumerative induction and inference to the best explanation – from justified basic beliefs.”
(Feldman 75)
Feldman defends his Modest Foundationalism against the Coherentist, Laurence Bonjour. Bonjour rejected Foundationalism’s concept of an epistemically basic belief, insisting that any basic a posteriori belief should have some quality indicating its truth. Feldman didn’t think that was necessary. Any deductive processes at work in basic empirical beliefs like sensory perception were, at best, subconscious and taken for granted as they happened.
Bonjour rejects Foundationalism for other reasons. For one thing, Coherentism is misunderstood as using circular premises, when the image is more like a web of belief or, as Susan Haack liked to compare it, a crossword puzzle. There is a practical matter in all of this – how many beliefs can we really have if we have to justify every premise? Alston uses the image of the roots of a tree. Each root has to terminate in an immediately justified belief, or the whole argument is unsound. It’s a great theory, but it’s a lot to expect. In a fast-paced world, we form beliefs on the fly. The Coherentist’s concept doesn’t require us to think through the basis of our basis of our basis for every decision we make. We rely on an intuition that has a picture of what it knows coming into the situation, and it processes propositions based on what it knows. If something doesn’t square up coherently with that set of pre-vetted beliefs, then it is rejected – unless something compelling about it is a deal-breaker for some of the pre-existing assumptions, a defeater for our biases. Potentially, this can knock down everything we think we know. Bonjour adds an “Observation Requirement” to his Coherentism to skirt Feldman’s criticism that a Coherentist need never learn anything new if it doesn’t fit their world view.
Seeing merit in the two divergent paradigms for evidential knowledge, Susan Haack proposed a blend of Foundationalism and Coherentism, which she called “Foundherentism.” I must admit that I found her work hard to follow on account of the terminology she used, such as “C-belief” (the content of a belief) and “S-belief” (the mental state of someone believing) (Huemer 420). She then refers to things such as “S-reasons” and “A’s S-evidence” (Huemer 421). Perhaps, if I read it a few more times…
Still … the general thought of blending the two paradigms seemed worthwhile to me. As I considered the differences between Foundationalism in several of its versions and Coherentism, I found their mutual objections and rebuttals to be quite reasonable. I also noticed that Feldman had omitted reference to logical inference and a priori knowledge. So I came up with a “Foundherentist” statement of my own in response to his answers to his three questions. The parts I added are italicized:
“MF1. Basic beliefs are spontaneously formed beliefs. Typically, beliefs about the external world, including beliefs about the kinds of objects experienced or their sensory qualities, are justified and basic. Beliefs about mental states can also be justified and basic. *Logic and math are also basic.*
MF2b. *Both empirically justified basic beliefs and non-basic beliefs are contextually reliable and fallible.* A spontaneously formed belief is *(gradiently)* justified provided it is a proper response to experiences and it is not defeated by other evidence the believer has *over time*. *Such a response entails the believer’s existing and changing mutually supportive network of beliefs as a totality of evidence, cross-checking the experience and belief, such that prior beliefs withstand its incompatibility or are modified accordingly.*
MF3. Nonbasic beliefs are justified when they are supported *either by deduction or* by strong inductive inferences – including *probabilistic* enumerative induction and *abduction* – from justified basic beliefs *foremost*. *Their justification also increases or decreases in relation to a vector of force encountered through the holistic network of beliefs that a subject has when a proposition is evaluated.*”
The above, which originally aimed at a modest Foundationalism, now presents a modest Foundherentism (with some additional features). We still have something like epistemically basic beliefs that we ideally search for and find as we explore the premises our premises are based on, but we acknowledge the value of coherent systems of belief while we’re at it. Coherentism isn’t just practical because it processes faster than Foundationalism; it also makes sense epistemically in some ways. When it’s looked at as a crossword puzzle with pieces that fit together, rather than as a circular loop of premises that provides no justification, it may just be the best thing we, as humans, have for ascertaining many types of truth. For one thing, Foundationalism has to be watered down just so sensory perception and experience can count as generally epistemically basic beliefs. To do this, any sense of infallible certainty has to be removed. Deductive logic is great, but it doesn’t tell us things like – there is a car headed towards me. I must first believe there is a car, and then I can infer that I should move, lest I lose my life. Coherentism tells me things like “cars can be dangerous when they hit you” and “they can kill you” and “cars are things with four wheels that are sort of big and made of solid materials, like missiles” and “missiles kill.” All this adds up to decision making. Part of that decision is not “prove to me that I’m not in the Matrix.”
Don’t get me wrong. Even if a person believes in simulation theory, only a skeptic would take every thought to that level in real time, and Feldman is no skeptic. Feldman wants a practical, everyday knowledge. Unlikely defeaters are useless for Coherentists and Foundationalists alike. Both Bonjour and Feldman will get out of the way of moving traffic. To be real, a skeptic would too, and then the former would accuse the latter of hypocrisy.
Coherentism is something we actually practice. You. Yes you. You have a set of beliefs, and what you believe has various reasons for making sense to you. Both Coherentism and Foundationalism are considered “evidentialist” theories of knowledge that are “traditional.” And Feldman contrasts evidentialism with a set of theories he seems less wont to accept, calling them “non-evidentialist,” as he describes four new paradigms in chapter 5 (Feldman 81-107). This is how the student is introduced to Alvin Goldman’s Causal Connection theory, Robert Nozick’s Truth-Tracking theory, Goldman’s later Reliabilism theory, and finally Alvin Plantinga’s Proper Function theory. I’ll be describing these theories in this blog post as I work to combine them into my singular master theory, which I’m calling “Contextualized SuperFoundherentism.” Are you ready?
Dismissing Skepticism
I should start out by mentioning that the use of the term “knowledge” is contextual. I take this from Keith DeRose, so I am a bit ahead of myself. For a skeptic, such as Sextus Empiricus, nothing can truly be known. That is why Descartes’ foundationalism could only say that something appeared, in his thought, to be such and such. If it weren’t for his ability to prove that a good God exists, Who wouldn’t allow such deception, he might allow that an evil demon might be making him believe incorrectly that 2+3=5. For Descartes, even math and logic would be subject to doubt under those conditions.
Contextualism allows knowledge to have a different meaning depending on context. When talking to a skeptic, I might agree that it is theoretically possible that math and logic are untrue. Such a proposition may be a logical possibility in a Universe with laws different from those assumed by the Universe that appears to exist, but by the non-laws or different laws of logic of such a hypothetical Universe or Universes, neither can the skeptic prove his doubting is true. Or maybe he can, but since such a condition implodes logic on itself, with nothing to go on but speculation in such a Universe, there is no point taking any of it seriously, unless, of course, one is speaking with such a skeptic or considering one of their arguments. On the whole, I will accept deductive inference and math as the soundest sort of knowledge.
I think I mentioned I was a probabilist. What I mean by this is that I believe things based on what I find most probable. I don’t sweat over the term “knowledge.” I just say why I believe something is probably true, though if you ask me whether I know something, I may tell you that I do. Feldman talks about inference to the best explanation, a thing called abduction. I believe in calculating odds wherever possible. I use Bayes’ Theorem with rough estimates all day long. When I heard that a school closed down recently because one of the kindergarten kids tested positive for COVID, the first question I asked was: what was the rate of false positives? If the rate was 1% and there were 300 students, then there would normally be about three false positive tests for that group. Treating a single positive test as decisive ignores the base rate information.
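The base rate point can be made concrete with Bayes’ Theorem. This is only a minimal sketch: the 1% false-positive rate comes from the example above, while the prevalence and sensitivity figures are assumptions I’ve invented for illustration.

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(infected | positive test) by Bayes' Theorem."""
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# 1% false-positive rate from the example; the prevalence and
# sensitivity values are illustrative assumptions, not real data.
print(posterior(prevalence=0.001, sensitivity=0.95,
                false_positive_rate=0.01))  # roughly 0.087

# Expected false positives among 300 largely uninfected students
# at a 1% false-positive rate:
print(300 * 0.01)
```

With these made-up numbers, a positive test leaves the probability of actual infection under 10%, which is exactly the base rate point: the low prior swamps the test result.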
But I digress. My point is that skeptical arguments don’t bother me unless I’m talking about the word “knowledge.” For me, this term clearly has multiple meanings, just like many words in the dictionary do. The word “know” is like the word “tall.” I’m over six feet tall. I’m tall compared to the rest of my family and by most standards. But compared to Shaquille O’Neal, I’m not tall at all.
Going Super
Clearly there are high and low standards for what people call knowledge. In one sense, I think that Omniscience is all that truly knows anything. I don’t know how many subjects S possess Omniscience. That’s an extremely high standard. To me there is nothing wrong with that, or with the skeptic asking for absolute certainty before calling something “knowledge.” It is one thing to draw a line at doubting everything. It is another to search for truth and to ask whether the principle that, evidence being equal, evaluation should be equal ought to be questioned on evaluative grounds: whether the cause of belief matters as much as the evidence itself, or whether we have a right to call something truth if we failed to track that truth, considering the methods we used or failed to use in measuring something. And what methods would, in fact, have been most reliable for determining that truth? If those methods were employed, do we have a better guarantee of knowledge of a fact? And what, finally, of our own cognitive processes as they may relate to such methods? The details of our evidence may remain the same throughout, but do any of these non-evidential factors weigh in on whether we have knowledge? Does the fact that I checked in at 3am and there was no thief add to any evidence that there was no thief on the monitor at 4am? Is there any sense in which my epistemic responsibility matters through good habits like checking? Should a fact like that be considered part of the evidence?
Feldman would have us choose between evidentialist and non-evidentialist theories of knowledge, but just as Haack finds a blend in a foundationalist coherentism, so also I think the reality of knowledge blends evidence with questions of causal connections, tracking possibilities that didn’t happen, the importance of reliable methods, and cognitive function unfettered by environments that might disable good discernment. I don’t think any of these so-called “non-evidentialists” are actually non-evidentialists at all. They simply don’t follow the dogma that equal evidence merits equal rationale for belief. They find, as well, a plethora of epistemological virtues and values that ought to be considered in the aim for truth. For this reason, I would not refer to any of these as “non-evidentialists.” To be fair, I would prefer to call them “super-evidentialists.” And furthermore, since the evidence is a given in any proper evaluation of the truth, even if that evidence must be weighed against defeating facts or ideas, I would call for a double-blend of all this, which I will call “SuperFoundherentism.”
Yeah that’s right. I’m a rationalist, probabilist, fallibilist quasi-skeptical SuperFoundherentist.
I also think we have a bad approach when we seek to build toward a theory of knowledge from belief forward. We know that knowledge must be true. That is simple enough. We know that a truth can’t be known if no one knows it, in which case they would also believe it – also quite elementary. But after this, everyone starts disagreeing, and the counterexamples, beginning with Edmund Gettier, keep calling for debatable formulations of what constitutes knowledge. While it might be expected that someone will eventually add to justification or cause some fourth condition that proves incorrigible, and I do hope this happens, I am in the meantime satisfied with believing that knowledge stands on its own. Robert Nozick thinks that we should track the truth by knowing what things would have been like if things hadn’t happened as they did (Huemer 475-490). This might be restricted to proximate possible worlds. How far-fetched do the possibilities have to be before the responsibility to anticipate them no longer burdens us with further epistemic responsibility? Only the Imagination of Omniscience Itself has the capacity to consider the farthest-fetched possibilities and scenarios, exploring how truth might have been tracked had things proven otherwise. By this standard, once we arrive at the Imagination of Omniscience, and we also know the Discernment of Omniscience to distinguish between the actual and the possible, then and only then do we have Knowledge. This I think we can call the metaphysically highest possible standard for the term “knowledge.” I’ll capitalize that one with a big “K,” but it’s still got the same character as the word “knowledge” used in any other way. The content of that knowledge is different. That’s all.
This isn’t just some sort of grand compromise. It is about the premise of knowledge itself, which is reality – a reality that can’t be fully known without this “big K” qualification. Seeing the big K as the only invariant Knowledge, if anyone is to speak of “invariantism,” is why I think we should build down toward belief when considering knowledge, rather than up toward knowledge from belief. It is a sort of Tower of Babel problem. If anyone wants to talk about knowledge, we need to contextualize the term. Whose knowledge? What knowledge? When speaking of that abstract or actual essence which both knows all imaginable and all that is actual, even the skeptic is defeated by its mere logical possibility. From there we build down toward mere Earthlings and other possible knowers, seeing each belief as a matter of sharing what has already been tracked in the grand set of “Knowledge” or “Truth.” Berkeley and I have something in common here, I think.
Contextualized SuperFoundherentism
The above consideration makes clear that there is a total set of possible knowledge and knowledge of what is actual that is quite different from a limited set of evidence for a proposition that any human being will typically encounter or cognitively embrace. Not having awareness of all possible and actual truth, we experience a world that begins with belief and makes claims of having knowledge, when a supremely high standard would certainly discount even a true statement as constituting knowledge. Context matters. Descending from the top down rather than ascending from the bottom up puts belief into perspective. We may know more than amoebas, (not that we know the experience of an amoeba itself), but we don’t know what Knowledge Itself knows. Unless we somehow transcend ourselves into higher beings like butterflies, we merely share portions of It in places and times, and even then, our knowledge will be intermediate if it fails to contain all possibility and fact.
The idea of knowing the experience of an amoeba is a helpful one. Human beings lack that knowledge, but the total set of knowledge of the possible and actual, if it is known, does not lack it. To know all things requires also knowing what it is not to know, lest the knowledge of what it is to be an amoeba be missing from the totality of Knowledge. Segmenting into context is intrinsic to Omniscience. It is thus most accurate, as lowly human beings, to say merely that we know in part, and gladly to join the skeptics in agreeing that we have no true knowledge. This does not mean, of course, that we should also surrender the idea that we can enjoy some degree of certainty concerning what we know in part.
If a possibility or context is offered in which we might be wrong, so be it; but under the presumption that such a context or possibility is untrue, we would be right, given whatever reasons, methods, skills, causes, tracking, and cognitive functioning – including coherent prior beliefs reinforcing experience – we possess at any time t that are statistically likely to yield truth in those environments and contexts. In this sense, we can speak of a “contextualized superfoundherentism.”
On the one hand, we require no high definition of “knowledge” limited to the notion of the Imagination of Omniscience as a singular Knower of all that is actual and true or could be. At the same time, we exclude possibilities such as being brains in vats. On the other hand, we require no third, fourth, or fifth condition for knowledge.
Let us move on, then, from the traditional analysis of knowledge, which has: S knows that p iff: (i) S believes that p; (ii) p is true; (iii) p is justified; (iv) some fourth condition that anticipates counterexamples.
What I have is more fundamental and only requires two conditions, where K is the total set of knowledge belonging to the imagination and knowledge of what is included in Omniscience,
S knows that p iff: (i) K includes S in believing p (ii) p is true
I’m not going to cop out with this though. There is another important possible aspect of context, which this reduced formula is not sufficient for determining. What of the context of someone who wants to know a particular fact or determine whether something or other is true? This, after all, is the typical context for which most of the formulations of us lowly human beings are designed. Until we’ve agreed upon a ground up architecture for superfoundherentism in more humble contexts, we haven’t offered the best we might have, given our limitations. For this we need a blended formula.
I doubt any blended formula I can come up with will be perfect, especially as a first year philosophy student. When every other attempt has failed, my expectations are not great. But why not take a shot at it? What’s the harm? So the rest of this article is dedicated to doing just that. As a starting framework, I would propose adding Goldman’s causality to the traditional analysis of knowledge as a fourth condition, rather than replacing justification as one of the first three conditions for knowledge, as Goldman had it. That would give us the following …
S knows that p iff: (i) S believes that p (ii) p is true (iii) p is justified (iv) S’s belief is causally connected to the truth.
Feldman complains that causal connectedness fails to handle generalizations, such as how one might know that all men are mortal. Goldman responds by pointing out that such a belief is connected to the fact that every known example of a man is one in which humans have been mortal. “The fact that all men are mortal is logically related to its instances.” (Huemer 459) It is a fair generalization caused by the observation of known cases, or by belief in reports of them. It is the truth which causes the connection.
Feldman also rejects causal connectedness as a replacement for justification because sometimes true beliefs are formed from inaccurate assumptions. He calls this “overdetermination,” citing the example of Edgar, who knows Allan has taken a fatal dose of poison for which there is no known antidote. While Edgar runs off to get help, futile as it would have been, he assumes correctly that Allan has died by the time help arrives; Allan, however, actually dies of a heart attack brought on by the stress. If Edgar is incorrect about the real cause of Allan’s death, then for Feldman, according to causal connectedness, it follows that Edgar did not really know that Allan died. (Feldman 85)
What here is the difference between knowing or not knowing? It is in the way we use the word “know.” In the context of expecting this word to mean that Edgar knows every detail about a cause or causes of his belief, Edgar only knows why he believes. In the context of expecting “know” simply to mean Edgar believes and is correct that Allan is dead, his belief is true and he is justified in believing. The causal connection needs to be as precise as the conversation we are having about it demands that we be. If we want absolute precision, then let us count the subatomic particles in the poison and produce their locations as they move, as an Omniscient mind might. But in the context of a far more humble human interaction, such an expectation is rarely in force.
It seems fair to me that if we do not reject foundationalism, because we can arrive at a more modest foundationalism, then we should be able to think in terms of a more modest causal connectedness as well. It all depends on what, technically, we are referring to when we use this word, “knowledge.” What is the context of our conversation? Who is having it? What are they expected to know? What are they attempting to prove or disprove?
Feldman also complains that a cause can technically lack evidence. He uses the example of the twins Trudy and Judy to make his point. Smith knows they are identical twins and, one day, sees Judy and is glad to see her. He has no evidence that she is Judy. He just believes it. Judy caused his belief when he saw her. If we include causal connectedness as a fourth condition, as I have it, this leaves evidence for justification, or reasons why, as a condition for knowledge. His counterexample no longer works: Smith does not know he sees Judy, because he lacks justification.
We might want to refine the words “causally connected” just as we might want to refine the word “justified.” The more precise and specific we make a formula, the less inclusive it is likely to be. This is why formulas tend to be so all-encompassing. They lose something compared with prose descriptions, which consider nuance and temper ideas. If we said “justified by evidence,” then what “evidence” is there that the cube root of x^5,391 is x^1,797? The word “evidence” is better expressed as justification. Similarly, the word “connected” is very general. It solves certain counterexamples to the traditional analysis of knowledge. No definition is quite perfect for that which knows less than Omniscience. A formula is one context. An essay is another. A conversation with a skeptic raises the standards for the term “knowledge.” A conversation with a politician can be nearly meaningless. Again and again, context matters.
The Fifth Element and a Sixth
If we are satisfied that these four conditions are an improvement, then we might ask whether we should add a fifth condition to the four that we now have, such as no false grounds, as offered by Michael Clark, or no defeating arguments, as offered by Keith Lehrer and Thomas Paxson. We have to consider these one at a time. Feldman points out that the no false grounds condition may be too narrow or too broad (Feldman 31-33). When it is too narrow, only the explicit steps of forming a belief are included. This skirts problematic false grounds in the background, leaving us with a better chance of saying we had knowledge. We can skip over inconvenient details and base our evidence on everything else. If we define false grounds too broadly, then almost any unfounded fact in the background can disqualify a proposition as knowledge. We wind up knowing very little. Again, I would say that it all depends on what one expects of the term “knowledge.” To meet the most common expectations, we might say that it “lacks significant false grounds” or that “there are no deal-breaking false grounds.” We would then have something like: (i) S believes that p; (ii) p is true; (iii) p is justified; (iv) S’s belief is causally connected to the truth; and (v) there are no significant false grounds for S’s belief in p.
This would meet Feldman’s two objections. False premises would certainly undermine the concept of knowledge. Lehrer and Paxson would add that additional evidence defeating an argument should cast sufficient doubt on a belief, undermining knowledge. This would apply even with all of the above in place. I see no escape from including such a provision, seeing that S believes at time t. If S fails to consider some new information that would defeat what they know, then it could only be bias, or some other cognitive failure in that segment of the mind normally aimed at the truth, that would sustain the belief. Knowledge disappears with belief, whether or not the facts behind that truth remain the same. A defeater may turn out to be entirely unsound, but as it pertains to S believing while the facts are still being gathered, it calls for some suspension of belief, at the least, for as long as it takes to validate the chain of premises the defeating premise may be based on. The defeater itself must, however, be of significant weight to warrant such a suspension of belief. I see no choice but to add a sixth condition from this: “(vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at time t.”
Feldman has this argument as “There is no true proposition t such that, if S were justified in believing t, then S would not be justified in believing p.” (Feldman 34) This seems too strong for the gist of the problem. There may be plenty of justification for belief in a proposition, including its causal connection to belief, especially if there are no significant false grounds for that belief, but if there is some defeating information presented to S, S should not believe it. This, it seems, should be the thrust of the no-defeater inclusion, not whether there are any such true and justified defeaters at all. Those defeaters have to be both true and justified, of course, but what matters is that their significance reach the point where S actually changes S’s mind and decides not to believe p on account of them. If S has no cognitive impairment toward the truth, S’s belief should correspond with what S knows in total. That is why I’ve formulated (vi) as I have, in place of Feldman’s rendering of Lehrer and Paxson’s own more complex formulation. For their part, Lehrer and Paxson add a defeater-defeater clause, which I must confess, as someone new to epistemology, I cannot comprehend. (Huemer 464-474) Feldman then addresses subjunctive conditionals – “sentences that say that if one thing were true, then another thing would be true.” (Feldman 35) He finds these confusing. They’ll get worse as I attempt to simplify and make good use of Nozick’s Truth-Tracking below, but presently, I would round out the foundationalist’s portion of the Contextualized SuperFoundherentism concept with the following stack:
S knows that p at time t iff: (i) S believes that p at t. (ii) p is true at t. (iii) p is justified at t. (iv) S’s belief is causally connected to the truth of p at t. (v) There are no significant false grounds for S’s belief in p. (vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t.
Adjusting for Coherentism
The essence of foundationalism is the notion that for any true belief, there must be either an immediate justification of that belief so that the belief is epistemically basic, or a basis for that belief that is epistemically basic, or a chain of beliefs leading to an epistemically basic belief for each premise of each belief. William Alston (Huemer 402-416) uses the paradigm of the roots of a tree, rather than of the foundation of a house, to describe this. The end of each root system terminates in an epistemically basic truth. If not, the proposition is unsound. Laurence Bonjour and those espousing coherentism reject sensory experience as epistemically basic on the ground that an experience must have some feature indicating its truth. Feldman doesn’t see the need for a separate inference, at the point of perception, that the perception is true. Bonjour responds to more modest foundationalism, while Will and Lehrer attack a more stringent form of it that demands infallibility, incorrigibility, and certainty from basic beliefs. Feldman and Alston defend a more modest form of foundationalism, but Bonjour raises something the modest foundationalists don’t counter adequately: we just don’t typically form beliefs by thinking through chains of premises to check whether everything is sound. We could all stand to test this. Next time you express an opinion with the words “I think that,” explore the roots of your basis for that opinion. Ask yourself whether you checked each premise for each premise for each premise. Are there any loops? Is there anything unjustified? Is there anything that just keeps going deeper and deeper into more and more mediate beliefs? You might find you have to consider thousands of things just to justify your belief in one thing.
As human beings, we hardly have the capacity to think through such things. We might not be able to survive if we verified every mediate belief consciously. We would be endlessly processing information. We are capable of thinking through things, of aiming at the truth, but we lack the cognitive ability that the imagination of Omniscience would have. Having a coherent system of beliefs is far more practical. We readily have an existing set of beliefs that we can check any new information against as we make split-second decisions. As I previously noted, Bonjour adds an “Observation Requirement” to his coherentism theory (Huemer 396). Coherentism fails if it isn’t open to change from new sensory input. This leads us to the question of what it takes to change sets of belief. Many ideas are dependent on other ideas. Remove one and a whole Jenga tower may fall. One observation might create a shift in the force.
So be it. The force of coherent belief is real in human beings. We may hold very different beliefs from one another. Our political leanings are obvious examples of this. Failing to look for information that would challenge our existing overall opinion runs counter to the proper function of a cognitive system aimed at the truth; such a system is aimed instead at what I will call “coherency bias.” Coherency is some part of “justification” in the formula for knowledge. Coherentism itself comes classified by today’s epistemologists as an “evidentialist” theory. This fact calls for some precaution in the justification clause. When we say something is “justified,” what is it justified by? Our biases? Of course not. It needs to be justified by that rare breed of coherence that continually checks its own facts, not to determine whether they are right, but to disprove itself, that it might be free from the bondage of bias and readily aimed at the truth, whatever that may be. The previously considered facts and opinions, having already been subjected to this process, ought to be reliable measures for anything new. But so what if they aren’t? Let them fall as they may. The power of knowledge includes the power to walk away from our presuppositions. Foundherentism, then, merely requires a slight adjustment to the third condition: (iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t.
Alvin Plantinga will have more to say about the proper function of a cognitive system than this, which we’ll talk about below, but I think that coherentism is not just about assembling many ideas together in a way that fits like a crossword puzzle. While this model seems more practical than foundationalism, the fact that not all people possess the same set of pre-existing beliefs, calls for what Plantinga provides with his Proper Function Theory. Coherentism is a cognitive predisposition. As such, it is a bridge from evidentialism to super-evidentialism, rather than non-evidentialism. Before we move on to how each of these theories should fit in with our more inclusive formula blend, let’s just look at what we have so far …
S knows that p at time t iff: (i) S believes that p at t. (ii) p is true at t. (iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t. (iv) S’s belief is causally connected to the truth of p at t. (v) There are no significant false grounds for S’s belief in p. (vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t.
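For readers who think better in code, the six-condition stack can be modeled as a simple conjunction of predicates. This is only an illustrative sketch of my own; every field name below is my shorthand for the corresponding condition, not anything from Feldman or Goldman.

```python
from dataclasses import dataclass

# Illustrative model of the six-condition formula; each field name
# is my own shorthand for the corresponding condition.
@dataclass
class EpistemicState:
    believes_p: bool                    # (i) S believes that p at t
    p_true: bool                        # (ii) p is true at t
    justified_without_bias: bool        # (iii) truth-aimed, coherency-bias-checked system
    causally_connected: bool            # (iv) belief causally connected to the truth of p
    no_significant_false_grounds: bool  # (v)
    no_weighty_defeaters: bool          # (vi)

def knows(state: EpistemicState) -> bool:
    """S knows that p at t iff all six conditions hold together."""
    return all(vars(state).values())

# Gettier's Smith holds a true, justified belief, but it is not
# causally connected to the truth and rests on false grounds,
# so (iv) and (v) fail and knows() comes out False.
smith = EpistemicState(True, True, True, False, False, True)
print(knows(smith))  # False
```

The point of the sketch is just that knowledge, on this blended view, is an all-or-nothing conjunction: a single failed condition is enough to withhold the verdict.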
Super-Evidentialist Contributions
With this step in building up a blended theory of knowledge, we are ready to consider more carefully the Super-Evidentialist contributions that might help enhance this formula into something approximating a definitive account. I’ve already brought Plantinga into this by relating coherentism to cognitive disposition, but there is more to consider.
Feldman orders his introduction to the “non-evidentialists” with Causal Theory first, Truth-Tracking second, Reliabilism third, and Proper Function Theory fourth. About twelve years after he introduced the Causal Connection Theory, Goldman offered a Reliabilism Theory as well. Reliabilism focuses on the method of belief formation: if the method or process is statistically reliable, this increases the fair basis for belief. Feldman’s fondness for foundationalism has him lauding Goldman’s distinction between conditionally reliable belief-dependent processes and reliable belief-independent processes. Of course, only if both types in any chain of beliefs are reliable processes can a belief be justified.
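The statistical heart of reliabilism can be sketched crudely: rate a belief-forming process by its track record of producing true beliefs. The 0.9 threshold and the sample track records below are my own assumptions, purely for illustration; nothing here is a figure from Goldman.

```python
# Crude sketch of the reliabilist idea: a process counts as reliable
# when its track record of true beliefs clears a threshold. The
# threshold and the track records are illustrative assumptions.
def reliability(track_record):
    """Fraction of a process's past beliefs that turned out true."""
    return sum(track_record) / len(track_record)

def is_reliable(track_record, threshold=0.9):
    return reliability(track_record) >= threshold

# Made-up track records for two belief-forming processes.
careful_vision = [True] * 95 + [False] * 5     # 95% true
wishful_thinking = [True] * 40 + [False] * 60  # 40% true

print(is_reliable(careful_vision))    # True
print(is_reliable(wishful_thinking))  # False
```

Even this toy version shows why the view is attractive: it grades processes, not individual beliefs, so a single lucky guess produced by wishful thinking still fails the test.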
For Feldman, justification by reliable process parallels his foundationalist dependency theory, but he favors evidence over process and knocks down process with a few objections we should consider. His first is the oft-heard brain in a vat problem. This is similar to the Matrix, only instead of a whole body hooked up to a computer simulation program, all that remains is the brain. To distinguish the brain from what the brain is being led to believe, Feldman gives each a name – Brain and Brian. Brain does not know Brain has no hands. Brian thinks he has a brain and does not know Brain exists. Brian has no other reliable process to go by than what works in Brian’s simulated world. So when Brian thinks he sees Brian’s hands, contextually, Brian is using a reliable process for determining that Brian has hands. Brain, by contrast, lives in the context of an actual, rather than simulated, world, where Brain is a brain in a vat. Brain supposes that the process for determining that Brian has hands is the simulation Brain experiences as Brian, not knowing anything about any simulation any more than Brian does, but Brain is wrong about that process as well. The simulation is, therefore, anything but reliable for determining the truth. Brain actually has no access for determining anything at all, because Brain is unaware that it is in a vat. There is nothing that Brain can know, except what Descartes knew.
So that’s Feldman’s objection to Goldman’s Reliabilism, but here’s my take. Whenever anyone turns to the brain in a vat argument, there is a chance that more reasonable objections are lacking. How can it be unreasonable to ask that the methods for determining the truth be statistically reliable? This can only increase the chances that what we believe is true. While it may be entirely true that an argument like Brian and Brain can defeat such methods, these scenarios fall under the category of no defeating evidence with weight significant enough to warrant suspension of our belief in the proposition – here the proposition being that statistically proven methods of discerning the truth of beliefs are a reliable justification. That said, why would we hesitate to add Reliabilism to our formula?
For his part, Goldman doesn’t confront the brain in a vat argument that I’m aware of, but he does respond to the complaint that accidental or unknown reliability defeats his basic reliability condition. I’ll get to that in a minute. To put his basic reliabilism into some simpler words than his own:
If the method or process is statistically reliable, this increases the fair basis for belief. Only conditionally reliable belief-dependent processes and belief-independent processes that are reliable in any chain of beliefs can produce justified beliefs.
This is a rather rigid statement that should be rejected for reasons I’ll discuss below, so we can see where we are as we build our formula. We can see here that Goldman’s focus is on condition (iii), once again. We’ve already added to this condition so that it reads: “(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t.” We’ve also moved Goldman’s causal connection to condition (iv): “S’s belief is causally connected to the truth of p at t.” We should add his caveat – “in an appropriate way” – to condition (iv), to cover such things as Smith’s inappropriate response to Trudy and Judy. This gives us (iv): “S’s belief is causally connected to the truth of p in an appropriate way at t.”
Next, we’d like to include a less rigid reliabilism statement as part of our formula, if we can come up with one. To reduce his verbiage, Goldman basically says, “Only conditionally reliable belief-dependent processes and belief-independent processes that are reliable in any chain of beliefs can produce justified beliefs.” Since we have things like clairvoyants creating accidental reliability from time to time (Feldman 95-96, per Bonjour), we need to soften the definition. Remember, we are including this with what we already have in condition three:
(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t,
We could perhaps change this to:
(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable.
Goldman, seeing a problem with defeating arguments, also proposes the following condition:
“If S’s belief in p at t results from a reliable cognitive process, and there is no reliable or conditionally reliable process available to S which, had it been used by S in addition to the process actually used, would have resulted in S’s not believing p at t, then S’s belief in p at t is justified.” (Feldman 95)
Since we already have a no-defeater clause in (vi), this is unnecessarily bulky. Let’s not include it. Feldman complains that Goldman fails to go into detail about process types. Goldman calls an individual process a token; a type is a general category of process. A single token process may involve multiple types. I’ll ignore these differences, because including them in a single formula that is meant to be all-encompassing is like saying “any whole number.” It doesn’t have to specify every detail. If we are looking for a definition of knowledge that we can agree on, at least in certain contexts, I would exclude the specifications.
So far, this gives us …
S knows that p at time t iff: (i) S believes that p at t. (ii) p is true at t. (iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable. (iv) S’s belief is causally connected to the truth of p in an appropriate way at t. (v) There are no significant false grounds for S’s belief in p. (vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t.
Truth-Tracking
I’ve saved the best and the worst for last and we’ve come full circle in this discussion. Robert Nozick wants to elevate the standard of truth to include things that could have been. It’s not an easy theory to understand, so I’ve broken it down into an imaginary conversation that I’ve had at a party with philosophers from all ages. This party conversation is something I might revise from time to time. Here is an excerpt of one of its earlier drafts. (RF is Richard Feldman, RN is Robert Nozick, JC is me, James Carvin, JW is Dr. Jeffery Watson, my epistemology professor) …
RF: You people just think up examples to give you the results that you want. There’s no general theory here. This violates the Same Evidence Principle. Evaluation supervenes on evidence.
RN: I appreciate that you strive for high and consistent standards, Richard, but I have to agree that causal chains might improve over reasons alone for justification. Method certainly matters. So does process. And what you want is not just any process type, but something more reliably reliable. The only way to do this would be through a process that actually tracks the truth. I’ll admit you do need a good method, but that method also has to be used in the right way. You ought to be asking yourself: if things had been different, would you still have known? To be epistemically responsible, you need to track counterfactuals. S only knows p if S believes p, p is true, and S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p. Do this and you can’t go wrong.
RF: Huh?
RN: Consider the broken clock you mentioned, which you looked on only at lucky times, getting it right, but didn’t know. Why? Because the cause was right but the evidence was unjustified. The premise, you would argue, was that the clock worked, but it didn’t. So for you, as a foundationalist, there would be no knowledge, but Alvin’s causal theory fails, as you said. But had the clock-watcher tracked the truth using the clock method, he would have learned within seconds that the clock was broken. He would then have found a different method more suitable for determining the correct time, or simply confessed he didn’t know what time it was and been correct in that belief instead. There are many examples like this – it could be a thermometer that was broken instead. Knowers are truth trackers.
RF: Well, that would solve Edmund’s cases just as neatly as Alvin’s solution would.
RN: Indeed, and there are many other such cases of lucky knowledge. For instance, Ms. Black, working in her office, getting up to stretch – she looks out the window – and just happens to see a mugging on the street and becomes a witness. Her method is luck. What kind of a reliable process is that? In fact, she has no method. Yet she certainly saw. And seeing was her process. She didn’t track the truth, because she wasn’t watching for it over time. That’s why I said, “S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p.” At that time, her method was seeing from a timely stretch, but it wasn’t even what she intended to do. One might suppose she tracked it over the time she needed to – just that moment. But this is still not enough for knowledge, because what if things had been different? What if she had stretched at any other time? Then she would not have known. And I say that if you would not have known, then you aren’t tracking the truth. I raise the standard of what knowledge ought to mean in this way. I say this because truth matters. In many cases our lives may depend on it!
JC: I agree, but I’m not sure I understand. You are introducing counterfactuals, saying that something is not knowledge unless one can say that if things had been different, then such and such would be true, and of course one would have to be right about that. Do you mean that one should be able to know both the truth and the falsity of p under any condition?
Nozick goes to the chalk board.
RN: Yes. However, I would temper this by distance to be practical, and we are only talking about the truth tracking method applied. Maybe an Omniscient being could know all possibilities, but here we are talking about the responsibility toward truth that human beings ought to consider. So I’ll offer a third and fourth condition for knowledge as follows. (3) not-p → not-(S believes that p). This means that if p weren’t true, S wouldn’t believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that under any other nearby condition, if p were true, S would believe that p and not disbelieve that p were true. This is actually a step up from a fourth condition I previously had expressed, that if p were true, S would believe it – (iv) p → S believes that p. But, as I said, we human beings are quite limited in our methods and knowledge, so for a realistic aim at what one would use for saying that someone knew something, at least given a certain method, this is the responsible way to treat whether one knows something or not.
JC: I think I’m starting to understand. Can you offer another example?
RN: Certainly. Consider the inattentive security guard who plays Sudoku all night long instead of responsibly watching the monitors in his store. He gets lucky and catches a thief out of the corner of his eye as he thinks about something completely different. His assigned method is to watch. And indeed, he sees. But he is not tracking the truth by that method. Therefore, even though it could be said that he has knowledge of the thief, the standard of knowledge I am referring to has not been met. He was derelict in his duty, which was to track the truth by the method of watching the store monitors. Had he looked up at some other time, he would not have believed p or known p was true. Had p been untrue, he would not have known whether p were true or not either way. This is not a responsible way of knowing things. Tracking the truth is a responsibility. Using reliable processes for doing so goes along with this responsibility.
RF: I used that very same example to show why tracking was not necessary for knowledge.
JC: Clearly, the difference is in what the standards are for the term.
Just then Saul Kripke comes in saying, “truth tracking is hocus-pocus. You’ve heard about that town with fake barns, right? The town replaced their old barns and left a few standing, but ran out of red paint, so they created a bunch of white barn facades to please the tourists. Smith drives through the town, sees a red barn and deduces that he sees a red barn. Now if Smith had tracked the truth, he would have known that all the white barns were fake. As it stands, he got lucky and properly identified a real barn, a red one, but since he didn’t track it, all he really knows is that he saw a red barn. However, he doesn’t know that he saw a barn, because he wasn’t tracking the truth. See the problem?”
JC: I’m not sure.
JW: According to Truth-Tracking theory, he saw a red barn, James, but he did not see a barn. You got this wrong on the quiz. Try again … The Fake Barns case counts against the Truth Tracking theory because either Smith’s belief that he sees a red barn ________________, but yet _____________ … or else Smith’s belief that he sees a barn __________, but yet ___________.
JC: The choices were:
A) doesn’t track the truth; is knowledge… tracks the truth; isn’t knowledge B) tracks the truth; is knowledge… doesn’t track the truth; isn’t knowledge C) doesn’t track the truth; isn’t knowledge… tracks the truth; is knowledge D) tracks the truth; isn’t knowledge… doesn’t track the truth; is knowledge
I chose B. I can see I was hasty on that. It’s D. Right?
JW: I won’t give away the answers, James. I’ll give you my opinion though. I think Nozick’s theory is “half right.” … “Truth-Tracking is really getting at the counterfactual: would S have believed p if not-p? Objections are to the condition that S would have believed not-p if p. So a revised Truth-Tracking theory would be a causal theory.” (Dr. Watson Unit 4 Video Lectures 4.3 The Truth-Tracking Theory)
JC: Truth-tracking is confusing, Doc. What does he mean by “tempered by distance”?
JW: He’s talking about how farfetched the alternate world of possibilities might be. It’s “in the neighborhood” if it’s relevant to tracking the truth about something specific using a specific method. He doesn’t mean every possibility, only the stuff directly related to tracking the facts.
This ends the portion of the dialog. For the entirety of the conversation, see my blog, Parallel Universe Epistemology Party, from which this is an excerpt.
I write this sort of thing to simplify the complex. Here, two very difficult formulations are explained. (3) not-p → not-(S believes that p). This means that if p weren’t true, S wouldn’t believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that if p were true, S would believe that p and not disbelieve that p were true. For Nozick, (1) is p is true and (2) is S believes that p. So adding his third and fourth condition for S knows that p iff, in plain English, Nozick is saying (3) if p weren’t true, S wouldn’t believe that p and (4) if p were true, S would believe that p and not disbelieve that p were true. Again …
(i) p is true (ii) S believes that p is true (iii) If p weren’t true, S wouldn’t believe p (iv) if p were true, S would believe that p and not disbelieve that p were true
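For readers who prefer the symbols, the four conditions can be set out in the standard notation for subjunctive conditionals, writing $\Box\!\rightarrow$ for “if it were the case that … then it would be the case that …” (the arrow symbol is the common convention, not Nozick’s exact typography):

```latex
\begin{align*}
&\text{(1)}\quad p \\
&\text{(2)}\quad S \text{ believes that } p \\
&\text{(3)}\quad \neg p \;\Box\!\rightarrow\; \neg(S \text{ believes that } p) \\
&\text{(4)}\quad p \;\Box\!\rightarrow\; \big(S \text{ believes that } p
   \;\wedge\; \neg(S \text{ believes that } \neg p)\big)
\end{align*}
```

Condition (3) is often called sensitivity and condition (4) adherence; both are evaluated over nearby possible worlds rather than all worlds, which is what the “tempered by distance” remark is about.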
Of course, since we aren’t talking about the imagination of Omniscience here, not in this context anyway, as we search for ways that human beings might really know things, as we raise the standard for what we would consider knowledge to be, we might deny that something was knowledge, or somebody really knew something if they weren’t actually tracking it. The examples from the dialog at the party explain this.
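The tracking idea can also be run mechanically as a toy possible-worlds check (my own sketch, not Nozick’s formalism): represent each nearby world as a pair recording whether p is true there and whether S believes p there, then test conditions (3) and (4) across those worlds. For simplicity I fold “does not disbelieve p” into “believes p.”

```python
# Toy possible-worlds check of Nozick's tracking conditions
# (my own sketch, not Nozick's formalism). Each world is a pair
# (p_is_true, s_believes_p).

def tracks_truth(actual, nearby_worlds):
    p_true, believes = actual
    # Conditions (1) and (2): p is true and S believes p in the actual world.
    if not (p_true and believes):
        return False
    for world_p, world_belief in nearby_worlds:
        # Condition (3), sensitivity: where p is false, S must not believe p.
        if not world_p and world_belief:
            return False
        # Condition (4), adherence: where p is true, S must believe p.
        if world_p and not world_belief:
            return False
    return True

# Ms. Black's lucky glance: in nearby worlds she stretches at another
# moment, the mugging still happens, but she forms no belief about it,
# so adherence fails.
lucky = tracks_truth((True, True), [(True, False), (True, False)])
# An attentive watcher believes p exactly when it is true.
attentive = tracks_truth((True, True), [(True, True), (False, False)])
print(lucky, attentive)  # False True
```

The choice of which worlds count as “nearby” is doing all the philosophical work, of course; the code only shows how the two counterfactual conditions constrain belief across whatever worlds we admit.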
For this reason, I consider Nozick’s Truth-Tracking Theory to be somewhat optional depending on context. What context? Let’s say we are fighting hackers who really want to check our vulnerabilities and break into our National Treasury. Did we really check everything? We create redundant systems and then we monitor them. How often? What methods do we employ? We have to verify that everything is working and no thieves are entering continually. A certain level of importance warrants a higher standard of certainty than most other types of what we would consider knowledge. Lives depend on it. An airline pilot does routine checks we would not likely perform on our automobiles. We might say we knew our car was working because we checked our tires a week ago, or that we had a scheduled maintenance just yesterday. Does that mean it works? Really? Do we know that? How do we know someone didn’t just slash our tire? We don’t know. So is it knowledge even if we know we just paid for regular maintenance yesterday? How about if you have a compulsion to jump off a building because you think you can fly? Shouldn’t we be asking whether we are dreaming? What is our epistemic responsibility? You see that context matters. It’s sort of knowledge. It’s more knowledge-like when we’ve been checking, taking every precaution. This is what Nozick is getting at. It’s actually a fairly simple concept when its intent isn’t obscured by fancy word formulations, definitions and defeating counterexamples.
It all brings me back to the imagination of Omniscience, the ultimate Truth-Tracker, who not only sees the neighborhood of possibilities, but every conceivable possibility. That is a context that matters for Pamalogy.
A Contextualized Super-Foundherentist Formula
We are now ready for a final definition of knowledge. I’m not so certain I agree with Dr. Watson about Truth-Tracking being half right. Certainly, the formula is confusing. I think Nozick meant well in adding more to it than he originally had. Maybe what he meant to say was that in nearby conditions, if the truth had been different but true, I still would have known it, or if things had been different and p wasn’t true, I would have known that too. I might even add that if the truth had been different, no matter what it was, I would have known it, whether a proposition was true or false. None of those formulations fail to make sense for someone who simply wants a higher standard of knowledge than what we typically employ. To say that someone doesn’t know something if all these conditions aren’t true is not to say they have no justified true belief that something either is or is not true. It is simply to say that they haven’t met Nozick’s standard for truth-tracking. And since there are multiple standards, let’s see if we can offer a contextualizing option in the theory itself. So with those thoughts, I can add this as condition (vii):
S knows that p at time t iff: (i) S believes that p at t. (ii) p is true at t. (iii) p must be justified according to a sufficiently maintained cognitive system in an unimpairing environment that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable. (iv) S’s belief is causally connected to the truth of p in an appropriate way at t. (v) There are no significant false grounds for S’s belief in p. (vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t. (vii) all of the above and only all of the above, unless truth tracking or some higher standard of a definition of knowledge is expected in context, in which case: (vii-a) in a condition that was monitored, if p weren’t true, S wouldn’t believe p; (vii-b) if p were true in a condition that was monitored and wasn’t farfetched, S would believe that p and would not disbelieve that p were true.
(vii-c) in the context of an even higher standard, whether p were true or not, in a superhuman condition, the truth of p or not-p would be believed and known by that superhuman entity, even if the counterfactual subjunctive conditions were farfetched, to the extent that it would be appropriate to the relative cognitive capacities and corresponding expectations toward those superhuman entities.
The seventh condition thus scales up gradually in terms of the standard of knowledge as S changes context, both as to the environment of S and the increasingly capable cognitive types of entities S might be, as to how they might be designed with the capability of aiming at the truth. The scale of expectations for knowledge keeps increasing until the subject S is an Omniscient being with the capacity to imagine all actual and imaginable possibilities.
You can see, then, that I’ve dropped the Truth-Tracking expectation for human beings. We can say that a woman looking out the window at just the right time to see a mugging, just because she is stretching, not because she is monitoring, both does and does not know that a mugging has taken place. Her sensory perception is all that is required in one context for the meaning of knowledge, but not in another. Further, her random seeing is lucky, but her sight itself is not something lacking knowledge. Her method is seeing. And seeing is reliable. It doesn’t invalidate the formula with a counterfactual. Condemnation of the security guard for failure to watch more attentively is warranted. A higher expectation for watching regularly is an epistemic responsibility. In this way, contextualized superfoundherentism trumps the counterexample of the inattentive security guard.
Footnotes
General Note: What I call “Feldman” refers to the author of the primary textbook for our Epistemology 330 course at ASU. There will be lots of references to it in this article. Rather than using the ibid. method, I’ll use an informal inline citation technique with page numbers as I go. The plus is you won’t have to scroll up and down to see these footnotes. And since this is the web, there may be some papers I’ll link you to directly. The other textbook we used I call “Huemer.” So where you see something like (Huemer 487), what you are seeing is the page number in that source text. Huemer is an anthology of contemporary writings, so quotes from other authors will be found there with inline Huemer markings and page numbers. The first time an author is introduced, I’ll put the reference to the article title in as a footnote.
Feldman, Richard. Epistemology, Prentice Hall, New Jersey, 2003
Haack, Susan. “A Foundherentist Theory of Empirical Justification” Epistemology: Contemporary Readings, ed. Huemer, Michael. Routledge, London and New York, 2002, pp. 417-430.
We studied Contextualism in my Introduction to Philosophy Course with Prof. Nestor Pinillos and it struck a chord with me. Keith DeRose offered some potent examples, as I recall. “Contextualism” can be contrasted with “Invariantism.” Feldman has a few pages on it (Feldman 156-160), but we haven’t yet arrived at them in my Epistemology course at ASU with Dr. Watson. For those interested in digging deeper, here is a summary article on it in the Stanford Encyclopedia of Philosophy – https://plato.stanford.edu/entries/contextualism-epistemology/ 2007, rev. 2016