The Multiverse

I’ve become an insurance agent. If I had my way, things would be very different. There would be no need for insurance – neither health insurance, nor life insurance. There would be some forms of property insurance, but that wouldn’t involve money. It would involve privilege and rights. It’s a long story. But I don’t mind sharing it. Do you have a moment?

Well, if you are interested in insurance as privilege, click here. That’ll take care of that. But to get to what drives me, I’ve got to start at the beginning. The drift of it all begins with the question of what is wrong with everything.

I’m serious. What is wrong with everything?

We don’t know the small, the large, the young or the old. Could pure reason explain it better?

You might think I’m asking a rhetorical question, but I am serious. And on a metaphysical level, I’m being literal – what is wrong with everything? And …

… what is everything to begin with? The particle physicists can’t even tell you what quarks are made of. The cosmologists don’t know how big or old the Universe is. How can we delve into what is wrong with everything if we don’t know what everything is?

We might also ask why everything is. Why?

As a philosopher, I think the answer to this second question provides the key to the first. The purpose of everything is the maximization of awesomeness. This is not just a catchy phrase. It is the actual reason. If you want to follow my reasoning and see how I came up with this answer, subscribe to Season One of my Podcast. I’m 100% serious.

So that takes care of that.

If you follow those links, you’ll see that I’ve covered a lot of ground I won’t have to repeat here. Let’s start with that context. Assume for a minute I’m right. If so, then the why question leads us to other big questions. Is what concerns us on a daily basis fulfilling our purpose? If we are here to maximize awesomeness, then how? If your life is just “blah” and “getting by,” then does it concern you at all that you may have been missing out on something wonderful? Why is your answer not the maximization of your awesomeness?!

What if everything was very different? Imagine, for instance, if we had always lived in an incentivized asynalagonomy, rather than in the constant tension between capitalist and socialist dystopias. (Click here if you don’t know what that means).

There are better ways to deal with economic problems than plagues and wars. Just sayin’.

If only more people knew about the HAND System. (Refer to the above link if you don’t know what the HAND System is). I believe that in the Multiverse, there are many Universes that incorporate the HAND System. I believe there are a vast number of worlds that aren’t nearly this dysfunctional. I’m presently in the insurance business because this world doesn’t know anything about the HAND System. They don’t know that it would fix this world much better than insurance ever could. People don’t know. So I have to settle for the insurance business.

I know that all sounds strange. I’m a little geeky, I know, but I’m not so other-worldly. In fact, I’m perfectly normal and safe to be around – just a little different than what you’re used to. And I can be practical. I may not be of this world, but I live as though I’m really in it. My philosophy holds that this world is at least partially real, even if it seems to be falling apart. Belief in the Multiverse as a product of Maximized Awesomeness does not make a person so other-worldly that they can be no earthly good. I see it as my mission to make this world a little less bad for people, if I can.

Other worldly leadership?

The Ghost Machine was another example of an attempt I made to maximize the good as best I knew how, given what I knew and what I thought I might be able to do. As with all of my business ventures, I started with the belief that it wouldn’t necessarily take money to make money. I still haven’t given up on that thought, but I’m reminded daily of how pollyannic the concept is. I so much wanted it to be true. I had very little money. If I wanted to be in business for myself, it would have to be true.

My father hadn’t wanted me to go into business for myself. He knew I wanted to have a positive impact in this world, one that required some entrepreneurial courage, but he wanted me to keep the good government job I had. He said that all the people he knew who’d gone into business for themselves had sacrificed a life without worry. Taking risks could rob a person of their time and of their health. He tried to spare me. He meant well. And I know he was right.

Dad spoke from his personal pain. He had lived in a very different world. It started well enough. He had inherited a fortune from his own father. He started out with a great career in textiles that his dad, my Pop Pop, Charles Carvin, Sr., had taught him. Dad became a marketing man in the textile industry. Even before his Allied Chemical days, he was doing commercials for Chemstrand. If you followed the Netflix series Mad Men, you may have caught some of the flavor of my father. He had a trophy wife in Rye. He took the same train from Mamaroneck to Grand Central Station and back every day. “Draper” was even the name of one of his peer VPs at Allied.

One of Dad’s Chemstrand Commercials – images of home

Please watch the video above – I hope you enjoy it. See what I mean by “flavor”? And if you are really astute, you may have found the word “Cumuloft” familiar too. Well, I should tell you that I was no more than one or two years old when the above commercial was made. Dad’s move to Allied’s Caprolan from Chemstrand was in 1962, and there was no more talk of any Cumuloft after that, if there ever was any in my presence. Assuming it wasn’t just a coincidence, the term “Cumuloft” somehow managed to stick in my young brain for forty years or so before I incorporated it into the Ghost Machine. The Cumuloft was the Ghost Machine’s “cloud” security storage system. Then one day, I decided to check to see if “Cumuloft” was a word I could trademark. The Internet didn’t show me anything at the turn of the Millennium, as I was developing the Ghost Machine’s blueprints. I only encountered my own father’s commercial well after that search. I wonder what else is stored in my head without my knowledge. I don’t think we found this video until about 2016. Development of the mobile app started in 2011.

But I digress. Thinking about Dad can make my mind wander. I was talking to you about the insurance business. Specifically, I was thinking about how a person like me, whose head is happier in the clouds, could possibly take interest in solving the problems inherent in dystopian economic systems. Insurance is a help. I was telling you it was the best I could do. At least it helps some people in this world. I’m sure there are many Universes much like this one where it comes in handy.

And I was telling you about the Ghost Machine. Like the insurance business, it would have put a band-aid on socio-capitalism. The idea was to create a fun way for people to make money. This world can be a very dull and even cruel place. Why isn’t earning a living easy and fun? It’s such a shame I was unable to raise the cash needed to build the Ghost Machine. The world would be a very different place right now if I had succeeded.

Use the ghost machine's finder to plan your ghost hunting all day long every day so you can earn Ghost Bucks!
The Ghost Machine App
Pokemon Go meets Bitcoin

with some ghostly edutainment

At least in the insurance business I don’t have to start from scratch. There’s already billions and even trillions of dollars in this business. Getting my personal brokerage started has taken time, more than money. I was able to survive on rideshare earnings while getting my first few sales. It didn’t take hundreds of thousands of dollars.

But you deserve a little more back story.

Dad lost his fortune. My mom, who outlived him by decades, lived a miserly life after he passed. But she was spared the difficult ending to her life that my sister and brother had. She was in great health clear up to the age of ninety. Then she had a heart attack. She never had to pay for assisted living, much less a nursing facility.

Contrast that with my sister, Corinne. She had a stroke ten years before she died. The majority of her remaining years were spent on a feeding tube in a nursing home. Her husband abandoned her. Her four remaining brothers deliberated over her care. One wanted to put her on palliative care, and another thought she wouldn’t want that and that the right thing to do was to bring her to a new nursing home in Tallahassee. He prevailed. And that is why we moved here – to watch over her before she died.

My wife, six years younger than I, had also had a stroke, the year before my sister, six years my elder, had hers. Lisa was only thirty-eight. She’d been in perfect health until it happened in 2004, but she’s been paralyzed on her left side ever since. It shows me that bad things can happen to anyone at any time. Jeanne Calment, who according to the Guinness Book of World Records is the oldest documented woman ever to live, was a smoker. Lisa never smoked, drank or took drugs, and she worked out regularly. Bad things happen to good people. It doesn’t matter how rare it is. It happens. And that is why we have insurance.

We still had some good times after Lisa’s stroke even when we had no money. Hardship is bad. But you can work through it.

With that picture, it may make more sense why an inventive philosopher would wind up in the insurance business. It positions me to help people deal with the hard realities that exist when bad health, or death, comes at an unexpected time. We can deal with problems before they happen. I can’t prevent an occult arteriovenous malformation, like Lisa’s, from bursting. But I can make it easier for families to deal with the financial issues that ensue if and when such a thing happens.

And it wasn’t just Lisa, or my sister Corinne. Do you know what was really eye-opening to me? It was visiting Corinne in that nursing home. Nursing homes are places filled with people suffering. I wish more people knew. Maybe they would visit them. There is so much loneliness there.

And they need care. They shouldn’t go without much needed care. Caregiving takes money.

I don’t mean family caregivers. Lord knows I’ve never been paid for family caregiving and few have. Maybe we all should have been and maybe there are even some insurance plans, ones that I can even write for, that actually cover that to some extent. If anyone would know about those plans, I would. But what I mainly mean is professional nurses, doctors, medications and treatments. It all has to be paid for.

So, I’ve decided that the best way for me to maximize my personal awesomeness is to learn all I can about health insurance for young and old. And life insurance too. When I was young, I had my days of fun and adventure. I have no shortage of stories to tell. But I continue to ask how I can do the most good for the most people – not just myself – before I die. I continue to focus the answer on the areas I am most capable in. I still have a few brain cells left. Let me see what I can dig up for you. You have not because you ask not.

Philosophy

In this section, you will find out about my philosophical thinking. I will start with a summary of my academic history.

Academic Background

In 1980, I graduated with a degree in music composition from the University of South Carolina. Composition, invention, creativity, innovation and discipline are what I was trained in. Expect new ideas to mix with old ones in my philosophical views.

From 1982-1986, I studied theology part time at St. Vincent de Paul Regional Seminary in Boynton Beach, Florida. I learned a lot about what Catholics think. I considered the priesthood during that time, but in the end, I chose not to be Roman Catholic.

From 1991-1995, I studied theology at St. Michael Academy of Eschatology in West Palm Beach, where I received a Masters Degree in Christian Theology. The school has an untraditional accreditation, which is not accepted in many places. I was an adjunct professor there while earning my degree, and in 2008 I served as its Business Administrator and built the majority of its present web site and online curriculum.

The jurisdiction is controversial with the Orthodox Church. Expect me to have some familiarity with Orthodox politics and Church history as an insider, but please do not judge me by association. I was a helpful guy learning what I could. Eschatology is the study of the last things, including the return of Jesus as Lord. It is a subject I have considered more deeply than most.

Some might not consider me to be an Orthodox Christian despite having received the rite of chrismation at St. Mary’s Antiochian Orthodox Church in 1996. Whether the Church would consider me a heretic should have more to do with how they might view my philosophical system than any association I may have had with the Metropolitan Xaralambos, who as far as I know, still believes himself to be one of the two last days witnesses the Bible describes. For the record, I did not hold that he was. I will simply state here that I understand Orthodox eschatology and not many do.

In 2019, I enrolled as a student of Interdisciplinary Studies at Arizona State University, with concentrations in Organizational Leadership and Philosophy. I expect to earn my degree there by 2022. There are many reasons I chose to go back to school. Among them, I wanted to start a Philosophical Society that would serve to discuss my philosophical system, which I call Pamalogy.

Pamalogy

Pamalogy is short for “Poly Astronomically Maximized Awesomeology.” It has a metaphysical side and an axiological side. I should start with the axiological side. How does one maximize their awesomeness? What does it mean to maximize your awesomeness? The word “axiology” comes from the Greek word “axios,” which means worthy. What is worthy? Awesomeness and worthiness might be considered synonyms. Generally, there are two categories of axiology – ethics and aesthetics. Aesthetics might ask what is beautiful. Ethics might ask what is good or bad.

One thing I think is awesome is practicing what we preach. I much prefer action to talk. Sure, I’ll leave some writing, but rather than trying to talk philosophy with anyone, my goal is to start a Pamalogy Center, where all sorts of worthy projects might be realized. A Pamalogy Center is an arts and tech guild. It would feature a music and video recording studio, co-working spaces, soundproof practice rooms, instrument and equipment rentals, art displays, an informal theater and a relaxing bar and lounge for members. It would focus on collaborative projects and royalty and equity sharing for lean start-ups. It would encourage members to contribute to other members’ work to improve their own member status. Member status earns members the right to ask for more help from other members. Artists and inventors would see it as an incubator for their projects.

Pamalogy Discernment Chart

The metaphysical side of Pamalogy asks what maximized awesomeness would be in an absolute sense. It would suppose that no one can conceive of it. It is that than which there could be nothing better. It would compare maximized awesomeness to the concept of infinity. No one can count to infinity, yet we do understand the concept to mean that any number we can think of is always less than infinity. Maximized Awesomeness is like that. There is something greater and that thing does not exclude the goodness one might positively conceive of.

Pamalogical metaphysics would ask whether this abstract Maximized Awesomeness concept is part of reality. Does it exist? If so, why is life not always awesome to us? Why do bad things happen? Also, is it possible for Maximized Awesomeness to exist in just one Universe? I think the answer to that question is no. I think that for every good thing to be real, there must be many Universes – not just one. That is why I call it, “Poly Astronomical.” If Maximized Awesomeness is real, there are many astronomies, not just one.

New Words

Every now and then I’ll make up a word and start using it. Pamalogy is one such word. I have a reason for coining certain terms related to philosophy or theology. Usually, it has to do with the fact that there is no other single word I know of to describe something. It may have to do with wanting to be specific or to distinguish one idea from another similar idea. If I introduce a word that I’ve made up, I’ll spend some time defining it so we can both understand what that word means and start using it together. Afterwards, maybe it will become part of the English language. That would be cool but the vanity behind coining a term is not what drives me. It’s about precision.

Here you will find a JamesCarvin.com menu list that will lead to some of my philosophy articles and web properties. This menu will use some of those new words. So, right here, I’ll explain their meaning up front and alphabetically.

Asynalagonomy – from the Greek root, συναλαγων (trade). The prefix “a” means not. “Nomy” means law. I’m using it here in the same way you would use it in the word “economy.” Together it refers to an economic system, or set of laws, without trade – a tradeless economy. In general, a tradeless economy would fail. If anyone in such an economy were to own property, they would not be permitted to sell it or trade it. This would make it impossible to possess anything for any reason other than personal consumption. The economy would be frozen. No one could have a business. No one could work for money. Money itself is an exchange. A true asynalagonomy does not have money, or any other form of exchange. Fortunately, no such thing exists.

An incentivized asynalagonomy is something quite different. As a rule, there would be no trade but there would be incentives to work as determined by a system that managed the incentives. In such a system, workers earn privileges. Then in order to obtain goods and services they exercise the privileges they’ve earned. They don’t trade those privileges. They can only earn them, or fail to earn them, and thus lose them. A system of incentives can keep an economy moving because workers have reason to produce goods and services, which can then be consumed by those with sufficient privilege to them. I will have much to say about incentivized asynalagonomies because pamalonomies are a type of incentivized asynalagonomy. See pamalonomy.
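The earn-but-never-trade rule can be sketched in a few lines of code. This is only a toy illustration of the definition above – it is not the HAND System, and every class and method name in it is hypothetical, invented for this sketch.

```python
# Toy sketch of an incentivized asynalagonomy.
# All names here are hypothetical, invented for illustration;
# this does not implement the HAND System or any real specification.

class Worker:
    def __init__(self, name):
        self.name = name
        self.privilege = 0  # earned standing, not a currency

    def earn(self, amount):
        # Privileges are granted by the managing system for work performed.
        self.privilege += amount

    def consume(self, cost):
        # Exercising a privilege checks it against earned standing.
        # Nothing is transferred to any other person.
        if self.privilege < cost:
            raise PermissionError(f"{self.name} lacks sufficient privilege")
        self.privilege -= cost

    def trade(self, other, amount):
        # By definition, no trade is possible in an asynalagonomy:
        # privileges can be earned or lost, never exchanged.
        raise NotImplementedError("privileges cannot be traded")
```

Here `consume` assumes that exercising a privilege uses some of it up; the definition above leaves that detail open, so treat it as one possible reading rather than the only one.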

Cosmostrophy – the way that a person reconciles their faith with their metaphysical view of the Universe – what it is, how it was formed, how old it is, ideas about time, matter and energy, whether there is a multiverse, whether there is a creator, etc. Each person has a cosmostrophy no matter what their religion. It is a general term like the word “metaphysics.” The suffix, “strophy” indicates harmony or pattern. How do you harmonize your faith with science?

Foundationism – Sometimes I think I’ve coined a term but actually haven’t. When I made up this word, I was unaware it was associated with René Descartes. Like Descartes, I believe that derivative forms of certainty can be obtained by basing beliefs on what we already know we may be certain of. Unlike Descartes, I think more can be known with certainty than the statement, “I think, therefore I am.” I don’t reject math or logic, for instance. Descartes supposed that an evil demon might be deceiving him even about math and logic. I don’t make that assumption. In fact, the entirety of Pamalogical metaphysics is foundationist. When I first used the term, I had never studied epistemology. I was a theology student looking for a word to contrast with fundamentalism. I was referring to theology and eschatology that was logical. For instance, it is logical that if God is perfect, there is no good thing that can be added to God that God does not already possess in either divine being or divine action. As such, God does not ever change. I don’t have to find the Bible verse that supports that idea. It stems from what Perfection means in the absolute sense often attributed to God. Therefore, it is a foundational principle that if God is perfect, God does not change.

Pamalogy – Poly Astronomically Maximized Awesomeology. See above.

Pamalonomy – a hybrid between socio-capitalism and incentivized asynalagonomy. It is a way of experimenting with incentivized asynalagonomies on a small scale within a broader socio-capitalist framework. A guild concept might be an example. In general, the guild does not own its creations. It fosters and helps manage them as an incubator. The members enjoy their own profits but work collaboratively to overcome the cost of starting enterprises without shared resources and talent.

Stromagesis – a method of interpretation that considers multiple perspectives without holding one perspective to be invalid when another seems to be valid. It holds that even if viewpoints may seem to be in conflict, both viewpoints may be true. The prefix is from the Greek στρομα, which means layer. Stromagesis can be compared to the more commonly known words, exegesis and eisegesis. Exegesis holds the intended meaning of the author to be a valid interpretation. Eisegesis refers to the interpretation of the hearer or reader. It is often thought to be invalid if it does not account for the intended meaning of the writer or speaker.

Theogesis – refers to God’s intended meaning and purpose. I hold that this is more valid than exegesis and may include stromagesis. See above. Notice that I have not referred to “Biblical” interpretation here, but to interpretation in general. You may be an epistle. How shall I interpret you? How does God interpret you?

Large Numbers

When you count by multiples of a thousand, you get some interesting names for numbers, but were you aware that the Europeans and the Brits call what Americans call a trillion a billion? Probably not. But if you’re like me, you aren’t satisfied with what comes after nine hundred ninety-nine octillion nine hundred ninety-nine septillion nine hundred ninety-nine sextillion nine hundred ninety-nine quintillion nine hundred ninety-nine quadrillion nine hundred ninety-nine trillion nine hundred ninety-nine billion nine hundred ninety-nine million nine hundred ninety-nine thousand nine hundred ninety-nine. You’ve spent a long time counting, so you won’t be in the mood to argue with a European numberphile. So here is the lowdown on the lingo …

https://simple.wikipedia.org/wiki/Names_for_large_numbers
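To make the short-scale/long-scale difference concrete, here is a minimal sketch. The scale values are the standard dictionary ones; the function names are just for illustration.

```python
# Short scale (American) vs. long scale (traditional European/British)
# names for powers of ten. Function names are illustrative only.

SHORT_SCALE = {6: "million", 9: "billion", 12: "trillion", 15: "quadrillion"}
LONG_SCALE = {6: "million", 9: "milliard", 12: "billion", 18: "trillion"}

def short_scale_name(power):
    """American (short scale) name for 10**power."""
    return SHORT_SCALE.get(power, "unknown")

def long_scale_name(power):
    """Traditional European (long scale) name for 10**power."""
    return LONG_SCALE.get(power, "unknown")

# The same word lands on different powers of ten:
# "billion" is 10**9 on the short scale but 10**12 on the long scale.
```

So an American trillion (10¹²) really is a European billion, just as the paragraph above says.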

Parallel Universe Epistemology Party

In a number of parallel Universes, some of the permutations of me have had the privilege of meeting together repeatedly with various versions of Richard Feldman, Alvin Goldman, Robert Nozick, Alvin Plantinga, Dr. Jeffrey Watson and others at a party in honor of Edmund Gettier hosted by Laurence BonJour and Susan Haack. Feldman had arrived with his friends, A.J. Ayer, William Alston, Michael Clark and Keith Lehrer. Typically, the conversation will go something like this …


AJA: The standard view of knowledge is having the right to be sure. Tonight, I would like you to earn me that right by assuring me that this is so. If p is true and S is sure that p is true, and S has the right to be sure that p is true, then we can all agree that S has knowledge of p.

EG: Unfortunately, I can think of cases where that is not true. Think of Smith and Jones. The day they met, they were interviewing for the same job. Smith was sure Jones would get the job, as Jones bragged of knowing the owner, giving ten convincing reasons, one for each coin he had in his pocket, so Smith concluded the man who got the job would surely have ten coins in his pocket. This turned out to be true, but not because Jones got the job. To Smith’s surprise, he landed the job himself and hadn’t realized he had ten coins in his own pocket. P was thus true, S was sure that p was true, and S had the right to be sure that p was true, but S didn’t have knowledge of p. The same applied the time that Smith and Jones worked together and Jones kept bragging about his Ford. Smith quipped, “Jones owns a Ford or Brown is in Barcelona.” Sure enough, Brown was in Barcelona, but Jones didn’t own a Ford. It was just a rental.

MC: S didn’t have the right to be sure that p in either of those cases, but he would have so long as all of S’s grounds for believing p were true. So just add this as a fourth condition and S knows that p.

RF: Just the explicit grounds, or a whole chain of grounds? What if one ground in a chain of grounds is false? What if some attain greater certainty? Do those with less certainty negate the grounds with greater certainty? Work on that.

KL: As I was discussing with my friend, Paxson, what you really need for S to know that p is no defeating arguments as your fourth condition. There is no true proposition t such that, if S were justified in believing t, that S would not be justified in believing p. (Feldman 34)

RF: Nice try, Keith. But don’t you remember the radio that Smith knew was off but playing “Girl, You’ll Be a Woman Soon”? There was also that Tom and Tim Grabit case and their lying mother. Sight gets defeated by lies. My own more modest proposal is to add that S’s justification for p does not essentially depend on any falsehood. Admittedly, this isn’t completely clear, but it’s clearer than Clark’s no false grounds idea, which I rather like otherwise. We have knowledge so long as each premise is sound. And there is always some epistemically basic belief that every piece of knowledge ultimately rests on.

LB: Either something shows you evidence for its truth somehow or it doesn’t. There are no epistemically basic beliefs that things rest on. Evidence is something that fits together like a web coherently with everything we know.

KL: I’m with Laurence. You say that “a basic statement must be self-justified and must not be justified by any non-basic belief. Second, a basic belief must either be irrefutable or, if refutable at all, it must only be refutable by other basic beliefs. Third, beliefs must be such that all other beliefs that are justified or refuted are justified or refuted by basic beliefs.” (Lehrer, Keith. Knowledge Oxford: Clarendon Press, 1974. pp.76-77) (Huemer 408). And our friend, Fred Will, would add words like “infallible,” “indubitable” and “incorrigible” to this (Huemer 402). One’s sensation may be deceived in various ways.

WA: Maybe Descartes wanted that level of certainty, and that would be ideal, but I just want justification to be sufficient for belief. Mediately justified beliefs lead back to immediately justified beliefs along branches. If not, then the premises of a belief are unsound. Looped and infinite chains, or those that terminate in unjustified beliefs, would fail to constitute sufficient grounds for belief.

RF: I think we can all agree that a priori knowledge is too limited to be practical but we certainly need evidence for justification.

SH: How about foundherentism?

AG: Nothing wrong with evidence, but if you want to truly satisfy Gettier here, S truly has knowledge if and only if the fact of p is causally connected in an appropriate way with S’s believing in p. Here’s another case. Let’s say Gerald falls down the steps and hits his head, giving him amnesia and an assortment of strange beliefs, none of which are true, but included in that random set of beliefs is the notion that he has just fallen down the steps. His belief is justified because he has the memory and it is true. But the belief was not causally connected in the right way. Therefore, it would not be knowledge. Edmund would be honored. The same holds true if there is a more complex causal chain. Now if you see a tree in front of you, the cause is your eyesight. Or I might remember a tree, so the cause in my belief there was a tree would be my memory. Or there might be a more complex causal chain, such as Smith seeing sawdust and wood chips where there once was a tree, remembering the tree and a notice he saw from the city saying they would cut it down. You might see this as evidence for belief, but they are also causes for belief. How I come to believe matters more than why.

RF: Well and true, but how do you deal with generalizations? How, for instance, would you know that all men are mortal, if you have not seen every man, past, present or future, to cause such a belief? Also, what if you lack some information in a causal chain? If Edgar believes Allan Poe died, and knew he’d taken a fatal dose of poison for which there is no antidote, and some time passed, so he believed he was dead, but Allan actually died of a heart attack from worry rather than poison, Edgar would be wrong about the causal chain in Allan’s death even if he was right that he was dead. He would then be justified in believing Allan was dead, and it would be true, but he would not possess an appropriate causal connection.

AG: True. You would call it knowledge. I wouldn’t, unless I considered the instances of the generality to still be causally connected – there is something to be said for that. Or perhaps your standard of what constitutes knowledge is lower than mine.

RF: Well then consider the twins Trudy and Judy Smith met. Judy comes to him one day and he believes it’s Judy even though he knows about Judy’s twin sister Trudy. Without good evidence, Smith assumes Judy is talking to him, when it could have been Trudy. You would say Judy caused the belief. It would be true. It would not be justified.

AG: I agree, it would not be justified. I thought about this problem for over a decade and realized what was needed was a reliable process of belief formation. Just seeing someone and being rash about it would not count as a reliable process of belief formation. ‘If S’s belief in p at t results from a belief-independent process that is reliable, then S’s belief in p at t is justified’; and if S’s belief in p at t results from a belief-dependent process that is conditionally reliable, and the beliefs the process operates on are themselves justified, then S’s belief in p at t is justified. (Feldman 95) This, by the way, is why sensory experience is justified for believing – it is a highly reliable process. Calling it an epistemically basic belief is unnecessary.

RF: If you don’t have a body but are really a brain in a vat causing all sorts of beliefs, then what process applies? Or what if you only look at a broken clock at the right time by pure coincidence every time you look at it, unaware that it is broken? 

LB: I concur with Richard on this one. Consider Norman, the clairvoyant. He was always right. Suppose one day Norman believes the president is in New York City for no reason other than a hunch obtained by his clairvoyance, and he’s right. That belief would not be justified. I’ll admit it would be a reliable process, but it would fail to cohere with any evidence Norman would otherwise have.

RF: Yes, evidence. You’ll need to spell out the process better, Alvin. 

Just then David Hume enters the room. 

DH (in a Scottish accent): The clairvoyant’s process is mere numerical inference. It only projects the past. Backgammon anyone?

AG: No thanks, Hume. Well, I have made some distinctions, like the difference between a hasty scan and a detailed observation, or the qualitative difference between seeing nearby objects and distant ones – process types.

RF: Not good enough. Each category still gets treated as though every token example has the same reliability as a process. 

JW: I don’t think Feldman gets it. These types need to be general to be all-embracing.

AG: We might say, “if S’s belief in p at t results from a belief-independent process token whose relevant type is reliable, then S’s belief in p at t is justified.” (Feldman 98) How’s that?

RF: Consider an umpire at a baseball game. Some calls are easy. Others are tough. The process is the same. 

JC: No, it’s not. An umpire making a tough call scrutinizes the play far more closely than he would an easy one. That scrutiny is another process type.

RF: You people just think up examples to give you the results that you want. There’s no general theory here. This violates the Same Evidence Principle. Evaluation supervenes on evidence.

RN: I appreciate that you strive for high and consistent standards, Richard, but I have to agree that causal chains might improve on reasons alone for justification. Method certainly matters. So does process. And what you want is not just any process type, but something more reliably reliable. The only way to get this is through a process that actually tracks the truth. I’ll admit you do need a good method, but that method also has to be used in the right way. You ought to be asking yourself: if things had been different, would you still have known? You need to track counterfactuals. “S knows p only if S believes p, p is true, S used method M to form the belief in p, and, when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p.” Do this and you can’t go wrong.

RF: Huh?

RN: Consider the broken clock you mentioned, which you looked at only at lucky times, getting the time right without knowing it. Why? Because the cause was right but the evidence was unjustified. The premise, you would argue, was that the clock worked, but it didn’t. So for you, as a foundationalist, there would be no knowledge, and Alvin’s causal theory fails, as you said. But had the clock-watcher tracked the truth using the clock method, he would have learned within seconds that the clock was broken. He would then have found a method more suitable for determining the correct time, or simply confessed he didn’t know what time it was and been correct in that belief instead. There are many examples like this – it could just as well be a broken thermometer. Knowers are truth trackers.

RF: Well, that would solve Edmund’s cases just as neatly as Alvin’s solution would. 

RN: Indeed, and there are many other such cases of lucky knowledge. For instance, Ms. Black, working in her office, gets up to stretch, looks out the window, and just happens to see a mugging on the street, becoming a witness. Her method is luck. What kind of reliable process is that? In fact, she has no method. Yet she certainly saw, and seeing was her process. She didn’t track the truth by watching for it over time. That’s why I said, “S used method M to form the belief in p, and when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p.” At that moment, her method was seeing from a timely stretch, and it wasn’t even what she intended to do. One might suppose she tracked the truth over the time she needed to – just that moment. But this is still not enough for knowledge, because what if things had been different? What if she had stretched at any other time? Then she would not have known. And I say that if you would not have known, then you aren’t tracking the truth. I raise the standard of what knowledge ought to mean in this way. I say this because truth matters. In many cases our lives may depend on it!

JC: I agree, but I’m not sure I understand. You are introducing counterfactuals: something is not knowledge unless the knower can say that if things had been different, such and such would have been true – and of course they would have to be right about that. Do you mean that they should be able to know both the truth and the falsity of p under any condition?

Nozick goes to the chalk board.

RN: Yes. However, I would temper this by distance. Here we are talking about the responsibility toward truth that human beings ought to consider. So I’ll offer a third and fourth condition for knowledge as follows. (3) not-p → not-(S believes that p). This means that if p weren’t true, S wouldn’t believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that if p were true, S would believe that p and would not disbelieve it. This is actually a step up from the fourth condition I had previously expressed – (iv) p → S believes that p – that if p were true, S would believe it. But, as I said, realizing that we, as human beings, are quite limited in our methods and knowledge, this – relativized to a given method – is a realistic aim, and the responsible way to treat whether one knows something or not.
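For reference, the four conditions Nozick sketches in the dialogue can be set down together. This is only a paraphrase of the standard formulation; the “→” here is the subjunctive conditional, not material implication:

```latex
% Nozick's truth-tracking conditions for "S knows that p" (paraphrase),
% relativized to the method M that S actually used:
\begin{enumerate}
  \item $p$ is true.
  \item $S$ believes that $p$ (via method $M$).
  \item $\lnot p \rightarrow \lnot(S \text{ believes that } p)$
        \quad -- if $p$ weren't true, $S$ wouldn't believe it.
  \item $p \rightarrow \big(S \text{ believes that } p
        \;\land\; \lnot(S \text{ believes that } \lnot p)\big)$
        \quad -- if $p$ were true, $S$ would believe it and not disbelieve it.
\end{enumerate}
```

Conditions (3) and (4) are what make this a *tracking* account: they quantify over nearby counterfactual situations rather than over the believer’s evidence.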

JC: I’m confused. Can you give me an example of what you mean when you say, “when S uses method M to form beliefs about p, S’s beliefs about p track the truth of p”?

RN: Certainly. Consider the inattentive security guard who plays Sudoku all night long instead of attentively watching the monitors in his store. He gets lucky and catches a thief out of the corner of his eye while thinking about something completely different. His assigned method is to watch. And indeed, he sees. But he is not tracking the truth by that method. Therefore, even though it could be said that he has knowledge of the thief, the standard of knowledge I am referring to has not been met. He was derelict in his duty, which was to track the truth by the method of watching the store monitors. Had he looked up at some other time, he would not have believed p or known p was true. Had p been untrue, he would not have known whether p was true either way. This is not a responsible way of knowing things. Tracking the truth is a responsibility. Using reliable processes goes along with that responsibility.

RF: I used that very same example to show why tracking was not necessary for knowledge. 

JC: Clearly, the difference is in what the standards are for the term.

Just then Saul Kripke walks in.

SK: I heard what you were saying, Nozick. Your truth-tracking theory is bogus and I’ll prove it! You’ve heard about that town with fake barns, right? The town replaced its old barns, left a few standing, and, having run out of red paint, put up a bunch of white barn facades to please the tourists. Smith drives through the town, sees a red barn, and deduces that he sees a red barn. Now if Smith had tracked the truth, he would have known that all the white barns were fake. As it stands, he got lucky and properly identified a real barn, a red one, but since he didn’t track it, all he really knows is that he saw a red barn. He doesn’t know that he saw a barn, because he wasn’t tracking the truth. See the problem?

JC: I’m not sure.

JW: According to Truth-Tracking theory, he saw a red barn, James, but he did not see a barn.  

JC: Goodness. I can see that! 

JW: Nozick’s theory is “half right.” … “Truth-Tracking is really getting at the counterfactual: would S have believed p if not-p? Objections are to the condition that S would have believed not-p if p. So a revised Truth-Tracking theory would be a causal theory.” (Dr. Watson, Unit 4 Video Lectures, 4.3 The Truth-Tracking Theory)

JC: Truth-tracking is confusing, Doc. What does he mean by “tempered by distance”? 

JW: He’s talking about how far-fetched the alternate world of possibilities might be. It’s “in the neighborhood” if it’s relevant to tracking the truth about something specific using a specific method. He doesn’t mean every possibility – only the stuff directly related to tracking the facts.

JC: Oh. So he’s not saying we have to be Omniscient to have knowledge, then? I’m not so sure I agree with that. As long as we’re after high standards here, I think maybe we do. Look, there’s Keith DeRose. What do you think about standards for the word, “knowledge,” Keith?

KD: I think it depends on context, James. Nozick here is talking about standard everyday knowledge and responsibility. Our standard of what we consider “knowledge” can change from moment to moment. Recall the time Smith and his wife had to deposit some checks at the bank on a Friday night. The line was too long, so Smith suggested waiting till Saturday, since he knew the bank was open till noon on Saturdays. His wife had doubts about the wisdom of that, so she asked if he really knew it. So he says, “Sure I know it. I just deposited a check there two Saturdays ago.” But as it stands, she had a particularly large check to deposit and a bill due early in the week, so it was very important to get that check deposited by Saturday. So she informs him of all this and says, “Do you know for sure?” What really is the difference between knowing and knowing for sure, James?

JC: Ask Nozick here. I think he’d want Smith to track the truth by checking the web site or stopping in.

KD: That’s right. If Smith had been talking to Robert, he might not have said he knew the first time around, but in his routine, his memory was reliable enough, and the odds of the bank changing its hours weren’t all that great. Neither Smith nor his wife had seen any recent announcements about banks in the area closing on Saturdays.

AG: Depends on subject factors and attributor factors.

JC: What? Do you have to have something to say about everything Alvin?

KD: He’s talking about relevant alternatives theory, James. Not all of its proponents are invariantists. Some are contextualists, like me. The key is the attributor factor. Smith would attribute knowledge to the idea that the bank was ordinarily open Saturdays, but when the circumstances changed, the content behind the word “knowledge” was different. The character of the word “knowledge” may stay the same in all circumstances, but the content can change.

AG: Linguistic and psychological context are also very important, James. They are attributor factors. If you’re in a class with Descartes talking about an evil demon fooling you, the attribution of the word “knowledge” is affected. (Huemer 495)

KD: Precisely my point. When Smith’s wife puts pressure on him, he’s not saying he didn’t know before, he’s addressing a higher expectation.

RF: That’s just pragmatism. It’s not epistemic responsibility. Smith was wrong to deny he knew it the first time. His memory was sufficient. You’re throwing in the “Get the Evidence Principle” (Feldman 47). You can never have enough information; the principle becomes irrational to satisfy. The evidence you have at a given time is a fair basis for whether you can believe something, and if it’s true, it’s knowledge. Simple as that. This shouldn’t be confused with the fact that even though it’s highly improbable that I’m having a heart attack when I have chest pains after eating buffalo wings, I might still go get my heart checked. Action and knowledge aren’t the same thing. Uncertainty does not mean lack of knowledge either. Truth, on the other hand, would affect the status of knowledge. If Smith was wrong, he wouldn’t have known. Knowledge, as the word ought to be used, does not require certainty. It just needs to be reasonable. Smith’s memory was reasonable. His belief was justified.

RN: Did you say “epistemic responsibility,” Richard? Where is the responsibility in not tracking the truth? Checking your heart was exactly that!

JC: Professor Feldman has a point. I can see going to the doctor just in case. Even when things are improbable, it depends on what’s at stake. Business people use an expected utility formula. I’m a probabilist myself. But how sure do you really need to be to track the truth? We’re just human beings. How often do you have to go to the website to see if the bank is open? Every ten minutes? How would anyone know to check whether barns were really facades? Who would care to do that but the locals? And what if Smith’s wife knew her husband’s memory was unreliable from the onset of Alzheimer’s? And all that aside, who can really track the truth but God?
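The expected-utility formula James alludes to can be sketched numerically. Every number below is hypothetical – a made-up probability and made-up costs for Feldman’s chest-pain case – chosen only to show how a low-probability, high-stakes outcome can still make checking rational:

```python
# Expected-utility sketch for "go get your heart checked anyway."
# All probabilities and costs are invented for illustration.

def expected_cost(p_event, cost_if_event, cost_of_action):
    """Compare the cost of acting with the expected cost of not acting."""
    act = cost_of_action              # checking costs this whether or not the event is real
    ignore = p_event * cost_if_event  # ignoring gambles on the event not happening
    return act, ignore

p_heart_attack = 0.001         # very improbable after buffalo wings
cost_untreated = 1_000_000     # catastrophic if real and ignored
cost_of_checkup = 200          # minor cost of a doctor visit

act, ignore = expected_cost(p_heart_attack, cost_untreated, cost_of_checkup)
print(act, ignore)   # 200 vs. 1000.0 – checking is cheaper in expectation
```

On these (invented) numbers, acting dominates even though the event is a thousand-to-one shot, which is Feldman’s point that action and knowledge come apart.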

AP: If I may interject here … We are limited by proper function. The human brain was not designed with such great capacity as to know all that might have happened. Proper Function is a reliable process for getting at the facts that respects epistemic virtue and responsibility. We are all here because truth matters, but the various organs have different functions – the heart pumps blood, the liver cleanses it, and so on. We have many instincts, and we don’t track the truth nearly as much as we ought. Any appropriate method would do, but we need to start with the knowledge of our own need for epistemic virtue.

RF: You must have spoken to my friend W.K. Clifford. He says it’s always wrong everywhere to “believe with insufficient evidence.” (Feldman 44)

AP: Quite. And not just evidence. There are many types of epistemic responsibilities we have, concerning which virtue is often lacking. We can do better, objectively, subjectively, in what we believe and especially in our general “disposition to have coherent beliefs.” We won’t evaluate evidence well without a proper disposition towards evidence. When do we know our evidence is adequate? What of our faculties? Are they themselves reliable? What is our “epistemic goal”? I have lots of beliefs and goals – not all of them are epistemic. “There are a thousand other epistemic virtues” besides these for determining whether a belief has warrant. 

RF: Could you reduce this all into something precise for us?

AP: Surely. “A belief has warrant for me only if (1) it has been produced in me by cognitive faculties that are working properly (functioning as they ought to, subject to no cognitive dysfunction) in a cognitive environment that is appropriate for my kinds of cognitive faculties, (2) the segment of the design plan governing the production of the belief is aimed at the production of true beliefs, and (3) there is a high statistical probability that a belief produced under those conditions will be true.” (Plantinga, Alvin. Warrant and Proper Function (New York: Oxford University Press, 1993), p. 59) (Feldman 100)  

JC: So … S knows or has warrant for believing that p only if (though not if and only if) S isn’t cognitively impaired – by drinking, dreaming, hallucinating as a brain in a vat, or suffering from dementia – and is in the right environment. Can you explain what a cognitive environment is?

AP: Well, your brain wasn’t meant to concentrate on an important matter while you are being distracted. If you were in a tub of worms and scorpions, or high up in snowy mountains being chased by a yeti, you might not be able to score well on a test in philosophy. Your cognitive functions might work just fine, but your environment would not be conducive to their optimal operation. Your cognition might be just fine for the environment it was designed for. You might have just passed your exams at MIT, but if you were suddenly transported to a planet in Alpha Centauri where invisible elephants sent cosmic signals into your brain making it believe there was a trumpet playing, your belief would not be warranted. And even if there were indeed a trumpet playing – say, a silent one in a nearby phone booth – your belief might be true, but it wouldn’t be warranted. Would it?

JC: I suppose not. That would seem more like belief than knowledge. Right Professor Goldman? And what is this “segment of the design plan?” 

AP: Well, your brain is designed with many functions – such as interpreting what you see, or signaling your finger to move, or giving you input as to what you’d like to eat, and such sensory knowledge functions for its purpose, but if you are tasked with determining the truth about a proposition, it might not be any of those segments of your cognitive functions that would be needed for determining that truth. It would be the segment that governs the production of the belief. And specifically, it would be that which aims at the truth. You, for instance, might believe that Jesus rose from the dead. Aiming at the truth without bias would require a level of objectivity you might not possess. You do, however, possess the capacity to be objective. You can, in fact, overcome biases and predispositions. So you might be asking questions like whether an empty tomb necessarily implied a resurrection, or whether a report that a tomb was empty was reliable, or truly given on the third day, how consistent the reports are, or whether various details were added to a story later. If you were biased, you might choose not to investigate for yourself. If you used that segment of your cognitive design plan that governed discernment of true beliefs with a high statistical reliability, your design plan would be segmented properly for the task.  A belief produced under those conditions is certainly warranted. As long as there was no cognitive dysfunction, you might be capable of knowledge.

JC: Can you give me an example?

to be continued …

Contextualized SuperFoundherentism

All I wanted to do was start the Pamalogy Society, I tell ya! Next thing ya know, I’m enrolled at Arizona State University taking philosophy courses, courses in organizational leadership, and interdisciplinary studies. It follows that I’ll be writing some academic blogs in the coming years. And guess who they’ll be for? Me. I’m writing them to myself! Writing helps me remind myself of what I’ve learned. Why not put my notes online, where I can access them on other devices later? So this one is on an epistemological system I’ve come up with as an untested solution to the still-raging debate over the theory of knowledge. I call it “Contextualized SuperFoundherentism.” Catchy title, huh?

Background 

Richard Feldman asked three questions based on the following two assertions of foundationalism: 

F1. There are justified basic beliefs. 
F2. All justified nonbasic beliefs are justified in virtue of their relation to justified basic beliefs. 

QF1. What are the kinds of things our justified basic beliefs are about? 
QF2. How are these basic beliefs justified? If they are not justified by other beliefs, how do they get justified? 
QF3. What sort of connection must a nonbasic belief have to basic beliefs in order to be justified? 

(Feldman 52) 

After some review of Cartesian Foundationalism and Coherentism, and then offering his own more “Modest Foundationalism,” he answers these questions with the following: 

“MF1. Basic beliefs are spontaneously formed beliefs. Typically, beliefs about the external world, including beliefs about the kinds of objects experienced or their sensory qualities, are justified and basic. Beliefs about mental states can also be justified and basic.   

MF2b. A spontaneously formed belief is justified provided it is a proper response to experiences and it is not defeated by other evidence the believer has.  

MF3. Nonbasic beliefs are justified when they are supported by strong inductive inferences – including enumerative induction and inference to the best explanation – from justified basic beliefs.” 

(Feldman 75) 

Feldman defends his Modest Foundationalism against the Coherentist Laurence Bonjour. Bonjour rejected Foundationalism’s concept of an epistemically basic belief, insisting that any basic a posteriori belief should have some quality indicating its truth. Feldman didn’t think that was necessary. In basic empirical beliefs like sensory perception, any deductive processes were taken for granted – at best subconscious as they were happening.

Bonjour rejects Foundationalism for other reasons. For one thing, Coherentism is misunderstood as using circular premises, when the image is more like a web of belief or, as Susan Haack liked to compare it, a crossword puzzle. There is a practical matter in all of this – how many beliefs can we really have if we have to justify every premise? Alston uses the image of the roots of a tree. Each root has to terminate in an immediately justified belief or the whole argument is unsound. It’s a great theory, but it’s a lot to expect. In a fast-paced world, we form beliefs on the fly. The Coherentist’s concept doesn’t require us to think through the basis of our basis of our basis for every decision we make. We rely on an intuition that has a picture of what it knows coming into the situation and processes propositions based on what it knows. If something doesn’t square up coherently with that set of pre-vetted beliefs, then it is rejected – unless something compelling about it is a deal breaker for some of the pre-existing assumptions, a defeater for our biases. Potentially, this can knock down everything we think we know. Bonjour adds an “Observation Requirement” to his Coherentism to skirt Feldman’s criticism that a coherentist need never learn anything new if it doesn’t fit their worldview.

Seeing merit in the two divergent paradigms for evidential knowledge, Susan Haack proposed a blend of Foundationalism and Coherentism, which she called “Foundherentism.” I must admit that I found her work hard to follow on account of the terminology she used, such as “C-belief” (the content of a belief) and “S-belief” (the mental state of believing) (Huemer 420). She then refers to things such as “S-reasons” and “A’s S-evidence” (Huemer 421). Perhaps, if I read it a few more times…

Still … the general thought of blending the two paradigms seemed worthwhile to me. As I considered the differences between Foundationalism in several of its versions and Coherentism, I found their mutual objections and rebuttals to be quite reasonable. I also noticed that Feldman had omitted reference to logical inference and a priori knowledge. So I came up with a “Foundherentist” statement of my own in response to his answers to his three questions. The parts I added are italicized:

MF1. Basic beliefs are spontaneously formed beliefs. Typically, beliefs about the external world, including beliefs about the kinds of objects experienced or their sensory qualities, are justified and basic. Beliefs about mental states can also be justified and basic. Logic and math are also basic. 

MF2b. Both empirically justified basic beliefs and non-basic beliefs are contextually reliable and fallible. A spontaneously formed belief is (gradiently) justified provided it is a proper response to experiences and it is not defeated by other evidence the believer has over time. Such a response entails the believer’s existing and changing mutually supportive network of beliefs as a totality of evidence, cross checking the experience and belief, such that prior beliefs withstand its incompatibility or are modified accordingly. 

MF3. Nonbasic beliefs are justified when they are supported either by deduction or by strong inductive inferences – including probabilistic enumerative induction and abduction – from justified basic beliefs foremost. Their justification also increases or decreases in relation to a vector of force encountered through the holistic network of beliefs that a subject has when a proposition is evaluated.   

The above, which originally aimed at a modest foundationalism, now presents a modest foundherentism (with some additional features). We still have something like epistemically basic beliefs, which we ideally search for and find as we explore the premises our premises are based on, but we acknowledge the value of coherent systems of belief while we’re at it. Coherentism isn’t just practical because it processes faster than Foundationalism; it also makes sense epistemically in some ways. When it’s looked at as a crossword puzzle with pieces fitting together, rather than as a circular loop of premises that provides no justification, it may just be the best thing we humans have for ascertaining many types of truth. For one thing, Foundationalism has to be watered down just so sensory perception and experience can count as generally epistemically basic beliefs. To do this, any sense of infallible certainty has to be removed. Deductive logic is great, but it doesn’t tell us things like “there is a car headed toward me.” I must first believe there is a car, and then I can infer that I should move, lest I lose my life. Coherentism tells me things like “cars can be dangerous when they hit you,” “they can kill you,” “cars are things with four wheels that are sort of big and made of solid materials, like missiles,” and “missiles kill.” All this adds up to decision making. Part of that decision is not “prove to me that I’m not in the Matrix.”

Don’t get me wrong. Even if a person believes in simulation theory, only a skeptic would take every thought to that level in real time, and Feldman is no skeptic. Feldman wants a practical, everyday knowledge. Unlikely defeaters are useless for Coherentists and Foundationalists alike. Both Bonjour and Feldman will get out of the way of moving traffic. To be real, a skeptic would too – and then the former two would accuse the latter of hypocrisy.

Coherentism is something we actually practice. You. Yes, you. You have a set of beliefs, and what you believe has various reasons for making sense to you. Both Coherentism and Foundationalism are considered “traditional,” “evidentialist” theories of knowledge. In chapter 5 (Feldman 81-107), Feldman contrasts them with four newer paradigms he calls “non-evidentialist,” which he seems less wont to accept. This is how the student is introduced to Alvin Goldman’s Causal Connection theory, Robert Nozick’s Truth-Tracking theory, Goldman’s later Reliabilism theory, and finally Alvin Plantinga’s Proper Function theory. I’ll be describing these theories in this blog post as I work to combine them into my singular master theory, which I’m calling “Contextualized SuperFoundherentism.” Are you ready?

Dismissing Skepticism 

I should start out by mentioning that the use of the term “knowledge” is contextual. I take this from Keith DeRose, so I am getting a bit ahead of myself. For a skeptic such as Sextus Empiricus, nothing can truly be known. That is why Descartes’ foundationalism could only say that something appeared to him, in his thought, to be such and such. If it weren’t for his ability to prove that a good God exists, Who wouldn’t allow such deception, he might have allowed that an evil demon was making him believe incorrectly that 2+3=5. For Descartes, even math and logic would be subject to doubt under those conditions.

Contextualism allows knowledge to have a different meaning depending on context. When talking to a skeptic, I might agree that it is theoretically possible that math and logic are untrue. Such a proposition may be a logical possibility in a Universe with laws different from those assumed by the Universe that appears to exist – a Universe with non-laws, or with different laws of logic. But neither can the skeptic prove that his doubting is true. Or maybe he can; but since such a condition implodes logic on itself, leaving nothing to go on but speculation, there is no point taking any of it seriously unless I am speaking with such a skeptic or considering one of their arguments. On the whole, I will accept deductive inference and math as the soundest sort of knowledge. 

I think I mentioned I was a probabilist. What I mean by this is that I believe things based on what I find most probable. I don’t sweat over the term “knowledge.” I just say why I believe something is probably true, though if you ask me whether I know something, I may tell you that I do. Feldman talks about inference to the best explanation, a thing called abduction. I believe in calculating odds wherever possible. I use Bayes’ Theorem with rough estimates all day long. When I heard that a school closed down recently because one kindergartner tested positive for COVID, the first question I asked was: what is the rate of false positives? If the rate were 1% and there were 300 students, then we would expect about three positive tests from that group even if no one were infected. Treating a single positive test as decisive ignores the base rate.
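The back-of-the-envelope reasoning above can be made concrete. A minimal sketch, where the 1% false-positive rate and 300 students come from the example, and the 0.5% prior infection rate and 90% test sensitivity are numbers I am inventing purely for illustration:

```python
# Base-rate sketch for the school COVID test example.
# The prior and sensitivity below are hypothetical, chosen only to
# illustrate how Bayes' Theorem weighs the false-positive base rate.

def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """P(infected | positive test) via Bayes' Theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

students = 300
false_positive_rate = 0.01                  # 1% of healthy kids test positive anyway
print(students * false_positive_rate)       # about 3 expected false positives

prior = 0.005                               # assume 0.5% of kids actually infected
sensitivity = 0.90                          # assume the test catches 90% of real cases
posterior = posterior_given_positive(prior, sensitivity, false_positive_rate)
print(round(posterior, 2))                  # roughly 0.31
```

On these made-up numbers, a single positive test leaves only about a one-in-three chance the child is actually infected – exactly the base-rate point being made above.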

But I digress. My point is that skeptical arguments don’t bother me unless I’m talking about the word “knowledge.” For me, this term clearly has multiple meanings, just like many words in the dictionary do. The word “know” is like the word “tall.” I’m over six feet tall. I’m tall compared to the rest of my family and by most standards. But compared to Shaquille O’Neal, I’m not tall at all.

James Carvin goes for the tickle maneuver, rendering Shaq powerless to defeat him. But who is taller?

Going Super 

Clearly there are high and low standards for what people call knowledge. In one sense, I think that Omniscience is all that truly knows anything. I don’t know how many subjects S possess Omniscience. That’s an extremely high standard. To me there is nothing wrong with that, or with the skeptic asking for absolute certainty before calling something “knowledge.” It is one thing to draw a line at doubting everything. It is another to search for truth and to suppose that a principle like “evidence being equal, evaluation should be equal” ought to be questioned. We might question it from several evaluative perspectives: whether the cause of belief matters as much as the evidence itself; whether we have a right to call something truth if we failed to track that truth, considering the methods we used or failed to use in measuring it; which methods would in fact have been most reliable for determining that truth, and whether, if those methods were not employed, we have any better guarantee of knowledge of a fact; and, finally, how our own cognitive processes relate to such methods. The details of our evidence may remain the same throughout, but do any of these non-evidential factors weigh in on whether we have knowledge? Does the fact that I checked in at 3am and there was no thief add to the evidence that there was no thief on the monitor at 4am? Is there any sense in which my epistemic responsibility matters through good habits like checking? Should a fact like that be considered part of the evidence?

Feldman would have us choose between evidentialist and non-evidentialist theories of knowledge, but just as Haack finds a blend for a foundationalist coherentism, so also I think the reality of knowledge blends evidence with questions of causal connections, the tracking of possibilities that didn’t happen, the importance of reliable methods, and cognitive function unfettered by environments that might disable good discernment. I don’t think any of these so-called “non-evidentialists” are actually non-evidentialists at all. They simply don’t follow the dogma that equal evidence merits equal rationale for belief. They find, as well, a plethora of epistemological virtues and values that ought to be considered in the aim for truth. For this reason, I would not refer to any of them as “non-evidentialists.” To be fair, I would prefer to call them “super-evidentialists.” And furthermore, since the evidence is a given in any proper evaluation of the truth, even if that evidence must be weighed against defeating facts or ideas, I would call for a double-blend of all this, which I will call “SuperFoundherentism.”  

Yeah that’s right. I’m a rationalist, probabilist, fallibilist quasi-skeptical SuperFoundherentist.

I also think we take a bad approach when we seek to build toward a theory of knowledge from belief forward. We know that knowledge must be true. That is simple enough. We know that a truth can’t be known unless someone knows it, in which case they would also believe it – also quite elementary. But after this, everyone starts disagreeing, and the counterexamples, beginning with Edmund Gettier’s, keep calling for formulations of what constitutes knowledge that are debatable. While it might be expected that someone will eventually add to justification or cause some fourth condition that proves incorrigible, and I do hope this happens, I am in the meantime satisfied with believing that knowledge stands on its own. Robert Nozick thinks that we should track the truth by knowing what things would have been like if they hadn’t happened as they did (Huemer 475-490). This might be restricted to proximate possible worlds. How far-fetched do the possibilities have to be before the responsibility to anticipate them no longer burdens us with further epistemic responsibility? Only the Imagination of Omniscience Itself has the capacity to consider the farthest-fetched possibilities and scenarios, exploring how truth might have been tracked had things proven otherwise. By this standard, once we arrive at the Imagination of Omniscience, and we also know the Discernment of Omniscience to distinguish between the actual and the possible, then and only then do we have Knowledge. This, I think, we can call the metaphysically highest possible standard for the term “knowledge.” I’ll capitalize that one with a big “K,” but it’s still got the same character as the word “knowledge” used in any other way. The content of that Knowledge is different. That’s all.

This isn’t just some sort of grand compromise. It is about the premise of knowledge itself, which is reality – a reality that can’t be fully known without this “big K” qualification. Seeing the big K as the only invariant Knowledge, if anyone is to speak of “invariantism,” is why I think we should build down toward belief when considering knowledge, rather than up toward knowledge from belief. It is a sort of Tower of Babel problem. If anyone wants to talk about knowledge, we need to contextualize the term. Whose knowledge? What knowledge? When speaking of that abstract or actual essence which both knows all imaginable and all that is actual, even the skeptic is defeated by its mere logical possibility. From there we build down toward mere Earthlings and other possible knowers, seeing each belief as a matter of sharing what has already been tracked in the grand set of “Knowledge” or “Truth.”  Berkeley and I have something in common here, I think.

Contextualized SuperFoundherentism 

The above consideration makes clear that there is a total set of possible knowledge, and knowledge of what is actual, that is quite different from the limited set of evidence for a proposition that any human being will typically encounter or cognitively embrace. Not having awareness of all possible and actual truth, we experience a world that begins with belief and makes claims of having knowledge, when a supremely high standard would certainly discount even a true statement as constituting knowledge. Context matters. Descending from the top down rather than ascending from the bottom up puts belief into perspective. We may know more than amoebas (not that we know the experience of an amoeba itself), but we don't know what Knowledge Itself knows. Unless we somehow transcend ourselves into higher beings, as caterpillars become butterflies, we merely share portions of It in places and times, and even then, our knowledge will be intermediate if it fails to contain all possibility and fact.

The idea of knowing the experience of an amoeba is a helpful one. Human beings lack that knowledge, but the total set of knowledge of the possible and actual, if it is known, does not lack it. To know all things requires also knowing what it is not to know, lest the knowledge of what it is to be an amoeba be missing from the totality of Knowledge. Segmenting into context is intrinsic to Omniscience. It is thus most accurate for lowly human beings to say merely that we know in part, and to gladly join the skeptics in agreeing that we have no true knowledge. This does not mean, of course, that we should also surrender the idea that we can enjoy some degree of certainty concerning what we know in part.

If a possibility or context is offered in which we might be wrong, so be it. But under the presumption that such a context or possibility is untrue, we would be right given whatever reasons, methods, skills, causes, tracking, and cognitive functioning (including coherent prior beliefs reinforcing experience) we possess at any time t that are statistically likely to yield truth in those environments and contexts. In this sense, we can speak of a "Contextualized SuperFoundherentism."

On the one hand, we require no definition of "knowledge" so high that it is limited to the notion of the Imagination of Omniscience as a singular Knower of all that is actual, true, or possible. On the other hand, we require no third, fourth or fifth condition for knowledge. At the same time, we exclude possibilities such as being brains in vats.

Moving on, then, from the traditional analysis of knowledge, which has …
S knows that p iff: 
(i) S believes that p 
(ii) p is true 
(iii) p is justified 
(iv) some fourth condition that anticipates counterexamples 

What I have is more fundamental and requires only two conditions, where K is the total set of knowledge belonging to the Imagination and Knowledge of what is included in Omniscience:

S knows that p iff: 
(i) K includes S in believing p 
(ii) p is true 

I'm not going to cop out with this, though. There is another important possible aspect of context, which this reduced formula is not sufficient for determining. What of the context of someone who wants to know a particular fact, or to determine whether something or other is true? This, after all, is the typical context for which most of the formulations of us lowly human beings are designed. Until we've agreed upon a ground-up architecture for SuperFoundherentism in more humble contexts, we haven't offered the best we might have, given our limitations. For this we need a blended formula.

I doubt any blended formula I can come up with will be perfect, especially as a first-year philosophy student. When every other attempt has failed, my expectations are not great. But why not take a shot at it? What's the harm? So the rest of this article is dedicated to doing just that. As a starting framework, I would propose adding Goldman's causality to the traditional analysis of knowledge as a fourth condition, rather than replacing justification as one of the first three conditions for knowledge, as Goldman had it. That would give us the following …

S knows that p iff: 
(i) S believes that p 
(ii) p is true 
(iii) p is justified 
(iv) S’s belief is causally connected to the truth. 

Feldman complains that causal connectedness fails to handle generalizations, such as how one might know that all men are mortal. Goldman responds by pointing out that such a belief is connected to the fact that every known example of a man is one in which the man has been mortal. "The fact that all men are mortal is logically related to its instances." (Huemer 459) It is a fair generalization caused by the observation of known cases, or by belief in reports of such cases. It is the truth which causes the connection.

Feldman also rejects causal connectedness as a replacement for justification because sometimes true beliefs are formed from inaccurate assumptions. He calls this "overdetermination," citing the example of Edgar, who knows Allan has taken a fatal dose of poison for which there is no known antidote. While Edgar runs off to get help, assuming correctly that Allan will have died by the time help arrives, futile as that help would have been, Allan actually dies of a heart attack brought on by the stress. Edgar is incorrect about the real cause of Allan's death, so for Feldman, according to causal connectedness, it follows that Edgar did not really know that Allan died. (Feldman 85)

What here is the difference between knowing or not knowing? It is in the way we use the word “know.” In the context of expecting this word to mean that Edgar knows every detail about a cause or causes of his belief, Edgar only knows why he believes. In the context of expecting “know” simply to mean Edgar believes and is correct that Allan is dead, his belief is true and he is justified in believing. The causal connection needs to be as precise as the conversation we are having about it demands that we be. If we want absolute precision, then let us count the subatomic particles in the poison and produce their locations as they move, as an Omniscient mind might. But in the context of a far more humble human interaction, such an expectation is rarely in force.  

It seems fair to me that if we are not rejecting foundationalism, because we can arrive at a more modest foundationalism, then we should be able to think in terms of a more modest causal connectedness as well. It all depends on what, technically, we are referring to when we use this word "knowledge." What is the context of our conversation? Who is having it? What are they expected to know? What are they attempting to prove or disprove?

Feldman also complains that a cause can technically lack evidence. He uses the example of the twins Trudy and Judy to make his point. Smith knows they are identical twins and one day sees Judy and is glad to see her. He has no evidence that she is Judy. He just believes it. Judy caused his belief when he saw her. If we include causal connectedness as a fourth condition, as I have it, then justification by evidence, or reasons why, remains a condition for knowledge. His counterexample no longer works: Smith does not know he sees Judy.

We might want to refine the words "causally connected," just as we might want to refine the word "justified." The more precise and specific we make a term, the less inclusive it is likely to be. This is why formulas tend to be so all-encompassing. They lose something relative to prose descriptions, which consider nuance and temper ideas. If we said "justified by evidence," then what "evidence" is there that the cube root of X1,800 is X1,797? The word "evidence" is better expressed as justification. Similarly, the word "connected" is very general. It solves certain counterexamples to the traditional analysis of knowledge. No definition is quite perfect for that which knows less than Omniscience. A formula is one context. An essay is another. A conversation with a skeptic raises the standards for the term "knowledge." A conversation with a politician can be nearly meaningless. Again and again, context matters.

The Fifth Element and a Sixth 

If we are satisfied that these four conditions are an improvement, then we might ask whether we should add a fifth condition to the four we now have, such as no false grounds, as offered by Michael Clark, or no defeating arguments, as offered by Keith Lehrer and Thomas Paxson. We have to consider these one at a time. Feldman points out that the no-false-grounds condition may be too narrow or too broad. (Feldman 31-33) When it is too narrow, only the explicit steps of forming a belief are included. This skirts problematic false grounds in the background, leaving us with a better chance of saying we had knowledge. We can skip over inconvenient details and base our evidence on everything else. If we define false grounds too broadly, then almost any unfounded fact in the background can render a proposition as failing to be knowledge. We wind up knowing very little. Again, I would say that it all depends on what one expects of the term "knowledge." To meet the most common expectations, we might say that the belief "lacks significant false grounds" or that "there are no deal-breaking false grounds." We would then have something like (i) S believes that p; (ii) p is true; (iii) p is justified; (iv) S's belief is causally connected to the truth; and (v) there are no significant false grounds for S's belief in p.

This would meet Feldman's two objections. False premises would certainly undermine the concept of knowledge. Lehrer and Paxson would add that additional evidence defeating an argument should cast sufficient doubt on a belief, undermining knowledge. This would apply even with all of the above in place. I see no escape from including a provision that S believes at time t. If S fails to consider some new information that would defeat what they know, then only bias or some other cognitive failure in that segment of the mind normally aimed at the truth could sustain the belief. Knowledge disappears with belief, whether or not the facts behind that truth remain the same. A defeater may turn out to be entirely unsound, but as it pertains to S believing while facts are still being gathered, it calls for some suspension of belief, at the least, for as long as it takes to validate the chain of premises the defeating premise may be based on. The defeater itself must, however, be of significant weight to warrant such a suspension of belief. I see no choice but to add a sixth condition from this: "(vi) There is no defeating evidence with weight significant enough to warrant suspension of S's belief in p at time t."

Feldman has this argument as "There is no true proposition t such that, if S were justified in believing t, then S would not be justified in believing p." (Feldman 34) This seems too strong for the gist of the problem. There may be plenty of justification for belief in a proposition, including its causal connection to belief, especially if there are no significant false grounds for that belief, but if there is some defeating information presented to S, S should not believe the proposition. This, it seems, should be the thrust of the no-defeater inclusion, not whether there are any such true and justified defeaters at all. Those defeaters have to be both true and justified, of course, but what matters is that their significance reach the point where S actually changes S's mind and decides not to believe the proposition on account of them. If S has no cognitive impairment toward the truth, S's belief should correspond with what S knows in total. That is why I've formulated (vi) as I have, in place of Feldman's rendering of Lehrer and Paxson's own more complex formulation. For their part, Lehrer and Paxson add a defeater-defeater clause, which I must confess, as someone new to epistemology, I cannot comprehend. (Huemer 464-474) Feldman then addresses subjunctive conditionals – "sentences that say that if one thing were true, then another thing would be true." (Feldman 35) He finds these confusing. They'll get worse as I attempt to simplify and make good use of Nozick's Truth-Tracking below, but presently, I would round out the foundationalist's portion of the Contextualized SuperFoundherentism concept with the following stack:

S knows that p at time t iff: 
(i) S believes that p at t. 
(ii) p is true at t. 
(iii) S is justified in believing p at t. 
(iv) S’s belief is causally connected to the truth of p at t. 
(v) There are no significant false grounds for S’s belief in p. 
(vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t. 

Adjusting for Coherentism 

The essence of foundationalism is the notion that for any true belief, there must be either an immediate justification of that belief such that the belief is epistemically basic, or a basis for that belief that is epistemically basic, or a chain of beliefs leading to an epistemically basic belief for each premise of each belief. William Alston (Huemer 402-416) uses the paradigm of the roots of a tree, rather than the foundation of a house, to describe this. Each root system terminates in an epistemically basic truth. If not, the proposition is unsound. Laurence Bonjour and those espousing coherentism reject sensory experience as epistemically basic on the ground that an experience must have some feature indicating its truth. Feldman doesn't see the need for a separate inference, at the point of perception, that the perception is true. Bonjour responds to more modest foundationalism, while Will and Lehrer attack a more stringent form of it that demands infallibility, incorrigibility and certainty of basic beliefs. Feldman and Alston defend a more modest form of foundationalism, but Bonjour raises an objection the modest foundationalists don't counter adequately: we just don't typically form beliefs by thinking through chains of premises to check whether everything is sound. We could all stand to test this. Next time you express an opinion with the words "I think that," explore the roots of your basis for that opinion. Ask yourself whether you checked each premise for each premise for each premise. Are there any loops? Is there anything unjustified? Is there anything that just keeps going deeper and deeper into more and more mediate beliefs? You might find you have to consider thousands of things just to justify your belief in one thing.

As human beings, we hardly have the capacity to think through such things. We might not be able to survive if we verified every mediate belief consciously. We would be endlessly processing information. We are capable of thinking things through, of aiming at the truth, but we lack the cognitive ability that the Imagination of Omniscience would have. Having a coherent system of beliefs is far more practical. We readily have an existing set of beliefs against which we can check any new information as we make split-second decisions. As I previously noted, Bonjour adds an "Observation Requirement" to his coherentism (Huemer 396). Coherentism fails if it isn't open to change from new sensory input. This leads us to the question of what it takes to change sets of belief. Many ideas are dependent on other ideas. Remove one and a whole Jenga tower may fall. One observation might create a shift in the force.

So be it. The force of coherent belief is real in human beings. We may hold very different beliefs from one another; our political leanings are obvious examples of this. Looking for information that would challenge our existing overall opinion runs counter to the proper function of a cognitive system aimed at the truth. It is aimed instead at what I will call "coherency bias." Coherency is some part of "justification" in the formula for knowledge. Coherentism itself is classified by today's epistemologists as an "evidentialist" theory. This fact calls for some precaution in the justification clause. When we say something is "justified," what is it justified by? Our biases? Of course not. It needs to be justified by that rare breed of coherence that continually checks its own facts, not to confirm that they are right, but to disprove itself, that it might be free from the bondage of bias and readily aimed at the truth, whatever that may be. Previously considered facts and opinions, having already been subjected to this process, ought to be reliable measures for anything new. But so what if they aren't? Let them fall as they may. The power of knowledge includes the power to walk away from our presuppositions. Foundherentism, then, merely requires a slight adjustment to the third condition: (iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t.

Alvin Plantinga will have more to say about the proper function of a cognitive system than this, which we'll talk about below, but I think that coherentism is not just about assembling many ideas together in a way that fits like a crossword puzzle. While this model seems more practical than foundationalism, the fact that not all people possess the same set of pre-existing beliefs calls for what Plantinga provides with his Proper Function Theory. Coherentism is a cognitive predisposition. As such, it is a bridge from evidentialism to super-evidentialism, rather than to non-evidentialism. Before we move on to how each of these theories should fit into our more inclusive formula blend, let's just look at what we have so far …

S knows that p at time t iff: 
(i) S believes that p at t. 
(ii) p is true at t. 
(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t. 
(iv) S’s belief is causally connected to the truth of p at t. 
(v) There are no significant false grounds for S’s belief in p. 
(vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t. 

Super-Evidentialist Contributions 

With this step in building up a blended theory of knowledge, we are ready to consider more carefully the Super-Evidentialist contributions that might help bring this formula closer to something definitive. I've already brought Plantinga into this by relating coherentism to cognitive predisposition, but there is more to consider.

Feldman orders his introduction to the "non-evidentialists" with Causal Theory first, Truth-Tracking second, Reliabilism third, and Proper Function Theory fourth. About twelve years after he introduced the Causal Connection Theory, Goldman offered a Reliabilism theory as well. Reliabilism focuses on the method of belief formation: if the method or process is statistically reliable, this increases the fair basis for belief. Feldman's fondness for foundationalism has him lauding Goldman's distinction between conditionally reliable belief-dependent processes and reliable belief-independent processes. Of course, only if both types in any chain of beliefs are reliable can a belief be justified.

For Feldman, justification by reliable process parallels his foundationalist dependency theory, but he favors evidence over process and knocks down process with a few objections we should consider. His first is the oft-heard brain-in-a-vat problem. This is similar to The Matrix, only instead of a whole body hooked up to a computer simulation, all that remains is the brain. To distinguish the brain from what the brain is being led to believe, Feldman gives each a name – Brain and Brian. Brain does not know Brain has no hands. Brian thinks he has a brain and does not know Brain exists. Brian has no other reliable process to go by than what works in Brian's simulated world. So when Brian thinks he sees his hands, contextually, Brian is using a reliable process for determining that Brian has hands. Brain, by contrast, lives in the context of an actual rather than simulated world, where Brain is a brain in a vat. Brain supposes that the process for determining that Brian has hands is the simulation Brain experiences as Brian, not knowing anything about any simulation any more than Brian does, but Brain is wrong about that process as well. The simulation is, therefore, anything but reliable for determining the truth. Brain actually has no access for determining anything at all, because Brain is unaware that it is in a vat. There is nothing that Brain can know, except what Descartes knew.

So that's Feldman's objection to Goldman's Reliabilism, but here's my take. Whenever anyone turns to the brain-in-a-vat argument, there is a chance that more reasonable objections are lacking. How can it be unreasonable to ask that the methods for determining the truth be statistically reliable? This can only increase the chances that what we believe is true. While an argument like Brian and Brain may technically defeat such methods, these arguments fall under the category of defeating evidence lacking weight significant enough to warrant suspension of our belief in the proposition – here, the proposition being that statistically proven methods of discerning truth about beliefs are a reliable justification. That said, why would we hesitate to add Reliabilism to our formula?

For his part, Goldman doesn't confront the brain-in-a-vat argument that I'm aware of, but he does respond to the complaint that accidental or unknown reliability defeats his basic reliability condition. I'll get to that in a minute. To put his basic reliabilism into some simpler words than his own:

If the method or process is statistically reliable, this increases the fair basis for belief.  Only conditionally reliable belief-dependent processes and belief-independent processes that are reliable in any chain of beliefs can produce justified beliefs. 

This is a rather rigid statement that should be relaxed for reasons I'll discuss below. First, let's see where we are as we build our formula. We can see here that Goldman's focus is on condition (iii) once again. We've already added to this condition so that it reads: "(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t." We've also moved Goldman's causal connection to condition (iv): "S's belief is causally connected to the truth of p at t." We should add his caveat – "in an appropriate way" – to condition (iv), to cover such things as Smith's inappropriate response to Trudy and Judy. This gives us (iv): "S's belief is causally connected to the truth of p in an appropriate way at t."

Next, we'd like to include a less rigid reliabilism statement as part of our formula, if we can come up with one. To reduce his verbiage, Goldman basically says, "Only conditionally reliable belief-dependent processes and belief-independent processes that are reliable in any chain of beliefs can produce justified beliefs." Since we have things like clairvoyants producing accidental reliability from time to time (Feldman 95-96, per Bonjour), we need to soften the definition. Remember, we are including this with what we already have in condition three:

(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t.

We could perhaps change this to: 

(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable. 

Goldman, seeing a problem with defeating arguments, also proposes the following condition: 

"If S's belief in p at t results from a reliable cognitive process, and there is no reliable or conditionally reliable process available to S which, had it been used by S in addition to the process actually used, would have resulted in S's not believing p at t, then S's belief in p at t is justified." (Feldman 95)

Since we already have a no-defeater clause in (vi), this is unnecessarily bulky. Let's not include it. Feldman complains that Goldman fails to go into detail about process types. Goldman calls an individual process a "token," while a "type" is a general category of process. A single token process may involve multiple types. I'll ignore these differences, because including them in a single formula that is meant to be all-encompassing is like specifying "any whole number." The formula doesn't have to specify every detail. If we are looking for a definition of knowledge that at least in certain contexts we can agree on, I would exclude the specifications.

So far, this gives us …

S knows that p at time t iff: 
(i) S believes that p at t. 
(ii) p is true at t. 
(iii) p must be justified according to a sufficiently maintained cognitive system that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable. 
(iv) S’s belief is causally connected to the truth of p in an appropriate way at t. 
(v) There are no significant false grounds for S’s belief in p. 
(vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t. 
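As a reader's aid, here is a toy sketch of my own (not part of any cited author's account) that treats the six conditions just listed as boolean checks on a belief record. The field values are stipulated inputs for illustration; the code does not determine them, and the field names are my hypothetical labels:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    believed: bool            # (i) S believes that p at t
    true_at_t: bool           # (ii) p is true at t
    justified: bool           # (iii) justified by a bias-checked, truth-aimed cognitive system
    causally_connected: bool  # (iv) appropriately caused by the truth of p
    has_false_grounds: bool   # (v) fails if significant false grounds exist
    has_defeater: bool        # (vi) fails if weighty defeating evidence exists at t

def knows(b: Belief) -> bool:
    """All six conditions must hold for S to know that p at t."""
    return (b.believed and b.true_at_t and b.justified
            and b.causally_connected
            and not b.has_false_grounds and not b.has_defeater)

# Edgar's case from above: a true, justified belief that Allan is dead,
# but not appropriately caused by the actual cause of death.
edgar = Belief(True, True, True, False, False, False)
```

On this rendering, Edgar's belief fails condition (iv), so `knows(edgar)` returns false, matching the strict reading of causal connectedness discussed earlier.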

Truth-Tracking 

I’ve saved the best and the worst for last and we’ve come full circle in this discussion. Robert Nozick wants to elevate the standard of truth to include things that could have been. It’s not an easy theory to understand, so I’ve broken it down into an imaginary conversation that I’ve had at a party with philosophers from all ages. This party conversation is something I might revise from time to time. Here is an excerpt of one of its earlier drafts. (RF is Richard Feldman, RN is Robert Nozick, JC is me, James Carvin, JW is Dr. Jeffery Watson, my epistemology professor) … 

RF: You people just think up examples to give you the results that you want. There’s no general theory here. This violates the Same Evidence Principle. Evaluation supervenes on evidence. 

RN: I appreciate that you strive for high and consistent standards, Richard, but I have to agree that causal chains might improve over reasons alone for justification. Method certainly matters. So does process. And what you want is not just any process type, but something more reliably reliable. The only way to do this would be through a process that actually tracks the truth. I'll admit you do need a good method, but that method also has to be used in the right way. You ought to be asking yourself whether, if things had been different, you would still have known. To be epistemically responsible, you need to track counterfactuals. "S only knows p if S believes p, p is true and S used method M to form the belief in p, and when S uses method M to form beliefs about p, S's beliefs about p track the truth of p." Do this and you can't go wrong. 

RF: Huh? 

RN: Consider the broken clock you mentioned, which you looked at only at lucky times, getting it right but not knowing. Why? Because the cause was right but the belief was unjustified. The premise, you would argue, was that the clock worked, but it didn't. So for you, as a foundationalist, there would be no knowledge, but Alvin's causal theory fails, as you said. But had the observer tracked the truth using the clock method, he would have learned within seconds that the clock was broken. He would then have found a different method more suitable for determining the correct time, or simply confessed he didn't know what time it was, and been correct in that belief instead. There are many examples like this – it could be a broken thermometer instead. Knowers are truth trackers. 

RF: Well, that would solve Edmund’s cases just as neatly as Alvin’s solution would.  

RN: Indeed, and there are many other such cases of lucky knowledge. For instance, Ms. Black, working in her office, getting up to stretch – she looks out the window – and just happens to see a mugging on the street and becomes a witness. Her method is luck. What kind of reliable process is that? In fact, she has no method. Yet she certainly saw. And seeing was her process. She didn't track the truth, because she wasn't looking for it over time. That's why I said, "S used method M to form the belief in p, and when S uses method M to form beliefs about p, S's beliefs about p track the truth of p." At that time, her method was seeing from a timely stretch, but it wasn't even what she intended to do. One might suppose she tracked the truth over the time she needed to – just that moment. But this is still not enough for knowledge, because what if things had been different? What if she had stretched at any other time? Then she would not have known. And I say that if you would not have known, then you aren't tracking the truth. I raise the standard of what knowledge ought to mean in this way. I say this because truth matters. In many cases our lives may depend on it! 

JC: I agree, but I'm not sure I understand. You are introducing counterfactuals in saying that something is not knowledge unless one can say that if things had been different, then such and such would be true, and of course one would have to be right about that. Do you mean that one should be able to know both the truth and the falsity under any condition? 

Nozick goes to the chalk board. 

RN: Yes. However, I would temper this by distance to be practical, and we are only talking about the truth-tracking method applied. Maybe an Omniscient being could know all possibilities, but here we are talking about the responsibility toward truth that human beings ought to consider. So I'll offer a third and fourth condition for knowledge as follows. (3) not-p → not-(S believes that p). This means that if p weren't true, S wouldn't believe that p. And then (4) p → S believes that p and not-(S believes that not-p). This means that under any nearby condition, if p were true, S would believe that p and not disbelieve that p. This is actually a step up from a fourth condition I had previously expressed, that if p were true, S would believe it – (iv) p → S believes that p. But, as I said, realizing that we, as human beings, are quite limited in our methods and knowledge, for a realistic aim at what one would use for saying that someone knew something, at least given a certain method, this would be the responsible way to treat whether one knows something or not. 
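(For reference, the four tracking conditions from this exchange can be set out in counterfactual notation, where □→ stands for the subjunctive conditional "if … were the case, … would be the case." The rendering is mine; conditions (1) and (2) carry over from the traditional analysis.)

```latex
S \text{ knows that } p \text{ iff:} \\
(1)\;\; p \text{ is true} \\
(2)\;\; S \text{ believes that } p \\
(3)\;\; \neg p \;\Box\!\!\rightarrow\; \neg(S \text{ believes that } p) \\
(4)\;\; p \;\Box\!\!\rightarrow\; \big(S \text{ believes that } p \;\wedge\; \neg(S \text{ believes that } \neg p)\big)
```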

JC: I think I’m starting to understand. Can you offer another example? 

RN: Certainly. Consider the inattentive security guard who plays Sudoku all night long instead of responsibly watching the monitors in his store. He gets lucky and catches a thief out of the corner of his eye as he thinks about something completely different. His assigned method is to watch. And indeed, he sees. But he is not tracking the truth by that method. Therefore, even though it could be said that he has knowledge of the thief, the standard of knowledge I am referring to has not been met. He was derelict in his duty, which was to track the truth by the method of watching the store monitors. Had he looked up at some other time, he would not have believed p or known p was true. Had p been untrue, he would not have known whether p were true or not either way. This is not a responsible way of knowing things. Tracking the truth is a responsibility. Using reliable processes for doing so goes along with this responsibility. 

RF: I used that very same example to show why tracking was not necessary for knowledge.  

JC: Clearly, the difference is in what the standards are for the term. 

Just then Saul Kripke comes in, saying, “Truth tracking is hokum. You’ve heard about that town with the fake barns, right? The town replaced its old barns and left a few standing, but it ran out of red paint, so it put up a bunch of white barn facades to please the tourists. Smith drives through the town, sees a red barn, and deduces that he sees a barn. Now if Smith had tracked the truth, he would have known that all the white barns were fake. As it stands, he got lucky and properly identified a real barn, a red one, but since he didn’t track it, all he really knows is that he saw a red barn. He doesn’t know that he saw a barn, because he wasn’t tracking the truth. See the problem?” 

JC: I’m not sure. 

JW: According to the Truth-Tracking theory, James, he knows he saw a red barn, but he does not know he saw a barn. You got this wrong on the quiz. Try again …
The Fake Barns case counts against the Truth Tracking theory because either Smith’s belief that he sees a red barn ________________, but yet _____________ … or else Smith’s belief that he sees a barn __________, but yet ___________.

JC: The choices were:

A) doesn’t track the truth; is knowledge… tracks the truth; isn’t knowledge   
B) tracks the truth; is knowledge… doesn’t track the truth; isn’t knowledge   
C) doesn’t track the truth; isn’t knowledge… tracks the truth; is knowledge   
D) tracks the truth; isn’t knowledge… doesn’t track the truth; is knowledge 

I chose B. I can see I was hasty on that. It’s D. Right?

JW: I won’t give away the answers, James. I’ll give you my opinion, though. I think Nozick’s theory is “half right.” … “Truth-Tracking is really getting at the counterfactual: would S have believed p if not-p? Objections are to the condition that S would have believed not-p if p. So a revised Truth-Tracking theory would be a causal theory.” (Dr. Watson, Unit 4 Video Lectures, 4.3 The Truth-Tracking Theory) 

JC: Truth-tracking is confusing, Doc. What does he mean by “tempered by distance”?  

JW: He’s talking about how farfetched the alternate world of possibilities might be. It’s “in the neighborhood” if it’s relevant to tracking the truth about something specific using a specific method. He doesn’t mean every possibility, only the stuff directly related to tracking the facts. 

This ends this portion of the dialog. For the entire conversation, see my blog post, Parallel Universe Epistemology Party, from which this is an excerpt.  

I write this sort of thing to simplify the complex. Here, two very difficult formulations are explained. (3) not-p → not-(S believes that p) means that if p weren’t true, S wouldn’t believe that p. And (4) p → S believes that p and not-(S believes that not-p) means that if p were true, S would believe that p and not disbelieve that p. For Nozick, (1) is p is true and (2) is S believes that p. So, adding his third and fourth conditions for S knows that p iff, in plain English, Nozick is saying (3) if p weren’t true, S wouldn’t believe that p, and (4) if p were true, S would believe that p and would not disbelieve it. Again … 

(i) p is true 
(ii) S believes that p is true 
(iii) If p weren’t true, S wouldn’t believe p 
(iv) If p were true, S would believe that p and would not disbelieve that p 
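
Kripke’s puzzle is easier to see when the “neighborhood” of possibilities is laid out explicitly. Here is a minimal sketch in Python (my own illustration; the world list, the predicates, and the `sensitive` helper are stand-ins I made up, not anything from Nozick, Kripke, or the course materials) that checks condition (iii) for Smith’s two beliefs:

```python
# Toy model of Nozick's sensitivity condition (iii) for the fake-barns town.
# Each nearby world records what Smith is actually looking at. His method is
# casual looking: anything barn-shaped strikes him as a barn.

NEARBY_WORLDS = ["red barn", "white facade", "white facade", "white barn"]

def is_true(world: str, proposition: str) -> bool:
    """What is actually the case in a given world."""
    if proposition == "sees a barn":
        return "facade" not in world  # only real barns count
    if proposition == "sees a red barn":
        return world == "red barn"
    return False

def believes(world: str, proposition: str) -> bool:
    """Smith's belief-forming method (a casual glance from the road)."""
    if proposition == "sees a barn":
        return True  # facades look exactly like barns from the road
    if proposition == "sees a red barn":
        return "red" in world  # the facades are all white
    return False

def sensitive(proposition: str) -> bool:
    """Condition (iii): in the nearby worlds where p is false,
    S would not believe that p."""
    not_p_worlds = [w for w in NEARBY_WORLDS if not is_true(w, proposition)]
    return all(not believes(w, proposition) for w in not_p_worlds)

print(sensitive("sees a red barn"))  # True: the red-barn belief tracks
print(sensitive("sees a barn"))      # False: facing a facade, Smith still believes
```

Run as written, the red-barn belief passes sensitivity while the plain barn belief fails it, which is exactly the odd verdict Kripke is pointing at: by this test, Smith knows he sees a red barn without knowing he sees a barn.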

Of course, we aren’t talking about the imagination of Omniscience here, not in this context anyway. As we search for ways that human beings might really know things, and as we raise the standard for what we would consider knowledge to be, we might deny that something was knowledge, or that somebody really knew something, if they weren’t actually tracking it. The examples from the dialog at the party explain this.  

For this reason, I consider Nozick’s Truth-Tracking Theory to be somewhat optional, depending on context. What context? Let’s say we are fighting hackers who really want to probe our vulnerabilities and break into our National Treasury. Did we really check everything? We create redundant systems and then we monitor them. How often? What methods do we employ? We have to verify continually that everything is working and that no thieves are entering. A certain level of importance warrants a higher standard of certainty than most other kinds of knowledge. Lives depend on it. An airline pilot does routine checks we would not likely perform on our automobiles. We might say we knew our car was working because we checked our tires a week ago, or because we had scheduled maintenance just yesterday. Does that mean it works? Really? Do we know that? How do we know someone didn’t just slash a tire? We don’t. So is it knowledge, even if we know we just paid for regular maintenance yesterday? How about if you have a compulsion to jump off a building because you think you can fly? Shouldn’t we be asking whether we are dreaming? What is our epistemic responsibility? You see that context matters. It’s sort of knowledge. It’s more knowledge-like when we’ve been checking, taking every precaution. This is what Nozick is getting at. It’s actually a fairly simple concept when its intent isn’t obscured by fancy formulations, definitions and defeating counterexamples.  

It all brings me back to the imagination of Omniscience, the ultimate Truth-Tracker, who not only sees the neighborhood of possibilities, but every conceivable possibility. That is a context that matters for Pamalogy.   

A Contextualized Super-Foundherentist Formula 

We are now ready for a final definition of knowledge. I’m not so certain I agree with Dr. Watson about Truth-Tracking being half right. Certainly, the formula is confusing. I think Nozick meant well in adding more to it than he originally had. Maybe what he meant to say was that in nearby conditions, if the truth had been different but p were still true, I still would have believed it, and if things had been different and p weren’t true, I would have known that too. I might even add that if the truth had been different, no matter what it was, I would have known it, whether the proposition was true or false. None of those formulations fails to make sense for someone who simply wants a higher standard of knowledge than the one we typically employ. To say that someone doesn’t know something when all these conditions aren’t met is not to say they have no justified true belief that something either is or is not true. It is simply to say that they haven’t met Nozick’s standard for truth tracking. And since there are multiple standards, let’s see if we can build a contextualizing option into the theory itself. With those thoughts, I add condition (vii):

S knows that p at time t iff: 
(i) S believes that p at t. 
(ii) p is true at t. 
(iii) p must be justified according to a sufficiently maintained cognitive system in an unimpairing environment that aims at the truth and recognizes and eliminates the bulk of its coherency bias at t, using conditionally reliable belief-dependent processes and belief-independent processes that are reliable. 
(iv) S’s belief is causally connected to the truth of p in an appropriate way at t. 
(v) There are no significant false grounds for S’s belief in p. 
(vi) There is no defeating evidence with weight significant enough to warrant suspension of S’s belief in p at t. 
(vii) all of the above and only all of the above, unless truth tracking or some higher standard for the definition of knowledge is expected in context, in which case: 
   viia – in a condition that was monitored, if p weren’t true, S wouldn’t believe p. 
   viib – if p were true in a condition that was monitored and wasn’t farfetched, S would believe that p and would not disbelieve that p. 
   viic – in the context of an even higher standard, whether p were true or not, in a superhuman condition, the truth of p or not-p would be believed and known by that superhuman entity, even if the counterfactual subjunctive conditions were farfetched, to the extent appropriate to the relative cognitive capacities of, and corresponding expectations toward, those superhuman entities. 

The seventh condition thus scales the standard of knowledge up gradually as S changes context, both as to the environment of S and as to the increasingly capable cognitive types of entity S might be, according to how they are designed with the capability of aiming at the truth. The scale of expectations for knowledge keeps increasing until the subject, S, is an Omniscient being with the capacity to imagine all actual and imaginable possibilities.  
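
The scaling in condition (vii) can be pictured as a gate that consults the context before deciding which checks apply. The sketch below is my own toy illustration, with hypothetical boolean flags standing in for conditions (i)–(vi) and the tracking conditions viia–viib; none of these names come from the formula itself:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    knows: bool
    standard: str  # which standard settled the question

def knows_that_p(base_conditions_met: bool,
                 tracks_sensitively: bool,   # stand-in for viia
                 tracks_adherently: bool,    # stand-in for viib
                 context: str) -> Verdict:
    """Contextualized gate: ordinary contexts stop at conditions (i)-(vi);
    a monitored, high-stakes context also demands the tracking conditions."""
    if not base_conditions_met:
        return Verdict(False, "base")
    if context == "ordinary":
        return Verdict(True, "base")
    return Verdict(tracks_sensitively and tracks_adherently, "tracking")

# The lucky security guard: conditions (i)-(vi) hold (he truly saw the thief),
# but in his monitoring context the glance was not sensitive tracking.
print(knows_that_p(True, False, True, "monitored"))  # Verdict(knows=False, standard='tracking')
# The same lucky glance, judged by the ordinary standard:
print(knows_that_p(True, False, True, "ordinary"))   # Verdict(knows=True, standard='base')
```

This mirrors the verdict in the text: the same person with the same evidence both does and does not know, depending on which standard the context invokes.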

You can see, then, that I’ve dropped the Truth-Tracking expectation for human beings. We can say that a woman who looks out the window at just the right time to see a mugging, just because she is stretching, not because she is monitoring, both does and does not know that a mugging has taken place. Her sensory perception is all that is required for the meaning of knowledge in one context, but not in another. Further, her random seeing is lucky, but her sight itself is not something lacking knowledge. Her method is seeing, and seeing is reliable. It doesn’t invalidate the formula with a counterfactual. Condemnation of the security guard for failing to watch more attentively is warranted: regular, attentive watching was his epistemic responsibility. In this way, contextualized superfoundherentism trumps the counterexample of the inattentive security guard. 

Footnotes

General Note: What I call “Feldman” refers to the author of the primary textbook for our Epistemology 330 course at ASU. There will be lots of references to it in this article. Rather than using the ibid. method, I’ll use an informal inline citation technique with page numbers as I go. The plus is that you won’t have to scroll up and down to see footnotes. And since this is the web, there may be some papers I’ll link you to directly. The other textbook we used I call “Huemer.” So where you see something like (Huemer 487), you are seeing a page number in that source text. Huemer is an anthology of contemporary writings, so quotes from other authors will be found there with inline Huemer markings and page numbers. The first time such an author is introduced, I’ll put a reference to the article title in a footnote.

  1. Feldman, Richard. Epistemology. Prentice Hall, New Jersey, 2003.
  2. Haack, Susan. “A Foundherentist Theory of Empirical Justification.” Epistemology: Contemporary Readings, ed. Michael Huemer. Routledge, London and New York, 2002, pp. 417-430.
  3. We studied Contextualism in my Introduction to Philosophy course with Prof. Nestor Pinillos, and it struck a chord with me. Keith DeRose offered some potent examples, as I recall. “Contextualism” can be contrasted with “Invariantism.” Feldman has a few pages on it (Feldman 156-160), but we haven’t yet arrived at them in my Epistemology course at ASU with Dr. Watson. For those interested in digging deeper, here is a summary article in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/contextualism-epistemology/ (2007, rev. 2016).