We all know the skeptical argument: if I know that the external world exists, then I know that I am not in a radically skeptical scenario; but since I don’t know whether I am in a radically skeptical scenario, it follows that I do not know that the external world exists. What I want to propose is that we do know some world, even if it is not an ultimate, or, equivalently, fundamental, material reality. Let me give a few examples to make clear what I mean.
Let us suppose we are brains in a vat, as the notorious thought experiment goes, and that we are being fed input from a supercomputer simulation. If that were the case, we would not know the reality in which the supercomputer and its artificers exist, which for all we know might be the fundamental material reality. (Side note: what if it fed us information that was isomorphic to the information we would receive if we existed in the fundamental material reality? This possibility, of course, does not help with radically skeptical scenarios, because we would have no way of knowing whether it obtained.) What I want to argue here is that in this scenario, even if we don’t know the fundamental reality (which, on second thought, might not even be material), we do know the world of the supercomputer simulation.
Leaving aside, for the purposes of discussion, skeptical doubts about the reliability of memory and the validity of inductive reasoning, it seems that this supercomputer simulation is highly regular and coherent, and that the tables and chairs presented to me are manifestations of entities external to my mind. These entities, in this case, would be information stored in the supercomputer’s software, much as the objects in a house built in The Sims or AutoCAD are information stored in the computer’s software. The regularity and coherence would be explained by laws, coded in the software, that govern the behavior of virtual objects such as tables and chairs – much as cars in a video game are subject to the virtual laws of physics coded in the game’s software.
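The picture of virtual objects as stored information governed by coded laws can be made concrete with a toy sketch. This is purely illustrative – every name here is hypothetical, and no real simulation engine works this simply:

```python
# Toy illustration: a virtual object is just data, and a "law of physics"
# is just code that updates that data on every tick of the simulation.
# All names are hypothetical; this is not a real engine.

from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    height: float    # metres above the virtual floor
    velocity: float  # metres per second; negative means falling

GRAVITY = -9.8  # a "law" hard-coded in the software, not breakable from inside

def tick(obj: VirtualObject, dt: float) -> None:
    """Advance the object one time step under the coded law."""
    obj.velocity += GRAVITY * dt
    obj.height = max(0.0, obj.height + obj.velocity * dt)

chair = VirtualObject("chair", height=1.0, velocity=0.0)
for _ in range(100):
    tick(chair, 0.1)

# The chair ends up on the virtual floor whether or not anyone observes it.
print(chair.height)  # 0.0
```

The point of the sketch is only that, from inside such a world, the chair’s persistence and its obedience to gravity are perfectly real regularities, even though both are facts about stored information rather than about fundamental matter.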
It seems to me that this virtual reality satisfies much of what we want from the world of perception: persistent objects that are external to us and not subject to our whims (unlike objects in imagination, which are to a very high degree controlled by our will), and that obey regular laws we cannot break. Therefore, even if I lived in a virtual simulation, it would still make sense to talk about persistent objects such as chairs and clouds that are external to me – just as we expect real objects to be – and it would still make sense to talk about laws of physics, and about our knowing the laws of physics that govern this virtual world we live in. The simulation would, of course, have to be complex enough to simulate quantum-mechanical particles and fields, as well as the chemistry, biology, and psychology that spring out of them – but this is precisely what the brain-in-a-vat scenario postulates. This seems to me greatly satisfying, though of course it does away with the project of naturalizing metaphysics, since all such a project could achieve would be knowledge of the virtual world we live in, not of the fundamental reality of the supercomputer.
This, of course, leaves open the problem of other minds. Suppose I am the only brain in a vat, rather than one of 7 billion brains in vats enjoying (or suffering) the same virtual reality. This possibility would seem to lead to radical skepticism about the existence of other minds. However, it can be avoided if we assume (and I know I’m assuming a lot by now, but I just want to work this out) that philosophical zombies are impossible. In that case, the simulated humans I, the single brain in a vat, engage in conversation with would have to be conscious.
Similar reasoning applies if we think we are living in a false world of perceptions created by a demon or by our own brains. Even if the world we experience is not even a representation of the fundamental reality, we are experiencing some world that manifests regularity and coherence, that seems to be constituted of stable objects external to our conscious minds (if we blink, they don’t stop existing; if we wish them away, they don’t stop existing), governed by stable laws that satisfy what we think we know about physics, chemistry, biology, and so on. If I can trust my memory and induction is valid, I can assert the stability and regularity of the world; and if there are no philosophical zombies, then all the simulated humans I engage with are actually conscious (and I might be one of the simulations, without a physical-or-whatever brain).
Thus, what I am doing is biting the skeptical bullet and saying: yes, there is no way to know whether I am acquiring knowledge of the fundamental reality – the real reality, as it were – but this doesn’t mean I am not acquiring knowledge of some reality, of some world. Furthermore, this satisfies almost all of our pretensions to knowledge and almost all of our curiosity. We know that chairs and clouds exist in the relevant sense of being stable and external to us, we know they are governed by laws just like the laws of physics we have hitherto postulated, and we know that other minds exist.
We are left, however, with a problem: if the world of perceptions is a stable simulation, concocted by a demon, a supercomputer, or what have you, can we know that, for example, the Big Bang occurred? Can we know that evolution occurred, and not that the world was created a few thousand years ago by the software engineer or demon to look just like a world that has existed for a long time, with all the relevant fossils and so on? How could I even know that the world wasn’t created right before I came into existence? (Remember, I am assuming I can trust my own memory.) I don’t know the answers to these questions, but they seem to plague the anti-skeptic (who believes we know the fundamental reality) as much as my semi-skeptical view (on which we may well not know the fundamental reality, but nevertheless know a reality almost as relevant as we could wish).
Have I made myself clear? I am eager to know how this idea could be improved, or whether it is hopeless and should be thrown away.