A priori is an epistemic notion: one has a priori justification or knowledge of P when one’s justification for P does not, in some sense, appeal to experience. The heat of the debate concentrates on what, in our psychology, could give us the power to obtain a priori justification. I think the common proposals of modern philosophy were considerably mystical, which is to be expected given the utter ignorance about the machinery of cognition before the twentieth century. The idea (surely caricatured) was that we have some faculty of «understanding» or «rational insight», that our «Pure Reason» or «Mind» can «grasp directly» certain necessities and deep truths. — I want to construct a clearer, less mysterious, more “scientifically respectable” conception of the a priori and of the psychological mechanisms that give rise to it. Here’s the deal.

There is one big reason the debate over a priori synthetic justification is interesting at all, and it has to do with the methodology of philosophy. Philosophy has long been under attack from some scientists and some scientifically-minded philosophers because of its tendency to reach bold theses from the armchair. Such armchair reasoning was called a priori reasoning, and its realization seemed dependent upon mysterious faculties nobody could really make sense of — and so it became doubtful that any such reasoning could be reliable at all.

To account for obvious cases of reliable a priori reasoning, such as in the case of Logic and Mathematics, empiricists came up with the idea of analyticity, with the hope that it wouldn’t be mysterious how the mind could grasp that some fact was analytically true. Usually, analyticity is conceived in terms of arbitrariness, conventionality, stipulation, or some other deflationary notion.

In light of this, I will begin by arguing that Logic and Mathematics are not analytic in this watered-down, deflationary sense of analyticity. (This is a blow to empiricism.) Then, I will look for unmysterious psychological mechanisms that could reliably give us a priori justification over non-analytic propositions.

I. Case Study: Mathematics and Logic.
Consider Euler’s formula:
e^(iπ) + 1 = 0

This surely is a bold thesis. Could it be analytically true?
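(Whatever its epistemic status, Euler’s identity e^(iπ) + 1 = 0 is at least checkable: anyone with a computer can confirm it holds to machine precision. Here is a minimal numerical check, a sketch in Python using the standard cmath library; the choice of language and tolerance is mine.)

```python
import cmath
import math

# Numerically check Euler's identity: e^(i*pi) + 1 = 0.
# The residual is not exactly zero because of floating-point
# rounding, but it is within machine precision of zero.
residual = cmath.exp(1j * math.pi) + 1

print(abs(residual))  # tiny: on the order of 1e-16
assert abs(residual) < 1e-12
```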

Suppose that to be analytic means to be “true solely in virtue of our linguistic conventions.” This would mean that Euler’s formula does not report anything that is true independently of human (linguistic) conventions. Mathematics may be a game, for instance, much like chess, or some other kind of symbol-manipulating enterprise. Maybe the above statement is just the product of arbitrary axioms and definitions.

Here are three reasons we should reject a formalistic and a linguistic conception of mathematical truth (as well as a psychologistic one):

First, because formal mathematics can be fruitfully applied to the world, which means it is not arbitrary or conventional. Example one: if you act in the world according to what the mathematical formalism says, you will be successful. Example two: when we found out the world had four dimensions, all the theorems we had proved (from the armchair) about four-dimensional geometries turned out true of the world. Something about the world, or about possible configurations of the world, is captured by mathematics. (This is the chief argument against psychologism, formalism, and ‘linguisticalism’.)

Second, because informal mathematics is very fruitful: it may be done without axioms, definitions, or even language (to a small extent, in the latter case). Mathematics was done informally for centuries, and a lot of physics, engineering, and industry was done using informal mathematics. The calculus was only formalized in the 19th century, for example. There is also something to the fact that we have such powerful mathematical intuition — think of the great Srinivasa Ramanujan, who often could not even lay out a formal proof of the (correct) theorems he intuited. (This is an auxiliary argument, and mostly against formalism.)

Third, because there is metamathematics. If mathematics were just a symbol-manipulating game, then we would automatically create the field of metamathematics, which is the study of the game of mathematics. I believe the truths of metamathematics would, in this case, be truths about the structure of games, which are independent of our linguistic conventions. For example, one can have theorems about chess, and they are not arbitrary, even though the rules of chess are human-given. These theorems will be true of whatever game or formalism has the same structure as the game of chess. (This is an auxiliary argument, and mostly against formalism.)
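(The chess point can be made concrete with a smaller game. The rules of tic-tac-toe are human conventions, yet «with perfect play, the game is a draw» is a non-arbitrary theorem about those rules, one we can establish by exhaustive search. A sketch in Python; tic-tac-toe stands in for chess only because its game tree is small enough to search completely.)

```python
from functools import lru_cache

# Winning lines of tic-tac-toe on a board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with perfect play: +1 if X wins, -1 if O wins, 0 if draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0  # full board, no winner: draw
    results = []
    for i in moves:
        nxt = board[:i] + player + board[i + 1:]
        results.append(value(nxt, "O" if player == "X" else "X"))
    return max(results) if player == "X" else min(results)

# Theorem about a human-made game: the empty board has value 0,
# i.e. with perfect play tic-tac-toe is a draw.
print(value("." * 9, "X"))  # 0
```

The theorem does not depend on which symbols or language we use to state the rules; any game or formalism with the same structure has the same value.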

(The scientist-philosopher Imre Lakatos claims, in the introduction to his Proofs and Refutations, that metamathematics is largely informal, even though it is very fruitful. Yet another point against the formalist. — The mathematician-philosopher James Franklin has made a similar point to mine in a short review of logicism: «Even if the logic needed for mathematics were trivial, what about metalogic? “The propositional calculus forms a complemented distributive Boolean lattice” describes the mathematical structure of logic and is not itself trivial.»)

Mathematical truths, then, could not be analytic truths if the latter are conceived as a matter of linguistic convention. Another popular conception of analyticity is logical truth. But would this get the empiricist off the hook? Consider first that it is easy to understand how our cognition could grasp linguistic conventions. So if p were analytically true in the linguistic sense, it would not be mysterious how our minds could grasp p’s analyticity a priori. The empiricist would walk away happy if mathematics were analytic in this sense, because then humans wouldn’t know anything substantial a priori.

The same, however, cannot be said for logical truths. Logic aims to capture something of good inferences (perhaps from correct principles). We want rules of inference that are actually truth-conducive. In deduction, our Logic must carry the truth of the premises over to the conclusions; in induction and abduction, the conclusions must be reliably rendered probable by the premises. (Surely you can concoct any logical system you want, but not if your logical system aims to be epistemically significant.) That’s substantial knowledge, for sure — and that we’d know a priori exactly how to preserve the truth of mathematical premises all the way to far-reaching mathematical theorems would be impressive indeed!

On top of that, we also want axioms (‘principles’) that capture adequately the internal “logic” (lato sensu) of the phenomenon being investigated. Whereas paraconsistent logic may be adequate to model Donald Trump’s belief system, it probably isn’t for modeling the “logic” of computer circuitry. That’s also substantial knowledge — and that we’d know a priori exactly what are the correct logical principles for the “logic” of mathematics would be impressive indeed!

It really does not matter that mathematical theorems follow logically from stated axioms and definitions. It’s a pretty substantial feat to (i) work out axioms such that reality can satisfy them; and also pretty substantial to (ii) logically deduce theorems from these axioms such that, once reality satisfies these axioms, all the theorems we proved will be true of reality. — This shows that mathematical axioms are not meaningless, and that our deductions from them (and the resulting theorems) are not mind-dependent (linguistic, conventional, social, psychological), and thus not ‘analytic’ in any sense that will make their a priori knowability unmysterious.

(My own explanatory hypothesis is that there’s something logical about reality, which makes it so (a) it can conform to axioms stated in purely logical (and perhaps mereological) terms, and so that (b) it conforms to the «logic» (lato sensu) of these axioms once it satisfies them.)

It should be clear now that knowing logical truths means having substantial knowledge, and the empiricist contention that we know nothing of import a priori falls apart. The empiricist may change the definition of analyticity, but care must be taken so that the new definition does not defeat the empiricist’s purposes. Here’s the problem: (i) If analyticity is conceived too strongly, — for instance, if P is analytic iff P is necessary, or if P is analytic iff P is a logical truth, — then one will be at pains to explain how one can gain a priori insight into P’s analyticity. While it’s no problem for the empiricist to admit a priori knowledge that «P is linguistically true», it is a concession of defeat to accept that we know «P is logically true» (logicism) or «P follows logically from Q» (if-thenism) or «P is necessary» a priori.

Armchair reasoning about such “analytic” statements would be mysterious in precisely the way empiricist philosophers dislike; knowing their analyticity would require some sort of faculty of rational insight, for it is no trivial feat, but a substantial one. On the other hand, (ii) if our conception of analyticity is too watered-down, too deflationary, then we cannot classify mathematical (and logical) statements as analytically true. — So I’d say analytic truths must be conceived as mind-dependent, even if we shouldn’t say all mind-dependent truths are analytic (e.g. «I’m feeling x now»). Since mathematical facts are mind-independent, a fortiori they are synthetically true.

Either way, there’s clearly some faculty of rational insight about substantial propositions, even if it is not as powerful as modern philosophers may have thought. Now it is time to explore how it is that we can reliably intuit correct mathematical statements, given that mathematics is substantial knowledge: what could have given us this power, and how could it work?

(Of course, explaining how our cognition works in detail is a computational problem of the highest order, and I cannot pretend to understand it. I’ll be satisfied if I can upgrade questions surrounding our rational insight from mysteries to problems.)

II. Mathematics and Rational Insight.
Mathematics is a brilliant case study for this debate. It is easy to see why it is not analytic: it has something to say about the world — about structures in general, I think, existent or merely possible. Now I want to argue that it is easy to see that it’s knowable a priori. Then, I will argue it is easy to see how it can be known a priori. (I’m employing a fallibilist conception of «knowledge» here.)

Prima facie, this seems obvious. As I have stated above, we proved the core truths about four-dimensional geometry from the armchair. The empirical success of mathematics gives us reassurance that mathematical methodology is generally reliable. But this methodology is clearly a priori.

A friend of mine has offered a few objections to my conception of the a priori, on the grounds that any faculty of «rational insight» we may have was finely tailored by genetic evolution, cultural evolution, and personal experience. Consider the following thought experiment. One would usually think it supports the thesis that we have some reasoning faculty that reliably outputs new correct intuitions about synthetic propositions, without their being derived from experience. My friend would object, as I detail in sec. III.

A twenty-five-year-old Leonhard Euler enters into an empty room with white walls and white carpet, and stays there with an endless supply of nutrients, pen, and paper. Suppose he knows something of the mathematics of irrational numbers, complex numbers, trigonometry, simple arithmetic, geometry, and group theory. However, he has never read anything about Euler’s formula — he has not yet invented it, nor has he read anything that is very close, mathematically, to that formula.

The fact is that Euler can, without exiting the room, reach the brand new knowledge of Euler’s formula, which connects facts about the complex plane, the trigonometric relations, the ratio of a circle’s circumference to its diameter, and an irrational number like e. Ex hypothesi, he had not learned from his experience in the outside world anything that was equivalent or nearly equivalent to this formula. He reached it by his mathematical insight, building from his previous mathematical understanding.
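(Euler’s own route ran through power series: the series for e^x, evaluated at an imaginary argument, splits into the series for cos x plus i times the series for sin x. The derivation is armchair work, but it can be illustrated numerically. A sketch in Python; the series-based route is the historically standard one, though the details of this demonstration are mine.)

```python
import math

def exp_series(z, terms=40):
    """Partial sum of the Taylor series e^z = sum over n of z^n / n!."""
    total = 0 + 0j
    term = 1 + 0j  # z^0 / 0!
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # z^(n+1) / (n+1)!
    return total

# At z = i*x, the exponential series splits into the cosine series
# (real part) plus i times the sine series (imaginary part):
#   e^(ix) = cos x + i*sin x.
# Check the agreement at a few sample points.
for x in (0.5, 1.0, math.pi / 3, math.pi):
    lhs = exp_series(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

# In particular, x = pi gives Euler's identity e^(i*pi) = -1.
print(exp_series(1j * math.pi))  # approximately -1 + 0j
```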

Mathematicians do armchair work. They reach mathematical theories over a century before any empirical science catches up and finds applications (which are often immensely prolific). The mathematics of change (calculus) may have grown together with science, but most other mathematical theories have not done so — so I was informed by multiple sources which I trust. Furthermore, as an even stronger example, we reached the core truths of four-dimensional geometries before experiencing anything four-dimensional in character.

Could such an ability to reach non-analytic truths from the armchair count as being a priori? I think so, and I think it follows from our stipulations of what would count as «a priori»: it’s new information, and it’s not deducible from the facts of experience, but rather reached from the armchair. Here’s my idea, in a nutshell:

Mathematics is not just a reworking of information we acquired in our experience, but the exercise of some ability of our brain to understand certain things about mathematical concepts, outputting true beliefs about propositions wholly independent of human activities.

This is true for logic just as it is for mathematics. Optimist rationalists think the same may be true about our armchair normative ethics, normative epistemology, and metaphysics (of time, space, mind, causation, substance, property, etc.).

However, one of my close acquaintances is a borderline radical empiricist, and he has aided me in constructing the following picture of the phylogeny and ontogeny of our abilities of insight. He thinks it at least should make us wary in claiming this faculty of ours is «a priori».

III. The Story of a Brain. Consider Euler’s brain at the moment he entered into the empty room. It is a product of a long chain of events in the world. The genetic code that was crucial in its individual development (and continued well-functioning) was finely-tuned by evolutionary processes so that it would have a great chance of generating a functional brain under certain nurturing circumstances.

Let us consider how this process worked. In a crude but sufficiently accurate depiction of evolution, Euler’s ancestors, and the other members of the species they belonged to, passed down their genes differentially, based on how well they were adapted. Their adaptation was a function of their capacities of computation, and these a function of their genes. We say: certain facts about these genes affected what further generations of genes would be like. If they tended to generate brains which failed to discriminate between up and down, they probably would not be passed on. If they failed to generate brains which correctly conceptualized mathematics, — simple numerical quantities, ratios of time, patterns, symmetries, frequencies, resonances, distance relations, — and other relations between and features of middle-sized physical objects, these genes would not go on to the next generation.

Moreover, the complete information about this highly-structured system we call twenty-five-year-old Euler is not entirely contained in the genetic code. He is also a product of a complex process in the uterus, in the maternity ward, at home, at school, in his linguistic community, at college, — he’s a product of a deep immersion in an extremely advanced culture, constantly interacting with the memeplexes and individual memes that spread around through books, conversation, gestures, films, essays, and education. If he had been locked in a room as an infant, he would not have developed much. He’d be a brute, incapable of much abstract reasoning, not even arithmetic — it’s not even clear that he’d be able to approximate quantities like babies and innumerate (but not dyscalculic) tribespeople can. (Approximation is not arithmetic per se, which is a precise instrument.)

So Euler’s genetic code was carefully tailored to generate brains that deal correctly with mathematical concepts such as number and Euclidean geometry. It was also designed to learn, and learn it did, sucking up concepts, ways of thinking, ways of articulating thought, evidence, and theories from his culture. He also learned from a rich interaction with his physical environment. And so Euler saw parallel lines, planes, sets, equations, variables, pendulums, oscillatory waves, diagrams, dodecahedrons, graphs, and learned about series, expansions, irrational numbers, the complex plane, the infinitesimal, analytic geometry, and much else besides.

By the time he entered into the room, his brain was rich with mathematical knowledge, concepts, and abilities, which he acquired through experience and evolution (which is very much like experience).

Consider for yourselves whether the story I have just told changes the fact that our mathematical ability reliably reaches synthetic truths from the armchair, in just the way empiricists wanted to avoid. I don’t think it does!

IV. Autonomous Reason. Every time one proposes some reliable faculty of a priori reasoning, one had better have two things in store: (i) some empirical confirmation that this faculty is, indeed, reliable, and (ii) a non-mysterious account of how such a priori insight works, cognitively. — We’ve clearly got (i), given the effectiveness of mathematics in the natural and (sometimes) in the social sciences. Could we achieve (ii), a plausible account of our a priori mathematical ability?

There are two extreme possibilities as to the nature of this mathematical ability, in my mental model of cognition. On one extreme, Euler’s brain is of a simpler kind. Euler’s genes could consistently generate brains that have some basic abilities, such as counting and estimating distances visually, as well as the ability to learn some concepts. However, this brain does not have the capacity to elaborate on those concepts or to make any kind of sophisticated reasoning. So Euler can count, and estimate distances, and think about natural numbers; he can think about lines, planes, triangles, and so on. (Grant him a small ability to ‘idealize’ the objects he sees: he never sees perfectly straight lines, but it’s not too difficult to reach this abstraction.) — We could call this «the empiricist’s brain».

This first kind of brain will judge correctly the number of apples in the tree, but will not be able to deal with any higher algebraic concept. It is simply too rigid; it can count and that’s it. Perhaps it can also judge distance and estimate ratios correctly, but it will not understand a Möbius strip, the Klein bottle, Borel sets, or Zeno’s paradox, much less Euler’s formula, Gödel’s theorems, and the Skolem paradox. That requires forming new concepts, which are sometimes extreme overhauls of the concepts which are immediately derivable from experience.

My point is that one can have a system that comes with built-in capacities for counting and simple geometrical reasoning, but if these capacities are not richly conceptualized, — that is, if they are not grounded in a system of concepts, concepts like number, ratio, distance, shape, size, relative position, and much else, concepts rich enough that they can be stretched, analogized, combined, connected, disassembled and re-assembled, and overall manipulated, — then one will have a system that is merely able to deal with very simple situations, a system which copes very badly with new situations or situations requiring complicated thought.

Euler’s brain is not of this first extreme kind, then, for it is capable of amazing mathematical feats. It’s rather a brain of the second extreme kind, the kind of brain that created infinitesimal calculus, second-order logic, advanced linear algebra, complex analysis, tensors, systemics, non-linear dynamics, topology, computational theory, cybernetics, category theory, and much else. The kind of brain that did all of that a priori, without generating testable hypotheses.

Our brain does not (merely) have built-in notions of Euclidean geometry, natural numbers, ratios, and periodicity (for example) — it must have a general capacity for dealing with concepts, which means developing new concepts and engaging in highly conceptual thinking, allowing for analogies, comparisons, combinations, and so on. As we have seen with the white-room thought experiment, humans do not need experience to reach new objective truths about logic and mathematics. They apply this capacity for reasoning.

Getting from the finite cardinals of experience to inaccessible cardinals? From ordinary parallel lines (and curved surfaces) to four-dimensional geometries in which parallel lines intersect? From everyday liar paradoxes to formal systems, Gödel numbering, and self-reference inside formal systems? Not a chance, unless you’ve got some ability to reach entirely new concepts building, a priori, from old ones.

Somehow, we can understand that certain mathematical structures work in certain ways, and we can think of new structures and examine them too. This is why Euler could discover the objective truth of Euler’s formula without doing empirical science, and why we discovered the core truths of continuous symmetry (think of Lie groups) and four-dimensionality way before we saw anything in experience that “suggested” them.

An ability to (a) understand the basic facts and concepts of a field of inquiry, coupled with an ability to (b) manipulate these facts and concepts in one’s mind so as to come up with new, more complex concepts and facts through a combination (and stretching, and mixing, etc.) of what was previously known, is the mechanism I propose for mathematical cognition. For example, our mastery of the concepts of finitude and order allows us to combine them in ways that give rise to the concept of a transfinite ordinal.
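(The ordinal example can itself be made concrete. Writing ordinals below ω² as ω·a + b with finite a and b, the structure built by combining «order» and «finitude» already behaves in genuinely novel ways: for instance, addition stops being commutative. A sketch in Python; the encoding and the class below are my own illustration, not a standard library.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ordinal:
    """An ordinal below omega^2, written omega*a + b with finite a, b."""
    a: int  # coefficient of omega
    b: int  # finite part

    def __add__(self, other):
        # Ordinal addition: a finite left part is absorbed by an
        # infinite right addend (so 1 + omega = omega), whereas
        # omega + 1 genuinely extends omega.
        if other.a > 0:
            return Ordinal(self.a + other.a, other.b)
        return Ordinal(self.a, self.b + other.b)

    def __lt__(self, other):
        return (self.a, self.b) < (other.a, other.b)

one = Ordinal(0, 1)
omega = Ordinal(1, 0)

print(one + omega == omega)  # True:  1 + omega = omega
print(omega + one == omega)  # False: omega + 1 > omega
print(omega < omega + one)   # True
```

The non-commutativity is not a quirk of the code: it is a fact about the new concept itself, forced on us once «order» and «finitude» are combined this way.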

How all of that happens, in detail, is a computational problem, and an immense one at that. What I have done here (and this is something many other people have done) is to transform the problem of a priori mathematical cognition from a mystery into a problem. I have provided a basic outline of how such cognition could work: the computational manipulation of concepts.

[From here onward, this essay is under construction.]

Another problem remains, besides the computational problem of the manipulation of concepts: how is it that the original, ‘basic’ facts and concepts we have in our minds are correct? For example, I proposed that our ideas can reach the realm of transfinite ordinals starting from conceptions of order and finitude. Couldn’t our initial apprehension of order and finitude be so wrong as to prevent us from reaching a sensible conception of infinity? — Likewise, other basic concepts on which we built metric spaces and four-dimensional topologies could be misguided too. Somehow, we’re starting off from correct assumptions and concepts. Or, we have some ability to fix them up. This is something I’m still thinking about.

V. Other Possibilities for the A Priori. [W.I.P.: Work In Progress]
I also think the same is true of epistemic issues like justification, argument, and entailment. We can understand what is a rational plan, an intelligent argument, a valid inference, a clear exposition, a reliable strategy, an air-tight piece of reasoning, and from this we derive our capacity for thinking about pretty much anything. This is why we can do epistemology, philosophy, science, proof theory, and logic. Our evolutionary past (phylogeny) and personal development (ontogeny) resulted in a kind of machinery in our brains which I call autonomous reason. It is a kind of machinery that can understand the workings of these kinds of things, — maths, logic, evidence, — and it does not need to make an empirical hypothesis and test it to see if it’s true. It can just see that it’s true, understand that it’s true. It’s a machinery for understanding that and how certain things are the case.

This is how epistemology, logic, mathematics, and other areas of human inquiry are possible. We are agents who are good at inquiry. Nature and nurture made us that way. And we do not need to make empirically-testable hypotheses about good inquiry to make good inquiry or know that we are making good inquiry. This is the nature of understanding.

In fact, we couldn’t recognize evidence, justification, entailment, crucial tests, falsifications, empirical adequacy, explanations, corroboration, and all sorts of concepts involved in empirical science if we didn’t have this capacity of understanding evidence, justification, entailment, etc. to begin with!

One of the major objectives of the sciences of the mind is, I think, to understand how we understand things — how it is that our brains make computations in such a way as to deal intelligently with concepts representing mathematical structures, logical arguments, and evidence, and eventually (after careful, diligent thought) reliably reach objectively correct beliefs, like Euler’s formula, the workings of the non-Euclidean geometry used in general relativity, and basic tenets of epistemology and logic.

For this, we’ll have to understand much about the nature of mathematical, logical, and epistemological truth. I think we will need to understand how it is that computers — for I think our cognitive processes are computational — can represent with concepts (and what are concepts? how do they work?) features of a mathematical, logical, and epistemological nature, as well as deal with these things correctly enough to see general and important objective truths about them.

Empiricists have been wary of postulating such a faculty in human brains, one of which we understand next to nothing. But there is overwhelming evidence that we have it, from our success at inquiry. We are good at doing science, which shows we have a great understanding of evidence and justification, and we are good at reasoning from the armchair, which evidences our solid understanding of logic and mathematics.

We have faculties of reasoning which can reach truths autonomously from experience and empirical testing. It is our task to figure out how this works, and even how this could work.

Furthermore, we couldn’t even begin proceeding with science unless we had prior, reasonably reliable access to reasonably reliable epistemic standards, procedures, and so on. A philosopher named Peter Millican has summed up a skeptical challenge offered by Sextus Empiricus like this: “How can any criterion of reliable knowledge be chosen, unless we already have some reliable criterion for making that choice?”