My head is aching, for I’ve got some things to say and I’m not saying them. In the next few days I’ll be laying down rough ideas that have been using up my cognitive time. I call it the development of pilot ideas.

Idea Two. What needs explanation and what doesn’t is not clear to me. What seems clear is that explanations must come to an end.¹

I’ve been told Aristotle argued that one cannot explain why x has the essential properties it has,² for essential properties are basic from an explanatory point of view. Maybe he’s right. Consider an electron: the only explanation of why an electron has the properties it has would be an explanation of how the electron came into existence. Given its existence, all its essential properties follow; these properties are just aspects of what the electron is. This is very intuitive to me.

However, could there be anything which has a complex property which is explanatorily basic in this sense? One of Aristotle’s examples, I think, is that one could not explain why humans are rational, because rationality is essential to humanhood. It’s not clear how this thesis could be true. One way for it to be true, perhaps, is for the word ‘human’ to be defined as something whose properties include rationality. But clearly that’s a non-starter, and Aristotle did not intend it.

Perhaps he thought rationality to be a basic property, incapable of being decomposed into other properties. If Aristotle was wrong and it could be so decomposed (say, into properties P, Q, & R), then these other properties could themselves be essential to humanhood, so that the necessary rationality of humanhood would be grounded in (and thus explained by) P, Q, & R. If Aristotle was right and such decomposition is impossible, then I think that rationality itself could not be explained.

I have two things to say here. First, why I find this scenario deeply problematic: rationality is a complex property, and complex properties need explanations of how they work. Second, why I think the impossibility of decomposing rationality would entail that it cannot be explained. So let me begin. It’s difficult to unpack what it means to be a complex property, but here’s a go. Rationality seems complex because…

  1. Rationality is versatile: it causes wildly distinct effects in response to a wide gamut of stimuli. It can deal not only with mathematical problems, but also with social reasoning, theory-building, psychological prediction, argument-evaluation, and more.
  2. Rationality is nuanced: small changes in the stimuli lead to unpredictable, yet transparently orderly, responses. This means it ain’t random, but it doesn’t follow simple paths either. For instance, small changes in a social situation alter a rational being’s response in subtle ways.
  3. Rationality is powerful: it can reliably do complex things like building a house, inventing a well-functioning machine from clunkier prototypes, predicting tomorrow’s political events, and concocting splendid mathematical tools.
  4. Rationality is varied: it admits much variation in its character. The rationality of Lev Landau (physicist) was very different from the rationality of Ernst Gombrich (art historian), which is equally distant from the rationality of Marcus Aurelius (statesman and general).

Contrast this with mass. I am no expert in quantum mechanics or quantum field theory, but I don’t suppose this property is versatile or nuanced, as it reacts according to a short equation with few terms. Neither is it powerful: it can merely affect some other particles and fields in simple ways. Finally, it admits no variation beyond one-dimensional quantitative variation. So mass is a simple property, contrasting starkly with rationality, perhaps the most complex of properties.

For illustrative purposes, think of the electron and the human brain as black boxes. The behavior of the latter is unpredictable in a wonderful, orderly way (even if deterministic), while the behavior of the former is comparatively boring and simple. The brain black box (BBB) is much more in need of explanation than the electron black box (EBB), and I submit it cannot be explanatorily basic.

Now I’ll delve into the nature of explanations, the second of my purposes laid out above, and there are two points I wish to address. First, what would count as an explanation of either box? Second, is the electron black box in need of explanation?

Another good idea I got from Aristotle is that one way to explain something is to show how it can be accounted for by something less in need of explanation than it is. So one way to explain the BBB is to account for its behavior by positing multiple smaller (less complex) black boxes, the interaction of whose behaviors accumulates into the behavior of the original black box, BBB. Since they are less complex, they are less in need of explanation. So long as the sum of the «explanatory needs» of the small black boxes does not exceed the «explanatory need» of the BBB, we’ll have a net positive explanation.
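This accumulation picture can be sketched in code. It is a toy illustration only, assuming nothing about real cognition: the component names and rules below are invented. The point is just that the “complex” box’s behavior is nothing over and above the composed behavior of simpler boxes.

```java
// Toy sketch of mechanistic decomposition: a "complex" black box whose
// behavior is wholly the composed behavior of simpler black boxes.
// All names and rules are invented for illustration.
public class BlackBoxes {
    // Two very simple black boxes: each maps a stimulus to a response
    // by a short, fixed rule (little "explanatory need").
    static int perceive(int stimulus) { return stimulus * 2; }
    static int evaluate(int percept)  { return percept + 1; }

    // The "complex" box is just the interaction of the simple ones;
    // a mechanistic explanation consists in exhibiting this composition.
    static int respond(int stimulus) {
        return evaluate(perceive(stimulus));
    }

    public static void main(String[] args) {
        System.out.println(respond(3)); // prints 7
    }
}
```

Once the composition is exhibited, nothing about `respond` calls for further explanation beyond what its parts already do.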

In further work one can explain the brain black box further by decomposing each of these simpler boxes into even simpler boxes. So one can explain the “reproductive cycle black box” by talking about the male gamete black box and the female gamete black box.³ And one’s explanation will be better if the gamete black boxes are themselves explained. For example, one may explain their workings via DNA black boxes and other black boxes representing biochemical functions vital to a foetus’s development. Finally, one may explain the workings of DNA black boxes with biochemistry. This can go on until the basic level is reached.

Note: explaining the origins or existence of DNA black boxes is best done through evolutionary theory and careful research on the specific evolutionary thread that led to that DNA. I have been thinking about how many kinds of things admit of explanation, and what kinds of explanations there are; in this whole essay, however, I am considering only explanations of how things work.

These forms of explanation sound rather like mechanism: the explanation of big systems consists in detailing the behavior of interacting smaller pieces, and nothing more. This means, I think, that the behavior of the big systems is explained as an effect of the behavior of the small systems. I’ve been flirting with the very strong thesis that this is the only way to explain a complex phenomenon. Teleological explanations of complex phenomena, by contrast, seem like non-explanations: if one contends that seeds become trees because they aim at becoming trees, one has not explained in the least how they go about doing this. It really does seem complex phenomena can only be explained mechanically.

That this is true, however, is not sufficient to establish mechanism as a paradigm of explanation of how things work (i.e., a model, and the only model at that). At this juncture one may accept that complex phenomena can only be explained mechanically, but argue that simple phenomena admit no such explanation. This seems right. What simpler black boxes could one invoke to explain a black box as simple as an electron, the electric field, or, say, a string-theoretical object? None! The failure of mechanism seems to be at hand: even if our physical theory has not gotten there yet, at some point an ultimately simple black box must be reached, and no mechanistic explanation of it will be possible.

I can feel the pull of the intuition that even this simple box must admit of some kind of explanation. But I think the only explanations we can give, say, for the behavior of the simple electron are (i) an explanation of what laws of nature or metaphysical necessities made it possible or inevitable that an entity like the electron would exist, and (ii) an explanation of what specifically caused the electron to come into existence. The latter seems answerable within a mechanistic framework, while the former is a bit of a puzzle. I think there are maximally simple and basic laws of nature or laws of metaphysics, and they admit of no explanation. They are unlike Boyle’s law, which can be explained in terms of statistical mechanics.

Otherwise, the behavior and workings of the electron itself do not seem to cry out for an explanation. Once we reach phenomena/functionalities/capacities/behavior/properties with this degree of simplicity, we must accept that they work the way they do because that’s the way reality is. Unlike complex properties like intelligence, a property like mass is best understood as a built-in feature of reality. An analogy with high-level programming languages will be useful: one cannot explain in Java how the “if-else” construct works, because it’s a basic feature of the reality of Java. One can only explain things in Java if they’re complicated, like a video game. The basic features of reality are just like that: reality, like Java, has basic constructs, but they are mass and field interactions instead of if-else. But unlike Java, which is a high-level language built on top of lower-level languages, reality has no lower-level process on which it runs. It’s ontologically basic.
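The Java analogy can be made concrete with a minimal sketch (the class and method names are mine): a complicated method can be explained within Java by exhibiting the simpler statements composing it, but the if-else construct those statements bottom out in admits no Java-level decomposition; it is a primitive of Java’s “reality.”

```java
// Within Java, complex behavior decomposes into primitives like if/else,
// but if/else itself has no Java-level explanation: it is built in.
public class Primitives {
    // A "complex" behavior, explainable by exhibiting its parts...
    static String classify(int n) {
        if (n < 0) {            // ...which bottom out in the primitive
            return "negative";  // if/else construct. Asking Java to
        } else if (n == 0) {    // explain *that* gets no answer: it is
            return "zero";      // simply how the language works.
        } else {
            return "positive";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(-5)); // prints "negative"
    }
}
```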

Thus, explanations of kinds (i) and (ii) seem to me sufficient to wholly explain the electron. It admits of no explanation of its workings, and that’s fine, because it’s as simple as things can get. On the other hand, complex entities such as living human brains require some explanation of how they work, and that can be given by decomposing the brain into simpler units: a mechanistic explanation.

Final comment: I’ve been musing on the idea that the mechanical/teleological distinction crumbles at very simple phenomena/functionalities/capacities/behavior/properties. What’s the difference between an electron interacting with other particles in a simple law-like manner because of its «charge» and because «it wants to»? What’s the difference between the Higgs field (a) interacting with massive particles in certain ways because it has certain blind properties, and (b) doing so because it has a certain unconscious simple-minded goal, if the two are empirically indistinguishable?

I wrote somewhere else that “any sufficiently simple teleological modus operandi would be indistinguishable from blind, mechanistic behavior.” Do particles in the Stern–Gerlach experiment go either upwards or downwards because they have some unconscious simple-minded purpose, or because they have a quantized, mechanical, & unexplained property called “spin”?

Notes to Idea Two:
¹ There is the possibility (epistemic possibility, I mean) of an infinite chain of causation stretching backwards in time, with every event in reality being caused by some earlier event. I have argued elsewhere that this must be the case if anything that exists must have been caused to exist by something. There’s something I read by one Jim Holt that poses a considerable threat to my argument, though: “The assumption that explanations must always involve “things” has been called by one prominent contemporary philosopher, Nicholas Rescher, “a prejudice as deep-rooted as any in Western philosophy.” Obviously, to explain a given fact—such as the fact that there is a world at all—one has to cite other facts. But it doesn’t follow that the existence of a given thing can be explained only by invoking other things. Maybe a reason for the world’s existence should be sought elsewhere, in the realm of such “un-things” as mathematical entities, objective values, logical laws, or Heisenberg’s uncertainty principle.”

² A property P may be necessary to an object (that is, the object has that property in all possible worlds in which it exists) without being part of its essence: P may be ontologically grounded in another necessary property of that object. A necessary property of an object is essential to it only if it is “ground-level,” that is, only if it is not ontologically grounded in any other (necessary) property of that object. (Since ontological grounding is asymmetric, this account will never deny two properties the title ‘essential’ on the grounds that each ontologically grounds the other.)

³ Of course, having an explanation is of little use if one has no argument or evidence supporting it. Philosophers of science have spent a lot of time considering what kinds of explanation best lend themselves to support by argument and evidence. Unfalsifiable theories cannot be supported by evidence, so if they’re not supported by argument (e.g. inference to the best explanation), they’ll be explanations alright, but ones we have no reason to think are true. (Also, philosophical explanations can be justified via argumentation even if they are not falsifiable. Perhaps we should distinguish empirical falsifiability from conceptual falsifiability: some theories may be empirically irrefutable, yet refutable by careful argumentation. Others may be so confused and vague as to be irrefutable in both senses.)