Abstract: There are several ways to explain something about a particular thing or a class of things, but there seems to be only one way of explaining how things work: as a mechanism, in a sense to be explained. The explanation of how things work is all I will deal with in this essay. That whose inner workings can be explained mechanically may be said to be mechanically reducible. I wish to settle which kinds of things are mechanically reducible, and how this is to be accomplished. Furthermore, I wish to argue that some things are not mechanically reducible but that, crucially, their workings are not explainable in any other way either. Finally, I strive to come to terms with the (perhaps unsurprising) contention that a certain class of things is explanatorily basic.

What needs or admits of explanation and what doesn’t is not clear to me. What seems clear is that explanations must come to an end.¹

I’ve been told Aristotle argued one cannot explain why x has the essential properties it has,² for essential properties are basic from an explanatory point of view. Maybe he’s right. Consider the electron field: the only explanation of why it has the properties it has, and of why it interacts with other fields the way it does, would be an explanation of how it came into existence. There is no «mechanical reduction» of the electron field (a term I’ll explain later; roughly, it means there is no mechanical explanation of how the field works internally) unless string theory is true, in which case it would be the behavior of strings that is mechanically irreducible.

At any rate, given the existence of the field, all of its defining (essential) properties follow. These properties are just aspects of what the electron field is, not things that call for explanation. Or so I am compelled to judge.

However, could there be anything which has a complex property that is explanatorily basic in this sense? One of Aristotle’s examples, I think, is that one could not explain why humans are rational (or intelligent), because rationality (or intelligence) is essential to humanhood. It’s not clear how this thesis could be true. One way for it to be true, perhaps, is for the word ‘human’ to be defined as something whose properties include rationality. But clearly that’s a non-starter, and Aristotle did not intend it.

Perhaps he thought rationality to be a basic property, incapable of being decomposed into other properties. If Aristotle was wrong and it could be so decomposed (say, into properties P, Q, & R), then these other properties could themselves be essential to humanhood, so that the necessary rationality of humanhood would be grounded in (and thus explained by) P, Q, & R. On the other hand, if Aristotle was right and such decomposition is impossible, then I think rationality itself could not be explained.

I have two things to say here. The first is why I think this scenario would be deeply problematic: rationality is a complex property, and when things are complex, explanations of how they work are needed. The second is a follow-up: why I think the impossibility of decomposing rationality would entail that its workings could not be explained. So let me begin. It’s difficult to unpack what it means to be a complex property, but here’s a go. Rationality seems complex because…

  1. Rationality is versatile: it causes wildly distinct effects in response to a wide gamut of stimuli. It can deal not only with mathematical problems, but also with social reasoning, theory-building, psychological prediction, narrative construction, aesthetic creativity, argument evaluation, and more.
  2. Rationality is nuanced: small changes in the stimuli lead to unpredictable, yet transparently orderly, responses. This means it isn’t random, but it doesn’t follow simple paths either. For instance, small changes in a social situation alter a rational being’s response in subtle ways. In this sense, rationality is open-ended: bounded, for sure, but not strictly so.
  3. Rationality is powerful: it can reliably perform multi-step tasks with intertwined sub-tasks, like building a house, inventing a well-functioning machine from clunkier prototypes, predicting tomorrow’s political events, and concocting splendid mathematical tools.
  4. Rationality is varied: it admits much variation in its basic character. The rationality of Lev Landau (physicist) was very different from the rationality of Ernst Gombrich (art historian and philosopher), which is equally distant from the rationality of Marcus Aurelius (statesman and general).

Contrast this with a property like mass, or an entity like an electron. I am no expert in quantum field theory, but I don’t suppose these two to be versatile or nuanced. Their total space of possible interactions and behaviors can be described with a short equation containing a few terms. Neither are they powerful: they can merely affect some other particles and fields in simple ways. Finally, they barely admit of any variation; mass varies quantitatively along a single dimension, and electrons vary in momentum, spin orientation, and perhaps a few other features. So mass is a simple property, contrasting starkly with rationality, perhaps the most complex of properties.

For the sake of illustrative comparison, think of the electron and the human brain as black boxes. The behavior of the latter is wondrous and unpredictable in an orderly way (even if deterministic), while the behavior of the former is, by comparison, boringly simple. For that reason, the brain black box (BBB) is much more in need of explanation than the electron black box (EBB).

Now I’ll delve into the nature of explanations of how things work, the second of my purposes laid out above, and there are two points I wish to address. First, what would count as an explanation of the workings of either box? I submit this occurs through mechanical reduction. Second, is the electron black box in need of explanation? I submit the brain black box is in dire need of explanation, while the electron black box could be accepted without an explanation of how it works, owing to its simplicity. Let us see.

Another good idea I took from Aristotle is that one way to explain something is to show how it can be accounted for by something less in need of explanation than it is. So one way to explain the BBB is to account for its behavior by positing multiple smaller (less complex) black boxes, the interactions of whose behaviors accumulate into the behavior of the original black box, BBB. That’s what I mean by “mechanism”: smaller causal interactions between parts explain the properties of bigger stuff.

And since these smaller black boxes are less complex, they are less in need of explanation. So long as the sum of the «explanatory needs» of the small black boxes falls short of the «explanatory need» of the BBB, we’ll have a net explanatory gain. (Attempting to account for a black box of complexity X by invoking ten distinct kinds of smaller black boxes of complexity X/2 may not constitute explanatory progress.)
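Purely to make that bookkeeping vivid (the numerical «complexity» measure C below is my own illustrative assumption, not something defined in the essay), the condition might be put like this:

```latex
% Decomposing a box B into smaller boxes b_1, ..., b_n is explanatory progress
% only if the parts' explanatory needs add up to less than the whole's:
\sum_{i=1}^{n} C(b_i) \;<\; C(B)
% The parenthetical example fails this test: ten kinds of boxes of complexity X/2 give
% 10 \cdot \tfrac{X}{2} = 5X, which exceeds X.
```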

In further work, one can explain the brain black box further by decomposing each of these simpler boxes into even simpler ones. So one can explain the “reproductive cycle black box” by talking about the male gamete black box and the female gamete black box.³ And one’s explanation will be better if the gamete black boxes are themselves explained. For example, one may explain their workings via DNA black boxes, and via other black boxes representing biochemical functions vital to a foetus’s development. Finally, one may explain the workings of the DNA black boxes with biochemistry. This can go on until the basic level is reached.
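Here is a minimal sketch of that nesting in code, purely as an illustration: the class names and the toy string-shuffling “behavior” are my own assumptions, and nothing biological hangs on them. The point is only the structure of a mechanical reduction, where a bigger box’s behavior is nothing over and above the interaction of its smaller boxes.

```java
// A toy picture of nested black boxes: each box's behavior is produced by the
// interaction of smaller boxes, until we reach boxes taken as given (for now).
interface BlackBox {
    String behave(String stimulus);
}

class DnaBox implements BlackBox {
    @Override
    public String behave(String stimulus) {
        // Explanation bottoms out here for the moment; biochemistry would open this box up.
        return "dna-response-to-" + stimulus;
    }
}

class GameteBox implements BlackBox {
    private final String kind;
    private final BlackBox dna = new DnaBox(); // the gamete box is explained via a DNA box

    GameteBox(String kind) { this.kind = kind; }

    @Override
    public String behave(String stimulus) {
        return kind + "-gamete[" + dna.behave(stimulus) + "]";
    }
}

class ReproductiveCycleBox implements BlackBox {
    private final BlackBox male = new GameteBox("male");
    private final BlackBox female = new GameteBox("female");

    @Override
    public String behave(String stimulus) {
        // The "mechanism": the big box's output is just its parts' outputs combined.
        return male.behave(stimulus) + " + " + female.behave(stimulus);
    }
}

public class MechanismSketch {
    public static void main(String[] args) {
        BlackBox cycle = new ReproductiveCycleBox();
        System.out.println(cycle.behave("fertilization"));
    }
}
```

Each class could in turn be split further, mirroring the regress toward the basic level described above.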

Note: There are many ways of explaining stuff. One can explain how something came into existence, and that can be done either by giving its causal history or by explaining what it is for (in the case of evolution or intentional human design). For instance, explaining the origins or existence of DNA black boxes is best done through evolutionary theory and careful research on the specific evolutionary thread that led to DNA. However, in this whole essay I am considering only explanations of how things work.

These forms of explanation sound rather like mechanism: the explanation of big systems consists in detailing the behavior of interacting smaller pieces, and nothing more. I don’t mean mechanism in the old sense of the physical collision of solid bits; not even electromagnetism is mechanical in that sense. Anyhow, under this kind of mechanism the behavior of big systems is explained as an effect of the behavior of small systems. I’ve been flirting with the very strong thesis that this is the only way to explain a complex phenomenon. Teleological explanations of complex phenomena, in turn, seem like non-explanations: if one contends that seeds become trees because they aim at becoming trees, one has not explained in the least how they go about doing this. It really does seem complex phenomena can only be explained mechanically.

That this is true, however, is not sufficient to establish mechanism as a paradigm of explanation of how things work (i.e., a model, and the only model at that). At this juncture one may accept that complex phenomena can only be explained mechanically, but argue that simple phenomena admit of no such explanation. This seems right. What simpler black boxes could one invoke to explain a black box as simple as an electron, the electron field, or, say, a string-theoretic object? None! The failure of mechanism seems to be at hand: even if our physical theory has not gotten there yet, at some point an ultimately simple black box must be reached, and no mechanistic explanation of it will be possible.

I can feel the pull of the intuition that even this simple box must admit of some kind of explanation. But I think the only explanations we can give, say, for the behavior of the simple electron are (i) explaining what laws of nature or metaphysical necessities made it possible or inevitable that an entity like the electron would exist, and (ii) explaining what specifically caused the electron to come into existence. The latter seems answerable within a mechanistic framework, while the former is a bit of a puzzle. I think there are maximally simple and basic laws of nature or laws of metaphysics, and they admit of no explanation. They are unlike Boyle’s law, which can be explained in terms of statistical mechanics.

Otherwise, the behavior and workings of the electron itself do not seem to cry out for an explanation. Once we get to phenomena/functionalities/capacities/behavior/properties with this degree of simplicity, we must accept that they work the way they do because that’s the way reality is. Unlike complex properties like intelligence, a property like mass is best understood as a built-in feature of reality. An analogy with high-level programming languages may be useful: one cannot explain within Java how the if-else construct works, because it’s a basic feature of the reality of Java. In Java, you can only explain things that are built up out of such basics, like a video game. The basic features of reality are just like that: like Java, reality has basic functions, except they are mass and field interaction instead of if-else. But unlike Java, which is a high-level language built on top of lower-level languages, reality does not have a lower-level process on which it’s running. It’s ontologically basic.
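A minimal Java sketch of the analogy (the little game rule is my own invented illustration, not anything from the essay): behavior built out of the language’s primitives can be explained within the language; the primitives themselves cannot.

```java
public class BasicFeaturesAnalogy {
    public static void main(String[] args) {
        int playerHealth = 3;        // a toy "video game" rule, invented for illustration
        boolean hitByEnemy = true;

        // This composite behavior CAN be explained within Java, because it decomposes
        // into simpler pieces the language already provides.
        if (hitByEnemy) {
            playerHealth -= 1;
        } else {
            playerHealth += 1;
        }
        // But if-else itself cannot be explained within Java: it is a primitive the
        // language simply provides, much as (on the essay's picture) mass and field
        // interaction are primitives reality simply provides.
        System.out.println("health = " + playerHealth);
    }
}
```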

Thus, explanations of kinds (i) and (ii) seem to me sufficient to wholly explain the electron. It admits of no explanation of its workings, and that’s fine, because it’s as simple as things get. On the other hand, complex entities such as living human brains require some explanation of how they work, and that can be provided by decomposing the brain into simpler units: a mechanistic explanation.

Final comment: I’ve been musing on the idea that the mechanical/teleological distinction crumbles at very simple phenomena/functionalities/capacities/behavior/properties. What’s the difference between an electron interacting with other particles in a simple, law-like manner because of its «charge» and its doing so because «it wants to»? What’s the difference between the Higgs field (a) interacting with massive particles in certain ways because it has certain blind properties, and (b) doing so because it has a certain unconscious, simple-minded goal, if the two are empirically indistinguishable?

I wrote somewhere else that “any sufficiently simple teleological modus operandi would be indistinguishable from blind, mechanistic behavior.” Do particles in the Stern-Gerlach experiment go either upwards or downwards because they have some unconscious, simple-minded purpose, or is it because they have a quantized, mechanical, & unexplained property called “spin”?

Notes to Idea Two:
¹ There is the possibility (an epistemic possibility, I mean) of an infinite chain of causation stretching backwards in time, in which every event in reality is caused by some earlier event. I have argued here that this must be the case if anything that exists must have been caused to exist by something. There’s something I read by one Jim Holt which poses a considerable threat to my argument, though: “The assumption that explanations must always involve “things” has been called by one prominent contemporary philosopher, Nicholas Rescher, “a prejudice as deep-rooted as any in Western philosophy.” Obviously, to explain a given fact—such as the fact that there is a world at all—one has to cite other facts. But it doesn’t follow that the existence of a given thing can be explained only by invoking other things. Maybe a reason for the world’s existence should be sought elsewhere, in the realm of such “un-things” as mathematical entities, objective values, logical laws, or Heisenberg’s uncertainty principle.”

² Some property P may be necessary to an object (that is, the object has P in all possible worlds in which it exists) and yet not be part of its essence: P may be ontologically grounded in another necessary property of that object. A necessary property of an object will be essential to it only if it is “ground-level,” that is, only if it is not ontologically grounded in any other (necessary) property of that object. (Since ontological grounding is asymmetric, this account will never deny that two properties are essential on the basis that each ontologically grounds the other.)

³ Of course, having an explanation is of little use if one has no argument or evidence supporting it. Philosophers of science have spent a lot of time considering what kinds of explanation best lend themselves to support by argument and evidence. Unfalsifiable theories cannot be supported by evidence, and so, if they’re not supported by argument (e.g., inference to the best explanation), they’ll be explanations all right, but ones we have no reason to think are true. (Also, philosophical explanations can be justified via argumentation even if they are not falsifiable. Perhaps we should distinguish empirical falsifiability from conceptual falsifiability: some theories may be empirically irrefutable, yet refutable by careful argumentation. Others may be so confused and vague as to be irrefutable in both senses.)
