The Epistemology of Microphysics

Original link: https://www.edwardfeser.com/unpublishedpapers/microphysics.html

## The Epistemology of Microphysics: Summary

Edward Feser's lecture examines the knowledge gained in the study of microphysics (atoms, particles, and the like), and the limits encountered there.  Despite our lack of direct perceptual access to this domain, physics has achieved remarkable success, yet progress has slowed, prompting concern that contemporary research leans too heavily on untestable, aesthetically driven mathematical constructs.  Feser argues that both the success and the frustration have common epistemological roots, which Thomistic philosophy can illuminate.  Like natural theology, microphysics presses beyond sense experience but has intrinsic limits.  Both fields rely on analogy and inference to grasp realities beyond direct observation, reasoning from observable phenomena to underlying causes (such as particles) or to the attributes of God.  But as physics descends deeper into the micro-world (toward "prime matter"), and as theology ascends toward the divine essence, intelligibility diminishes: the concepts become ever more abstract and mathematical, remote from everyday experience.  This "marginalization of the phenomena" – the growing number of theoretical layers between observation and reality – mirrors the limits of *a posteriori* reasoning about God, which must rely on indirect evidence and analogy.  Feser warns against giving priority to aesthetic "beauty" in theories, since doing so risks detachment from empirical verification, echoing concerns about modern rationalism.  Ultimately, acknowledging these limits is essential to sound scientific method and to a realistic assessment of the fundamental nature of reality.

The Epistemology of Microphysics

Edward Feser

[A lecture delivered at the 49th Annual Meeting of the American Maritain Association at Loyola Marymount University, Los Angeles, CA on March 21, 2026]

Microphysics is the branch of physics that studies molecules, atoms, and elementary particles.  Two facts about it are especially noteworthy from the point of view of the theory of knowledge.  The first is the astounding amount we have come to learn about this part of reality, despite our having no direct perceptual access to it.  The second is that progress has slowed considerably in recent decades, at least in the opinion of many physicists.  Among these is Sabine Hossenfelder, who argues that the source of the problem is that contemporary research in fundamental physics is dominated by mathematical constructs that are nearly impossible to test empirically and embraced instead for largely aesthetic reasons.[1]  Similar criticisms have been raised by Roger Penrose, Lee Smolin, Peter Woit and others.[2]

What I will argue is that both the success and the frustrations of microphysics have common epistemological roots, which Thomistic philosophical considerations help to expose and illuminate.  While these considerations are not inherently theological, they have implications for theology just as they do for physics.  And it will turn out that the scope and limits of what we can know where the micro-world is concerned parallel the scope and limits of what Thomism says we can know by reason alone about the existence and nature of God.  In both cases, the human intellect can press well beyond what the senses alone could reveal, but only so far.  And in both cases, the intellect’s powers give out as it approaches one end or the other of the ontological spectrum – the divine essence being at the top of that spectrum, and what Thomists call prime matter at the bottom.

From atomos to strings

The place to begin is with a brief overview of how physics has come to know what it knows about the micro-world.[3]  The first steps were taken by Democritus and other ancient Greek atomists.  Noting that physical objects are divisible into parts, those parts into smaller parts, and those parts into yet smaller parts, it was natural to extrapolate to the existence of even smaller parts below the level of those that can be perceived.  Phenomena such as evaporation, density, and permeability also lent support to the idea.  Evaporation could be explained by reference to unobserved particles moving apart from one another, and density by reference to such particles being tightly packed together.  The movement of sounds and liquids through what look to be solid objects could be explained by way of the thesis that such objects are actually collections of particles separated by empty space, which provides an avenue through which sound and liquid can pass.

However suggestive, such speculations did not yield rigorously testable predictions.  But considerable progress was made after interest in atomism and related ideas was revived with the scientific revolution.  Studying the compression and expansion of gases, Robert Boyle (1627-1691) found that changes to the volume of a gas did not alter its mass.  This was hard to understand unless a gas is not a continuous thing but rather a collection of particles separated by empty space, with compression and expansion involving changes in the distances of the particles from one another.  Moreover, on the basis of this assumption, Boyle was able to formulate and support by experimental test his famous law describing the relationship between the volume of a gas and its pressure.
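In modern notation, the relation Boyle established can be stated simply: for a fixed quantity of gas held at constant temperature, pressure and volume are inversely proportional:

$$PV = k, \qquad \text{equivalently} \qquad P_1 V_1 = P_2 V_2 \quad \text{(constant temperature)}.$$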

The kinetic theory of gases developed by Daniel Bernoulli (1700-1782) added further detail to the story.  Since a gas will spread evenly throughout a container it occupies, the particles that make it up must be in continual random motion.  For if they weren’t, they would collect in some part of the container rather than remaining evenly spread out.  The pressure of a gas could then be analyzed in terms of the collisions of particles against the inside surface of the container.  As the volume of the container increases or decreases, the distance the particles would have to travel to hit its inside surface will correspondingly increase or decrease, which leads to decreases or increases in pressure in conformity with Boyle’s law.  Temperature could also be explained in terms of the kinetic theory, which identifies it with the average kinetic energy of particles in motion.  James Clerk Maxwell (1831-1879) and Ludwig Boltzmann (1844-1906) would go on to formulate the theory with mathematical rigor.
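In the mathematically rigorous form, the pressure exerted by $N$ particles of mass $m$ confined to a volume $V$ is fixed by their mean squared speed, and temperature is identified with average kinetic energy:

$$PV = \tfrac{1}{3} N m \langle v^{2} \rangle, \qquad \langle E_{k} \rangle = \tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k_{B} T,$$

where $k_B$ is Boltzmann's constant.  Boyle's law falls out at once: at fixed temperature, $\langle v^2 \rangle$ is fixed, and so the product $PV$ is constant.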

Such mathematical and predictive precision made the reality of unobserved particles harder to deny, but what really settled the matter in the minds of scientists were developments in modern chemistry.  The law of the conservation of mass was established by Antoine Lavoisier (1743-1794), and this opened the way to determining the mass of each element in a compound.  By doing so, Joseph Proust (1754-1826) was able to show that the elements are always to be found in compounds in fixed proportions, a principle that would come to be known as Proust’s law.  Applying Proust’s law, John Dalton (1766-1844) argued that there must be some smallest unit of an element that cannot be broken down into parts that retain the properties of that element.  This he called an atom.  A molecule, the smallest unit of a compound, is thus made up of atoms.  Using Proust’s law, the relative masses of different elements, and thus of different atoms, could be deduced.  For example, from the proportion of hydrogen to oxygen in water, the oxygen atom could be shown to be sixteen times more massive than a hydrogen atom.  This in turn allows us to infer the relative masses of yet other atoms.  For example, since carbon dioxide also contains oxygen, knowing the mass of oxygen allows us to determine the mass of carbon.  From the relative atomic mass of the elements, Dmitri Mendeleev (1834-1907) was able to work out the periodic table, and successfully to predict new elements and their properties from the gaps in the table. 
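To make the inference about oxygen explicit: water is found by measurement to be about eight parts oxygen to one part hydrogen by mass, and each molecule of water (H2O) pairs one oxygen atom with two hydrogen atoms.  It follows that

$$\frac{m_{O}}{m_{H}} = 2 \times \frac{8}{1} = 16.$$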

Now, the chemist Amedeo Avogadro (1776-1856) had argued that equal volumes of gases contain equal numbers of molecules, given that temperature and pressure are fixed.  This is so even if the volumes differ in mass.  The mathematical analysis of gases worked out by Maxwell and Boltzmann opened the way to determining exactly how many molecules are in a volume of gas.  This allowed, in turn, for inferences concerning the absolute mass of molecules and atoms, and about their sizes as well.  The predictive successes of a theory that revealed even the mass and size of atoms as well as the properties of the elements made the reality of the atom appear certain.
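For illustration, once the number of molecules per mole is pinned down (Avogadro's number, roughly $6.022 \times 10^{23}$ per mole), the absolute mass of a single atom follows by simple division from the element's molar mass.  For hydrogen:

$$m_{H} \approx \frac{1\ \text{g/mol}}{6.022 \times 10^{23}\ \text{mol}^{-1}} \approx 1.66 \times 10^{-24}\ \text{g}.$$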

The electron was discovered by J. J. Thomson (1856-1940), who showed that cathode rays could be made to curve away from a negatively charged plate and toward a positively charged one.  This showed them to behave like particles rather than waves, and particles of a negatively charged kind, specifically.  From the size of the charge of these particles, their mass was worked out, and this turned out to be smaller than that of the smallest atom.  Study of the photoelectric effect, in which light causes electrons to be emitted from a metal surface, showed that electrons are already present in the atoms from which they are ejected.  Further study showed that electrons had the same properties even when they came from the atoms of different kinds of metal.
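One standard reconstruction of the reasoning runs as follows: in Thomson's apparatus, crossed electric and magnetic fields can be tuned until their forces on the ray balance, $eE = evB$, which fixes the particle speed at $v = E/B$; the measured deflection then yields the charge-to-mass ratio $e/m$.  Once the charge $e$ was independently determined (as in Millikan's later oil-drop experiment), the mass followed:

$$m_{e} = \frac{e}{e/m} \approx 9.1 \times 10^{-28}\ \text{g},$$

about $1/1800$ the mass of a hydrogen atom.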

Since atoms are electrically neutral, the presence in them of negatively charged electrons suggested that there must be some component with a positive charge to neutralize the negative charge.  Experiments by Ernest Rutherford (1871-1937) involved firing positively charged particles at thin gold leaf, behind which was a photographic plate.  The resulting patterns showed that most of the particles passed through as if nothing were there, while a few were significantly deflected.  This suggested that the atoms making up the gold leaf were mostly empty space, with a large positively charged mass at the center from which the deflected particles were repelled.  Rutherford concluded that the atom consists of a nucleus surrounded by a cloud of electrons.  Later experimentation revealed the existence of radiation that carried no electric charge.  This led to the discovery of the neutron by James Chadwick (1891-1974), and then in turn to the model of the atomic nucleus developed by Werner Heisenberg (1901-1976), according to which it comprises a tightly packed mass of positively charged protons and uncharged neutrons.

Yet other particles were being discovered throughout the era in which these advances were being made.  For example, some cases of radioactive decay showed energy loss that could not be accounted for.  Given the law of conservation of energy, this energy could not simply have disappeared, so physicists such as Wolfgang Pauli (1900-1958) and Enrico Fermi (1901-1954) speculated that it was being carried away by particles which came to be called neutrinos.  The evidence indicated that these particles, if they existed, had very little if any mass and were not sensitive to nuclear or electromagnetic forces, all of which made them extremely difficult to detect.  Only experimental apparatus with shielding that could rule out an effect’s having been produced by any other particle could confirm their existence, and this was achieved in 1956.

So far I have been emphasizing the role theoretical considerations have played in discovering particles, but it is important to consider also the role of experimental apparatus like the kind just referred to.  Early twentieth-century devices included the Wilson cloud chamber, in which speeding particles can be made to leave behind small water droplets, yielding tracks from which inferences can be made about the nature of the particles.  With a Geiger-Müller counter, the presence of a radiating particle produces an electrical discharge that is converted to a clicking sound, and the number of clicks per second reflects the amount of radiation.  Later devices include bubble chambers, the use of which involves sending particles through pressurized superheated liquid, in which the particles leave tracks of bubbles that can be photographed and then analyzed.  Nuclear emulsions are photographic plates on which particle tracks are recorded.  Scintillators are devices which emit flashes of light when struck by particles.  In spark chambers, detected particles trigger an electrical pulse.  These gave rise to sophisticated devices in which the firings of individual counters are processed in order to produce an image that reconstructs the paths of the particles detected.

The various kinds of particle detectors are sometimes classified into two main groups.[4]  The first are visual detectors, which include cloud chambers, nuclear emulsions, and bubble chambers.  The second are electronic detectors, which include Geiger-Müller counters, scintillators, and spark chambers and their descendants.  The historian of science Peter Galison characterizes these two classes as “image” devices and “logic” devices, respectively.[5]  Image devices or visual detectors produce detailed pictorial representations of particular events.  Logic devices or electronic detectors take in much less information about particular events, but from a sequence of these events can build up a detailed representation.  The representations generated by image devices require interpretation in order to determine which features are significant.  With logic devices, computer processing can be used to screen out features of the situation that are not significant, before the representation is formed.  Both approaches can also be combined to yield what Galison calls an “image-logic hybrid.”[6]  Naturally, in all cases background theoretical assumptions are brought to bear when determining which features of a representation are significant, and in otherwise interpreting it.

Also crucial to contemporary microphysics are particle colliders, which accelerate counter-rotating beams of particles to very high energy before bringing them to crash into one another.  The idea is to determine whether particles have more fundamental particles as component parts, by using these collisions to break them into those parts.  Naturally, in this case too, interpreting the results of experimentation requires bringing background theoretical assumptions to bear.  As one textbook puts it, “it is like shooting bullets at a fine watch, and trying to figure out how the watch worked from the shape of the pieces that get knocked out.”[7]

But by this means, various subatomic particles were discovered, such as mesons and hyperons.  In the early 1960s, Murray Gell-Mann (1929-2019) and George Zweig (1937- ) proposed that the protons and neutrons that comprise the atomic nucleus are themselves made up of more fundamental particles (called quarks), and their existence was confirmed experimentally in the following decades.  Gradually, physicists worked out what came to be known as the Standard Model, according to which the fundamental particles of which all other matter is composed comprise fermions on the one hand and bosons on the other.  Fermions include various types of quarks along with leptons, which comprise electrons and neutrinos as well as some other particles.  Bosons include, among other particles, photons and the famous Higgs boson.

The controversy today is over whether there are additional particles, or more fundamental entities out of which the particles now taken to be fundamental are composed, and if so how we could establish such conclusions.  Supersymmetry is an approach which proposes that there are a large number of additional particles that partner the particles of the standard model, plus a few more.  Its attraction derives from its mathematical elegance, the solution it provides to some technical problems in particle physics as it stands today, and the fact that some of the particles it posits could plausibly be identified with dark matter (that is to say, the matter that is thought to make up the bulk of the universe but neither absorbs nor emits light).  The trouble is that so far the existence of none of these particles has been confirmed, and it is not clear what could confirm it given that existing methods have failed.

Meanwhile, string theory holds that the fundamental constituents of the material world are not point-like particles, but rather one-dimensional strings.[8]  The theory also posits ten spacetime dimensions, six of which are “compactified” in a way that keeps them from being detectable at the macroscopic level.  Given the characteristics attributed to the strings, they will appear like the particles familiar from traditional physics.  String theory also incorporates supersymmetry.  Like supersymmetry, it possesses a mathematical elegance that has made it attractive to physicists.  It provides a way of unifying gravity with the other three fundamental forces in nature (electromagnetism and the weak and strong forces).  But it is even more difficult to test experimentally than supersymmetry.  In particular, the extremely small size of strings and the extra dimensions the theory posits make it extremely difficult to test by way of particle collider experiments.

Retroduction and analogy

Now, you might suppose that this survey by itself already answers the question of why physics has learned as much as it has about the micro-world but has had trouble progressing further.  It might seem enough simply to say that it has established the reality of molecules, atoms, electrons, protons, neutrons, quarks, and so on by way of experimental testing and has been unable so far to establish the reality of supersymmetry particles or strings because of the lack of experimental evidence.  But that things are not that simple should be obvious enough when one recalls the traditional dispute between instrumentalists and scientific realists.  If instrumentalists are correct, even the reality of the particles modern physics does accept has not really been demonstrated by the experimental evidence.  That evidence shows only that our most successful theories are handy instruments for making predictions and developing technologies, but they may for all that merely be useful fictions.

Of course, scientific realists famously retort that the usefulness of scientific theories would be a miracle if the entities they posit were not real.  Ian Hacking argues that in some cases it’s not merely that experiment confirms what a theory says about unobservable particles, but that the experimental procedure itself presupposes their reality.[9]  For example, we can build an apparatus by which we can fire electrons at other particles and thereby interfere with them.  In such a case, it’s not that the theory that posits electrons is a useful instrument, but that electrons themselves are useful instruments, which they could hardly be if they did not exist.

This is not the issue I want to address here, however.  For present purposes, I will take it for granted that the realist is correct to hold that the success of modern microphysics gives us reason to believe in the existence of the particles it posits.  The questions I want to address concern exactly how it arrives at their existence and exactly what it tells us about their nature.  A pair of classics in post-positivist philosophy of science provide some clues.  In his 1958 book Patterns of Discovery, Norwood Russell Hanson showed that the inferences by which modern physics arrives at fundamental particles do not fit the models of scientific reasoning then most familiar in philosophy of science.[10]  In particular, they aren’t Baconian inductive inferences from particular observations to general laws, and they aren’t Popperian deductive inferences from bold conjectures.  Rather, they are instances of retroductive or abductive inference, also called “inference to the best explanation.” 

Our overview of the history of microphysics vividly illustrates this.  As we have seen, theorists aim to explain observed phenomena such as: the divisibility and permeability of ordinary material objects; the pressure and temperature of gases; the proportions of the elements in a compound; the behavior of cathode rays and the photoelectric effect; the effects of firing particles at gold foil; unexplained energy loss in radioactive decay; tracks in cloud and bubble chambers and on emulsion plates; patterns of electrical pulses in particle detectors; and so on.  They construct theories that posit unobserved particles of various kinds, and show how the observed phenomena are made intelligible on the supposition that these unobserved particles are real and behave in the ways described by the theory.

But the history we surveyed also illustrates how crucial mathematical descriptions, specifically, have been to the development of modern microphysics from Boyle to string theory.  Here microphysics has simply applied to its own domain a more general mathematicizing tendency that has characterized modern physics from Galileo and Descartes onward.  Hanson has much of importance to say about how this mathematicization increasingly removed from modern physics’ conception of unobserved particles the attributes that characterize matter as we ordinarily experience it.  The Greek atomists had already taken the step of denying that basic particles have features like color or taste, though they did attribute to them shape, position, and motion.  When early modern scientists and philosophers revived the idea, they tightened it up into the now familiar distinction between primary and secondary qualities, with secondary qualities like color, taste, sound, and odor taken to resemble nothing in particles themselves.  The primary qualities that particles were said to possess were thought of as susceptible of a rigorous mathematical analysis. 

Since these included size, shape, position, and motion, the initial tendency was to think of unobserved particles on the model of billiard balls.  This made them at least seem picturable or visualizable despite their lack of color, and in any event to be easily understandable on the model of ordinary objects.  As Hanson notes, however, “the impossibility of visualizing ultimate matter is an essential feature of atomic explanation,”[11] and the logic of retroductive reasoning “necessarily forced physicists to consider matter as lacking in any direct, physically interpretable properties.”[12]  The reason is that to attribute to particles the same properties as those possessed by the phenomena to be explained would merely kick the problem back a stage rather than explain anything.  For example, if you tried to explain the green color and distinctive odor of chlorine gas by positing particles that were green and had that odor, you would be replacing the problem of explaining the color and odor of the gas with the problem of explaining the color and odor of the particles.[13]

Hence, observes Hanson, contemporary microphysics has not only stripped particles of the secondary qualities but also “denies its fundamental units any direct correspondence with the primary qualities, the traditional dimensions [and] positions.”[14]  For they “cannot be the point-particles of classical natural philosophy” if they are to account for the entirety of the observational evidence they are constructed in order to explain:

For example, electrons ‘veer away’ from negatively charged matter; they must therefore be like particles.  But electron beams diffract like beams of light, and therefore they must be like waves too.  The physicist fashions the electron concept so as to make possible inferences both to its particle and its wave behavior, and a conception so fashioned is unavoidably unpicturable… If microphysical explanation is even to begin, it must presuppose theoretical entities endowed with just such a delicate and non-classical cluster of properties…

Mathematical techniques more subtle and powerful than the geometry of Kepler, Galileo, Beeckman, Descartes and Newton are vital to today’s physical thinking.  Only these techniques can organize into a system of explanation the chaotically diverse properties which fundamental particles must have if observed phenomena are to be explained.[15]

Again, whereas “the punctiform mass, a primarily kinematical conception, is the starting point of classical particle theory,” in the quantum theory that replaced it, “the wave pulse, a primarily dynamical conception, is the starting-point.”[16]  Because a particle like an electron not only has such wave-like as well as particle-like properties, but also cannot be said simultaneously to have both a determinate position and a determinate velocity, it “can be no more than an ingenious mathematical combination of physically distinct parameters,” extremely remote from the billiard ball model that we still tend reflexively but erroneously to think in terms of.[17]
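The indeterminacy in question is the one codified in Heisenberg’s uncertainty relation, which bounds from below the product of the spreads in position and momentum:

$$\Delta x \, \Delta p \geq \frac{\hbar}{2},$$

where $\hbar$ is the reduced Planck constant.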

Precisely because the concept of a particle is mathematically constructed in this way, it is a mistake to suppose that a particle’s peculiar characteristics, such as the impossibility of simultaneously determining its position and velocity, simply reflect epistemological limitations on our part.  As Hanson emphasizes, what we have in this case is a conceptual impossibility rather than a mere technical impossibility.[18]  In particular:

Unless the whole of quantum theory is discarded, uncertainty is here to stay; it is built into the conceptual pattern of quantum mechanics… The uncertainty principle is not a detail of microphysics, it is an essential part of the plot.  It patterns microphysical phenomena for the physicist; it is not just an awkward anomaly, as some suppose.  The pattern was built up by studying such phenomena, but it is not itself one of those phenomena.[19]

As this last remark indicates, though, by no means does this entail that what microphysics attributes to particles is purely a matter of convention and entirely divorced from empirical reality.  It is precisely the need to explain the actual observational evidence that led microphysics to adopt the mathematical constructions in question.  This is true not only of the uncertainty principle but also of other features, such as the “combination of wave and particle notations.”[20]  All the same, though the retroductive inferences of microphysics begin with observation, they go well beyond it to arrive at entities which cannot be conceived of except in the most abstract mathematical terms, and of which we must deny the properties that characterize the material substances of ordinary experience.

But Hanson’s account needs qualification.  True, the particles of modern microphysics are radically unlike the particles of classical physics, much less the tiny billiard balls of popular imagination.  However, we do still speak of them as particles.  When the quantum theorist attributes to them wave-like properties, he does thereby speak of waves.  When the string theorist posits strings as the ultimate constituents of nature, he does speak of strings.  All of this remains true despite the fact that the particles, waves, and strings in question are radically unlike any of their familiar counterparts in everyday experience.  There is a reason we continue to use these terms, and it is captured in the other classic of post-positivist philosophy of science that I had in mind in my earlier remark, namely Mary Hesse’s 1966 book Models and Analogies in Science.[21]  When we speak of the fundamental constituents of matter as particles, waves, or strings, we are speaking analogically, and we are modeling unobserved reality on familiar observed realities.

Hesse distinguishes two main attitudes that have been taken by scientists and philosophers of science toward models and analogies.  The first, associated with Pierre Duhem and others, holds that while models and analogies might be useful in suggesting theories, what ultimately matters is the mathematical system the theorist arrives at.  The models and analogies that initiated theorizing drop away as inessential or even potentially misleading.  The second attitude, represented by the physicist N. R. Campbell, holds that models and analogies are necessary both in order to give an abstract mathematical theory genuine explanatory power, and to extend it to new phenomena. 

This second view is the one with which Hesse sympathizes, and in expounding it she draws a distinction between negative analogies, positive analogies, and neutral analogies.[22]  Take, for example, the work Boyle, Bernoulli, Maxwell, and Boltzmann did in theorizing about gases.  It starts with a model in which a gas is thought of as a collection of particles analogous to tiny billiard balls.  Certain features of billiard balls, such as color, are not attributed to the particles.  This would be an instance of a negative analogy.  Features of billiard balls such as motion and impact are attributed to the particles, and these would be examples of positive analogies.  Then there are features of which we do not initially know whether they should be attributed to the particles, and these are the neutral analogies.  These are what make it possible to extend the theory to new phenomena by making predictions based on the supposition that the entities posited either have or don’t have the features in question.

In the conception of particles developed within modern microphysics, what at one time looked like neutral or even positive analogies have often turned out to be negative analogies.  Still, Hesse suggests, some positive analogies retain their importance even when they have to be stretched.  For example, it remains useful to conceive of fundamental entities as in some respects positively analogous to particles and in other respects positively analogous to waves, while denying of them any features of particles and waves that would make these attributions inconsistent with one another.[23]  Though this puts strain on the concepts in question, it is reasonable given “the extreme difficulty of finding any satisfactory alternative model.”[24]

The triplex via and microphysics

Now, what does all this have to do with Thomism?  The most obvious connection, of course, concerns Hesse’s point about the role of analogy in scientific theorizing.  The importance of analogical language to human cognition is a longstanding theme of Thomism, and Thomists like William A. Wallace have followed Hesse in emphasizing the importance of models and analogies to science in particular.[25]

It also so happens that insights in some respects similar to Hanson’s were developed independently by Jacques Maritain in 1932 in The Degrees of Knowledge.[26]  In particular, like Hanson, Maritain holds that microphysical explanation involves the positing of a system of unpicturable, mathematically constructed theoretical entities which, though it corresponds in a general way to something real, in part also reflects the symbolic mode of representation in which it is cast.  Emphasizing, as Hanson does, how much theory as opposed to observation contributes to physics’ characterization of the micro-level, Maritain says that “it is in no wise necessary that any physical reality… correspond determinately to each of the symbols and mathematical entities in question.”[27]  Expanding on this, he writes:

It does not follow that the mathematical beings which play a part in this synthesis actually represent real causes and entities… It is only en bloc that the physical theory is verified by means of the correspondence established between the system of signs that it employs and experimentally known measurable events.[28]

At the same time, Maritain is clear that this does not entail a thoroughgoing anti-realism.  He acknowledges that “the existence of atoms… has reached a degree of probability bordering on certitude” and that the same can be said of protons, electrons, and neutrons (which were among the particles known at the time he was writing).[29]  But he emphasizes that this certitude extends only to the existence of these particles as opposed to “the nature and structure… which science attributes to them.”[30]  Here we are largely dealing with what Maritain calls “symbolically reconstructed real beings” or “mathematically conceived entities that take [the] place” of the particles themselves.[31]

Unlike Hanson, though, Maritain is a Thomist, and to the account of microphysics that he shares with Hanson, he adds two important further considerations that reflect this.  First, with the Thomist tradition he holds that the first or most fundamental accident of material substance is quantity in the sense of extension, the property by which something is located in space, has one or more dimensions, and possesses adjacent parts.  In Maritain’s view, the more plausibly a theoretical entity posited by microphysics might be said to possess quantity in this sense, and the fewer are the layers of theory that come between it and observational evidence, the stronger is the case for judging it to be real.[32]  But the more remote it is from possessing quantity in this sense, and the more numerous are the layers of theory that separate it from actual observational evidence, the stronger is the case for regarding it as a mere “being of reason” lacking any existence independent of theory.

Second, while quantity is the first accident of material substance, by no means does it exhaust its nature, and neither do mathematical attributes more generally.  Accordingly, in Maritain’s view, physics does not reveal to us the essence or “inner ontological nature” of the entities it posits, but rather only their mathematical relations to what is measurable.[33]  Here he takes a position similar to what is now known as epistemic structural realism (and indeed, in making his case he cites Henri Poincaré and Arthur Eddington, two ancestors of the view).[34]  In fact, Maritain says that in its focus on mathematical description, physics gives us a “substitute” for the essence of matter rather than that essence itself.[35]

Interestingly, Maritain also says that when natural theology “establishes conclusions about the nature (as analogically known) and the perfections of [God considered as] pure Act through the three-fold way of causality, eminence, and negation,” what it gives us is a “substitute” for the divine essence rather than that essence itself.[36]  And here I want to make a suggestion that goes beyond anything Maritain says, which is that there is an interesting parallel between the triplex via or threefold way by which natural theology arrives at conclusions about the divine nature and the way microphysics arrives at conclusions about unobserved particles.  This parallel, I argue, accounts for how physics can know as much as it does about the micro-world, but also why it may be reaching its limits.

To some extent the parallel will be obvious enough from what has been said so far.  Just as analogical language makes it possible for us to talk about God, so too does it make it possible for us to talk about unobservable particles.  Just as our knowledge of God begins with the via causalitatis, by which we reason to God as cause of the world, so too our knowledge of particles begins by positing them as the explanation of what we observe.  Just as, by the via negationis, we deny of God the limitations that characterize created things, so too do we deny of unobserved particles characteristics of ordinary material substances, such as color, odor, and determinate position or velocity. 

It is perhaps less obvious how there is a parallel here with respect to the via eminentiae.  Let me explain this by first saying a little about how exactly the via eminentiae or way of eminence works in natural theology.  As Daniel de Haan notes in a recent essay on the triplex via, whereas the way of causality tells us that God exists and the way of negation tells us how he is unlike created things, the way of eminence is what gives us positive knowledge of his nature.[37]  It tells us, for example, that God’s essence is subsistent existence itself; that since existence is the supreme perfection, God is supreme in perfection; and that the attributes of created things are finite perfections which imitate the divine perfection that is their source.  The via eminentiae also orders the divine attributes.  For example, extrinsic attributes (such as God’s being a creator, which reflects the world’s dependence on him as its cause) must be understood in terms of intrinsic attributes (such as God’s goodness, which reflects the divine essence itself rather than any relation things bear to him).  There is also an ordering among the intrinsic attributes themselves.  For example, since the power of intellect is prior to the virtue of wisdom, God’s wisdom must be understood in terms of his intellect.  And so on.

Now, as subsistent existence itself and supreme perfection, God is pure actuality.  That entails that there is no passive potency in him – no capacity whatsoever to change or be changed – but also that he is supreme in active potency, the capacity to bring other things into being and otherwise to affect them.  In Thomistic metaphysics, this puts him at the apex of the hierarchy of reality, at the other extreme end of which is prime matter, which is pure passive potency.  In my book Aristotle’s Revenge, I say the following about prime matter and its relationship to fundamental particles and observable physical substances:

Prime matter on its own is wholly indeterminate.  By itself it is not an actual particular physical thing of any kind, but rather the pure potentiality to be a particular physical thing of some kind.  If we think of matter on the analogy of the position of a needle on a dial and the values on the dial as representing the various specific kinds of material thing that might exist, prime matter is like a needle that is flitting wildly all across the face of the dial.  It has no intrinsic tendency to stop at any particular value, though potentially it could be made to stop at any of them…

Now, if prime matter is like the needle flitting wildly across a dial’s face, a fundamental particle, considered apart from any substance it might partially constitute, is like a needle which has narrowed its flitting somewhat to a certain range of possible values.  Fermions do not have the indeterminacy of prime matter, for they are matter of a certain kind, with properties and causal powers distinctive of that kind.  However, they do maintain a very high degree of indeterminacy insofar as there is an extremely wide variety of more complex kinds of matter that they might constitute.  They do not flit back and forth past every possible value on the dial, but they do still flit past most of them.  A fermion qua fermion can be a constituent of water, a stone, a dog, or what have you.  Water and stone, by contrast, are like a needle that has settled down to flitting only across a very narrow range of possible values.  Water may take a liquid, solid, or gaseous state; stone may be arranged in a pile or used to construct a wall.  Compared to a fundamental particle, though, there is relatively little transformation they can undergo consistent with remaining what they are (viz. water or stone).  Whereas prime matter is the pure potentiality to be any material thing, fermions have a somewhat narrower range of potentiality, and water a much narrower range.[38]

You might say, then, that just as God is eminent in actuality or active potency, prime matter is eminent in potentiality or passive potency.  Just as angelic intellects, given their immateriality, are closer to the pure actuality and perfection of God than anything else in creation, so too fermions and bosons, given their indeterminacy, are closer to the pure potency of prime matter than anything else in creation.  Just as the attributes of material things preexist in God as their efficient cause, they preexist in fermions and bosons as their proximate material cause (prime matter being their ultimate material cause).  In theorizing about fundamental particles, physics deploys something analogous to the via eminentiae insofar as it deduces what the natures of these particles must be like in order for them to have the passive potency to constitute any and all of the ordinary material substances of our experience.  And just as the way of eminence orders the divine attributes, so too does its analogue order the properties attributed to particles.  As Hanson and Maritain emphasize, the mathematical description is paramount, because it is what does the real explanatory work.  Other descriptors, such as “particle” and “wave,” are interpreted in light of the mathematics, and any connotations of these terms that are inconsistent with the mathematics are negated.

Now, the higher we go in the hierarchy of being, the less firm is the intellect’s grasp of what it knows – not because angelic intellects and God are not intelligible in themselves, but rather because our intellects are naturally most at home in the realm of corporeal and extended things, and can only dimly grasp what is immaterial and metaphysically simpler or non-composite.  This is why Maritain says that we must make do with a “substitute” for the divine essence rather than a direct grasp of the divine essence itself.  But the intellect’s grasp of what it knows is also less firm the lower we go in the hierarchy of being, because here we get increasingly farther from ordinary corporeal and extended things and approach what is minimally intelligible in itself, namely prime matter.  Hence here too we must make do with what Maritain characterizes as mathematical “substitutes” for the essences of particles.

We should expect, then, that physics will be able to establish less, and be less confident about what it does establish, the deeper we go into the microstructure of the physical world.  And that is indeed exactly what we find.  As Richard Dawid notes in his book String Theory and the Scientific Method, over the course of the twentieth century microphysics saw a progressive “marginalization of the phenomena.”[39]   With the physics of molecules, atoms, electrons and the like, experiment resulted in phenomena that could be observed by anyone, such as changes in the pressure of a gas, or the path of cathode rays.  Microphysics also yielded technologies that had a profound effect on us all, such as the medical use of X-rays, nuclear power, and television.  By contrast, the fundamental particles discovered in the second half of the century could not be detected except by precision instruments yielding images of “a few unusual lines,” the significance of which is entirely unknowable apart from the body of sophisticated theory brought to bear in interpreting them.[40]  Nor did these microphysical discoveries have technological applications like those of earlier theory.  Then, with developments like string theory, microphysical argumentation came to depend entirely on abstract theoretical considerations without the possibility of empirical confirmation, at least given current technological capabilities.

The phenomena have been “marginalized,” then, insofar as the connection between observational evidence and theoretical entities has become increasingly distant.  Dawid notes other ways in which the phenomena have been marginalized.  The layers of theory that come between experimental evidence and the particles judged to have been detected by way of it have increased in number.  The conceptualization of particles has gotten further away from modeling them on matter of the kind we know from perceptual experience.  And whereas, in the early history of particle physics, experiment came first and theorizing about its results followed, in later history theory would come first and experiment was devoted to testing its predictions.  Now theorizing has reached the point at which it relies largely on non-empirical considerations, with results that currently cannot be tested experimentally.

Dawid himself defends this development as a legitimate and needed transformation of the methods of physics.  But physicists like Hossenfelder, Penrose, Smolin, and Woit regard it as instead a departure from sound scientific method, at least insofar as string theorists maintain their position with a confidence that has no basis in actual empirical evidence.  It seems to me that Thomists, given their general epistemology and the philosophy of science that thinkers like Maritain have developed on its basis, are bound to agree with these skeptics.  Unless and until the claims of string theory and the like can be tested experimentally, they cannot count as knowledge and we have no basis for affirming the reality of the entities they posit.  Insofar as such theories are driven by abstract mathematical considerations rather than experiment, they have the flavor of rationalist metaphysics.  For the Thomist, the existence and nature of God cannot be arrived at a priori, after the fashion of the ontological argument and perfect being theology.  We must instead reason a posteriori from the empirical world to God as its cause.  Similarly, the existence and nature of material particles can be known only a posteriori rather than a priori.  Moreover, just as a posteriori reasoning about the divine nature must at some point hit a ceiling beyond which it cannot ascend, so too is a posteriori reasoning about the microstructure of matter bound to hit a floor below which it cannot dig further.  While it would be rash peremptorily to rule out the possibility that string theorists might yet find a way to test their theory experimentally, it could turn out the theory is to physics what Leibniz’s theory of monads and rationalist theology are to metaphysics.

Beauty and scientific method

A parting word is in order about string theorists’ tendency to try to make up for the absence of empirical evidence by appealing to aesthetic considerations, a tendency bemoaned by Hossenfelder, Penrose, Smolin, and Woit.  To be sure, all of these critics acknowledge that the beauty of a physical theory’s mathematical representation of nature is a criterion valued by many physicists, not just string theorists.  Hossenfelder notes that theories tend to be regarded as beautiful in virtue of properties such as symmetry, simplicity, naturalness, and elegance.[41]  Penrose says that it has to do with coherence.[42]  Smolin connects a theory’s beauty with its ability to unify phenomena and other theories that might at first have seemed unrelated.[43]  Woit too says that physicists relate beauty to unification, but also to simplicity and mystery.[44]

One problem, the critics point out, is that such judgements are subjective.  As Woit notes, though some physicists judge string theory to be elegant and beautiful, others take it to be ugly and complicated.  Another problem, as Hossenfelder, Penrose, and Smolin point out, is that the history of physics shows that physical theories judged to be beautiful have sometimes turned out to be wrong.  Moreover, as Penrose adds, there are theories in mathematics that are very beautiful but have no apparent relevance for physics in the first place.

Why has beauty turned out to be such an unreliable guide, and why do physicists nevertheless persist in being guided by it?  I believe Thomism sheds light on these questions as well.  Beauty is traditionally taken to be among the transcendentals, alongside being, one, true, good, thing, and other.  These concepts are “transcendental” in that they transcend all categories and apply to all reality.  That is to say, any reality of any kind has being, is one, is true, is good, is beautiful, and so on.  In this connection, the transcendentals are also said to be convertible in the sense that being, unity, truth, goodness, beauty and the rest are one and the same thing looked at from different points of view.

Naturally, this doctrine needs exposition and defense, but the point for present purposes is to use it to shed light on the role aesthetic considerations play in physics.[45]  If beauty, being, unity, and truth are all really the same thing looked at from different points of view, then it would make sense to suppose that a theory that unifies phenomena, or that captures fundamental truths about nature, or that describes nature’s very being or reality at the deepest level, would be beautiful.  Hence, given the doctrine of the transcendentals, it is not unreasonable for physicists to put at least some stock in aesthetic considerations as a heuristic.

At the same time, Thomists would emphasize that what is fundamental in the ordo essendi or order of being is not fundamental in the ordo cognoscendi or order of discovery.  For example, God is the most fundamental reality, and is supreme in being, unity, beauty, and so on.  All the same, and contrary to what a rationalist like Descartes supposes, his existence and nature are not among the fundamental pieces of knowledge.  Rather, we have to reason to God’s existence and nature from what we know about metaphysically far less fundamental features of reality, such as the fact that the things we experience undergo change and are contingent.  There are no shortcuts whereby we might deduce God’s existence and nature from the idea of God as the highest reality, after the fashion of the ontological argument.

Similarly, fundamental features of nature such as its basic material constituents and the laws that govern them will no doubt turn out to be beautiful.  But we cannot reliably deduce their existence and nature from considerations of beauty, unity, and the like alone, any more than we could reliably deduce God’s existence or much about his nature from the idea of him as supreme in being, unity, beauty, or the like.  We have to rely instead on reasoning of the kind I have been describing, which relies on analogy and parallels to the triplex via.  Hossenfelder’s book is titled Lost in Math: How Beauty Leads Physics Astray, and what I have been arguing is that her diagnosis is one that dovetails with a Thomist epistemology and philosophy of science.  The emphasis that string theorists and other contemporary physicists put on the beauty of mathematical constructs lacking experimental foundation has led the field to create castles in the air, in a manner that parallels the way in which early modern rationalism led metaphysics to the same fate.

The remedy is to reject the advice of Dawid and others to abandon modern physics’ traditional insistence on experimental verification.  But this will entail rejecting also the hubris of supposing that physics will inevitably hit upon some final “theory of everything.”  As when we approach the divine apex of the hierarchy of reality, so too when we approach the basement, our vision is likely to become less clear rather than more.

 

End notes
