
Matter

Date: 2013
DOI: 10.17421/2037-2329-2013-AS-1

I. What is Matter? - II. Matter as a Philosophical-Theological Concept 1. The Physical Approach 2. The Mathematical Approach 3. The Metaphysical Approach 4. Matter and Spirit: Philosophical-Theological Aspects - III. Scientific Inquiry into the Nature of Matter 1. The Atomic Theory of Matter 2. Matter and Radiation 3. Einstein’s Theories of Relativity 4. Quantum Mechanics 5. The Organization of Matter: Information and Complexity 6. Matter and Mind - IV. Science and Philosophy - V. Matter and Mass, Field and Energy 1. The Tendency towards Substantialization of Mass and Energy in Classical Physics 2. Special Relativity 3. Quantum Mechanics - VI. Vacuums, Matter, and Energy - VII. Matter and the Problem of the Whole and of the Parts 1. Various Positions and Approaches 2. Some Examples Taken from the Sciences - VIII. Matter, Intelligence, and Abstraction.

I. What is Matter?

In common, everyday language, we usually designate as “matter” everything that falls under the direct perception of our external senses: We call “material” that which we can see, touch, smell, taste, and hear. This is an adequate working definition on the macroscopic (i.e., on the human) level. In everyday language, we call material objects “bodies,” especially if they are solid, but in a wider sense this term can also include liquids, gases, and such things that are indirectly observable with measuring instruments. By the term matter, we are indiscriminately referring to a sort of constituent fabric of corporeal bodies, without reference to how this fabric differs in varying types of corporeal objects.

The need to introduce such a terminology arises, at first glance, from the need to distinguish that which causes a sense experience from that which lies at the origin of an experience of a non-sensorial nature (such as the internal experiences of thinking, feeling emotions, remembering, and willing, experiences that appear as fundamentally imponderable and immaterial).

The topic becomes complicated when one considers a more detailed analysis involving phenomena such as light, or areas of research encompassing the microscopic, biological, or psychological worlds. As we shall later see, only a careful examination allows one to gain a better understanding of the characteristics of these “worlds” and to develop a more precise meaning of the word “matter,” both independently from, and in relation to, them.

Historically, two approaches have been adopted to the problem of matter: We can call one approach “philosophical-metaphysical,” while the other approach we would today term “scientific.” Each of these ways of approaching the problem, if carried out correctly, offers us very significant insights for answering the question, “What is matter?” These approaches are mutually complementary in so far as they consider the same object from different points of view: the “quantitative-relational” (scientific) and the “entitative” (philosophical) points of view. I will attempt to examine both as far as I am able.

II. Matter as a Philosophical-Theological Concept

In this section, I will consider the qualitative, or better yet, metaphysical differences between various ways of approaching the subject of matter. I will also indicate those aspects that concern theology more directly and which are treated at length in other relevant works referred to in the text.

1. The Physical Approach. In classical antiquity, around the 6th century B.C., science and philosophy were not yet separate subjects. The rational and demonstrative thinking of science and philosophy had begun to develop beyond the mythical culture, which aimed more at communicating fundamental truths than at analyzing the structure of the cosmos. At this time, Ionian philosophers such as Thales, Anaximenes, and Anaximander (later known as “physicists” because they studied nature [Gr. physis]) posed the problem of the constituent elements of the sensible world (cf. Daumas, 1957). The tendency of the human mind, then as now, was that of reducing the description of the world to a few unifying, constitutive elements. Just as physicists today hold that the quarks of the “standard model” (cf. H. Fritzsch, 1983; Cohen-Tannoudji and Spiro, 1988) are the fundamental components of the universe (although they are ready to change the model if it should prove inadequate, or if someone should find a better theory), so too these ancient inquirers into the physical world explained, in a simpler way, every degree of weight and density, as well as every qualitative property, as a mix of more or less dense concentrations of one of the four elements of earth, water, air, and fire. (Empedocles posited a mixture of precise amounts of these elements.) However naive this description may sound today (and however much it might sound too “qualitative”), it does not, from a philosophical and methodological point of view, substantially differ from the current way of proceeding. In fact, just like today, the ancients looked for constituent elements homogeneous with the thing they were seeking to describe and explain. This method is called “reductionist,” and it is in essence the simplest one that can be adopted.
In order to explain the nature of different corporeal objects, we call to mind a description of them as composed of yet smaller (microscopic) corporeal objects, which cluster together and which are none other than the minimum portions of the elements that can be found in nature, even if in macroscopic amounts. For the ancient inquirers into nature, a particle of “earth” was made of the same “earth” as the ground we step on, just as, for us, a particle is “matter” in the same way as the table on which we lay a book. No one would say that a proton or a quark is not matter. The problem, instead, becomes that of understanding the nature of the matter common to all microscopic and macroscopic objects, whether it is a primary and irreducible constituent, or if, in turn, it is an effect of something else.

It is not by chance that these elementary constituents are sometimes called the “building blocks” of which the universe is made. And the building blocks of a house are made of the same matter as the house as a whole. As proof of the substantial continuity in how this problem is posed, there is a certain kinship that scientists of today feel for a thinker such as Democritus (460-370 B.C.), who came up with the first atomic theory of matter.

2. The Mathematical Approach. The position of Pythagoras (570-490 B.C.) and his followers is particularly interesting, even from the modern point of view, because it places mathematics at the foundation of any explanation of nature (cf. Daumas, 1957). According to this view, there are “points” in place of matter, a view that brings us back to a geometrical description of physical space. We might be led to think of the “material points” of modern rational mechanics, but the Pythagoreans were less concerned with describing the ponderable aspect of nature than grasping its order, harmony, and musicality through numerical ratios. In this sense, they went from a “materialistic” to an “abstract” or “ideal” description of the cosmos. Furthermore, once the Pythagoreans discovered the correspondence between points of a line and numbers, the description became at once geometric and arithmetic, or as is often said, “arithmo-geometric.” The discovery of “irrational” numbers, however, provoked a crisis that was not fully overcome until many centuries later, and this mathematization, on which the entire way of life and thought of the Pythagoreans was based, fell into a lengthy period of stagnation.

3. The Metaphysical Approach. At this point, the time was ripe for a shift from the physical and/or mathematical approach to the metaphysical approach. The problem of understanding reality no longer involved the question, “What are the constituent elements,” but rather, “How is change possible?” That is to say, the problem of understanding reality involved the question of becoming. We experience, at the same time, change and identity in things. Philosophical inquiry shifts its focus from the investigation of the constituents (the “building blocks”) of the universe to the “principles” that explain its existence and change. These principles are not reducible to components that are corporeal and hence observable because they are of an entirely different nature from that of corporeal bodies. Yet, they must be hypothesized for logical reasons in order to explain the behavior of things, particularly of corporeal bodies. Further, each of these principles is indispensable for understanding reality since one runs into contradictions, or finds it no longer possible to go past a certain degree of knowledge, if such principles are ignored.

Every corporeal body—and this is particularly evident in living bodies—partly changes during its existence and partly remains the same and maintains its identity. If there were only one principle behind being, if there were only “building blocks” (i.e., matter), a corporeal body would not remain the same, should these “building blocks” be replaced by others. Thus, one would no longer be able to say that a human, or a living being, is always the same living being during the course of his life, once the particles that comprise him are replaced. Another principle is therefore necessary in addition to the principle of matter that guarantees identity and permanence within changes of constituent matter. Aristotle (384-322 B.C.) called this immaterial principle “substantial form,” which makes an entity be and remain what it is for its entire existence. Our conception of information is most likely closest to the Aristotelian concept of form.

We find ourselves face-to-face with a description of corporeal bodies as a synthesis (Gr. synolon), that is, as the result of two constitutive principles (co-principles, in so far as they operate together) that are not themselves bodies but are of an entirely different nature. They are neither “observable” nor homogeneous with corporeal objects but make the existence and change of corporeal objects possible. They are “matter,” which is the common ground of corporeality, and “form,” which puts the necessary information into matter so that it becomes this particular object with its particular properties. This is the basis of the “hylomorphic” theory. At this point, some clarifications are necessary. Up until now, I have used the word “matter” to indicate something that is of the same nature, of the same “stuff,” as bodies, whereas in Aristotle matter appears as a “principle” of a different nature, a pure potentiality to receive the active and informative principle, which is “form.” It is therefore necessary to distinguish between two types of matter: There is “prime matter,” which is a “principle” (the pure potentiality to receive forms), and “secondary matter,” which is matter already actuated by a form, is of the same nature as observable corporeal objects, and is the fabric of which they are made. This “secondary matter” is none other than what today we call simply “matter” in both everyday and scientific language. It is homogeneous with corporeal bodies and is a “thing” (Lat. ens quod), whereas “prime matter” (just like “form”) is not a “thing” but a principle “through which” (Lat. ens quo) things are as they are.

This kind of metaphysical inquiry into the nature of the constituents of the corporeal world requires a conception of entity according to several different modes, such as an ens quod, or an ens quo, rather than according to a single, homogenous (univocal) mode, such as that of the being of Parmenides (which is always identical to itself and free of change), or that of the pure being of Heraclitus (which is free of permanent identity). Instead, such a conception requires a gradation of modes of being an entity, which includes “potential” principles such as prime matter, “active” principles such as form, and “things” already actuated in different degrees.

4. Matter and Spirit: Philosophical-Theological Aspects. Philosophy, unlike physics and the natural sciences, has throughout its history involved not only the study of the sensible world but also an analysis of the interior experience of man as characterized fundamentally by his intelligence and will. This analysis has led, in addition to the concepts of primary and secondary matter, to a completely immaterial principle, often known as spirit or soul. Aristotle used the term “soul” to indicate the substantial form of living beings, distinguishing in it the vegetative, sensitive, and rational faculties, the former two being faculties shared with the other animals, the latter unique to human persons.

The term “spirit” was later used for the most part in a generic sense, whereas the term “soul,” used more and more frequently to indicate the human soul, denoted the spiritual principle that a rational individual, such as a human person, is endowed with. The term “Spirit” is also used in philosophy and theology to indicate the nature of higher and completely immaterial beings such as Angels and God. We refer the reader to other related entries for a more complete treatment of these subjects and of their relationship with the subject of matter.

In the history of human cultures, in relation to religious thought matter has often been considered an element related to corruption, degradation, and evil because it was seen in opposition to the spirit and to immaterial realities in general. Plato’s philosophy was not foreign to this vision: The body, for example, is viewed as the “prison” of the soul. One of Christianity’s original contributions, following upon Judaism, has been to consider the intrinsic goodness of matter. In Christianity, the dialectic of good and evil shifts its focus from the paradigm of spirit/matter, which is, in a certain sense, extraneous to the moral dimension, to the human heart, that is, to a person’s interior life. In this regard, the reflections of the Fathers of the Church (Irenaeus, Tertullian, and Augustine), who opposed Manichaeism and dualist doctrines in general, are well known. Matter and corporeality are good because they are created, as are spiritual realities, by one God. The theological significance of matter, and its being ordered towards God, is then reflected in the Church’s very work of sanctification. Indeed, she entrusts to the “matter” of the sacraments the function of signifying in an efficacious manner the order of grace, as, for example, water does in the sacrament of baptism, and even of actualizing it, as happens in the transubstantiation of the bread and wine into the flesh and blood of Jesus Christ in the sacrament of the Eucharist.

From a philosophical-theological perspective, matter is at times reduced to the idea of materialism, from which it must be properly distinguished. Fusing the attributes of the spirit into those of matter, or, conversely, the spiritualization of matter, can lead to various forms of pantheism. Christian teaching, in this regard, exhorts one not to view the entire world as only matter, and to dispose oneself to recognize the works of the Spirit. These works, even though they are realized through visible and sensible matter, transcend matter in their origin.

III. Scientific Inquiry into the Nature of Matter

Modern science, which is based on Galileo’s method, abandoned the metaphysical approach in order to resume both the physical approach of the Ionian philosophers and the mathematical approach of the Pythagoreans, restating and in a certain sense unifying the two. The intent of this section is not so much to give a complete description of the different scientific theories of matter as to put into relief the changes of the concept of matter which the passage from one paradigm to the other has entailed (for the by now classic concept of “paradigm,” cf. Kuhn, 1966).

1. The Atomic Theory of Matter. The success of Galilean and Newtonian mechanics seemed naturally to suggest a mechanical description (mechanism) of all of corporeal reality. In this viewpoint, the simplest unifying scheme capable of accounting for the different densities of corporeal bodies, from solids to liquids and gases, was the atomism of Democritus. After Dalton (1766-1844) provided the first experimental evidence for the atomic theory, atomism gained enough scientific merit to be placed side-by-side with the already well-established Newtonian mechanics. Thus, while the atomic theory gave a description of the “structure of matter,” on the basis of which all of chemistry was developed, Newtonian mechanics was the tool with which “dynamics” (that is, a system’s evolution in time) was described. On the basis of the latter, the kinetic theory of gases, and more generally, statistical mechanics, was developed, which provided the first microscopic mechanical model explaining the macroscopic theory of thermodynamics. The development of classical physics can therefore be examined from two points of view: from the point of view of the “structure” of matter, which I address in this entry, and from the point of view of its “dynamics.”

2. Matter and Radiation. Subsequently (beginning in the 19th century), classical physics was faced with other phenomena to describe, such as light, electricity, and magnetism. What is the physical nature of light? Is it made up of corpuscles of matter, which would have to be so small as to appear practically immaterial to the observer? With his corpuscular theory, Newton (1642-1727) proposed this material model of light, but it did not completely match experience (experiments measuring the speed of light, for example, made it clear that light propagates in a refractive medium with a velocity of c/n, where c is the velocity of light in a vacuum, approximately 3 × 10⁸ m/s, and n is the index of refraction of the medium, rather than the velocity c × n required by the Newtonian theory). With his wave theory of light, Huygens (1629-1695) explained the phenomenon of light as a mechanical periodic vibration that propagates in a nearly imponderable “ether,” and he predicted (in addition to the correct speed of propagation in refractive media) the phenomenon of interference, later observed experimentally by Young in 1801. The equations of Maxwell (1831-1879), which govern electromagnetic phenomena, allowed the nature of light to be interpreted as a wave phenomenon, but of an electromagnetic, rather than a mechanical, nature. If, therefore, the nature of light is reduced to that of an electromagnetic wave, the problem shifts from mechanics to the nature of electricity and magnetism, two distinct phenomena that were unified by Maxwell.
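The discriminating prediction just mentioned can be made concrete with a short numerical sketch (the code and the numerical values are illustrative additions, not part of the original text): the wave theory predicts a speed c/n in a refractive medium, the corpuscular theory a speed c × n.

```python
# Speed of light in a refractive medium: wave theory predicts c/n,
# Newton's corpuscular theory required c*n. Values are standard
# approximate constants; the comparison, not the code, is the point.

c = 3.0e8        # speed of light in vacuum, m/s (approximate)
n_water = 1.33   # index of refraction of water

v_wave = c / n_water          # Huygens' prediction, ~2.26e8 m/s
v_corpuscular = c * n_water   # Newtonian prediction, ~3.99e8 m/s

print(f"wave theory:        {v_wave:.3e} m/s")
print(f"corpuscular theory: {v_corpuscular:.3e} m/s")
# Foucault's 1850 measurement found light travelling in water at a
# speed lower than c, deciding the question in favor of the wave theory.
```
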

With electromagnetism, the concept of “field” arose, as a vehicle that transports energy in a form not conceptually reducible to the kinetic energy of particle mechanics, even though the two are convertible. The concept of radiation was the first to be placed alongside, and later in opposition to, that of matter, and thus even the concept of energy associated with radiation became viewed in opposition to that of matter. Energy began to be spoken of no longer as a property “of something,” as an attribute of the field which transports it, but “as something,” as if it were an autonomous entity like matter, and of a nature in a certain sense different from the latter. This conception of energy is also favored by the fact that it is subject to a conservation law like that of mass: If “nothing is created and nothing is destroyed,” as Lavoisier (1743-1794) posited for mass-matter, this is also true for energy, which is conserved even if it transforms from one form to another. How do matter and energy differ according to 19th century classical physics? They differ on account of two easily identifiable characteristics. The first is the fact that matter possesses “mass,” whereas energy does not. In fact, it is this property that allows one to define matter itself, interpreting mass as a “quantity of matter.” Matter is that which has mass, whereas energy can subsist independently of matter in the form of an electromagnetic field that has no mass, in addition to being able to be transported by masses in the form of kinetic energy. In the second place, matter appears in discrete form, like atoms and particles (e.g., ions, electrons), whereas energy appears as a “continuum,” whether it is associated with the motion of a particle (kinetic energy) or takes the form of radiation.

3. Einstein’s Theories of Relativity. With the theory of relativity of Albert Einstein (1879-1955), namely his theory of “special relativity” (1905), the famous equivalence of mass and energy was established, quantified by the formula E = mc², and thus the first of the two properties stated above, which distinguished mass from energy as it was then understood, began to break down. On the one hand, the “mass” of a particle at rest appears itself as a “concentrated” form of energy (rest energy). On the other hand, radiating energy reveals its material character in that it acquires inertial and gravitational properties corresponding to the mass E/c² associated with it. At the same time, Einstein’s discrediting of Lorentz’s ether as unobservable, and its replacement with the “vacuum,” gave energy a character of yet starker self-sufficiency: The energy of radiation no longer needs a support, that is, a vehicle that transports it (the substantialization of energy).
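The two directions of the equivalence can be illustrated with a worked example (an illustrative sketch added here; the quantities chosen are arbitrary): a small rest mass corresponds to an enormous energy, and conversely a given quantity of radiated energy carries a tiny equivalent mass E/c².

```python
# Mass-energy equivalence E = m c^2, worked in both directions.

c = 3.0e8       # speed of light, m/s (approximate)

# One gram of matter, expressed as rest energy:
m = 1.0e-3      # kg
E = m * c**2    # joules, ~9e13 J (roughly the energy of a large bomb)
print(f"rest energy of 1 g: {E:.1e} J")

# One joule of radiation, expressed as an equivalent mass E/c^2:
E_rad = 1.0                 # J
m_equiv = E_rad / c**2      # kg, ~1.1e-17 kg
print(f"mass of 1 J of radiation: {m_equiv:.1e} kg")
```
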

The theory of “general relativity” (1916) leads to another interesting step in our inquiry into the nature of matter. It associates the “metrical” (curvature) properties of space-time—already unified by the geometrical space-time representation of special relativity, developed by Minkowski—with the distribution of mass-energy present in space-time itself, in the form of matter and non-gravitational fields. The absolute space and time of Newton, understood as an empty, pre-formed container in which matter is later placed, are replaced by a space-time whose metrical properties are defined by the presence of matter itself. With special relativity, space and time are no longer described as two independent entities but as a single four-dimensional geometric structure (of which three dimensions are space-like and one is time-like). With general relativity, space-time is curved near masses and is no longer described by means of Euclidean geometry but rather with the geometry of Riemann (1826-1866), in such a way that the inertial trajectories (geodesics) of heavenly bodies moving within it are the same as the trajectories they would follow in a flat space-time in which gravity is present. In this way, curvature replaces gravity and describes its effects.

4. Quantum Mechanics. Quantum mechanics makes further steps towards unification (even though it brings with it many problems that need to be clarified, related to the paradoxes it gives rise to [cf., for example, Selleri, 1987]). On the one hand, the non-relativistic formulation of quantum mechanics, with the equation proposed in 1926 by Schrödinger (1887-1961), attributes wave-like properties even to matter, following the hypothesis advanced in 1924 by De Broglie (1892-1987). On the other hand, the relativistic formulation of quantum mechanics introduces, with the concept of the “photon,” the discretization of the energy spectrum of the electromagnetic field (quantum electrodynamics)—already hypothesized by Einstein in his famous interpretation of the photoelectric effect (1905), which earned him the Nobel Prize—and of fields in general (quantum field theory).

In this picture, the matter of wave-particles and the energy of wave-photons appear conceptually indistinguishable. However, quantum mechanics introduced a criterion of distinction that was both new and old: New, on account of its mathematical formulation, and old, due to its philosophical content. From the mathematical point of view, the criterion is given by the different statistics the wave-particles obey. Some of these (“fermions,” particles of half-integer spin), which obey Fermi-Dirac statistics—unlike the others (“bosons,” particles with integer spin), which obey Bose-Einstein statistics—are subject to the Pauli “exclusion principle,” which forbids two identical particles from occupying the same quantum state (that is, from having identical quantum numbers). This fact is interpreted as the impossibility of two fermions overlapping. It is recognized, philosophically speaking, as the property characteristic of matter, while bosons are not subject to this constraint and behave like radiation. Fermions, in fact, are the particles that make up matter (protons, neutrons, electrons, etc.), whereas bosons are the field particles that transport the energy of interaction (photons, gluons, W and Z0 particles, and also gravitons, whose existence is not yet experimentally confirmed).
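The exclusion principle can be seen at work in a toy calculation (an illustrative sketch added here; the single-particle “wavefunctions” are hypothetical stand-ins, not physical states): a two-fermion state is the antisymmetric combination of single-particle states, and it vanishes identically when the two states coincide, whereas the symmetric (bosonic) combination does not.

```python
# Pauli exclusion in miniature: psi(1,2) = a(x1)b(x2) + sign*b(x1)a(x2),
# with sign = -1 for fermions (antisymmetric) and +1 for bosons (symmetric).
# If a and b are the same state, the fermionic combination is zero
# everywhere: two identical fermions cannot occupy the same state.

def two_particle(a, b, sign, x1, x2):
    """Exchange-(anti)symmetrized product of single-particle states a, b."""
    return a(x1) * b(x2) + sign * b(x1) * a(x2)

# Two toy single-particle states (hypothetical, for illustration only)
phi = lambda x: x
chi = lambda x: x**2

# Distinct states: the fermionic state is nonzero...
print(two_particle(phi, chi, -1, 1.0, 2.0))   # -> 2.0
# ...but identical states give zero for any arguments:
print(two_particle(phi, phi, -1, 1.0, 2.0))   # -> 0.0
# The bosonic combination of identical states survives:
print(two_particle(phi, phi, +1, 1.0, 2.0))   # -> 4.0
```
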

It is worth noting that one of the most important consequences of relativistic quantum mechanics was the prediction of the existence of “antiparticles”—what has been termed “antimatter”—about which much was speculated. This prediction was the work of Paul Dirac (1902-1984) who discovered, in addition to the solution to his famous equation that corresponded to the electron, then experimentally well-known, another solution identical to that of the electron (same mass and spin) except for a change of sign in the time variable t. At first, this solution was interpreted as an electron that traveled backwards in time. This interpretation, however, turned out to be non-physical. In fact, scientists realized that, alternatively, one could interpret the same solution as a particle identical to the electron, which traveled forward in time, but which had the opposite electrical charge. This positive electron, or positron, was discovered experimentally by Anderson in 1932. Later, antiparticles corresponding to all known particles were discovered, even for the electrically neutral particles, which were, however, described by other quantum numbers of opposite sign and which were capable of “annihilating themselves” with the corresponding particles and giving off energy in the form of radiation. The problem remained of understanding why our universe is made up almost exclusively of matter instead of antimatter. This problem of “symmetry breaking” is probably one of the most researched problems of particle theory and cosmology in the past few decades. A great challenge of today’s physics, which proved itself able to unify cosmology and particle physics, was represented by the experiment set up at the LHC at CERN with the precise aim of verifying whether the Higgs boson (responsible, according to the standard model, for the rest mass of heavy particles) really exists.
Physicists considered this a crucial experiment for testing the validity of the standard model. The results (or better, of several experiments performed with different kinds of tests and detectors) were presented officially in July 2012 with the announcement of the detection of a scalar boson resembling the Higgs particle. Later results confirmed that the detected particle “is looking more and more like a Higgs boson,” as CERN stated.

5. The Organization of Matter: Information and Complexity. The study of matter in living organisms is the subject of biology. Nevertheless, the overlap with chemistry and physics has always been significant, mainly because the adoption of a methodological reductionism required that all the natural sciences be reduced to physics, the Galilean science par excellence. For this reason, the link between biology and physics was expected to be provided by organic chemistry, and the great experimental and theoretical discoveries of molecular biology, such as the genetic code of DNA and the double-helix model of Watson and Crick (1953), were in this sense a confirmation of great importance. The mapping of the human genome (1990-2003) represented further progress in this same direction, even though it also opened up new and unexpected questions related to complexity and the consequent crisis of reductionism.

Recently, with physicists and mathematicians resuming a systematic study of non-linear systems (a field of study begun by Poincaré and later abandoned for many decades after his death), along with the birth of the science of complexity, which gradually involved all sciences in its problematic, the reductionist process came to a halt and the relationship between physics and biology changed radically. In a certain sense, one can say that today it is the biologist who proposes an epistemological model to the physicist and not vice versa.

The fact that, for a non-linear differential equation, the sum of two or more solutions is not in general itself a solution forms the mathematical basis of the crisis of reductionism: it does not allow a solution describing a complex structure (the “whole”) to be decomposed into simpler solutions describing its parts viewed as isolated from each other. This elementary, non-reductionist characteristic of non-linear physical systems finds its counterpart in practically all sciences (see below, VII). Other aspects of complexity instead concern the dynamics of systems that, because of their non-linearity, behave “unpredictably,” and, if they are dissipative, can be shown to be capable of “self-organization” due to the fact that they are open systems that interact with the external world, with which they exchange matter, energy, and entropy (cf. Nicolis and Prigogine, 1989).
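The failure of superposition can be verified directly on the simplest non-linear example (an illustrative sketch added here; the equation u′ = u² and its solutions are standard textbook material, not taken from the original text): each function u(t) = 1/(C − t) solves the equation, but the sum of two such solutions does not.

```python
# Superposition fails for the non-linear equation u'(t) = u(t)**2.
# Each u(t) = 1/(C - t) is an exact solution; we check the residual
# u' - u**2 numerically: it vanishes for each solution separately
# but not for their sum.

def u(C, t):
    return 1.0 / (C - t)

def residual(f, t, h=1e-6):
    """Central-difference estimate of f'(t) - f(t)**2."""
    deriv = (f(t + h) - f(t - h)) / (2 * h)
    return deriv - f(t)**2

u1 = lambda t: u(2.0, t)   # solution with C = 2
u2 = lambda t: u(5.0, t)   # solution with C = 5
s  = lambda t: u1(t) + u2(t)

print(abs(residual(u1, 0.0)))  # ~0: u1 solves the equation
print(abs(residual(u2, 0.0)))  # ~0: u2 solves the equation
print(abs(residual(s, 0.0)))   # 0.2: the sum does not
```
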

A decisive role seems to be played by information which, coming into play on different levels of the organization of matter, determines at each level characteristics that differ qualitatively and not only quantitatively, and which are therefore irreducible to one another.

6. Matter and Mind. Another scientific problem that involves living matter, and which has developed considerably in recent times, is that of the mind-body relationship. Here one deals with an inquiry that directly concerns the sciences such as biology, physiology, and psychology, along with philosophy and theology, in an interdisciplinary context that goes by the now common name “cognitive sciences.” Parallel with the mind-body relationship is the field of artificial intelligence, which involves, rather than the sciences of living matter, computer science and information theory.

The cognitive sciences deal with how intelligent knowledge is formed in our mind in its relationship with the brain (and, more generally, with the body), and with how this can be at least partially reproduced in a computer. It is clear that the scientific problems related to this kind of research raise, unavoidably, philosophical questions that have theological implications of great importance. We will indicate two of these that seem to be among the most relevant: a) Is it possible for a corporeal brain (or a computer) to form universal abstract concepts only from its material resources and therefore think like a human being? Or, is it necessary to require the intervention of a non-material function, like that performed by a spiritual soul? b) Is it possible for a corporeal brain (or a computer), with its material resources alone, to be conscious of its activities and therefore to possess a self-consciousness like a human being’s? Or, is it necessary to have the intervention of a non-material function performed by a spiritual soul?

The two preceding questions are the subject of scientific and meta-scientific discussion between physicists, mathematicians, engineers, and computer scientists, not to mention philosophers and theologians. From the point of view of the philosopher, these questions directly involve the classical problems of “abstraction” and “reflection,” functions that the human mind habitually performs (see below, VIII).

IV. Science and Philosophy

I will now delve into some philosophical questions related to scientific theories, such as those that have surfaced in the preceding section. I will make more precise, among other things, the meanings of terms and look out for frequent misunderstandings that arise from the improper use of terminology, which easily occurs when one goes from the scientific domain to the philosophical domain and vice versa.

The first observation concerns the scientific method. The 20th century witnessed a particularly significant step in the understanding of the scientific method, which has had notable repercussions in the way matter is conceived. This step involved a shift from a fundamentally positivist attitude to an attitude that revised the foundations of scientific theories. This change of position was in part the result of a free decision and was in part dictated in a certain sense by the very evolution of scientific research.

An example of the first type, in which a change of methodological attitude was the fruit of careful reflection and a free decision, is offered by Albert Einstein. The Einstein of special relativity—an “operationist,” in the sense of Bridgman’s operationism—defined quantities through the operations corresponding to the experimental procedures used to measure them. The initial hypotheses with which Einstein constructed the special theory of relativity are none other than codifications, in terms of laws, of what results from experience. The Michelson-Morley experiment (1887) showed that the translational motion of the earth with respect to the ether implied no modification of the laws of electromagnetism; therefore: a) the principle of Galilean relativity is valid not only for mechanical phenomena but also for electromagnetic phenomena; b) the speed of light is invariant under uniform translations of the observer’s reference frame. The reason why Lorentz (1853-1928), who had also deduced the correct transformations, did not succeed in arriving at a complete theory of relativity lies in the fact that he unwittingly added to these two principles elements not derived from experiment, such as the mechanical explanation of the contraction of rods during their motion.
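The two postulates fix the mathematical form of the theory. As a standard sketch (for two inertial frames in uniform relative motion with velocity v along the x-axis), the Lorentz transformations that follow from them read:

```latex
% Lorentz transformation between two inertial frames
% in uniform relative motion v along the x-axis
x' = \gamma\,(x - v\,t), \qquad
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Unlike the Galilean transformations, these leave the speed of light c unchanged for every uniformly translating observer; Lorentz had already written them down, but without renouncing a mechanical substratum for them.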

General relativity was discovered, instead, not from pressing experimental problems, that is, not because Newton’s gravitational theory did not correspond to experience (it is not by chance that the experimental verification of general relativity required extremely precise measurements), but from the need to revise the foundations of Newtonian mechanics, a revision that remained incomplete even with special relativity. What seemed unsatisfying was the fact that the laws of Newtonian mechanics were not completely independent from the choice of the observer, as is the case with the laws of electromagnetism, but were related to inertial frames of reference. How could one make all frames of reference equivalent? Making them equivalent would have meant making them all, in an appropriately generalized sense, “inertial.” The mathematical solution was found in the idea of the curvature of space-time described by Riemannian geometry, which made possible inertial motion along geodesic trajectories that are not straight in the Euclidean sense.
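The generalized “inertial” motion mentioned here can be recalled in standard notation: A freely falling body follows a geodesic of the curved space-time metric,

```latex
% geodesic equation: "inertial" motion in curved space-time;
% the Gamma are the Christoffel symbols of the Riemannian metric
\frac{d^{2}x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0
```

When the curvature vanishes, the Christoffel symbols can be made zero everywhere and the equation reduces to uniform straight-line motion, recovering the inertial trajectories of Newtonian mechanics as a special case.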

Even Werner Heisenberg (1901-1976), at the beginning of his “matrix mechanics,” adopted the operationist method: In his theory, only observable quantities were supposed to appear. This is an undoubtedly sound criterion, which is, however, incapable of being absolutized, in that some non-observable variables are sometimes required for the logical consistency of a theory. In Heisenberg’s mechanics, these variables are the eigenvectors of the orthonormal basis of the functional space ℓ², which correspond to the initial conditions of Schrödinger’s eigenfunctions. In giving up the absolute criterion of the exclusive use of observable quantities, Heisenberg was led by the very structure of the theory rather than by an epistemological reflection.
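The non-observable mathematical core of matrix mechanics can be recalled in its central relation, the canonical commutation rule between position and momentum, written here in standard notation:

```latex
% canonical commutation relation of matrix mechanics
[\hat{q},\hat{p}] \;=\; \hat{q}\hat{p} - \hat{p}\hat{q} \;=\; i\hbar\,\hat{I}
```

No pair of finite matrices can satisfy this relation (the trace of the left-hand side vanishes, while that of the right-hand side does not), which is why the theory requires operators on an infinite-dimensional functional space.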

The preceding considerations ultimately refer to the question of the metaphysical foundations of scientific theories. Every scientific theory, with its mathematical formalism, establishes “relations” (equations, laws of nature) that connect different “quantities” with each other: Relations and quantities are none other than “properties” of the physical objects one wishes to describe. The fact that a physical object has certain properties instead of others is sufficient ground for excluding a determined way of conceiving of the object as a whole. And this is so because “quantities” and “relations” are not only objects of the sciences but also of metaphysics, which considers them in so far as they are entities, particularly in so far as they are “properties” (accidents) of other entities (substances). Thus, we can say that a scientific theory can be, more or less, in agreement with a certain “metaphysics,” while excluding others. The elements of metaphysics (meta-science) with which a scientific theory is most closely linked are jointly: a) the framework of philosophical foundations (logical or ontological) that it implicitly assumes; and b) the philosophical background against which what is usually called the “interpretation” of the theory is conceived.

In the following sections, I will examine certain assumed metaphysical aspects that are useful for the interpretation of scientific theories of matter, which I have referred to in the preceding section.

V. Matter and Mass, Field and Energy

1. The Tendency towards Substantialization of Mass and Energy in Classical Physics. In the mechanistic interpretation of classical mechanics, there is frequently a confusion (from the philosophical point of view) of “substance” with “accident,” that is, of physical objects with their properties. From the philosophical point of view, for instance, matter is a “substance” in so far as it is capable of subsisting by itself. Mass and energy, on the other hand, are not “things”; they are not themselves substances but rather are properties of matter, that is to say, they are “accidents.” With the advent of the field concept and its interpretation as something real and not just mathematical, the tendency arose in classical physics to identify the energy carried by the electromagnetic field with the field itself, that is, to treat field energy as a substance rather than as a mere property of the field. This could be legitimate, if one wishes to call radiation “electromagnetic energy,” but it is then necessary to clarify what one means by the term “energy”: energy in so far as it is “a property of the field,” or the field itself. An ambiguous terminology is always risky, especially if one is interested in doing science. Moreover, even before the substantialization of the concept of energy, in classical physics there existed the substantialization of the concept of mass, which was often considered synonymous with the “quantity of matter.” Quantity is that which is measurable in a substance; it is observable par excellence and easily identified with the object itself, with the substance itself. In this way, we have mass-matter, on the one hand, and energy-radiation, on the other. Energy has a dual aspect: It is treated as “accident” in so far as it is the kinetic energy of material masses, and as “substance” when it is in the form of radiation. Conversely, mass exists only in the form of matter because, in classical physics, radiation is massless.

Carried to their extremes, these processes of ontologizing interpretation, of mass-matter on the one hand and of energy-radiation on the other, have led to a two-fold reductionism: materialism and energetism. All of this has a historical motive.

I will begin with a few considerations on materialism. As R. Masi rightly observed years ago in his classical study on the structure of matter: “The concept of form at the basis of the hylomorphic theory and all of Aristotelian physics was misunderstood by the Scholastics of the decadent period: Form, which in the true thought of Aristotle and Thomas Aquinas is an incomplete and partial reality, an ‘ens quo,’ was instead described as a complete substance, an ‘ens quod,’ leading to a host of contradictions” (Masi, 1957, p. 85). The nominalist thought of the mediaeval Oxford school (13th century) completely stripped away the notion of analogy of meaning, rendering univocal the search for principles on which to base the understanding of the universe. Due to this, the method of research was led back to where the Ionian philosophers had left off, even though the instruments of observation and mathematical tools were clearly at a much more advanced stage. For this reason, once the univocalized, and no longer genuinely Aristotelian, notion of form was rejected, the new “natural philosophers,” as they were then called, had no other alternative than to adopt, as an interpretative principle of the physical universe, “matter” understood in a simply univocal manner. Consequently, Newtonian physics could not be anything but “materialist” as far as the structural description of the cosmos was concerned, “mechanistic” as regarded the dynamical and causal explanation of its becoming, and “reductionist” in its approach to the relationship between the whole and the parts. A thus misunderstood Aristotelian and Thomistic thought could not but become the principal enemy to be fought, from the viewpoint of a rigorous and certain science, which could only be mathematical and experimental.
“Faced with the obscurity of Aristotelian forms, mechanism represented a clarity without equal: All natural phenomena were conceived as combinations of material particles, bound together and in relative motion. The universe became a big machine, which could be broken down into smaller ones” (Ibid., p. 86). With the development of thermodynamics, the concept of energy acquired notable importance, parallel to that of matter, but the reduction of thermodynamics to mechanics brought about by the kinetic theory reaffirmed the primacy of matter and motion.

The true alternative to the materialism of Newtonian mechanics is related to Maxwell’s electromagnetism: “The concept of field was developed without using the concept of particle; […] Maxwell’s field is not made of particles, while being real” (Ibid., p. 91). The fact of substantializing field energy, which leads to “energetism,” entails the misunderstandings and conceptual errors that I spoke of above. In addition, after a certain point there arose the tendency, in the field of classical mechanics, to reverse the direction of reductionism. Instead of explaining everything in terms of matter and particle motion, a new reductionism arose that tended to view energy, rather than matter, as a founding principle to which even the notion of matter could be reduced, conceived as a condensed form of energy. This gave rise to energetism, whose first proponent was the chemist W. Ostwald (1895). The distinctive character of energetism was the abandonment of the matter-energy dualism that had reigned supreme up until then. Energy became the most general concept. Not only did matter have to endure energy’s primacy, but it also had to yield its place to it unconditionally (cf. Masi, 1957).

These misunderstandings were engendered by a two-fold conceptual error. The first consists in conceiving of the electromagnetic field as something that is not a “material substance.” The second consists in attributing a “substantial” character to energy, in place of the substantiality denied to the field.

2. Special Relativity. With its equivalence of mass and energy, special relativity restored the symmetry. Not only matter, but radiation (the electromagnetic field) as well, is endowed with a “mass,” which is revealed by its inertial and gravitational properties (the deflection of light rays in a gravitational field). Often people speak of converting matter into energy, and vice versa, in nuclear processes. If by this it is meant that a “substance” (a part or all of the matter of some particles) has become an “accident” (some amount of energy), then this is an incorrect use of philosophical terms. A property (accident) like energy can exist only as a property of something, and matter (substance) can convert itself only into another substance (substantial mutation), not into an accident (that is, not into something without a supporting subject, for the question would remain: of what is it the energy?). Rather, it is correct to say that a substantial mutation has taken place during which some particles released a part or all of their “rest mass,” which was acquired by the reaction products (particles and/or radiation) as kinetic and electromagnetic energy.
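The equivalence invoked here is summarized by the standard relativistic relation between the energy E, the momentum p, and the rest mass m₀ of a body:

```latex
E^{2} = (p\,c)^{2} + \left(m_{0}\,c^{2}\right)^{2},
\qquad E = m_{0}\,c^{2} \ \text{for } p = 0
```

In a nuclear reaction the total mass-energy is conserved; what changes is its distribution between the rest mass of the products and their kinetic and electromagnetic energy, which is precisely a redistribution among properties of substances, not a conversion of substance into accident.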

3. Quantum Mechanics. If special relativity has unified the two properties (accidents) of mass and energy, quantum mechanics, in its relativistic version called “quantum field theory,” tends toward the unity of matter and radiation, in that it presents us with a set of wave-particles in which the distinction between what was classically denoted as “matter” and “energy” becomes drastically more subtle. Matter and radiation (in the broad sense of a field of interaction: gravitational, electromagnetic, strong and weak, which one seeks to unify) no longer constitute two opposed entities, but rather two ways of actuating, or two “species,” of the same reality, endowed with mass-energy, which is in a certain sense its “genus.” From the point of view of the philosophical tradition, it would seem natural to call this single genus “matter,” meaning that it can actuate itself in the two species that obey the two quantum statistics: fermions, endowed with half-integer spin, which represent matter in the classical sense of the word, and bosons, of integer spin, which constitute the field of interaction. From the contemporary physical point of view, it is more common to denote this “genus” as a “field,” which actuates itself in two “species” of fermionic and bosonic fields.
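The two quantum statistics mentioned here correspond, in standard notation, to the behavior of many-particle states under the exchange of two identical particles:

```latex
% symmetry of a two-particle wave function under exchange
\psi(x_{2},x_{1}) = -\,\psi(x_{1},x_{2}) \quad \text{fermions (half-integer spin)}\\
\psi(x_{2},x_{1}) = +\,\psi(x_{1},x_{2}) \quad \text{bosons (integer spin)}
```

The antisymmetry of fermionic states yields the exclusion principle, and with it the impenetrability and stability we classically associate with “matter”; bosonic states, by contrast, can be superposed without limit, as befits fields of interaction.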

VI. Vacuums, Matter, and Energy

At this point in the discussion yet another old problem appears, that of the “vacuum” (cf. A. Strumia, Il problema della creazione e le cosmologie scientifiche, 1992). What is a vacuum? Can it exist? A more precise use of terminology can save us from certain misunderstandings that have more than once led many illustrious persons astray. From the metaphysical point of view a vacuum, in the absolute sense, is “the vacuum of entity,” and as such can be identified with “nothingness” (“non-entity,” “no-thing”), a concept coined in order to identify things that do not exist. Metaphysically, the vacuum does not exist by definition, because that which exists, by the very fact that it exists, is an entity. A vacuum, understood in the absolute sense, is therefore an absolute and total negation of being. A vacuum in a relative sense, not as an absolute negation but only a relative negation, is the “privation” of something in a certain subject, rather than the total negation of the subject. In scientific language, we say “vacuum” in the privative sense of an “absence of matter”: In this case, however, the step from this relative meaning to the absolute meaning is not legitimate in order to draw conclusions of a philosophical and theological character, which do not follow logically.

According to classical physics, in the area of pure mechanics a vacuum is a region of space in which matter is absent (a vacuum of matter): Where atoms and particles are not present, there is a vacuum. The planetary model of Rutherford’s atom confirms the fact that empty space is prevalent in the physical world. Where there is no matter, classical physics admits, however, that there can be space, as a pure empty extension and not, therefore, as nothingness. Space assumes its own identity: It becomes a kind of substance, can exist in the absence of matter, and is in fact the container of matter, with respect to which it is in a certain sense pre-existent. This is the Newtonian concept of absolute space. Electromagnetism fills this empty space with the ether, which supports the field, is responsible for the electromagnetic interactions between charged material particles, and transports the energy of electromagnetic radiation. The vacuum, therefore, is a “vacuum of matter,” but not an absolute vacuum, in that it is filled by the ether.

Special relativity eliminates both the ether and the absolute space of Newton and re-establishes the vacuum as “something” that has the property of transmitting radiation. In fact, a vacuum is in a certain sense the best “medium” in that, through it, all signals travel at the maximum velocity c, which is precisely the velocity of light in a vacuum. The vacuum of special relativity, therefore, is a “vacuum of matter” but not of “radiation.” It is a vacuum that has at least one property, that of transmitting radiation, and as such it is not nothingness, because that which has properties is a substantial being. It is nevertheless neither ether nor absolute space, since measurements of space and time are not absolute as in non-relativistic physics. A relativistic vacuum is, in a certain sense, the field itself, which never exactly vanishes, because the corporeal bodies over which the vacuum extends continually exchange their mutual interactions. Further, if there were no corporeal bodies or radiation, would special relativity allow us to affirm that the vacuum of both is something real? We recall that special relativity is a theory that defines its concepts operationally: If there were no corporeal bodies or fields it would not be possible to define either the observer or the measurement, because these require corporeal objects in order to identify the coordinate axes, rulers to measure the lengths, and clocks to measure the times. The vacuum of matter and fields would therefore not be observable and definable and would be only an entity of reason.

General relativity identifies a gravitational field with the metrical properties of space-time (metric tensor) and makes the latter depend on the distribution of mass-energy, that is, on the presence of matter and non-gravitational fields. In this way, the geometrical properties of space-time are determined by bodies and external fields (which, significantly, are cumulatively called “matter”) and by their motion. It is a concept of space and time far from the Newtonian one and, as has been emphasized by various authors, very close to the Aristotelian one. In Aristotle’s view, in fact, space is defined through the notion of contact (today we speak of interaction) between bodies, which allows one to introduce the concept of distance, and time is defined as the number that measures motion. Clearly, the two conceptions are not comparable on the mathematical level, but only on the qualitative, metaphysical level. Something of this kind can be found in Lobachevskij: “‘Contact’ is an attribute characteristic of bodies, and due to it bodies are termed ‘geometrical’ bodies inasmuch as we fix our attention on this property, apart from all the other properties, be they essential or accidental. In this way, we can conceive of all the corporeal objects of nature as parts of a single, global body, which we call space” (cf. Lobachevskij, New Principles of Geometry with the Complete Theory of Parallels, Russian edition 1835-38).

Like special relativity, general relativity is incompatible with the absolute space and time of Newton (and with their philosophical transposition brought about by Kant); but in addition, general relativity tells us that space and time are determined by the presence of matter, by corporeal objects and by their mutual interactions. What, then, is the “vacuum” of general relativity? The vacuum is a “vacuum of matter,” whereby the term “matter” means both corporeal objects and non-gravitational fields. The vacuum is a free gravitational field described as a Riemannian space-time: It is a pure abstraction because the universe is filled with matter-radiation. Nevertheless, Einstein’s equations of general relativity can be written in the absence of matter and external fields, for a free gravitational field. And they even admit a solution in which the gravitational field is zero, which corresponds to the space-time metric of special relativity. However, in the absence of fields and corporeal bodies, as has been observed, it is not possible to speak either of the observer or of a measurement, and therefore it is not possible to speak of space-time, so that a vacuum understood in this way appears as a pure abstraction, or a limit concept.

Quantum electrodynamics and quantum field theory further substantialize the vacuum, in that it is conceived as an entity in which there are “virtual” pairs of particles and anti-particles that can be brought to an observable (real) state at the expense of an appropriate amount of energy. A vacuum so understood is certainly not nothingness, but simply a “vacuum of observable matter.” With the help of Heisenberg’s uncertainty principle, such matter can become observable on the condition that the energy ΔE required is extracted from the vacuum itself in a time less than h/ΔE, where h is Planck’s constant. Such a quantum fluctuation of the vacuum, according to certain authors, would be responsible for the generation of the entire universe from a “quantum vacuum,” a claim that would seek to replace the metaphysical act of creation. However, a “quantum vacuum” is not “nothingness” but a pre-existent entity, in which pairs of particles and antiparticles (matter), and the act necessary to extract them, are virtually present.
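The estimate used here follows from the energy-time uncertainty relation, written in the order-of-magnitude form:

```latex
\Delta E \,\Delta t \sim h
\quad\Longrightarrow\quad
\Delta t \lesssim \frac{h}{\Delta E}
```

A virtual pair of total energy ΔE can thus appear only for a time of the order of h/ΔE, after which it must either disappear again or be promoted to a real, observable pair by an external supply of energy.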

Some have wished to interpret a quantum vacuum as Aristotle’s “prime matter,” but this does not seem to be accurate because prime matter, in addition to not having extension (in that it is not yet “marked” by quantity, unlike a vacuum, which is a spatial-temporal region), is a pure potency and requires an adequate “external” cause in order to be actuated into “secondary matter,” whereas a quantum vacuum would seem to include in itself the capability of actuating matter.

VII. Matter and the Problem of the Whole and of the Parts

From the point of view of the metaphysical analysis of the structure of matter, the problems that arise from the physics of nonlinear systems, and more generally, from the science of complexity, bring us directly back to the classical problem of the “whole” and its “parts.” The other aspects related to complexity, such as “unpredictability,” “deterministic chaos,” and “self-organization,” for the most part concern the evolutionary “dynamics” of matter.

1. Various Positions and Approaches. In contemporary science, the problem of the “whole” and of the “parts” (which is presented at times as the problem of “holism”) can be formulated to begin with in the following way: We consider a given object (the whole), which we call “complex” in that it appears to us as highly structured and difficult to examine as a whole. We divide (on the basis of an assigned rule) the source object into other objects which we call “parts” and which turn out to be simpler to examine because they are scientifically well-understood. There are two alternative possibilities: a) the complex object is exhaustively explained, at least within certain limits, with a study of its parts taken as self-standing; b) the complex object manifests properties and behavior that cannot be explained with an examination of its component parts alone.

The first case is equivalent to the typical assumption of the reductionist approach: A whole is completely explained through its component parts. We could say, with a formula that makes scientific sense only when the terms are precisely explained (but which, nonetheless, has a certain expressive power) that “the whole is the sum of the parts.” The second case emphasizes the insufficiency, or the impossibility, of the reductionist approach and points to a holistic approach. We distinguish “insufficiency” and “impossibility” because both situations can arise.

We encounter insufficiency when we find that the complex whole is not exhaustively explainable through the study of the component parts, since it is characterized by properties typical of the “whole” in itself. These properties elude scrutiny if one does not consider the general whole, because they cannot be found in the single separate parts. One can say then, using a rough formula, that in this case “the whole is more than the sum of its parts,” or that it contains new information in addition to that contained in the parts, information which characterizes it as a “whole” taken together. In the Aristotelian scheme, one would say that the “whole” has a form that makes it “one,” with new properties not present in the juxtaposition of the parts. It is not by chance that the term “form” reappears in the language of biologists and mathematicians (cf. e.g., Thom, 1989) together with a renewed interest in the writings of Aristotle.

One encounters “impossibility” when the complex whole is not divisible into simpler parts. In this case, some parts, or every part, have identical properties, or have a degree of complexity comparable to that of the “whole” and, consequently, the subdivision does not lead to any simplification. It is a little like what occurs when a magnet, cut in half, does not become simpler in its structure, but gives rise to two new magnets similar to the original one. Using a rough formula, we can say that in this case “the whole is contained in its parts” and in a certain sense “replicated in all its parts.” It is interesting to note how these parts are not necessarily identical, but possess enough similarities to permit an application of the same definition to both the whole and the parts. In philosophical language, we would say that the parts are of the same nature as the whole.

Clearly, these statements regarding the insufficiency of the reductionist approach must not be carried to extremes. Reductionism is always in a certain sense legitimate; otherwise, knowledge would be impossible. Human intelligence needs to break things down and reassemble them in order to understand: It is not always necessary to study the entire universe as a whole in order to understand one of its parts, even if in certain cases it is necessary to do so. An example of this is the recent dialogue between cosmology and particle physics, which aims to solve the problem of the so-called “first instant” of the universe.

2. Some Examples Taken from the Sciences. Given the importance for both analysis within the sciences and for potential dialogue with other fields, we will briefly outline how the subject of the whole and of the parts is viewed and approached in a few of the main scientific disciplines.

In biology, one finds that a living organism manifests properties that, even from the chemical-physical point of view, are not shared by inanimate objects. Even the simplest living organism cannot be entirely described by analyzing its component parts. In a reductionist mindset, a statement of this kind is met with suspicion and accused of vitalism because it seems to introduce an animistic factor into life. But this is not the real problem: The point is rather that of seeing whether, once a certain degree of organic structuralization (complexity) has been reached, matter itself, if stimulated in the right way by an adequate external cause, tends to manifest a new level of order, one not present in the components taken separately. On this level, an analysis of the component parts is no longer sufficient, although it has been useful and necessary up to this point; there is a need for an inquiry on a different level, that of the “whole” itself.

An in-depth study of relatively complex structures, such as molecules, crystalline lattices in solids, or electrical conductors (to cite only a few examples), has pointed out how, even in the chemistry of inanimate objects, the properties of a composite complex structure as a whole are not completely deducible from the properties of the constituent atoms. The existence of molecular orbitals with completely shared electrons excludes the possibility of assigning those electrons to a single atom. In an electric conductor, the conduction electrons are shared among all the atoms. Therefore there exist, even on the chemical level, properties of the whole which ongoing research reveals to be more and more significant.

In the field of physics, we must take into account two classical aspects that characterize it. The first is inherent in the “mathematical tool” itself, and the second has to do with the “explanation of observation.” From the mathematical point of view, inasmuch as physics uses more and more mathematics to formulate its laws in the form of equations, new problems arise: They come about when new mathematical results give unexpected answers to physical questions. I will deal with this subject shortly, when I treat mathematics below. As far as the agreement between hypothesis and observation is concerned, we are faced with a vast array of unsolved, and perhaps unsolvable, problems in classical mechanics, because they are thought to be too complicated; in quantum mechanics, problems still remain which are a source of paradoxes in their formulation and understanding.

In classical mechanics, it suffices to consider, for example, the complexity of turbulent motion in fluids. The classical model of Landau (1959), which superposes several convective motions associated with increasing frequencies, does not correctly predict the transition to turbulence, which appears as a completely new property in addition to that of convection. In quantum mechanics, certain events appear as “non-separable” even when they occur at a great distance from one another: These appear to be cases in which the whole seems to be present in its parts.

In the field of mathematics, the problem of the whole and of the parts appears with great clarity in the two aspects alluded to above. As far as the aspect of insufficiency is concerned, the problems related to the non-reducibility of the whole to the sum of the parts gain a clear formulation for the theoretical physicist and the mathematician when the evolutionary laws that govern the near totality of physical processes are formulated in terms of non-linear differential equations. Now, in “linear” equations, the sum of two or more solutions (let us call them “parts”) is also a solution (let us call it “the whole”) of the system, and vice versa, a general solution (a “whole”) can be written as the sum of several solutions (the “parts”). In physics, this law is known as the “principle of superposition.” A well-known example is the case of waves that interfere as their oscillations are summed. In “non-linear equations,” the preceding statement is no longer true in general. It follows, in the sense indicated above, that the whole is not generally obtainable as the sum of the parts. Let this reference suffice to indicate the relationship between all the different types of behavior inherent in non-linear theories that constitute different aspects of a single problem. Our considerations lead us to the second aspect of the problem, that of the impossibility of using an adequate reduction, or, one could say, of the non-distinguishability of the parts from the whole: The whole is replicated in all of its parts. A typical example of this second aspect is given to us by “fractal geometry” (cf. Peitgen and Richter, 1986). Fractals, among other things, have the property of being “self-similar,” that is, of reproducing infinitely, in all of their parts, geometrical forms similar to that of the whole. For this reason, it is not possible to isolate the forms that are structurally less complex than the whole by subdividing them into smaller and smaller parts. 
It is interesting to note that in the Mandelbrot set the form of the parts is not exactly identical to, but similar to, the shape of the whole, and it maintains the same degree of complexity, which can be quantified by what is called the “fractal dimension.”
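Membership in the Mandelbrot set itself can be probed with a minimal escape-time sketch (the iteration limit, escape bound, and test points below are chosen only for illustration): a point c belongs to the set when the iteration z → z² + c, started from zero, remains bounded.

```python
# Illustrative escape-time test for membership in the Mandelbrot set.
def in_mandelbrot(c, max_iter=200, bound=2.0):
    """Return True if z -> z**2 + c stays bounded for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False   # the orbit escapes: c is outside the set
    return True            # the orbit stayed bounded: c is (likely) inside

assert in_mandelbrot(0j)          # the orbit stays fixed at 0
assert in_mandelbrot(-1 + 0j)     # the orbit cycles: 0, -1, 0, -1, ...
assert not in_mandelbrot(1 + 0j)  # the orbit 0, 1, 2, 5, 26, ... escapes
```

Plotting this test over a grid of complex points, and then over ever smaller windows near the boundary, is precisely how the self-similar structure mentioned above becomes visible on a computer screen.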

In logic, the problem of the relationship between the whole and the parts arises mainly in the second of the two aspects already mentioned, that in which the whole can be found, in a certain sense, as a part of itself. This problem appears, for example, in the “logic of sets.” The set of all sets is a typical example of a set that contains itself as an element: In this case, a part of the set coincides with the whole. In its first phase, the logic of classes developed by Russell and Whitehead dealt with the problem by excluding from the definition of “class” those sets that contain themselves as elements, in order to avoid the contradictions that can arise from their consideration. It is well known that Russell’s paradox arises when one tries to define an object as “a catalogue of all those catalogues which do not list themselves.” It nevertheless seems possible today to construct a theory of collections that contain themselves as elements.

Computer scientists deserve the credit for having made current the by now classical problems of mathematical logic, such as those related to Gödel’s theorems on the consistency and completeness of axiomatic systems. Another merit of computer science is that of having made it possible to represent Julia sets on a computer screen; their beauty and elegance were previously unknown, and they had been considered mathematical “monsters” because of their infinitely winding boundary. Research in artificial intelligence has led to the understanding that information can be nested on various levels and that there exist several hierarchies of information. The lowest level resides in the hardware structure of the machine, the higher levels in the software; the programming language, in turn, contains information that is significant for the programmer. Such information is translated into lower-level instructions that are mechanically executable by circuits which do not perceive them as significant. The program as a whole contains information on a still higher level, related to the goal for which it was written; this resides in the mind of the programmer, and of the user, and so on.
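Russell’s catalogue can be mimicked in a few lines. The sketch below (a hypothetical `Catalogue` class, not any standard library) shows how the defining condition undermines itself: the moment the catalogue of “catalogues that do not list themselves” is actually assembled, it comes to list itself and so violates its own rule.

```python
# Illustrative sketch of Russell's paradox with a hypothetical class.
class Catalogue:
    def __init__(self, name):
        self.name = name
        self.entries = []

    def lists_itself(self):
        return self in self.entries

a = Catalogue("art books")         # an ordinary catalogue: no self-entry
b = Catalogue("all catalogues")
b.entries = [a, b]                 # lists itself (the whole as a part)

# Build "the catalogue of all catalogues that do not list themselves".
# At build time, russell's entries are empty, so it qualifies too.
russell = Catalogue("catalogues that do not list themselves")
russell.entries = [c for c in (a, b, russell) if not c.lists_itself()]

assert russell in russell.entries  # it was included (it did not list itself)...
assert russell.lists_itself()      # ...so now it does, contradicting its rule
```

The contradiction here is only staged over time (before versus after the build step); in the logic of classes there is no such temporal escape, which is why Russell and Whitehead excluded self-membered classes from the definition itself.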

In all of the sciences, therefore, there seems to appear a hierarchical structure of information related to the degree of complexity and, therefore, to the unity of the structure in question. In Aristotelian-Thomistic philosophy, as was said earlier, the unitary principle of a being is its form. Even though it is not yet clear what course the sciences will take, there seems to be an indication of a shift from the univocal scheme of reductionism towards a new and more satisfying vision. Today, curiously, we are witnessing an interesting change: Mathematics itself, and with it the other sciences, seems to show a genuine interest in a broader rationality that opens the sciences up to the concept of analogy, until now scorned.

VIII. Matter, Intelligence, and Abstraction

Cognitive science deals with how intelligent knowledge is formed in the mind, in its relationship with the brain and with the body in general, and even with the at least partial reproduction of such knowledge by computers. In observing, for example, the methodology of current research on artificial intelligence, we are confronted, from the philosophical point of view, with a two-fold approach. Roughly speaking, we can distinguish a “Platonic” route and an “Aristotelian” one, if the reader will excuse this somewhat schematic, but very telling, use of the terminology. As A. Koyré has suggestively observed: “If you claim for mathematics a superior status, if more than that you attribute to it a real value and a commanding position in Physics, you are a Platonist. If on the contrary you see in mathematics an abstract science, which is therefore of a lesser value than those—physics and metaphysics—which deal with real being; if in particular you pretend that physics needs no other basis than experience and must be built directly on perception, that mathematics has to content itself with the secondary and subsidiary role of a mere auxiliary, you are an Aristotelian. What is in question in this discussion is not certainty—no Aristotelian has ever doubted the certainty of geometrical propositions or demonstrations—but Being; not even the use of mathematics in physical science—no Aristotelian has ever denied our right to measure what is measurable and to count what is numerable—but the structure of science, and therefore the structure of Being. [...] It is obvious that for the disciples of Galileo just as for his contemporaries and elders mathematicism means Platonism” (Koyré, 1943, pp. 421, 424).

From the technical point of view, the results obtained so far suggest which type of approach is preferable for the future and how it might be corrected and improved.

a) The approach that we in a certain sense call “Platonic” is also reductionist: It is based on a theory of knowledge as “anamnesis,” as the recollection of innate ideas that are revived by contact with sensible experience. In this view, intelligence is reduced to the operation that brings “memory” into play, to an at least approximate matching of the idea with the sensory datum of experience. From the point of view of computer science, this conception suggests the technique of loading as much information as possible into the machine: This information plays a role similar to that of the innate ideas or, as one prefers to call them in this case, concepts. One cannot deny that the term “concept” is more than once used in a rather ambitious way by those who deal with artificial intelligence; it often indicates simply a certain codification stored in memory, which provides for the recognition of objects not completely identical to each other, vaguely recalling the notion of universals. With this strategy, the system works well as long as one does not depart from the set of stored data, but it does not recognize certain similarities and does not succeed in establishing analogies. With such concepts, one obtains only a low level of universality.

b) A second approach is based on a methodology that is the inverse of the former and is closer to the “Aristotelian” conception, or at least to an empiricist one, in that it rests on the hypothesis that knowledge is not innate but is learned from experience through a process that goes from the external senses to the brain and to the mind. This methodology emphasizes techniques by which a machine “learns” concepts.

But what is a concept? In both of the preceding approaches, there is a tendency to have recourse to two techniques: that of approximation on the one hand, and that of modelization on the other. The “technique of approximation” goes back, in a certain sense, to the empiricist notion of David Hume (1711-1776). A concept is taken to be a kind of “vague singular” datum, and one tries to exploit this vagueness of the singular, with a view to generalization, by introducing an allowed margin of error, which lets several objects, not just one, fall into the approximate scheme. The “technique of modelization” is certainly less rudimentary than that of approximation; it is based on a process of “abstraction” (performed first, however, by a human mind) aimed at identifying elements common to several singular data.
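The technique of approximation can be reduced to a toy sketch (the feature vectors, template names, and margin below are all invented for illustration): a stored “concept” recognizes whatever falls within an allowed margin of error around its template, and simply fails on anything outside the stored data, exhibiting exactly the limited universality described above.

```python
# Illustrative template matching with an allowed margin of error.
import math

# Hypothetical stored "concepts": one prototype feature vector each.
templates = {"apple": (1.0, 0.9), "pear": (0.4, 1.5)}
MARGIN = 0.3   # the allowed margin of error around each prototype

def recognize(datum):
    """Return the name of the first template within MARGIN, else None."""
    for name, proto in templates.items():
        if math.dist(datum, proto) <= MARGIN:
            return name
    return None   # no analogy, no similarity: outside the stored data

assert recognize((1.1, 0.8)) == "apple"   # close enough to the prototype
assert recognize((5.0, 5.0)) is None      # the system has nothing to say
```

The universal here is only an “approximate singular”: widen the margin and distinct concepts start to overlap; narrow it and nearly identical objects go unrecognized, which is the point made against Hume’s notion later in this section.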

A comparison with the cognitive theory of Thomas Aquinas, based on that of Aristotle, seems useful and also interesting. This theory, grounded in common experience, identified three operations characteristic of human understanding. The first operation was called simplex apprehensio, which we may translate from Latin as “simple apprehension.” The second operation is “judgment” (Lat. iudicium), while the third is “reasoning” (Lat. ratiocinium). Each of these operations acts on its own source material and elaborates its own product, which is the object of the study of logic. Simple apprehension begins with the sensory data furnished by the senses and the brain (we shall say, generally, by the body) and furnishes as its final result (or product) the concept. Judgment has as its source material the product of the first operation, and it works by connecting concepts together appropriately, elaborating a proposition or enunciation. Finally, reasoning connects the propositions elaborated by the second operation, following the rules of inference that guarantee the correctness of the deduction (cf., for example, Thomas Aquinas, In Peri hermeneias, Proem., n. 1). The theory of abstraction lies on the level of the first operation, in so far as by “abstraction” we mean that process which the mind performs on the data elaborated by the body, starting from a singular sensory element and extracting from it a universal informative product that, according to this theory, is precisely the concept (cf. Summa Theologiae, I, q. 85, a. 1).

This operation is of a cognitive character. It releases the information, in a certain way, from the physical signal that transports it (from the physiological representation found in the body and the brain) and, from the logical point of view, has the effect of furnishing a datum in the form of a “universal” (the concept), removing it from the material context that delimited it and made it a concrete “singular.” And it is precisely this characteristic of universality that qualifies the concept as a principle of knowledge, of a nature qualitatively different from that of the sensory material data present in the senses, nerves, and brain in the form of an electrical polarization, a chemical alteration, etc., or, in an electronic circuit, as the state of a binary system. The concept appears with a different nature, one that is not reducible to sensible material data. It is not reducible to the cerebral state, even though it is tied to it. From this point of view, universality is not obtainable from genericity, in the sense of indeterminacy, as Hume intended: The universal is not an approximate singular, with a margin of error in its boundary, but something qualitatively different, being non-material information.

The content of information does not coincide, properly speaking, with the signal that transports it, even if one cannot ignore the physical vehicle (of an electrical, chemical, or other nature). In order to be known by the human mind, information needs to be in a certain sense extracted (“abstracted”) from its vehicle and possessed by the mind in an immaterial (“intentional”) form. One may then ask how the mind must be constituted in order to perform this operation of abstracting non-material, universal information from sensory data that has reached its cerebral state. The standard response given by this theory is that, in order to perform the abstraction of a non-material principle such as information, a non-material mind is necessary, for reasons of proportion between cause and effect. All of this is based on the conception of the universal as immaterial information, since matter is by its very nature individualizing (the principle of individuation). If this way of approaching the problem is correct, it does not seem that a computer by itself, in so far as it is material, or a brain by itself, in so far as it is material, can elaborate a universal, abstract concept, even if it can manage information related to it when it is made to work by an operator endowed with an immaterial mind. What the machine, or the brain-body, can at most produce is an electromagnetic or electrochemical representation, or something of the kind, which does not contain the matter of the observed object but is still tied to the matter-energy of the physical signal and, as such, is not yet universal.
In the Aristotelian-Thomistic conception, this representation is called a phantasma, and the abstraction of the universal concept from the particular phantasma cannot be performed by a corporeal, material organ but must be the work of an immaterial intellect which, in so far as it performs such an operation, is called an “active intellect.” The machine can, however, manipulate (singular) symbols that have a universal meaning for the human operator, thereby furnishing elaborations of reasoning and calculation, whereas the processes of human intelligence seem to be irreducible to processes of calculation (cf. Penrose, 1995).

Bibliography: 

F.T. ARECCHI, I. ARECCHI, I simboli e la realtà (Milano: Jaca Book, 1990); M. ARTIGAS, J.J. SANGUINETI, Filosofia della natura (Firenze: Le Monnier, 1989); F. BERTELÈ, A. OLMI, A. SALUCCI, A. STRUMIA, Scienza, analogia, astrazione. Tommaso d’Aquino e le scienze della complessità (Padova: Il Poligrafo, 1999); P.W. BRIDGMAN, The Logic of Modern Physics (New York: Macmillan, 1961); M. CINI, Un paradiso perduto. Dall’universo delle leggi naturali al mondo dei processi evolutivi (Milano: Feltrinelli, 1994); R. COGGI, La filosofia della natura. Ciò che la scienza non dice (Bologna: Edizioni Studio Domenicano, 1997); M. DAUMAS (ed.), Histoire de la science (Paris: Gallimard, 1957); P.A.M. DIRAC, The Principles of Quantum Mechanics (Oxford: Clarendon Press, 1958); A. EINSTEIN, L. INFELD, The Evolution of Physics: The Growth of Ideas from Early Concepts to Relativity and Quanta (Cambridge: Cambridge University Press, 1971); R.P. FEYNMAN, QED: The Strange Theory of Light and Matter (Princeton, NJ: Princeton University Press, 1985); D.R. HOFSTADTER, Gödel, Escher, Bach: An Eternal Golden Braid (London: Penguin, 2000); M. JAMMER, Concepts of Mass in Contemporary Physics and Philosophy (Princeton, NJ: Princeton University Press, 2000); M. JAMMER, Concepts of Space: The History of Theories of Space in Physics, foreword by Albert Einstein (Cambridge, MA: Harvard University Press, 1969); G. KANE, A. PIERCE (eds.), Perspectives on LHC Physics (Hackensack, NJ: World Scientific, 2008); A. KOYRÉ, “Galileo and Plato,” Journal of the History of Ideas vol. 4, n. 4 (1943), pp. 400-428; T.S. KUHN, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1996); L.D. LANDAU, E.M. LIFSHITZ, Fluid Mechanics (Reading, MA: Addison-Wesley, 1959); B.B. MANDELBROT, The Fractal Geometry of Nature (San Francisco: W.H. Freeman, 1982); J. MARITAIN, Philosophy of Nature (New York: Philosophical Library, 1951); R. MASI, Struttura della materia. Essenza metafisica e costituzione fisica (Brescia: Morcelliana, 1957); E. NAGEL, J.R. NEWMAN, Gödel’s Proof (London: Routledge, 1989); G. NICOLIS, I. PRIGOGINE, Exploring Complexity: An Introduction (New York: W.H. Freeman, 1989); A. PAIS, Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford: Oxford University Press, 1988); W. PAULI, Collected Scientific Papers (New York: Interscience, 1964); H.O. PEITGEN, P.H. RICHTER, The Beauty of Fractals: Images of Complex Dynamical Systems (Berlin: Springer, 1986); R. PENROSE, Shadows of the Mind: A Search for the Missing Science of Consciousness (Reading, MA: Vintage, 1995); M. RIGHETTI, A. STRUMIA, L’arte del pensare. Appunti di logica (Bologna: Edizioni Studio Domenicano, 1998); R.J. RUSSELL, N. MURPHY, A. PEACOCKE (eds.), Chaos and Complexity: Scientific Perspectives on Divine Action (Berkeley, CA: The Vatican Observatory and The Center for Theology and the Natural Sciences, 1995); A. SALAM, Unification of Fundamental Forces (Cambridge: Cambridge University Press, 1990); J.J. SANGUINETI, La filosofia del cosmo in Tommaso d’Aquino (Milano: Ares, 1986); P.A. SCHILPP (ed.), Albert Einstein: Philosopher-Scientist (LaSalle, IL: Open Court, 1970); F. SELLERI, La causalità impossibile. L’interpretazione realistica della fisica dei quanti (Milano: Jaca Book, 1987); S.G. SHANKER (ed.), Gödel’s Theorem in Focus (London: Croom Helm, 1988); A. STRUMIA, Introduzione alla filosofia delle scienze (Bologna: Edizioni Studio Domenicano, 1992); A. STRUMIA, The Sciences and the Fullness of Rationality (Aurora, CO: The Davies Group, 2010); G. TANZELLA-NITTI, Faith, Reason, and the Natural Sciences: The Challenge of the Natural Sciences in the Work of Theologians (Aurora, CO: The Davies Group, 2009); R. THOM, Structural Stability and Morphogenesis: An Outline of a General Theory of Models (Redwood City, CA: Addison-Wesley, 1989); J. VON NEUMANN, Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1955).