
Information, notion of

Date: 2002
DOI: 10.17421/2037-2329-2002-ES-1

I. Terminological Clarifications - II. The Codification of Information - III. The Transdisciplinary Nature of the Concept of Information: Technology, Biology, and Physics 1. Information in the Context of the Relation between Human Beings and Machines 2. Information in the World of Living Organisms: Genetic Information 3. Information in the Physical Universe: Information and Natural Laws 4. An Overall Outlook - IV. The Theory of Information - V. The Birth of Informatics - VI. Telematics and the Internet - VII. The Society of Information - VIII. Information within Theological Reflections

I. Terminological Clarifications

Terms such as “to inform,” “information,” “form,” and “formulation,” which are used daily in common language, were first used by classical and medieval philosophers. Today they are widely used by technicians in the computer world, in the field of communications, and in logic. Given the variation in their usage, it is necessary to begin with an analysis of these terms to clarify the use I will make of them in the course of the present study.

In its common meaning, the verb “to inform” means to transmit knowledge, “to inform someone by giving them news, data, and the like.” Frequently, this term has a practical significance, since whoever informs expects the listener to use the information received in order to modify his or her behavior as a consequence. In fact, for transmitting knowledge that does not demand a practical and direct response, other verbs are preferred, such as “to explain,” “to describe,” and “to teach.” This practical emphasis of the verb “to inform” seems connected to the original significance of the word, which is related to the expression “to model according to a form.” In fact, “to inform” derives from the word “form” (Lat. in-formare, that is, to “give form”). There is, therefore, an original “operative” value within the word: Information, understood as the action of informing, produces a form. Today, this aspect of the word’s meaning has declined, but it has not been completely abandoned. In some cases it has been relegated to cultured language: We use, for example, the expression “to form one’s conduct according to moral values.” In addition to this practical emphasis, the theoretical aspect must be mentioned. Information is at the root of every transmission of knowledge, even in a theoretical sense: We learn inasmuch as we are informed. Information is therefore a vehicle for knowledge. Let us now look a little more closely at the sense of the two words “form” and “information.”

The semantic field of the term “form” is very wide. As various dictionaries highlight, it has specific uses in a great number of disciplines such as biology, geography, crystallography, botany, electronics, mathematics, meteorology, and the science of construction, as well as a multiplicity of meanings in the linguistic sphere. Nevertheless, the original philosophical significance of the word is defined with reference to the Greek morphé, signifying the “appearance of an object, sufficient to characterize it externally,” and, in philosophy, referring to “the active principle that distinguishes the essence, as dynamically contrasted with the matter.” Indeed, in the classical philosophical context (particularly the Aristotelian context), “form” was normally correlated to “matter.” The relationship is that which exists between potentiality and actualization, a bit like clay when it assumes the shape of a brick: The work of an external agent takes the clay from a potential state (shapeless clay) to an actual state (clay in the shape of a brick).

As regards the term “information,” it includes the two principal aspects already contained in the words “inform” and “form.” The first is “an act which consists in giving substantial form (vegetative, sensitive, or intellective) to a being,” an act which “determines the nature of a being, moving it from potency to act.” In this sense, information is the “actualization” of matter as potency, which links the matter to the form in the sense described above. But we can also discern a second significance of information as “the act by which news is given or received, that is, knowledge” or, also, “facts provided or learned about something or someone.” It has already been pointed out that today it is this meaning that is, unquestionably, the more common. It is interesting to note that, within the framework of classical Aristotelian-Thomistic philosophy, these two meanings are closely linked. In fact, if, in the first sense, the form actualizes the matter (as potency), giving rise to objects in the physical world, in the second sense the form, as intelligible, actualizes the (potential) intellect, giving rise to knowledge. In modern language only residual links with these original meanings remain but, as we will progressively see, they are again finding a place in our culture, principally as a result of scientific research and technological advancement.

There is another link between information and form: To be transmitted, the information must, in fact, be “formulated” (we are therefore led to a new concept, that of “formulation”), that is, fixed according to a “code” that is shared between the transmitter and the receiver. The word “code” (with its derivations “coded” and “to codify”) is very important. It is not intended here in the usual sense of a “set of laws” (as, for instance, when we speak of the Civil Code, or of the highway code), but rather in what has become an equally common meaning of a “system of signals,” or of signs, or symbols that represent and transmit information between the source (emitter) of the signals and the point of destination (the receiver). Simply stated, “to codify” means to provide information with a “form” recognizable by the receiver. Such is, for example, the natural language between people who speak the same mother tongue: The form is given by the words, by the grammatical and syntactical constructions, and by their “codification” both in terms of the sounds of vocal transmission and the graphic symbols used in written transmission. Such are also the scientific languages, and in particular mathematical language, which occupies a position of absolute preeminence in technology and in some natural sciences (the “mathematical” sciences).

Scientific language highlights another aspect, that of sharing, which exists in ordinary practice as well. This aspect of sharing deals with the “context” in which information is placed, that is, the legacy of previous knowledge to which the information makes an implicit appeal, and which must already be known to both the receiver and to the sender. Therefore information, in order to be received, must be situated within an adequate “framework of knowledge.” The formal structure, then, in which information must be placed so that it may be received, may be very complex.

There exists, therefore, a double link between information and form: Information can transmit form, but it must be codified, in turn, by a form. The “forming” information contains, codified, the “form” that it will give to the “matter.” But, on the other hand, the code is, itself, a form: Codifying gives form to the information, and this form represents the form that will be given to the receiving matter. Therefore, the information establishes a relationship between the two forms, in which one “signifies” the other: It contains a form and is, itself, contained in a form. Thus, the information-form relation is a circular relation. But it is also more: If we look at the code that represents and carries the information, we find that it also, inasmuch as it is a form, embodies information that is in turn codified at a higher level. We can think, for example, of the syntax with which the genetic code is codified, or of the syntax of the language of this text. And so it continues, so that we can see the whole as a great architecture in which information and form are stratified in levels, one on top of the other. Every form functions, in some way, as the archetype of the form of the level below, so that the whole of reality appears organized according to a vertical order, a hierarchy of forms that, at least analogically, reflects the hierarchy of existing beings.

II. The Codification of Information

Information presents itself as something immaterial, but it needs material carriers in order to move around and to be preserved. The memorization of information and its transmission over a distance have posed problems of codification that are worth exploring. Until the end of the 18th century, voice and writing were almost the exclusive means for the transmission of information. In vocal transmission, concepts are codified in words and in grammatical and syntactical constructions, and these are translated into sounds intelligible to the receiver. The codification and the decodification happen at a cerebral level. In written transmission something else happens. The sounds of the words are represented with graphic symbols and these, by means of a technical process, are printed on the page; whoever receives them must understand how to read what is written. We have here a double formalization, of the concept in words and of the words in graphic symbols (for example, the Latin alphabet), and a double material “formation,” first (at least potentially) in sounds and then in signs on the page.

This multiple codification that exists at many levels becomes more complex when information begins to be transmitted over a distance, generally by electrical means. In the original workings of the telegraph, an operator would close and open a switch and so transcribe, by a succession of electrical impulses, a message that had been received in written form. These impulses would then travel along a conductive cable and, at the receiving end, make a machine print on a strip of paper. A second operator would read it and transcribe it into letters of the alphabet, words, and sentences. It was, therefore, a new phase of codifying and decodifying, entrusted to “specialists” who knew nothing of the message they were transmitting but knew how to write in the alphabet of impulses that travelled through electrical cables, an alphabet that was in all likelihood indecipherable for the person who generated the message and the person who received it.

It is interesting to observe that this codification through impulses is typically “binary,” no different from those used a century later in computers and then in the greater part of telecommunications systems. In Morse code, every letter of the alphabet is represented by an ordered succession of two elementary symbols: a dot and a line, a short impulse or a long one. But optical telegraphs, used since ancient times particularly for military purposes, also used codified forms of only two (or a very few) symbols (light or dark, smoke or no smoke, etc.). So the binary code, which our technology boasts of as its own brilliant discovery, is in reality not a recent invention, but a continuation of a method used by our distant ancestors!
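
The binary character of this codification can be illustrated with a minimal sketch (the table below is deliberately abbreviated, and real Morse practice also involves timed pauses between letters and words):

```python
# A minimal sketch of Morse-style binary codification: each letter is an
# ordered succession of two elementary symbols, a dot (short impulse)
# and a dash (long impulse). The table is deliberately abbreviated.
MORSE = {
    "A": ".-", "E": ".", "H": "....", "I": "..", "M": "--",
    "N": "-.", "O": "---", "S": "...", "T": "-",
}

def to_impulses(text):
    """Transcribe a text into dot/dash sequences, letter by letter."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_impulses("SOS"))  # ... --- ...
```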

In the telephone, a different process takes place. The variations of pressure of the sound wave emitted by the voice are translated by the membrane of the microphone into variations of electrical current, and these are transformed back into variations of pressure by the membrane of the receiving device. The variations of the current are proportional to the variations of the pressure. Stretching the meaning of the terms a bit, we can speak here of an “analogical codification,” in the sense that the profile of the variations of the current is “analogous” to that of the atmospheric pressure produced by the emission of the voice. In this case, just as with the telegraph, the product of the codification is not intelligible to the person who transmits it or to the person who receives it (moreover, it is not even perceptible to the senses); but, unlike the telegraph, it is not generated by a human operator but rather by a machine. It should be noted that this same analogical codification is used in the recording of records (vinyl discs) and magnetic tapes: The waveform of the sound is reproduced on a track in the first case and, in the second, in the level of magnetization of a layer of iron oxides.

In radio and TV transmissions a further codification is introduced: “modulation.” To be diffused by the antennae, the message must be “carried” by a high frequency electromagnetic wave. For this purpose, an electronic circuit generates a signal (a wave of constant amplitude) appropriate for transmission, which is then modified in order to represent the original information. Therefore, in this case, there is a sequence of codifications that could, for example, be schematized like this: Information to be transmitted <—> Vocal emission <—> Conversion into an electric signal (microphone) <—> Modulation (and transmission) <—> (Reception and) demodulation <—> Acoustic reproduction (loudspeaker) <—> Listening at the receiver <—> Information received. In every one of these phases a different material (the vocal cords, air, the current in the microphone, the electromagnetic wave, etc.) is “formed” in a different way, and all the forms represent the same information. But only the extremes are perceptible and significant for those who transmit and receive them: In the middle there is a “gray zone” in which the information is hidden in a form inaccessible to perception. Technology thus inserts into the process an irreparably artificial, non-human element. This differs from the technique of printing, in which anyone who knows how to read can decipher the message and so be informed.
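
The first modulation schemes make this “carrying” quite literal. In amplitude modulation, for example (the relation below is the standard textbook form; frequency modulation is analogous), the transmitted wave is

$$s(t) = A_c\,\bigl[1 + m\,x(t)\bigr]\cos(2\pi f_c t),$$

where $x(t)$ is the information-bearing signal, $f_c$ the carrier frequency, $A_c$ the carrier amplitude, and $m$ the modulation index: The constant-amplitude carrier is “formed” by the message, whose profile survives in the envelope of the wave.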

In electronic computers, something happens which is different from what happens with telephones and record discs and is more similar to the telegraph. All information, of whatever kind —the words of a verbal message, a punctuation mark, a mathematical number or operative symbol, a graphic sign or musical notation— is dealt with in the same way. It is codified in a sequence of bits, that is, of elementary entities that can only be “0” or “1.” The ASCII code, frequently used for text, uses sequences of eight bits (1 byte) to represent the letters of the alphabet, numbers, punctuation marks, and a certain number of graphic signs and symbols. The capital letter A, for example, is represented by the sequence 01000001, while the lower case a is represented by the sequence 01100001. Images can be codified in a variety of ways. Those conceptually simplest (although unwieldy because of the number of bits they employ) consist in subdividing the image into a large number of small elementary areas, attributing to each of them a “word” formed of many bits that specify its color, shade, and luminous intensity.
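
A minimal sketch of this codification, reproducing the two byte values just quoted:

```python
def to_bits(text):
    """Codify each character as its 8-bit ASCII sequence (1 byte)."""
    return " ".join(format(ord(c), "08b") for c in text)

print(to_bits("A"))  # 01000001
print(to_bits("a"))  # 01100001
```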

The bits are then transformed into electrical impulses which circulate in the computer circuitry, which can modify their sequence by executing operations on them (for example, if they represent mathematical entities) or stably conserve their sequence, as magnetized points in the “memory” (see below, V). At other times the bits are transmitted over distance by electric cables or optical fibers, or converted into electromagnetic waves and broadcast. Or they become microscopic pores on the underside of a CD (a compact disc), which an optical reader reconverts into music or computer programs.

Because the basic element of codification, the bit, can take only one of two values (in our case the numbers “0” and “1”), this code is termed “binary” or “digital.” At first adopted for numerical computers (once called “digital calculators”), it is now spreading into all the technical sectors of computing and information transmission, in which it replaces the codification I have termed “analogical,” or “analog.” Thus the CD has replaced the vinyl disk while, more and more frequently, in telephone communications our vocal inflections are codified into sequences of bits. Even television, the last great area in which the analog codification process still survives, is beginning to convert itself to the digital form.

The reason for this is simple, and is well exemplified by the CD, which is much smaller than the vinyl disk yet nevertheless permits a much higher quality of reproduction: It transmits a broader frequency band, without distortion and with a wider “dynamic range,” that is, a greater difference in acoustic level between “very soft” and “very loud.” This higher quality of digital information depends on an important fact: Once codified into digital form, information is somehow indestructible. Analog information is, on the other hand, subject to deterioration. If, while a person is on the telephone, some disturbance occurs (on the telephone line itself or in the neighboring environment), the quality of some words is lost and, therefore, the significance of what is heard is also partially lost; if the analog disk is scratched, the listening quality is likewise compromised. This does not happen with “digitalized” information. As will become evident later on, it is always possible to codify information in a way that renders it immune to disturbances which would otherwise disrupt an analog counterpart. It is possible to compute and transmit information in a “perfect” manner: Once digitalized, the information participates in the “perfection of number,” the perfection which pertains to mathematics. The digital code thus introduces, in a particular way, a perfection into the technical world.

III. The Transdisciplinary Nature of the Concept of Information: Technology, Biology, and Physics

As a transmission of knowledge, information was initially considered an activity specific to subjects capable of, and aware of, understanding. The list of subjects said to be capable of exchanging information is growing progressively longer, raising notable problems regarding the relationship between information and knowledge, until at times the very meaning of the word “knowledge” becomes problematic. I will therefore explore this extension of meaning and conclude by mentioning some resonances within the philosophical field.

1. Information in the Context of the Relation between Human Beings and Machines. It is easy to see that the technical world today is dominated by the concept of information, so much so that information itself has become its characteristic element. At first the technical tool had been seen only as a means for the transmission of information between humans, but subsequently we have begun to speak of information also for communication between man and machine: The designer and the production technician communicate to an automatic machine the information relative to the fabrication of a product (for example, the paddle of a turbine). It is interesting to observe here, in technical activity and in the role that information plays in it, that we can recognize the four Aristotelian causes. In fact, information involves, as has already been noted, primarily the “formal cause,” but it also contains the commands that will be imparted to the machine, that is, the “efficient cause,” which is applied to the metal being cut (the “material cause”) to realize the artificial object. The role of the “final cause” would appear if we asked ourselves, for example, what aim the technician proposes when he creates the paddle of the turbine, whether it is the production of electrical energy, the propulsion of an airplane, or something else, and what then would be the aim for which the airplane is destined. But it is not necessary to think of numerically controlled machines, which constitute perhaps only an extreme example: Every technical project, even on paper, contains all the information necessary for the production of the object. In this way, technology itself reclaims the original significance of the word information as “that which gives form”: The information inherent in the constructive process gives form to the object constructed.

In the field of information engineering, it must be recognized that, with the development of informatics techniques, the word information has also been applied to communication between machine and machine. First information science, and now information engineering (that is to say, what we designate as a whole with the name “informatics”), concern the information circulating inside computers and between computers connected in a network. At this point, if we could set aside the fact that computers manipulate “human knowledge” —a frequently ignored, but not at all secondary, element— the information-knowledge equivalence would seem lost. Those who occupy themselves with artificial intelligence tend to speak of “knowledge” also with regard to machines, without further reference to the operator who uses them. But the question remains, even in this case, whether we can speak in an actual sense, and not only metaphorically, of an “intelligent knowledge,” something that would seem excluded by the irreducibility of the relationship between semantics and syntax (see below, IV).

2. Information in the World of Living Organisms: Genetic Information. Information circulates not only in the world of human beings but also in the animal world, and even in the vegetable one. Ethology has demonstrated that animals communicate among themselves, exchanging information useful to the life of the group or for their defense from aggressors. There are messages exchanged between animals of the same species or sent to animals of a different species that are capable of decoding them and of developing a corresponding behavior. Something similar also happens in communication between animals and human beings: Many people speak of some form of “dialogue,” sometimes very refined and sensitive, with domestic animals. Communication among animals often involves an exchange of very complex information: The bee that has discovered a source of food, for example, describes to its companions, with a beautiful “dance,” the topography of the place towards which it wants to direct them.

A quite interesting semantic extension of the concept and properties of information today is that which regards biology. It has been discovered that the life of every organism, whether simple or complex, depends in an essential way on the circulation of signals (biochemical and neuroelectrical signals and, perhaps, other types as well) that transmit the information necessary to the harmonious development of the vital processes. Scientists tend to see the living organism as a gigantic chemical laboratory on one hand and, on the other, as a very complicated network of transmissions of information necessary to its functioning. In this respect, even more crucial is the discovery of the genetic information that presides over the formation of new organisms. This discovery of the role of information in the maintenance and transmission of life has been a great conquest for biology, because it has allowed the progression from a simple description of the phenomena to an analysis of the way in which they are caused. Two important aspects appear here. The first is that the information presides over the formation of new organisms, and it therefore “gives,” or “communicates,” form (we again encounter the connection between the two meanings of the word, “to give form” and “to communicate”). The second is that it presides over the ordered development of the vital processes. To be precise, information plays the role of “organizer.” Certain diseases, for example, which we perceive as a “pathologic disorder,” are the result of a mistaken reading of information.

The presence of information, and of the exchange of information necessary to the functional processes of living organisms, seems to be strictly connected with that unique property of life which concerns its tendency to conserve and reproduce itself. Life is the center of an intrinsic finality within nature, in which information plays a decisive role, regulating the processes of coordination and orientation. And so life seems to be the center of “information” at many levels. This occurs, first of all, at the level of the single living individual which, in its specificity within the environment and within the biological, chemical, and physical elements that make its existence possible, is the depository of a particular “form” that gives unity and meaning to the living subject. It occurs, secondly, at the level of the complex, codified information that provides the subject with all that is necessary to develop its vital functions, making the subject the final reference point for the reciprocal coordination of each vital function, in conjunction with the information provided by the environment.

3. Information in the Physical Universe: Information and Natural Laws. A further transdisciplinary extension of the concept of information, although of a rather different nature, occurs in the physical and chemical sciences. In this case the “communication” does not occur between living organisms, or between human beings and the technology they have invented and into which they have inserted an automatic language, but rather between the non-living world and us. The researcher acquires a certain knowledge of the physical world thanks to the information that he or she finds in some way codified within nature, under the form of properties or laws. We therefore have here a different way of understanding information, which no longer comes from an active subject, but is extracted, so to speak, from the object studied by the subject who studies it. If it is true that the researcher “imposes a form” on nature through the mathematical formulation of “scientific laws,” it is also true that his or her knowledge “is informed” by the “laws of nature” and by those objective properties, independent of the subject, that make possible their formulation in terms of numerical constants or scientific algorithms. The researcher can impose a form not only by means of mathematical algorithms, but also through “models,” which are subsequently verified through experience. The role of modeling is rather important in many fields, among them chemistry, where it makes it possible to represent and gather together forms that are not deducible from the physical realm alone, because they contain properties and information that emerge only in evaluating molecular structures, or in evaluating a compound taken as a whole, as a new object of study.

The growing interest in considering the scientist’s activity as the extraction of a certain type of information contained in nature is witnessed by the modern debate about the intelligibility of natural laws and about the significance of the constants of nature. The expression “cosmic code” has been coined to refer to the level of harmony existing between the laws that describe the principal physical phenomena, particularly regarding their delicate coordination that allows the existence of the universe itself and, within it, of a biological and chemical niche adequate to support life. But in a much more general sense (leaving aside for the moment a consideration of the philosophical meaning that can be associated with the intelligibility of natural laws, or with their coordination) there remains the undisputed fact that the universe is not composed only of matter and energy, but also of information. The physical universe is not something indeterminate, indefinite, or utterly chaotic: Instead, it exists with specific properties. In other words, the universe conveys a certain amount of information. Even in this case, we encounter the original meaning of the term: Information is that which gives form to matter-energy and also makes the material world knowable and intelligible. And so scientists explore the natural world, giving it the form of the laws with which they describe it; but the progressive improvement of this knowledge is guided by the forms it receives from nature itself.

4. An Overall Outlook. From the preceding considerations, and from the diverse list of meanings that the concept of information evokes, some important relationships seem to emerge. The first is the relationship of circularity existing between information and order. Inasmuch as it “forms,” information brings about order: The organism produced by the genetic code appears as an “ordered” system of tissues and vital processes, and it is, on the contrary, the biological alterations, especially those that we define as diseases, that are perceived as “disorder.” Conversely, information is also “described” by an order: The laws present in nature describe the order of the universe. If this order—recognized as coordination among the parts of a whole—were understood as something original, then it would manifest the “presence of information”; that is, it would reveal the universe as either a producer of, or a material support for, information. A second relationship is that which points in a more explicit way toward the notion of finalism. In harmony with Aristotle’s thought, where the formal cause and the final cause are intimately connected, the existence of “formality,” that is, of information, in the physical or biological universe would point towards a “finality” as well. Inasmuch as finalism is a carrier of information, and in light of the relationship between information and knowledge, such a finality could be known (or acknowledged) by man, analogously to the knowledge of which he is the subject and producer.

IV. The Theory of Information

When, at the end of the 1940s, the need was felt to bring order and a scientific basis to the information technology that had developed haphazardly during World War II, it became necessary to establish a way to measure the “quantity of information.” It was a necessary operation, albeit in some senses a debatable one, since information is something essentially qualitative and, in certain cases, subjective (news represents an increase of information only for those who do not already have knowledge of what it communicates).

In 1948, Claude Shannon (1916-2001) proposed a solution that, although in a very summary and schematic way, took account of this necessity of “measuring” information with respect to the subject that receives it. In fact, he linked the degree of information contained in an event to the probability that the event would or would not happen. If an event is very probable, the fact that it occurs does not “tell us” much: It does not appreciably enrich our knowledge. If, instead, that very probable event does not occur (or its contrary takes place), it is a cause of surprise and reflection: By increasing our knowledge, it thus “informs us” all the more. The mathematical formula adopted to measure the quantity (or the content) of information is identical to that used in thermodynamics to measure entropy, apart from a change of sign. For this measure of the quantity of information, the term initially proposed was “negentropy,” though it met with little success, for understandable euphonic reasons. Today the word “entropy” has simply been adopted to indicate such a measurement, though this has brought about some misunderstanding, because the word entropy then signifies two opposing things. In thermodynamics, an increase of entropy is equivalent to an increase in disorder, whereas in the theory of information it indicates an increase of order. This merits further explanation. In an “isolated system” (to be precise, in a set of bodies without an exchange of energy with the external environment), thermodynamic entropy measures the energy linked to the temperature, that is, to the disordered motion of the atoms that compose the system: The second principle of thermodynamics states that every irreversible energy transformation implies an increase of the entropy, that is, an increase of the “disorder.” Conversely, as has been seen, the concept of information is linked to the concept of order: Genetic information, for example, is carried by the order in which the “bases” follow one another along the double helix of the DNA. Another example would be a message codified in binary form: The information is given by the order of the sequence of bits with values “0” and “1.” A greater amount of information is therefore represented by a greater level of order. If, during the transmission of a message, some interference accidentally transforms a “0” into a “1,” or vice versa, we will simultaneously have a reduction of order and a loss of information, or to be precise, a decrease of the entropy of the message.
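
In Shannon’s now-standard notation, which the paragraph above describes verbally, the information content of an event of probability $p$, and the average information (the “entropy”) of a source whose symbols occur with probabilities $p_i$, are

$$I = -\log_2 p, \qquad H = -\sum_i p_i \log_2 p_i \quad \text{(measured in bits)}.$$

A nearly certain event ($p \to 1$) thus carries almost no information, while an improbable one carries much, which is exactly the dependence on probability described above; and, apart from constants and sign conventions, $H$ has the same logarithmic form as thermodynamic entropy.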

The correspondence between thermodynamic entropy and the entropy of information is not only formal. It stems from the understanding we have, in terms of “information,” of those structures in which the physical world is organized. The information contained in the crystalline structure of ice, for example, is represented by the reciprocal position of its atoms, that is to say, by the fact that their thermal agitation is bound to take place within the nodes of a lattice. If the ice melts, the information contained in the lattice is simultaneously destroyed, and therefore the entropy of the information is reduced, while the thermal agitation of the particles (that is, the thermodynamic entropy) increases.

The quantification of information, and the mathematical treatment that follows from it (the foundation of which is also due to Shannon), have brought about the solution of some fundamental technical problems encountered when one wants to transmit a message on a certain support (a “channel,” in technical language). The rate at which it is possible to transmit a signal codified into binary form, for example, is limited by the technical characteristics of the channel. A telephone line (a “twisted pair,” as it is called, because it is made up of two wires) can transmit a certain number of bits per second and no more. A radio broadcast can transmit more, and an optical fiber cable still more, but always in a limited number. Among the factors that contribute to this limit is the “noise” of the channel, that is, the probability that an “interference” (of the same type we sometimes hear on the telephone, and which makes what the speaker is saying less intelligible) will transform a 1 into a 0 or a 0 into a 1. The errors increase with the rate of transmission and, beyond a certain rate, the errors can be irreparable. Shannon demonstrated that there exists a theoretical limit rate (called the “transmission capacity”) below which all the errors introduced by interference can be corrected, while above that rate it is no longer possible to make this correction.
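
Shannon’s limit can be stated compactly for the most common idealization, a channel of bandwidth $B$ affected by Gaussian noise (the Shannon-Hartley form; other channels have analogous capacities):

$$C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second},$$

where $S/N$ is the ratio of signal power to noise power. Below the rate $C$, suitable coding can in principle correct all the errors introduced by the interference; above it, no coding can.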

On the other hand, clearly the intention is to transmit information at the greatest possible rate, that is, in a way that utilizes the technical equipment to the best advantage. This interest becomes a necessity when the transmission rate is predetermined, as happens with television images. The problem has two aspects. On the one hand, it is a question of not transmitting more bits than the bare minimum, that is, of not transmitting “redundant” bits that do not contribute to the information. If a sequence of data must be transmitted, each item of which is independent of those that precede it, it is obvious that each item must be codified with all the bits necessary to describe it. But often the data depend on each other (we can say that they are inter-connected). In television transmissions, for example, the fact that the images have a certain extension makes each point resemble, as a rule, the surrounding points. And so it is not necessary to transmit all the information associated with every point: It is enough to codify the difference existing between one point and the preceding one. This difference is in general small and, therefore, requires few bits. On the other hand, the corruption of the information by errors must be avoided. It is a question, therefore, as demonstrated by Shannon, of correcting the errors, provided that the rate does not exceed the capacity of the channel. In this regard, information theory has developed very refined “error correction codes” that permit the transmission rate to come much closer to the theoretical limit of the channel’s capacity.
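
The refined codes just mentioned are beyond our scope, but both ideas (removing redundant bits, and adding controlled redundancy in order to correct errors) can be illustrated by deliberately elementary sketches; the triple-repetition code below is the simplest error-correcting code, far less efficient than those actually used:

```python
def delta_encode(samples):
    """Redundancy removal: transmit only the difference between each
    sample and the preceding one. For smooth signals (e.g., neighboring
    image points) the differences are small and need fewer bits."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def encode_repetition(bits, n=3):
    """Error protection: repeat every bit n times."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(coded, n=3):
    """Majority vote over each group of n bits: any single flip within
    a group is corrected."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

message = [1, 0, 1, 1]
sent = encode_repetition(message)
sent[4] ^= 1  # channel noise flips one bit...
assert decode_repetition(sent) == message  # ...and the code corrects it
```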

The example of the television image brings up another point. It is known that the human eye is less sensitive to some characteristics of an image, so that they can be disregarded without the observer being able to distinguish the compressed image from the original one. The same could be said of a person listening to musical transmissions. In such cases, one can harmlessly “compress” the message, thereby reducing the quantity of information. Obviously, one could not do the same thing with numerical data, as a loss of information would render it useless. Therefore, the way in which a message is codified depends very much on its nature, on the significance it has for the person who receives it, and on the manner in which it is perceived; that is, it depends on its “semantics.” This is a fundamental limitation in the transmission and computation of information, a limitation that even the systems of artificial intelligence cannot overcome. The transmission and computation of information by technical operations concern only the formal aspects of information (i.e., the “syntax” of information), whereas the “semantic” aspects do not flow along the chain: They stop at the point of input, and are returned to the message by the person who receives and interprets it.

V. The Birth of Informatics

The word “informatics” is a neologism, coined for its assonance with mathematics and automatics. Informatics is the technique of the construction and utilization of electronic computers. As a natural extension of its definition, it also indicates the science and technique of the computation of data and, generically, of the automatic handling of information. In accord with the aim of the present article, let us look at some structural characteristics of informatics that can be considered reasonably constant, at least for the near future. Readers who are very interested in this topic can find more details, particularly regarding the quantitative aspects (the diffusion of computers throughout the world, their corresponding dimensions and facilities, etc.), in a variety of information science journals.

Informatics is tightly linked with applied mathematics and with information theory; moreover, it concerns electronic computers. One can distinguish a “theoretical informatics,” that is, the branch of applied mathematics that deals with the theory of algorithms (an “algorithm” is a sequence of operations capable of bringing about the solution to a problem in a finite number of steps), the theory of formal languages, and the theory of automata. There is also a “technical informatics,” which regards the construction of computers, and a “practical informatics,” which studies the specific ways in which many problems can be solved by computers, employing the various programming languages and other utility programs such as, for example, operating systems and databases. The actual object of technical informatics is called hardware, while software refers specifically to practical informatics. Hardware and software are the components of every electronic computer.

The term hardware indicates the set of electronic circuitry and electromechanical components that constitute the structure of computers. In every computer, from the smallest handheld calculators (that is, those that fit into the palm of a hand) to the big mainframe computers, we can distinguish three fundamental parts: the unit of computation, the unit of memory, and the units of input and output. The units of input and output are the best known, since they represent the connection between the operator and the machine. For input, that is to say, the entry of programs and data, we use, for example, the keyboard, floppy disks, CD-ROMs, and optical disks. For output, there are the screen, the printer (dot-matrix, laser, and ink jet), again the floppy disk, and the CD-ROM units in those machines supplied with a “disk burner.” We should also keep in mind that among the elements of input and output there are “ports,” that is, “connectors” for the cables connecting to other computers (computing networks, including the Internet) and to auxiliary units such as printers, which are generally physically separated from the computer and connected to it through a high transmission rate “parallel port.”

The memory units conserve the data and programs and make them available to the unit of computation. Among the various types of memory units there is, first of all, the RAM (random access memory), from which the computing unit directly gets the data and to which it returns the data once the computation is completed. The access velocity with which the data is read and written must be comparable with the velocity of computation. This can only be obtained with electronic circuits that are relatively expensive and, moreover, “volatile” (i.e., they lose the information when the machine is turned off). For this reason a second type of memory is required, a “mass memory” that is both permanent and of large capacity, where the data can be stored for an unlimited period, even when the computer is turned off. This is, in general, obtained by recording on magnetic supports (disks or, more rarely, tapes) whose access times are, however, much longer. In this memory, all the information necessary for the use of the computer, data, and programs is recorded in advance. During the running of the machine, the information is transferred, in blocks and in time for its subsequent usage, from the mass memory to the RAM. From here it is finally delivered for computation. There exist, then, the ROM memories (read only memory) and their derivatives PROM, EPROM, and EEPROM, which allow a limited possibility of writing. They are used to permanently store parts of programs that are essential for the running of the machine and are destined to remain unchanged for all of its life. Finally, disks and CD-ROMs, as well as the tapes and magnetic disks used in mainframe systems, can also be considered devices of “external” memory, to be used mainly as data archives. In this way, data can be stored without cluttering up the internal memory, and can also be protected from the consequences of malfunctioning (the so-called backing up of important data).

Finally, the CPU (central processing unit) is the heart of the machine. Normally, it is contained in a microprocessor that also hosts the RAM. The microprocessor is a device of very small dimensions that, on a surface of a few square millimeters, contains many millions of elementary circuits. This extreme miniaturization, obtained with very refined techniques of photolithography (and which will eventually reach the limits imposed by the very structure of matter), fulfills not only the requirement of being packed in a small space but also, and primarily, that of allowing a shorter running time. In fact, all data processing implies delays and attenuations of the signal as the electric currents run through the circuits, and these delays can be reduced precisely by minimizing the length of the connections. Microprocessors are also widely used in applications other than instruments of calculation strictly understood. Much of the automation found in industrial factories is realized with specialized microprocessors, as is the case in transport, telecommunications systems, the electronic devices used in daily life, etc., so that they appear as one of the fundamental tools of the “informatization” of society, that is, of its progressive characterization as a society of information.

In the microprocessor, the processing of all information—not only the arithmetic operations, but also the “logical” operations that control, for example, the order in which the different parts of a program are executed—results from the repetition, a great number of times, of three elementary operations (which are, in fact, ultimately reducible to two), namely “and,” “or,” and “not.” In the English language, used for practical purposes, they are called the and, or, and not operators, while in mathematical expressions they are represented by the symbols ∧, ∨, and ¬. These operations, defined almost simultaneously halfway through the 19th century by George Boole (1815-1864), from whom comes the name “Boolean algebra,” and by Augustus De Morgan (1806-1871), constituted an extremely fertile subfield of algebra, from both a conceptual and a practical point of view. This algebra deals with logical operations, those that deduce the “true” or “false” value of a proposition. The term “true” is here intended in a purely logical sense, that is to say, as a result of the rules of logical consequentiality. The operations can easily be translated in terms of binary quantities by assigning, for example, the value 1 to true and the value 0 to false. Then the first two operators put the variables A and B into relation with a third variable, C. The expression “C = A ∧ B” means that C is equal to 1 if A and B are both equal to 1; otherwise, it equals 0. “C = A ∨ B” means, instead, that C is equal to 1 if at least one of the two, A or B, is equal to 1, and it is 0 only if A and B are both equal to 0. “Not,” finally, links only two variables: “B = ¬A” means that B is equal to 0 when A is equal to 1, and 1 when A is equal to 0. Everything a computer does can be expressed through these three operations, repeated methodically an adequate number of times.
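
A minimal sketch of the three operations on bits, and of how a composite function (here the “exclusive or” used in binary addition) is obtained by methodically combining them:

```python
def AND(a, b): return a & b   # C = A ∧ B
def OR(a, b):  return a | b   # C = A ∨ B
def NOT(a):    return 1 - a   # B = ¬A

def XOR(a, b):
    """'A or B but not both,' composed solely of the three operators."""
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```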

The functioning of the CPU, and therefore of the whole computer, is “synchronized”: It is governed by a clock (usually a quartz oscillator) which stabilizes the rate of the operations. It is also “sequential,” that is, the operations are executed, in principle, one at a time. To increase the computational velocity, a certain degree of parallelism is introduced, allowing more units to operate simultaneously. In the large computers used for scientific calculation, the parallelism can be relatively high, but in these “electronic brains,” as they are called in science fiction, there is nothing comparable to the brain of a living being, in which the parallelism is total, that is, in which a very great number of computational units (neurons) function simultaneously. Another difference between electronic computers and the brains of living beings is that, in the first case, the functions are “specialized,” that is, one unit is devoted to computation, another to memorization, a third to communication with the external environment, etc., whereas in the case of the brain the computing and memory functions, and in part also those of input and output, are distributed through the whole mass of neurons.

The term software indicates the set of programs available for the computer. We distinguish between system software, which makes the functioning of the computer possible, and application software, dedicated to finding solutions to problems or to executing the tasks entrusted to the computer: mathematical calculations, archive management, word processing or the drafting of texts, the processing of graphical images, and so on. Within the system software, a very important role belongs to the operating systems, such as the very widely diffused Windows. The operating system is executed as the computer is switched on and then remains available to render “intelligible” to the machine the commands that come from the application programs.

Every program (whether system or application software) is written in a “programming language.” This expression signifies (though in a rather anthropomorphous way) a system of information coding that is accessible (“understandable”) to the computer. Like a natural language, a programming language is characterized by a “vocabulary” and by a “syntax” according to which the “words” of the vocabulary are combined to form “propositions” that have a complete meaning. In reality, many languages exist, at different levels of complexity. The simplest is the “machine language,” in which the commands to be given directly to the CPU are written. These commands can be of the type: “Read the data from the memory, carry out the operations, write the results in the memory, and continue to the following command.” It is important to identify the data to be worked on, and this is the purpose of the organization of the memory: Like every good archive, it is organized like a filing cabinet in which each folder, or location, has its own address. Therefore, the elementary command becomes, more precisely: “Read the contents of these memory locations, execute this operation, write the result in this other location, and continue to the following command.” A program is nothing but a long list of such commands, and so it remained until the end of the 1950s. Programming in this way was quite a tedious process, and the likelihood of error was very high. For this reason, languages of a much higher level have been produced in which, for example, in the programming of mathematical expressions the variables are designated by a name rather than by a memory location, and complicated operations are described synthetically, in a way similar to that used in algebra. These expressions must then be rendered intelligible to the computer. System programs (called “translators”) are provided for this purpose: They identify the memory locations corresponding to the variables and break the synthetic commands up into sequences of elementary operations. Sometimes the translator (which is then referred to as an “interpreter”) carries out this operation command by command, during the running of the program, whereas at other times (playing the role of a “compiler”) it produces a new program, which will then be run.
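
As an illustration only (every real CPU has its own instruction set, and the commands below are invented for the example), the elementary command cycle just described can be sketched as a toy machine language together with its interpreter:

```python
# A toy machine language: each command reads memory locations, executes
# an operation, writes the result to another location, and control then
# passes to the following command. Real instruction sets differ.
memory = {0: 7, 1: 5, 2: 0}          # addressed locations, like folders

program = [
    ("ADD", 0, 1, 2),  # read locations 0 and 1, add, write into 2
]

def run(program, memory):
    for op, src1, src2, dst in program:
        if op == "ADD":
            memory[dst] = memory[src1] + memory[src2]
    return memory

print(run(program, memory)[2])  # 12
# A higher-level language would express the same thing as "c = a + b,"
# leaving a translator to choose the memory locations.
```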

VI. Telematics and the Internet

When the treatment of data involves transmission over a distance, another neologism is used: “telematics.” In particular, what today is called the Internet belongs to the sphere of telematics (and, in common, everyday understanding, is nearly identified with it). Being a “net of relations,” the Internet is difficult to define, because it cannot easily be captured by the usual definitions pertaining to normal technical objects. The Internet originated in 1969 when the U.S. Department of Defense, wanting to build a telecommunications network impervious to sabotage, found the best solution in utilizing the worldwide telecommunications system, in which messages run along itineraries that are nearly undetectable. This elusiveness and uncontrollability remained when the system was opened up to civilian use. Very schematically, it could be said that the Internet is formed by millions of nodes, constituted by computers (host computers) connected through normal telecommunications networks, the same networks through which telephone calls and radio and television programs travel. At every node, computer terminals (there were around 410 million in the whole world at the end of the year 2000) can be connected through “switched” lines, that is, normal telephone lines (this is the case for domestic use), or through “dedicated lines,” traditional telephone lines that do not pass through the switching devices of the exchanges. It is now more and more common to have connections that pass through special types of cable adapted for fast transmission. All these connections can also form a local network (LAN, local area network, also called an Intranet). The host computers of the nodes are connected permanently to the network (although the itinerary followed by the messages remains unknown). In their memories resides all the information that every user wants to place at the disposition of all the other users, including the mailboxes of the users of electronic mail. The user terminals are connected to the nodes only when their users wish to send their own electronic mail (E-mail), “open” their own mailbox, or navigate through the “web,” that is, explore its information contents in search of something that interests them, accessing the information placed in any node of the world.

Anyone trying to identify all of the material objects which compose the Internet “system” will end up greatly confused. In fact, only a part of the host computers (not all of them) and part of the local connection lines are exclusively dedicated to the net; everything else is shared with other services. And so, if we look at the places where the messages at some moment originate, pass, and arrive, it could be said that all the telecommunications networks of the world, and all the computers that at some point can be connected to them, belong to the Internet. Likewise, if we ask which hardware belongs exclusively to the Internet, the answer would be none. From a technological point of view, then, the unique quality that belongs specifically to the Internet and that characterizes it is, on the whole, immaterial. It is a “protocol,” that is, a set of codified rules that the messages must observe in order to be recognized by the receiver’s terminal.
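
As a deliberately toy illustration (the actual rules of the Internet form the TCP/IP protocol suite, which is far richer), a “protocol” in this sense is simply an agreed form that allows any receiving terminal to recognize and unpack a message:

```python
# A toy protocol, invented for this example: header fields, then the
# payload, separated by "|" (the payload itself must not contain "|").
def pack(sender, receiver, text):
    return f"FROM:{sender}|TO:{receiver}|LEN:{len(text)}|{text}"

def unpack(frame):
    f_from, f_to, f_len, payload = frame.split("|", 3)
    if int(f_len.split(":", 1)[1]) != len(payload):
        raise ValueError("frame does not observe the protocol")
    return f_from.split(":", 1)[1], f_to.split(":", 1)[1], payload

frame = pack("alice", "bob", "hello")
print(unpack(frame))  # ('alice', 'bob', 'hello')
```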

For this reason, the Internet is complete “anarchy”: Nobody is in control, anybody can distribute any kind of message, and the police have great difficulty in preventing it from being used for illicit purposes. The Internet seems to be the result of a spontaneous aggregation, tumultuous and almost without the guidelines inherent in the other technologies, which were created for a definite purpose, namely, to transmit telephone or television signals. This is a paradox in clear contradiction with the traditional rules of technology, which would have every great technical system defined and recognizable in all of its particulars, and its functioning entrusted to an efficient, well-organized directing power. But its anarchy is also contradictory in itself, and reflects an essential contradiction of technology. On the one hand, it appears to be a fantastic thing whereby information is free from every type of conditioning by the various powers that be. On the other hand, it seems profoundly disturbing, because it represents the extreme point of a process of technical “depersonalization” and of technology’s autonomous construction, which is hard to control and, for this reason, potentially prey to evil, inhumane powers. Once again, it is evident that a “culture of mankind” is necessary, one that is mature and profoundly respectful of human dignity, in order to get the best from these tools of information and globalization, and that a wise vision of science and technology is also necessary to direct our future.

VII. The Society of Information

The progressive growth of the concept of information reflects the corresponding increase in the importance of information not only in the world of science and technology, but also in culture and society. Of course, no organized society has been able to do without the circulation of information. But information assumed, in the 20th century, a completely different role and scope. For the entire span of history, the diffusion of information was entrusted to the spoken or written word, to the painted, sculpted, or etched picture, to the tools of material culture, etc. With the advent of the printing press, the number of those who could be reached and made participants in information greatly increased. But now the technology of information, multiplied and diffused by the technical tools that make reproduction and transmission possible, has profoundly changed this framework. The telegraph and the telephone (and the cellular phone of today) have made communication over distance immediate, while photography and cinematography have permitted the unlimited multiplication and diffusion of images. The radio for words, and the television for images, have ensured, at least in industrialized countries, that everybody is made aware of what is taking place in the world “in real time,” that is to say, while things are actually happening or immediately afterward. The social implications of these transformations are at least comparable to those of the conquest of energy, which distinguished the industrial revolution, although the effect on customs and culture is perhaps greater. These tools have potentially cancelled the isolation of individuals, even though such tools cannot, in themselves, overcome loneliness: For this, a truly human culture, and not only a technological one, is necessary. The person at home alone, or the Alpine climber ascending on his own, can call for help at any moment. The inhabitants of small mountain regions have the same possibility of obtaining knowledge as the citizens of the world’s capital cities. Computer networks, and above all the Internet (see above, VI), have enormously increased the possibility of producing and receiving culture. Scholars can consult the online catalogues of all of the world’s great libraries without moving from their desks; their bibliographical research, which once would have required days of tedious work, can now be done in a few minutes, and articles arrive via telefax or through electronic mail.

One hears speculation about the “cabled city” and the “intelligent house.” The first expression refers to the replacement of telephone cables by media with an enormously greater transmission capacity, for example optical fibers. Over the same connection, one could speak on the telephone with several people at once, network connections could be established at high speed, and television programs could be chosen with a freedom previously unknown. The second expression refers to a computer acting as a “control center,” receiving information coming from every part of the house and controlling, from a distance, the opening and closing of doors and windows, the oven in the kitchen, the electric lights, the air conditioning, and so on. Thus, it would not be necessary to move around in order to do domestic chores. The city and the house would then no longer be regarded as material “places,” with certain dimensions and an organization of space within which to move, but rather as networks of connections in which information, rather than the inhabitants, circulates. Motion and space would become, in a sense, “virtual.”
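The “control center” just described can be rendered as a minimal sketch. The following Python fragment is purely illustrative (all device names and methods are hypothetical, invented for the purpose); it shows only the logical idea: a central computer registers household devices and routes on/off commands to them, so that information, rather than the inhabitant, moves through the house.

# Minimal, purely illustrative sketch of the "control center" idea:
# one computer holds a registry of household devices and dispatches
# commands to them. All names here are hypothetical.

class Device:
    """A remotely controllable household appliance."""
    def __init__(self, name):
        self.name = name
        self.is_on = False

    def switch(self, on):
        self.is_on = on
        print(f"{self.name}: {'on' if on else 'off'}")


class ControlCenter:
    """Central computer that registers devices and routes commands to them."""
    def __init__(self):
        self.devices = {}  # device name -> Device

    def register(self, device):
        self.devices[device.name] = device

    def command(self, name, on):
        # Only information travels: the inhabitant need not move.
        self.devices[name].switch(on)


home = ControlCenter()
for name in ("kitchen oven", "hall lights", "air conditioning"):
    home.register(Device(name))

home.command("kitchen oven", True)   # turn the oven on from a distance
home.command("hall lights", False)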

This is a profound transformation that involves technology, the economy, and society together: We have passed from an energy society to an information society. Until the 1950s, the main indicator of the well-being of a population was its consumption of energy. Then the energy crisis and environmental pollution weakened the conviction that quality of life is positively correlated with the use of energy. Other indicators have been variously adopted: first the consumption of printing paper, then the number of computers, and finally the number of connections to the Internet. Ever larger quantities of energy continue to be used, it is true, but this now takes place with an increasing sense of guilt, amid the warnings of ever more credible scholars. Meanwhile, the trends of the stock market are determined more and more by the stocks of the information industry rather than by those of the energy industry. For this reason, Norbert Wiener (1894-1964), the founder of cybernetics (one of the great fields of information engineering), spoke of a “second industrial revolution.” Actually, this expression had already been in use since the late 19th century to describe the passage from steam machines to electric machines, but Wiener’s meaning seems more pertinent.

Faced with these rapid and incisive transformations, sociology too has turned its attention to the society of information, and it has done so from various viewpoints. Authors such as H.M. McLuhan, J. Habermas, N. Luhmann, E. Morin, K.O. Apel, and K. Popper himself have occupied themselves with the communication of information as a phenomenon capable of generating not only a new style of social life, but also a new culture. The concept of a “global village” is increasingly used by sociologists and scholars: In such a “village square,” people could be aware of what is happening in every sphere of life and present their own ideas and initiatives. But there exists a certain circularity between anthropology and the society of information. On the one hand, an information society is based (often implicitly) on a certain vision of human beings (at times a reductive one), and it conveys a specific image of the person: the human being as consumer, as player, as generator of economic profit, as the subject of cultural and scientific networks, etc. On the other hand, the image of the human being, of his or her lived experience, seems today to be shaped by the logic of the information society: Feelings, emotions, and opinions evolve under the impetus given by media communications, thereby generating “new cultures” and “new values.” It is sufficient to recall the debate about the influence of “virtual reality” on our behavior, needs, and desires, or the change undergone by the notion of “memory,” which has progressively moved from referring to the sphere of the life of the spirit to that of material technology.

VIII. Information within Theological Reflections

Theology, for its part, has taken a specific interest in information under two principal aspects: the new developments brought about by the society of information, and the epistemological significance of information in the natural world.

Concerning the first aspect, the use of the mass media has always been of central interest to religious life, and such media have been fervently used for the aims of catechesis and the promotion of Christian culture. One thinks, in modern times, of the Church’s immediate utilization of the press and the radio, but one could go back even to the use of sacred images as a vehicle for the transmission of the contents of the faith. In recent decades, this interest has also moved to the level of theoretical reflection, which seems to develop essentially along two lines. The first is the study of the links between the theology of Revelation and communication, between the theology of the word and the philosophy of information, which has ultimately led to a new discipline known today as the “theology of communication.” The second concerns reflection on how the message of the Christian Gospel (characterized by personal contact, witness of life, and the rejection of any kind of manipulation) can be preserved when using the modern means of producing and diffusing information. Many church organizations increasingly use technological tools for the production and diffusion of religious information; from an institutional point of view, after the Second Vatican Council, the Roman Catholic Church created the Pontifical Council for Social Communications. From a pastoral and theological point of view, the themes concerning the right to information, its relationship to truth and justice, and its equal distribution among the earth’s inhabitants belong to the ethics of information and the ethics of scientific work. Once these ethical guarantees are assured, there is no longer any reason to see an opposition between the “society of information” and Christian anthropology. Moreover, it is precisely Christian anthropology that enhances the relational conception of human beings: Communication, the reciprocal enrichment of information, and the exchange of giving and receiving are seen as ways to develop and perfect the human person who, according to the Christian message, has an intrinsically social nature and can be understood only in constructive relation with others. The Council document Gaudium et Spes (1965), after recognizing that human relationships, especially those based on service and on charity, reveal themselves to be “of great importance for men who are increasingly more dependent upon each other and for a world that moves increasingly more towards unification,” recalls that the human being is social by nature and open to communication because he or she is the image of a God who has revealed Himself as the intimate communion of three Persons (cf. n. 24). The Second Vatican Council also dedicated one of its first documents, the decree Inter Mirifica (1963), to the theme of the tools of social communication.

The second theological aspect, that which concerns the presence of information in the universe, enters into dialogue with the philosophy of nature. While recognizing a diversity of approaches, the Christian perspective of a world created by the divine Word, who is the source of intelligibility and meaning, offers a connection with the perspective that philosophy, starting from the analysis of science, indicates regarding the intelligibility and order of nature and the coordination shown by many of its processes. As stated previously (see above, III.3), one could consider the information-order or the information-finality manifested by nature and its laws (at its various physical, chemical, and biological levels) as information that governs the existence and the properties of the whole universe, that is, of a unique system considered in its entirety. One could also speak of a Cause or a Reason distinct from the universe, as the source of the information contained in it and transported by it. In this way, theology can apply the concept of information to the relationship between the created world and its Creator, not far, perhaps, from the message of Genesis, when it speaks of God who “gives form” to our first parents (cf. Gen 2:7, 2:22), nor from the words of Isaiah, when he says that God has “formed” the heavens and “shaped” the earth, not so that it would remain a desolate region, but designing it to be lived in (cf. Is 45:18).

Bibliography: 

Science and Philosophy: E. AGAZZI, P. ROSSI (eds.), Cibernetica e teoria dell’informazione (Brescia: La Scuola, 1978); J.R. BENIGER, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, MA: Harvard University Press, 1986); L. BRILLOUIN, La science et la théorie de l’information (Paris: Masson, 1959); E. CARPENTER, M. MCLUHAN (eds.), Explorations in Communication: An Anthology (London: Cape, 1970); L. FLORIDI, L’estensione dell’intelligenza. Guida all’informatica per filosofi (Rome: Armando, 1996); G.O. LONGO, Il nuovo Golem. Come il computer cambia la nostra cultura (Bari: Laterza, 1998); M. MCLUHAN, The Global Village: Transformations in World Life and Media in the 21st Century (Oxford: Oxford University Press, 1989); J.R. PIERCE, An Introduction to Information Theory: Symbols, Signals & Noise (New York: Dover, 1980); C.E. SHANNON, W. WEAVER, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1975); C. WASSERMANN, R. KIRBY, B. RORDORF (eds.), Theology of Information: Proceedings of the Third European Conference on Science and Theology (Geneva: Labor et Fides, 1992); E.I. WATKIN, A Philosophy of Form (London: Sheed and Ward, 1950); N. WIENER, Cybernetics: Or Control and Communication in the Animal and the Machine (New York: M.I.T. Press, 1961).

Theological Aspects: M.C. CARNICELLA, Comunicazione ed evangelizzazione nella Chiesa (Milan: Paoline, 1998); A. DULLES, “Il Vaticano II e le comunicazioni,” in Vaticano II. Bilancio e prospettive 25 anni dopo, R. Latourelle (ed.) (Assisi: Cittadella, 1987); F.J. EILERS, R. GIANNATELLI (eds.), Chiesa e comunicazione sociale: i documenti fondamentali (Turin: LDC, 1996); G. PANTEGHINI, Quale comunicazione nella Chiesa? (Bologna: Dehoniane, 1993); PONTIFICAL COUNCIL FOR SOCIAL COMMUNICATIONS, Aetatis Novae, February 20, 1992, in EV 13, 1002-1105; PONTIFICAL COUNCIL FOR SOCIAL COMMUNICATIONS, The Church and Internet, February 28, 2002, and Ethics in Internet, February 28, 2002; PONTIFICAL COUNCIL FOR SOCIAL COMMUNICATIONS, Ethics in Advertising, February 22, 1997; P. PRINI (ed.), Informatica e Pastorale (Brescia: Morcelliana, 1987); G. SANTANIELLO, Libertà etica garanzia dell’informazione (Casale Monferrato: Piemme, 1997).