Entropy: Primer and Historical Notes

{en'-troh-pee}

Entropy is the scientific term for the degree of randomness or disorder in processes and systems. In the physical sciences the concept of entropy is central to the descriptions of the THERMODYNAMICS, or heat-transfer properties, of molecules, heat engines, and even the universe as a whole. It is also useful in such diverse fields as communications theory and the social and life sciences.

Entropy was first defined by the German physicist Rudolf CLAUSIUS in 1865, based in part on earlier work by Sadi Carnot and Lord Kelvin. Clausius found that even for "perfect," or completely reversible, exchanges of heat energy between systems of matter, an inevitable loss of useful energy results. He called this loss an increase in entropy and defined the increase as the amount of heat transfer divided by the absolute temperature at which the process takes place. Because few real processes are truly reversible, actual entropy increases are even greater than this quantity. This principle is one of the basic laws of nature, known as the Second Law of Thermodynamics.
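
As a rough numerical illustration of Clausius's definition (a sketch only; the heat and temperature values below are chosen for convenience and are not from the article), the entropy increase for a reversible heat transfer is simply the heat divided by the absolute temperature:

    # Entropy change for a reversible transfer of heat q at absolute temperature T.
    # Illustrative values: 1000 J of heat absorbed by a system held at 300 K.
    q = 1000.0        # heat added, in joules
    T = 300.0         # absolute temperature, in kelvins
    delta_S = q / T   # entropy increase, in joules per kelvin
    print(f"delta S = {delta_S:.2f} J/K")   # -> delta S = 3.33 J/K

For a real, irreversible transfer the entropy increase would exceed this value, as the article notes.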

The First Law of Thermodynamics states that energy is conserved; no process may continuously release more energy than it takes in, or have an efficiency greater than 100%. The Second Law is even more restrictive, implying that all processes must operate at less than 100% efficiency due to the inevitable entropy rise from the rejection of waste heat. For example, large coal-fired electric power plants inevitably waste about 67% of the energy content of the coal. Other heat engines, such as the automobile engine and the human body, are even less efficient, wasting about 80% of available energy. An imaginary PERPETUAL MOTION MACHINE would have to defy these laws of nature in order to function. Such a machine, having its own output as its only energy source, would have to be 100% efficient to remain in operation. Friction always makes this impossible, for it converts some of the energy to waste heat.
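
The percentages quoted above can be checked with simple energy bookkeeping. The sketch below assumes a plant efficiency of about 33%, consistent with the figure of roughly 67% waste cited in the article; the fuel-energy number is arbitrary:

    # Energy accounting for a heat engine of a given efficiency.
    fuel_energy = 1000.0    # energy content of the fuel burned, in megajoules (illustrative)
    efficiency = 0.33       # roughly a large coal-fired plant, per the article
    useful_work = efficiency * fuel_energy
    waste_heat = fuel_energy - useful_work
    print(f"useful work: {useful_work:.0f} MJ, waste heat: {waste_heat:.0f} MJ")
    # -> useful work: 330 MJ, waste heat: 670 MJ (about 67% of the fuel energy)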

Another manifestation of entropy is the tendency of systems to move toward greater confusion and disorder as time passes. Natural processes move toward equilibrium and homogeneity rather than toward ordered states. For example, a cube of sugar dissolved in coffee does not naturally reassemble as a cube, and perfume molecules in the air do not naturally gather again into a perfume bottle. Similarly, chemical reactions are naturally favored in which the products contain a greater amount of disorder (entropy) than the reactants. An example is the combustion of a common fuel. Such reactions will not spontaneously reverse themselves. This tendency toward disorder gives a temporal direction--the "arrow of time"--to natural events.

A consequence of nature's continual entropy rise may be the eventual degrading of all useful energy in the universe. Physicists theorize that the universe might eventually reach a temperature equilibrium in which disorder is at a maximum and useful energy sources no longer exist to support life or even motion. This "heat death of the universe" would be possible only if the universe is physically bounded and is governed as a whole by the same laws of thermodynamics observed on earth.

The concept of entropy also plays an important part in the modern discipline of INFORMATION THEORY, in which it denotes the tendency of communications to become confused by noise or static. The American mathematician Claude E. SHANNON first used the term for this purpose in 1948. An example of this is the practice of photocopying materials. As such materials are repeatedly copied and recopied, their information is continually degraded until they become unintelligible. Whispered rumors undergo a similar garbling, which might be described as psychological entropy. Such degradation also occurs in telecommunications and recorded music. To reduce this entropy rise, the information may be digitally encoded as strings of zeros and ones, which are recognizable even under high "noise" levels, that is, in the presence of additional, unwanted signals.
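
A small sketch of why digital encoding resists this kind of degradation: even when substantial random noise is added to a string of zeros and ones, each received value can usually be classified correctly by comparing it with a threshold. The noise level and threshold below are arbitrary choices for illustration:

    import random

    bits = [0, 1, 1, 0, 1, 0, 0, 1]                    # original digital message
    noisy = [b + random.gauss(0, 0.2) for b in bits]   # signal corrupted by random noise
    recovered = [1 if x > 0.5 else 0 for x in noisy]   # threshold decision at the receiver
    print(recovered == bits)                           # usually True for moderate noise levels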

The onset and evolutionary development of life and civilization on Earth appear to some observers to be in conflict with the Second Law's requirement that entropy can never decrease. Others respond that the Earth is not an isolated system, because it receives useful energy from the Sun, and that the Second Law allows for local entropy decreases as long as these are offset by greater entropy gains elsewhere. For example, although entropy decreases inside an operating refrigerator, the waste heat rejected by the refrigerator causes an overall entropy rise in the kitchen. Life on Earth may represent a local entropy decrease in a universe where the total entropy always rises. Ongoing work by the Belgian chemist Ilya PRIGOGINE and others is aimed at broadening the scope of traditional thermodynamics to include living organisms and even social systems.

Gary Settles

Bibliography: Carnap, Rudolf, Two Essays on Entropy, ed. by Abner Shimony (1978); Faber, M., and Niemes, H., Entropy, Environment, and Resources (1987); Fenn, John B., Engines, Energy and Entropy: A Thermodynamics Primer (1982); Kubat, D., Entropy and Information in Science and Philosophy (1975); Rifkin, Jeremy, and Howard, Ted, Entropy: A New World View (1980).

Thermodynamics

Thermodynamics is the branch of the physical sciences that studies the transfer of heat and the interconversion of heat and work in various physical and chemical processes. The word thermodynamics is derived from the Greek words therme (heat) and dynamis (power). The study of thermodynamics is central to both chemistry and physics and is becoming increasingly important in understanding biological and geological processes. There are several subdisciplines within this blend of chemistry and physics: classical thermodynamics, which considers the transfer of energy and work in macroscopic systems--that is, without any consideration of the nature of the forces and interactions between individual (microscopic) particles; statistical thermodynamics, which considers microscopic behavior, describing energy relationships in terms of the statistical behavior of large groups of individual atoms or molecules and relying heavily on the mathematical implications of quantum theory; and chemical thermodynamics, which focuses on energy transfer during chemical reactions and the work done by chemical systems (see PHYSICAL CHEMISTRY).

Thermodynamics is limited in its scope. It emphasizes the initial and the final state of a system (the system being all of the components that interact) and the path, or manner, by which the change takes place, but it provides no information concerning either the speed of the change or what occurs at the atomic and molecular levels during the course of the change.

Development of Thermodynamics

The early studies of thermodynamics were motivated by the desire to derive useful work from heat energy. The first reaction turbine was described by Hero (or Heron) of Alexandria (1st century AD); it consisted of a pivoted copper sphere fitted with two bent nozzles and partially filled with water. When the sphere was heated over a fire, steam would escape from the nozzles and the sphere would rotate. The device was not designed to do useful work but was instead a curiosity, and the nature of HEAT AND HEAT TRANSFER at that time remained mere speculation. The changes that occur when substances burn were initially accounted for, in the late 17th century, by proposing the existence of an invisible material substance called PHLOGISTON, which was supposedly lost when combustion took place.

In 1789, Antoine LAVOISIER prepared oxygen from mercuric oxide; in doing so he demonstrated the law of conservation of mass and thus overthrew the phlogiston theory. Lavoisier proposed that heat, which he called caloric, was an element, probably a weightless fluid surrounding the atoms of substances, and that this fluid could be removed during the course of a reaction. The observation that heat flowed from warmer to colder bodies when such bodies were placed in thermal contact was explained by proposing that particles of caloric repelled one another. At about the same time as these chemical advances, the actual conversion of heat to useful work was progressing as well. At the end of the 17th century Thomas Savery invented a machine to pump water from a well, using steam and a system of tanks and hand-operated valves. Savery's pump is generally hailed as the first practical application of steam power. Thomas Newcomen developed Savery's invention into the first piston engine in 1712. The design of the steam-powered piston engine was further refined by James WATT during the last quarter of the 18th century.

Mechanical Equivalent of Heat

The downfall of the caloric theory was initiated by Sir Benjamin Thompson, Count Rumford. After spending his early years in America and England, Thompson became a minister of war and minister of police in Bavaria. In 1798, while overseeing the boring of cannon at the Munich Arsenal, Thompson noted that an apparently inexhaustible amount of heat was produced during the procedure. By having the cannon bored underwater, he found that a given quantity of water always required the same amount of time to come to a boil. If the caloric theory were correct, there would come a time when all of the caloric had been removed from the atoms of the cannon and no more heat would appear. Instead, Thompson interpreted his results as a demonstration that work was being converted into heat, just as the steam engines of his time converted heat into work. In 1799, Sir Humphry DAVY demonstrated that pieces of ice melt more rapidly when rubbed together, even in a vacuum, providing additional support to the idea that work could be converted into heat. A precise determination of the mechanical equivalent of heat was reported in 1849 by James JOULE. With the use of very precise homemade thermometers, Joule found that stirring water (a mechanical work input) raised its temperature (a heat output). His conversion factor of 0.241 calories of heat energy equaling one joule of work was based on the observation that to generate one calorie of heat, a 1-kg weight must fall through a distance of 42.4 cm (the work performed by the falling weight was used to mechanically stir the water). Joule also electrically heated gases and measured the resulting pressure changes, obtaining similar results for the interconversion of work and heat.
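
Joule's figures can be checked against the modern relation between work and gravitational potential energy, w = mgh. The sketch below assumes g = 9.81 m/s^2, a value not stated in the article:

    # Work done by a 1-kg weight falling 42.4 cm, and the implied calories-per-joule factor.
    m = 1.0        # mass, in kilograms
    g = 9.81       # gravitational acceleration, in m/s^2 (assumed)
    h = 0.424      # fall distance, in meters (42.4 cm, as in Joule's result)
    work = m * g * h                                    # joules of work per calorie of heat
    print(f"work = {work:.2f} J per calorie")           # -> about 4.16 J
    print(f"equivalently {1/work:.3f} cal per joule")   # -> about 0.240 cal, close to Joule's 0.241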

The First Law of Thermodynamics

The findings of Joule and others led Rudolf CLAUSIUS, a German physicist, to state in 1850 that "In any process, energy can be changed from one form to another (including heat and work), but it is never created or destroyed." This is the first law of thermodynamics. An adequate mathematical statement of this first law is delta E = q - w, where delta E is the change (delta) in internal energy (E) of the system, q is the heat added to the system (a negative value if heat is taken away), and w is the work done by the system. In thermodynamic terms, a system is defined as a part of the total universe that is isolated from the rest of the universe by definite boundaries, such as the coffee in a covered Styrofoam cup, a closed room, a cylinder in an engine, or the human body. The internal energy, E, of such a system is a state function; this means that E depends only on the state of the system at a given time, and not on how that state was achieved.

If the system considered is a chemical system of fixed volume--for example, a substance in a sealed bulb--the system cannot do work (w) in the traditional sense, as could a piston expanding against an external pressure. If no other type of work (such as electrical work) is done on or by the system, then the increase in internal energy is equal to the heat absorbed at constant volume (the volume of the system remains constant throughout the process). If the heat is instead absorbed at constant pressure (as is the case for any system open to the atmosphere), the heat absorbed equals the change in another state function, H, which is closely related to the internal energy. Changes in H (the heat content) are called changes in ENTHALPY. In 1840, before Joule had made his determinations of the mechanical equivalent of heat, Germain Henri Hess reported the results of experiments indicating that the heat evolved or absorbed in a given chemical reaction (delta H) is independent of the particular manner (or path) in which the reaction takes place. This generalization is now known as HESS'S LAW and is one of the basic postulates of THERMOCHEMISTRY.
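
A minimal numerical sketch of the first law as stated above; the heat and work values are arbitrary illustrations:

    def delta_E(q, w):
        """First law: change in internal energy = heat added to the system - work done by the system."""
        return q - w

    # A system absorbs 500 J of heat and does 200 J of work on its surroundings.
    print(delta_E(q=500.0, w=200.0))   # -> 300.0 J increase in internal energy

    # At constant volume no pressure-volume work is done, so delta E equals the heat absorbed.
    print(delta_E(q=500.0, w=0.0))     # -> 500.0 J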

The Second Law of Thermodynamics

The steam engine developed by James Watt in 1769 was a type of heat engine, a device that withdraws heat from a heat source, converts some of this heat into useful work, and transfers the remainder of the heat to a cooler reservoir. A major advance in the understanding of the heat engine was provided in 1824 by N. L. Sadi Carnot, a French engineer, in his discussion of the cyclic nature of the heat engine. This theoretical approach is known as the CARNOT CYCLE. A result of the analysis of the heat engine in terms of the Carnot cycle is the second law of thermodynamics, which may be stated in a variety of ways. According to Rudolf Clausius, "It is impossible for a self-acting machine, unaided by external agency, to convey heat from a body at one temperature to another body at a higher temperature." William Thomson (Lord KELVIN), a British thermodynamicist, proposed that "it is impossible by a cyclic process to take heat from a reservoir and convert it into work without, in the same operation, transferring heat from a hot to a cold reservoir."
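
A standard result of the Carnot analysis, not quoted in the article itself, is that no engine operating between a hot reservoir at absolute temperature T_hot and a cold reservoir at T_cold can exceed the efficiency 1 - T_cold/T_hot. The reservoir temperatures below are illustrative:

    def carnot_efficiency(T_hot, T_cold):
        """Maximum fraction of the heat drawn from the hot reservoir that can become work."""
        return 1.0 - T_cold / T_hot

    # Steam at 800 K rejecting waste heat to surroundings at 300 K (illustrative values).
    eta = carnot_efficiency(T_hot=800.0, T_cold=300.0)
    print(f"Carnot limit: {eta:.2%}")   # -> 62.50%; real engines achieve less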

Entropy

The second law of thermodynamics leads to a new state function S, the ENTROPY of a system. The increase in the entropy of a system when heat is added to it must be at least q/T, where q is the added heat and T is the absolute temperature. If the heat is added in an idealized (reversible) process, delta S = q/T, but for real (irreversible) processes the entropy change is always greater than this value. Ludwig BOLTZMANN, an Austrian physicist, demonstrated the significance of entropy on the molecular level in 1877, relating entropy to disorder. J. Willard GIBBS, an American mathematical physicist, referred to entropy as a measure of the "mixed-up-ness" of the system.
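
Boltzmann's molecular interpretation is usually summarized by the relation S = k ln W, where W is the number of microscopic arrangements consistent with the macroscopic state and k is Boltzmann's constant. The relation itself is not quoted in the article, so the sketch below should be read as a standard illustration rather than part of the text:

    import math

    k = 1.380649e-23   # Boltzmann's constant, in joules per kelvin

    def boltzmann_entropy(W):
        """Entropy of a state realizable in W equally likely microscopic ways."""
        return k * math.log(W)

    # Doubling the number of accessible arrangements raises S by k * ln(2).
    print(boltzmann_entropy(2) - boltzmann_entropy(1))   # -> about 9.57e-24 J/K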

The second law of thermodynamics may also be stated in terms of entropy: in a spontaneous irreversible process, the total entropy of the system and its surroundings always increases; for any process, the total entropy of a system and its surroundings never decreases.

The Third Law of Thermodynamics

Entropy as a measure of disorder is a function of temperature, with increasing temperature resulting in an increase in entropy (a positive delta S). The third law of thermodynamics considers perfect order: it states that the entropy of a perfect crystal is zero only at ABSOLUTE ZERO. This reference point allows absolute entropy values to be expressed for compounds at temperatures above absolute zero.

Equilibrium and Free Energy

While thermodynamics does not deal with the speed of a chemical reaction, the driving force (or spontaneity) of a chemical reaction is a thermodynamic consideration. A reaction is said to be spontaneous if, when the reactants and the products are mixed together under carefully specified conditions, the quantity of the products increases while the quantity of the reactants decreases. The spontaneity (or, less precisely, the direction) of a chemical reaction may be predicted by an evaluation of thermodynamic functions. Marcellin Berthelot, a French chemist, and Julius Thomsen, a Danish chemist, proposed in 1878 that every chemical change proceeds in such a direction that it will produce the most heat; in other words, all spontaneous reactions are those that result in a decrease in enthalpy, H, and are thus exothermic. This statement is incorrect, for many exceptions are known in which chemical reactions are spontaneous (proceed to more products than reactants) even though they are endothermic (result in an increase in enthalpy).

The Gibbs Free Energy Function

At constant temperature and pressure, chemical reactions always proceed in the direction that results in a decrease in the free energy of the system. The free energy of the system, G, is also a state function. (For many years free energy was designated by the symbol F, but it is now called the Gibbs free energy, for J. Willard Gibbs, and is given the symbol G.) The free energy is defined by G = H - TS; at constant temperature, delta G = delta H - T delta S. A reaction is spontaneous if delta G is negative, that is, if the reaction proceeds to a state of lower free energy. A negative delta G may be the result of a negative delta H (an exothermic reaction) and/or a positive T delta S (the absolute temperature multiplied by a positive delta S), indicative of an increase in the entropy (or disorder) of the system. Spontaneous chemical reactions continue until the minimum of free energy for the system is reached, so that, with reference to further reaction, delta G = 0. At this point a dynamic equilibrium is reached in the system (see CHEMICAL EQUILIBRIUM AND KINETICS). As long as the reaction conditions remain unchanged, no macroscopic change will be noted in the system; there will be no further change in the amounts of reactants and products even though, microscopically, the chemical reactions continue, because the reactants are being formed at the same rate as the products. Equilibrium, in a thermodynamic sense, is defined by delta G = 0.
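
The sign analysis described above can be made concrete. The sketch below uses an endothermic process with a positive entropy change; the numbers are purely illustrative, not taken from the article, and the crossover temperature is simply delta H divided by delta S:

    def delta_G(dH, T, dS):
        """Gibbs free energy change at constant temperature: delta G = delta H - T * delta S."""
        return dH - T * dS

    dH = 50000.0   # J/mol, endothermic (illustrative)
    dS = 150.0     # J/(mol K), disorder increases (illustrative)
    for T in (300.0, 333.3, 360.0):
        print(T, round(delta_G(dH, T, dS)))   # positive, near zero, then negative
    print("crossover temperature:", round(dH / dS), "K")   # spontaneous only above about 333 K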

Oxidation-Reduction Reactions

An efficient conversion of energy into work is accomplished by electrochemical cells (see ELECTROCHEMISTRY). An OXIDATION-REDUCTION REACTION takes place spontaneously in such an arrangement that the free energy released is converted into electrical energy. Non-spontaneous oxidation-reduction reactions (reactions with a positive value of delta G) can be made to occur by doing work on the system by means of an external energy source (usually a DC electrical power supply). This process, which causes oxidation-reduction reactions to proceed in the reverse of their spontaneous direction, is called ELECTROLYSIS and was developed by Michael FARADAY in 1833.

Changes in State

Thermodynamics also studies changes in physical state, such as solid ice becoming liquid water. At temperatures above 0 deg C and at atmospheric pressure, ice spontaneously melts, an endothermic change (positive delta H) that is driven by a positive delta S; that is, liquid water is much more disordered than solid water. At 0 deg C and atmospheric pressure, solid ice and liquid water exist in PHASE EQUILIBRIUM (delta G = 0).
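
The melting of ice makes the equilibrium condition delta G = 0 concrete. The sketch below uses commonly tabulated reference values for the molar heat of fusion of ice (about 6010 J/mol) and the corresponding entropy of fusion (about 22 J/mol K); these figures are standard data, not taken from the article:

    # delta G = delta H - T * delta S for the melting of one mole of ice.
    dH_fus = 6010.0   # J/mol, heat absorbed on melting (standard tabulated value)
    dS_fus = 22.0     # J/(mol K), entropy gained on melting (roughly dH_fus / 273.15)
    for T in (263.15, 273.15, 283.15):          # -10 deg C, 0 deg C, +10 deg C
        dG = dH_fus - T * dS_fus
        print(f"T = {T:.2f} K, delta G = {dG:+.0f} J/mol")
    # Positive below 0 deg C (ice stable), roughly zero at 0 deg C (phase equilibrium),
    # negative above 0 deg C (melting is spontaneous).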

In 1876, Gibbs established a relationship between the number of phases present in a system, the number of components, and the number of degrees of freedom (the number of variables, such as temperature and pressure, whose values must be specified in order to characterize the system). A phase may be considered a homogeneous region of matter separated from other homogeneous regions by phase boundaries. For a pure substance, three phases are generally considered: solid, liquid, and vapor. Other types of phases exist, such as the two solid crystalline forms of carbon (graphite and diamond), and the ionized gaseous phase of matter known as plasma (see PLASMA PHYSICS).

If a sample of a pure substance is a solid, and heat (q) is added to the substance, the temperature (T) will increase, indicating an increase in the heat content (H). The temperature of the solid will continue to increase until the solid begins to melt, at which point the two phases, solid and liquid, coexist in equilibrium (delta G = 0). This is the melting point and is reported at atmospheric pressure. The heat necessary to convert one mole of a solid substance into one mole of its liquid form is the molar heat of fusion. After the solid has been converted to liquid, additional input of heat into the system will cause an increase in temperature until the liquid and the gaseous form of the substance coexist in equilibrium at atmospheric pressure. This temperature is called the boiling point. The heat necessary to convert one mole of a liquid substance into one mole of its gaseous form is the molar heat of vaporization.

There is one set of conditions (temperature and pressure, in the above example) at which the solid, liquid, and gas may coexist in equilibrium; this is called the triple point. (See also CRITICAL CONSTANTS.) A liquid-gas equilibrium may exist at a number of different temperatures. In 1834, the French engineer B. P. E. Clapeyron carried out studies on liquids and gases; these studies were later refined by Clausius. The relationship between the equilibrium vapor pressure of a liquid, its temperature, and its molar heat of vaporization is called the Clausius-Clapeyron equation.
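
The Clausius-Clapeyron equation is named above but not written out. In one common integrated form it relates two points on the liquid-vapor equilibrium curve: ln(P2/P1) = -(delta H_vap/R)(1/T2 - 1/T1). The sketch below applies this form to water, using a standard molar heat of vaporization of about 40,700 J/mol (an assumed reference value) to estimate the vapor pressure at 90 deg C from the known normal boiling point:

    import math

    R = 8.314          # gas constant, J/(mol K)
    dH_vap = 40700.0   # molar heat of vaporization of water, J/mol (assumed reference value)

    P1, T1 = 101.325, 373.15   # vapor pressure (kPa) and temperature (K) at the normal boiling point
    T2 = 363.15                # 90 deg C
    P2 = P1 * math.exp(-dH_vap / R * (1.0 / T2 - 1.0 / T1))
    print(f"estimated vapor pressure at 90 deg C: {P2:.0f} kPa")   # -> roughly 70 kPa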

Equation of State

Experimental measurements on solids, liquids, and gases have indicated that the volume (V) occupied by a substance depends on the absolute temperature (T), the pressure (P), and the amount of the substance, usually expressed in moles (n). If three of these properties are known, the fourth is fixed by a relationship called an equation of state. The equation of state for a gas is PV = nRT, where R is a proportionality constant in appropriate units (see GAS LAWS). Gases that obey this equation are called ideal gases. The equation is obeyed by real systems when the distances between the particles of the gas are large (high V and T, low P and n). Under this condition the volume occupied by the gas molecules or atoms is small compared to the total volume, and the attractive and repulsive forces between the atoms and molecules are negligible. Real gases frequently show deviations from ideal behavior; in 1873, Johannes D. van der Waals proposed a modification of this equation to correct for non-ideal behavior. An extreme example of the failure of the ideal equation is its prediction that the product of the pressure and the volume of a gas falls to zero at absolute zero. In reality, of course, any gas will liquefy at low temperature, and the equation of state of a gas no longer applies.

The non-ideal behavior of gases has an important thermodynamic consequence. If an ideal gas is allowed to pass through an orifice from a region of higher pressure to one of lower pressure, no heat is evolved or absorbed, no change in internal energy takes place, and therefore there is no change in temperature. Real gases, however, behave differently. All real gases, except for hydrogen and helium, cool when expanded in this fashion. If no heat is transferred (an ADIABATIC PROCESS, one in which q = 0), the internal energy of the system decreases because of the work done against the attractive forces between the gas molecules as they move apart. This phenomenon is called the Joule-Thomson effect and has significance in such areas as refrigeration, the liquefaction of gases, and artificial snow production.
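
A sketch comparing the ideal equation of state with van der Waals's correction. The a and b coefficients below are commonly tabulated values for carbon dioxide and should be treated as assumed reference data, not figures from the article:

    # Pressure of one mole of CO2 confined to 1.0 L at 300 K.
    R = 0.082057     # gas constant, L atm / (mol K)
    n, V, T = 1.0, 1.0, 300.0

    P_ideal = n * R * T / V                              # PV = nRT
    a, b = 3.59, 0.0427                                  # van der Waals constants for CO2 (assumed)
    P_vdw = n * R * T / (V - n * b) - a * (n / V) ** 2   # (P + a n^2/V^2)(V - nb) = nRT
    print(f"ideal: {P_ideal:.1f} atm, van der Waals: {P_vdw:.1f} atm")
    # -> about 24.6 atm versus about 22.1 atm; attraction between molecules lowers the pressure.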

Perpetual Motion Machines and Heat Engines

PERPETUAL MOTION MACHINES are devices that would create energy out of nothing; such devices have been sought unsuccessfully for centuries. The impossibility of constructing a perpetual motion machine was, in fact, an early basis for verification of the first law of thermodynamics, which states that heat and work may be interconverted but that no machine, once set in motion, can continuously produce more useful work or energy than it consumes. A machine that violates the first law in this way is called a perpetual motion machine of the first kind. Another kind of perpetual motion machine is one that would be 100% efficient at converting heat into work; it could, for example, extract heat from ocean waters to run the boilers of an ocean vessel, returning the cooled water to the ocean. This would be equivalent to transferring heat from a reservoir at a lower temperature to one at a higher temperature without work being done on the system. Such a device is called a perpetual motion machine of the second kind and is forbidden by the second law of thermodynamics. A perpetual motion machine, if it could be built, would be the ultimate heat engine.

The Ultimate Source of Energy

The first law of thermodynamics has been called the law of conservation of energy. Lavoisier had stated the law of conservation of mass at the end of the 18th century. Relativity physics has demonstrated that the real conservation law combines the two: matter and energy may be interconverted according to Einstein's equation E = mc^2, where E is the energy in ergs, m is the mass in grams, and c is the speed of light in centimeters per second. All energy ultimately originates from the conversion of mass into energy. In the burning of gasoline, the mass of the combustion products is slightly less than the mass of the reactants, by an amount precisely proportional to the amount of energy (heat) produced.
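
The size of the mass change in an ordinary chemical reaction can be estimated from E = mc^2. The heat of combustion used below, roughly 46 million joules per kilogram of gasoline, is an assumed typical value rather than a figure from the article:

    c = 2.998e8          # speed of light, in m/s
    E = 46e6             # heat released by burning 1 kg of gasoline, in joules (assumed typical value)
    delta_m = E / c**2   # mass converted to energy
    print(f"mass lost: {delta_m:.2e} kg")   # -> about 5e-10 kg, far too small to weigh directly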

Some of this heat may be converted into useful work, and some must be lost. Nuclear power uses nuclear reactions as a source of heat to power heat engines (turbines), which convert this heat energy into other energy forms (for example, electricity). In nuclear reactions, substantially more mass is converted into energy than in chemical reactions; thus, far less fuel is required to provide an equivalent amount of energy. As always, the goal of the thermodynamicist is to convert this heat into work as efficiently as possible.

Statistical Thermodynamics

The major concern of thermodynamics is the state functions and the properties of the macroscopic system. Statistical thermodynamics deals with the distribution of the various atoms and molecules that make up the system and with the energy levels of these particles. On the atomic and molecular level the second law of thermodynamics is a statistical law; it expresses a tendency toward randomness and disorder in a system having a large number of particles. Statistical thermodynamics uses probability functions and complex mathematical methods to express thermodynamic functions in accord with the KINETIC THEORY OF MATTER.

Norman V. Duffy

Bibliography: Adkins, Clement J., Equilibrium Thermodynamics, 3d ed. (1984); Andrews, Frank C., Thermodynamics: Principles and Applications (1971); Dickerson, Richard E., et al., Chemical Principles (1974); Fermi, Enrico, Thermodynamics (1937); Hatsopoulos, George N., and Keenan, Joseph H., Principles of General Thermodynamics (1965; repr. 1981); Haywood, R. W., Equilibrium Thermodynamics (1980; repr. 1990); Johnston, R. M., et al., Elements of Applied Thermodynamics, 5th ed. (1992); Moore, Walter J., Basic Physical Chemistry (1983); Mott-Smith, Morton, The Concept of Energy Simply Explained (1934); Rolle, K. A., Introduction to Thermodynamics, 2d ed. (1980); Sonntag, Richard E., and Van Wylen, Gordon J., Introduction to Thermodynamics, 2d ed. (1982); Sussman, M. V., Elementary General Thermodynamics (1972); Zemansky, Mark W., and Dittman, Richard, Heat and Thermodynamics, 6th ed. (1981).

Information Theory

Information theory, also called the theory of communication, is a branch of PROBABILITY theory that has been developed to provide a measure of the flow of information from an information source to a destination. It also supplies a measure of the channel capacity of a communications medium such as a telephone wire and indicates optimal coding procedures for communication. Although originally concerned with telephone networks, the theory has a wider application to any communication process, even one as simple as one human being talking to another. It may also be viewed as a branch of CYBERNETICS, the science of control and communication, and it has strong associations with control engineering, theories of learning, and the physiology of the nervous system.

Information theory was developed to a great extent at Bell Telephone Laboratories in New Jersey, largely through the work of Claude SHANNON, in the 1940s and '50s. Many other versions of the theory have been suggested, notably by D. M. MacKay and Dennis GABOR.

Principles

The principal features involved in information theory are a source of information whose messages are encoded, transmitted over a channel to a receiver, and there decoded.

There are two versions of information theory, one for continuous and the other for discrete information systems. The first is concerned with the wavelength, amplitude, and frequency of communications signals, and the second with the stochastic (random) processes associated with the theory of AUTOMATA. The discrete theory applies to a wider range of applications and was developed for both noiseless and noisy channels. A noisy channel contains unwanted signals, and reliable communication over it requires redundancy in the transmitted message so that the received version can be checked and errors detected.
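
One elementary form of such checking, offered here only as an illustrative sketch (the article does not describe a specific scheme), is to append a parity bit so that the receiver can detect any single flipped bit:

    def add_parity(bits):
        """Append one bit so that the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def check_parity(received):
        """Return True if the received word still has an even number of 1s."""
        return sum(received) % 2 == 0

    word = add_parity([1, 0, 1, 1])    # -> [1, 0, 1, 1, 1]
    corrupted = word[:]
    corrupted[2] ^= 1                  # noise flips one bit in transit
    print(check_parity(word), check_parity(corrupted))   # -> True False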

Entropy--the Measure of Information

Shannon's measure assigns to an event of probability p an information content of -log2 p bits. For a source that emits one of N equally likely symbols, each symbol therefore carries log2 N bits; for unequal probabilities, the average information per symbol, H = -(p1 log2 p1 + p2 log2 p2 + . . . + pN log2 pN), is called the entropy of the source. The entropy is greatest when all symbols are equally likely and falls as the source becomes more predictable, which is why it serves as a measure of uncertainty analogous to thermodynamic entropy.
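
A small sketch of the entropy calculation for a discrete source; the probability distributions are arbitrary illustrations:

    import math

    def entropy(probs):
        """Average information per symbol, in bits."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.25] * 4))          # 4 equally likely symbols -> 2.0 bits
    print(entropy([0.5, 0.25, 0.25]))   # unequal probabilities -> 1.5 bits
    print(entropy([1.0]))               # a certain outcome carries no information -> 0.0 bits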

Channel Capacity

The measure of the channel capacity of an information system is best illustrated in the case where the probabilities are again all equal. Given a set of 16 symbols (carriers), A, B, . . . , P, each carries 4 bits of information; if the channel can transmit n symbols per second, its capacity is 4n bits per second. The analysis becomes slightly more complicated when the probabilities are not all the same. The encoding of messages also requires a suitable procedure: either the code words must be separated by punctuation, as with the pauses in Morse code, or all the code words must be of fixed length. Furthermore, optimal codes are built on the principle that the most frequently occurring words (or letters) should be assigned the symbols of shortest duration. Thus e (the most frequently occurring letter in English) might be coded simply as 1, whereas a rarely used letter such as x might be coded with a longer string such as 11010.
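
The principle that frequent symbols should receive the shortest code words is exactly what modern variable-length codes implement. The sketch below builds such a code with Huffman's algorithm (a standard technique, though the article does not name it) for a few letters with assumed relative frequencies:

    import heapq

    def huffman(freqs):
        """Return a prefix-free binary code giving shorter words to more frequent symbols."""
        heap = [[weight, [symbol, ""]] for symbol, weight in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)   # two least frequent groups are merged,
            hi = heapq.heappop(heap)   # lengthening their code words by one bit
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heapq.heappop(heap)[1:])

    # Assumed relative frequencies for a few letters; 'e' is by far the most common.
    code = huffman({"e": 12.7, "t": 9.1, "a": 8.2, "x": 0.15, "z": 0.07})
    print(code)   # 'e' receives the shortest code word, 'x' and 'z' the longest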

Applications

More complicated theorems for continuous and discrete systems, with or without noise, make up the mathematical theory of information. The discrete theory can generate letter sequences and word sequences that approximate ordinary English. A Markov net is a stochastic process that deals with conditional probabilities. For example, the probability of q being followed by u in an English word is very nearly 1 (near certainty); one can likewise work out the probabilities for all letters and all words: for instance, the probability of the word "the" being immediately followed by another "the" is very nearly 0. Information theory is thus an important tool in the analysis of language, or of any sequence of events, and of its encoding, transmission, reception, and decoding. Such methods have been used to describe learning from the point of view of the learner, where the source is some pattern of events (in the case of human learning, often nature or life itself).
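
A toy sketch of the kind of conditional-probability (Markov) model described above; the training text and every detail of the code are illustrative assumptions, not part of the theory itself:

    import random

    def build_model(words):
        """Map each word to the list of words observed to follow it."""
        model = {}
        for current, following in zip(words, words[1:]):
            model.setdefault(current, []).append(following)
        return model

    def generate(model, start, length=8):
        """Produce a word sequence by repeatedly sampling an observed successor."""
        out = [start]
        for _ in range(length):
            successors = model.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    text = "the theory of information is the theory of communication".split()
    model = build_model(text)
    print(generate(model, "the"))   # produces short sequences that mimic the training text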

The theory of information has also been used in some models of the brain, in which thoughts and beliefs (some configuration of neurons) are the source; they are encoded in neural language, translated into a natural language such as English, and decoded by the hearer into his or her own thoughts. There is also a semantic theory of information, so far little developed, which deals with the meaning of information as opposed to its uncertainty.

F. H. George

Bibliography: Ash, R. B., Information Theory (1965); Bendat, Julius S., Principles and Applications of Random Noise Theory (1958; repr. 1978); Clark, F., Information Processing (1970); Guiasu, Silviu, Information Theory with New Applications (1977); Haber, Fred, An Introduction to Information and Communication Theory (1974); Kullback, Solomon, Information Theory and Statistics (1974); Littlejohn, Stephen, Theories of Human Communication (1978); MacKay, Donald, Information, Mechanism and Meaning (1970); Meetham, A. R., Encyclopedia of Linguistics, Information and Control (1969); Rosie, A. M., Information and Communication Theory, 2d ed. (1973).