*PRELIMINARY NOTE*

This document is intended to give an overview of the main conclusions reached from recent developments in light-speed research. In order to do this effectively, it has been necessary to include background information which, for a few readers, will already be well known. However, for the sake of the majority who are not conversant with these areas of physics, it was felt important to include this material. While this overview is comprehensive, the actual derivation of many conclusions is beyond its scope. These derivations have, nevertheless, been fully performed in a major scientific paper using standard maths and physics coupled with observational data. Full justification of the conclusions mentioned here can be found in that technical thesis. The paper in which the new model is presented is currently being finalised for peer review and will be made available once that process is complete.

*THE VACUUM*

During the 20th century, our knowledge of space and
the properties of the vacuum took a considerable leap forward.
The vacuum is more unusual than many people realise. It is popularly
considered to be a void, an emptiness, or just 'nothingness.'
This is the definition of a **bare vacuum** [1]. However,
as science has learned more about the properties of space, a new
and contrasting description has arisen, which physicists call
the **physical vacuum**.

To understand the difference between these two definitions,
imagine you have a perfectly sealed container. First remove all
solids and liquids from it, and then pump out all gases so no
atoms or molecules remain. There is now a vacuum in the container.
It was this concept in the 17^{th} century that gave rise
to the definition of a **bare vacuum** as a totally empty
volume of space. It was later discovered that, although this vacuum
would not transmit sound, it would transmit light and all other
wavelengths of the electromagnetic spectrum. Starting from the
high energy side, these wavelengths range from very short wavelength
gamma rays, X-rays, and ultra-violet light, through the rainbow
spectrum of visible light, to low energy longer wavelengths including
infra-red light, microwaves and radio waves.

*THE ENERGY IN THE VACUUM*

Then, late in the 19^{th} century, it was realised
that the vacuum could still contain heat or thermal radiation.
If our container with the vacuum is now perfectly insulated so
no heat can get in or out, and if it is then cooled to absolute
zero, all thermal radiation will have been removed. Does a complete
vacuum now exist within the container? Surprisingly, this is not
the case. Both theory and experiment show that this vacuum still
contains measurable energy. This energy is called the **zero-point
energy** (ZPE) because it exists even at absolute zero.

The ZPE was discovered to be a universal phenomenon, uniform
and all-pervasive on a large scale. Because it is everywhere the same,
it produces no obvious large-scale effects, and its existence was
not suspected until the early 20^{th} century. In 1911,
while working with a series of equations describing the behaviour
of radiant energy from a hot body, Max Planck found that the observations
required a term in his equations that did not depend on temperature.
Other physicists, including Einstein, found similar terms appearing
in their own equations. The implication was that, even at absolute
zero, each body would have some residual energy. Experimental
evidence soon built up hinting at the existence of the ZPE, although
its fluctuations do not become significant enough to be observed
until the atomic level is attained. For example [2], the ZPE can
explain why cooling alone will never freeze liquid helium. Unless
pressure is applied, these ZPE fluctuations prevent helium's atoms
from getting close enough to permit solidification. In electronic
circuits another problem surfaces because ZPE fluctuations cause
a random "noise" that places limits on the level to
which signals can be amplified.

The magnitude of the ZPE is truly large. It is usually quoted
in terms of energy per unit of volume, which is referred to as
**energy density**. The well-known physicist Richard Feynman
and others [3] have pointed out that the amount of ZPE in one
cubic centimetre of the vacuum is vastly greater than the energy
associated with the matter occupying that same volume.

Estimates of the energy density of the ZPE therefore range
from at least 10^{44} ergs per cubic centimetre up to
infinity. For example, Jon Noring made the statement that *"Quantum
Mechanics predicts the energy density [of the ZPE] is on the order
of an incomprehensible 10^{98} ergs per cubic centimetre."*
Prigogine and Stengers also analysed the situation and provided
estimates of the size of the ZPE ranging from 10^{100}
ergs per cubic centimetre up to infinity. In case this is dismissed
as fanciful, it should be noted that Stephen M. Barnett of the
University of Oxford, writing in **Nature** (March 22, 1990, p. 289),
also treated this vacuum energy as physically real.

In order to appreciate the magnitude of the ZPE in each cubic centimetre of space, consider the conservative estimate of 10^{44} ergs per cubic centimetre mentioned above.

*THE "GRANULAR STRUCTURE" OF SPACE*

In addition to the ZPE, there is another aspect of the physical
vacuum that needs to be presented. When dealing with the vacuum,
size considerations are all-important. On a large scale the physical
vacuum has properties that are uniform throughout the cosmos,
and seemingly smooth and featureless. However, on an atomic scale,
the vacuum has been described as a *"seething sea of activity"
*[2], or *"the seething vacuum"* [5]. It is
in this realm of the very small that our understanding of the
vacuum has increased. The size of an atom is about 10^{-8}
centimetres. The size of an atomic particle, such as an electron,
is about 10^{-13} centimetres. As the scale becomes smaller still,
there is a major change at the **Planck length** (1.616
x 10^{-33} centimetres), usually denoted L*, below which the
familiar smooth picture of space breaks down.

This *"granular structure"* of space, to use
Pipkin and Ritter's phrase, is considered to be made up of Planck
particles whose diameter is equal to L*, and whose mass is equal
to a fundamental unit called the **Planck mass**, M*
(2.177 x 10^{-5} grams).

The physical vacuum of space therefore appears to be made up
of an all-pervasive sea of Planck particles whose density is an
unbelievable 3.6 x 10^{93} grams per cubic centimetre.
It might be wondered how anything can move through such a medium.
It is because de Broglie wavelengths of elementary particles are
so long compared with the Planck length, L*, that the vacuum is
'transparent' to these elementary particles. It is for the same
reason that long wavelength infrared light can travel through
a dense cloud in space and reveal what is within instead of being
absorbed, and why light can pass through dense glass. Therefore,
motion of elementary particles through the vacuum will be effortless,
as long as these particles do not have energies of the magnitude
of what is referred to as **Planck energy**, or M* c^{2}
('c' is the velocity of light). Atomic particles of that energy
would simply be absorbed by the structure of the vacuum. From
the figures for the density given above, the energy associated
with this Planck particle sea making up the physical vacuum can
be calculated to be of the order of 10^{114} ergs per
cubic centimetre, the same as the maximum value for the ZPE.
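These figures can be checked with a rough order-of-magnitude calculation. The short Python sketch below assumes one Planck mass per cube one Planck length on a side (the text's 3.6 x 10^{93} figure evidently assumes a slightly different packing), and the constants are standard values rather than anything taken from the original derivation:

```python
# Order-of-magnitude check on the density and energy density of a
# "Planck particle sea": one Planck mass per Planck-length cube (assumed packing).
planck_length = 1.616e-33   # cm
planck_mass = 2.177e-5      # g
c = 2.998e10                # speed of light, cm/s

density = planck_mass / planck_length**3   # grams per cubic centimetre
energy_density = density * c**2            # ergs per cubic centimetre

print(f"density        ~ {density:.1e} g/cm^3")           # of order 10^93
print(f"energy density ~ {energy_density:.1e} erg/cm^3")  # of order 10^114
```

With this packing the density comes out near 5 x 10^{93} g/cm³; the exact prefactor depends on the assumed particle shape and spacing, but the 10^{114} ergs per cubic centimetre energy density quoted above follows either way.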

*TWO THEORIES DESCRIBING THE VACUUM*

Currently, there are two theories that describe the behaviour
and characteristics of the physical vacuum and the ZPE at the
atomic or sub-atomic level: the **quantum electro-dynamic**
(QED) model [8], and the somewhat more recent **stochastic
electro-dynamic** (SED) model.

*THE QED MODEL OF THE VACUUM*

At the atomic level, the QED model proposes that, because of
the high inherent energy density within the vacuum, some of this
energy can be temporarily converted to mass. This is possible
since energy and mass can be converted from one to the other according
to Einstein's famous equation [E = m c^{2}], where 'E'
is energy, 'm' is mass, and 'c' is the speed of light. On this
basis, the QED model proposes that the ZPE permits short-lived
particle/antiparticle pairs (such as a positive and negative pion,
or perhaps an electron and positron) to form and almost immediately
annihilate each other [2,11]. These particle/antiparticle pairs
are called **virtual particles**. Virtual particles
are distinct from the Planck particles that make up the structure
of the vacuum. While virtual particles are, perhaps, about 10^{20}
times larger than Planck particles, they are also transient, flashing
into and out of existence, whereas the Planck particles form the
enduring structure of the vacuum itself.

The Heisenberg uncertainty principle states that the uncertainty
in time multiplied by the uncertainty in energy is closely
approximated by **Planck's constant** 'h' divided by
2π. This quantum uncertainty, or indeterminacy,
governed by the value of 'h', imposes fundamental limitations
on the precision with which a number of physical quantities associated
with atomic processes can be measured. In the case under consideration
here, the uncertainty principle permits these virtual particle
events to occur as long as they are completed within an extraordinarily
brief period of time, of the order of 10^{-21} seconds or less.
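As an illustration of the time-scale involved, the uncertainty relation can be applied to an electron-positron pair. The sketch below uses standard SI constants (assumed here, not taken from the text) and estimates the allowed lifetime as h/(2π) divided by the borrowed energy 2mc²:

```python
import math

h = 6.62607e-34        # Planck's constant, J s
hbar = h / (2 * math.pi)
m_e = 9.10938e-31      # electron mass, kg
c = 2.99792458e8       # speed of light, m/s

delta_E = 2 * m_e * c**2   # energy borrowed to create an electron-positron pair
delta_t = hbar / delta_E   # lifetime allowed by the uncertainty relation

print(f"pair lifetime ~ {delta_t:.1e} s")  # of order 10^-22 s
```

This gives a lifetime of roughly 6 x 10^{-22} seconds, comfortably within the "extraordinarily brief" window described above.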

Consequently, a proton or electron is considered to be the
centre of constant activity; it is surrounded by a cloud of virtual
particles with which it is interacting [12]. In the case of the
electron, physicists have been able to penetrate a considerable
way into this virtual particle cloud. They have found that the
further into the cloud they go, the smaller, more compact and
point-like the electron becomes. At the same time they have discovered
there is a more pronounced negative charge associated with the
electron the further they penetrate into this cloud [13]. These
virtual particles act in such a way as to screen the full electronic
charge. There is a further important effect verified by observation
and experiment: the absorption and emission of these virtual particles
also causes the electron's "jitter motion" in a vacuum
at absolute zero. As such, this jittering, or **Zitterbewegung**,
as it is officially called [14], constitutes evidence for the
existence of virtual particles and the ZPE of the vacuum.

*THE SED MODEL OF THE VACUUM*

In the SED approach, the vacuum at the atomic or sub-atomic
level may be considered to be inherently comprised of a turbulent
sea of randomly fluctuating electro-magnetic fields or waves.
These waves exist at all wavelengths longer than the Planck length
L*. At the macroscopic level, these all-pervasive **zero-point
fields** (ZPF) are homogeneous and isotropic, which means
they have the same properties uniformly in every direction throughout
the whole cosmos. Furthermore, this **zero-point radiation**
(ZPR) proves to be Lorentz invariant: it looks the same to all
observers in uniform motion, so such motion through it cannot
be detected.
Importantly, with the SED approach, **Planck's quantum
constant**, 'h', becomes a measure of the strength of the
ZPF. This situation arises because the fluctuations of the ZPF
provide an irreducible random noise at the atomic level that is
interpreted as the innate uncertainty described by Heisenberg's
uncertainty principle [4,16]. Therefore, the zero-point fields
are the ultimate source of this fundamental limitation with which
we can measure some atomic phenomena and, as such, give rise to
the indeterminacy or uncertainty of quantum theory mentioned above.
In fact, Nelson pointed out in 1966 that if the ZPR had been discovered
at the beginning of the 20^{th} century, classical physics together
with these random zero-point fields might well have accounted for
many of the results now attributed to quantum theory.

In the SED explanation, the **Zitterbewegung** is
accounted for by the random fluctuations of the ZPF, or waves,
as they impact upon the electron and jiggle it around. There is
also evidence for the existence of the zero-point energy in this
model from something called the surface **Casimir effect**.
In 1948, Hendrik Casimir predicted that two flat, parallel metal
plates placed very close together in a vacuum would be pushed
towards each other: zero-point waves with wavelengths longer than
twice the gap are excluded from the space between the plates, so
the pressure of the waves outside forces the plates together.

The Casimir effect is directly proportional to the area of
the plates. However, unlike other possible forces with which it
may be confused, the Casimir force is inversely proportional to
the fourth power of the plates' distance apart [18]. For plates
with an area of one square centimetre separated by 0.5 thousandths
of a millimetre, this force is equivalent to a weight of 0.2 milligrams.
In January 1997, Steven Lamoreaux reported experimental verification
of these details in **Physical Review Letters** (vol. 78, p. 5).
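These numbers can be checked against the standard Casimir formula, in which the attractive pressure between perfectly conducting plates is π²ħc/(240 d⁴). The sketch below uses standard SI constants; only the plate area and separation come from the text:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.9979e8        # speed of light, m/s
d = 0.5e-6          # plate separation: 0.5 thousandths of a millimetre, in metres
area = 1.0e-4       # one square centimetre, in square metres
g = 9.81            # gravitational acceleration, m/s^2

# Casimir pressure between perfectly conducting parallel plates.
pressure = math.pi**2 * hbar * c / (240 * d**4)   # N/m^2
force = pressure * area                           # newtons
equivalent_mass_mg = force / g * 1e6              # equivalent weight, milligrams

print(f"force ~ {force:.2e} N, equivalent to about {equivalent_mass_mg:.2f} mg")
```

The result, about 0.2 milligrams of equivalent weight, matches the figure quoted above, and because of the d⁴ in the denominator, doubling the separation reduces the force sixteen-fold, which is the inverse fourth-power behaviour described in the text.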

The surface Casimir effect therefore demonstrates the existence
of the ZPE in the form of electromagnetic waves. Interestingly,
Haisch, Rueda, Puthoff and others point out that there is a microscopic
version of the same phenomenon. In the case of closely spaced
atoms or molecules the all-pervasive ZPF result in short-range
attractive forces that are known as **van der Waals forces**
[4,16]. It is these attractive forces that permit, for example,
real gases to be liquefied and neutral molecules to cohere.
The common objections to the actual existence of the zero-point energy centre on the idea that it is simply a theoretical construct. However, the presence of both the Casimir effect and the Zitterbewegung, among other observational evidence, demonstrates that the ZPE is physically real.

*LIGHT AND THE PROPERTIES OF SPACE*

This intrinsic energy, the ZPE, which is inherent in the vacuum,
gives free space its various properties. For example, the magnetic
property of free space is called the **permeability**,
while the corresponding electric property is called the
**permittivity**. Both of these properties are directly
proportional to the energy density of the ZPE.

Because light waves are an electro-magnetic phenomenon, their
motion through space is affected by the electric and magnetic
properties of the vacuum, namely the permittivity and permeability.
To examine this in more detail we closely follow a statement by
Lehrman and Swartz [22]. They pointed out that light waves consist
of changing electric fields and magnetic fields. Generally, any
magnetic field resulting from a change in an electric field must
be such as to oppose the change in the electric field, according
to **Lenz's Law**. This means that the magnetic property
of space acts as a kind of inertia inhibiting the rapid
change of the fields. The magnitude of this property is the
**magnetic permeability** of free space, denoted here by 'U'.

The electric constant, or permittivity, of free space is also
important, and is related to electric charges. A charge represents
a kind of electrical distortion of space, which produces a force
on neighbouring charges. The constant of proportionality between
the interacting charges is 1/Q, which describes a kind of electric
elastic property of space. The quantity Q is usually called the
** electric permittivity** of the vacuum. It is established
physics that the velocity of a wave motion squared is proportional
to the ratio of the elasticity over the inertia of the medium
in which it is travelling. In the case of the vacuum and the speed
of light, c, this standard equation becomes [c^{2} = (1/Q) / U], which is the same as [c = 1 / √(U Q)].

As noted above, both U and Q are directly proportional to the energy density of the ZPE. It therefore follows that any increase in the energy density of the ZPF will not only result in a proportional increase in U and Q, but will also cause a decrease in the speed of light, c.
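The relationship can be illustrated numerically with the present-day SI values of the two constants (written μ₀ and ε₀ in modern notation; 'U' and 'Q' here):

```python
import math

U = 4e-7 * math.pi     # magnetic permeability of free space, SI units
Q = 8.8541878e-12      # electric permittivity of free space, SI units

c = 1 / math.sqrt(U * Q)
print(f"c = {c:.6e} m/s")   # ~2.99792e8 m/s

# If both U and Q scale up with the vacuum energy density by a factor k,
# the speed of light falls by exactly that same factor k.
k = 2.0
c_scaled = 1 / math.sqrt((k * U) * (k * Q))
print(f"with U and Q doubled, c = {c_scaled:.3e} m/s")
```

The second calculation is the point being made in the text: a proportional increase in both vacuum constants produces a matching proportional decrease in light-speed.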

*WHY ATOMS DON'T SELF-DESTRUCT*

But it is not only light that is affected by these properties
of the vacuum. It has also been shown that the atomic building
blocks of matter are dependent upon the ZPE for their very existence.
This was clearly demonstrated by Dr. Hal Puthoff of the Institute
for Advanced Studies in Austin, Texas. In **Physical Review D**,
vol. 35:10, and in later papers, Puthoff examined a long-standing
problem: according to classical electromagnetics, an orbiting electron
is an accelerating charge, and an accelerating charge must radiate
energy. On that basis the electron should spiral into the nucleus
in a fraction of a second, and atoms should not persist at all.

Instead of ignoring the known laws of physics, Puthoff approached this problem with the assumption that the classical laws of electro-magnetics were valid, and that the electron is therefore losing energy as it speeds in its orbit around the nucleus. He also accepted the experimental evidence for the existence of the ZPE in the form of randomly fluctuating electro-magnetic fields or waves. He calculated the power the electron lost as it moved in its orbit, and then calculated the power that the electron gained from the ZPF. The two turned out to be identical; the loss was exactly made up for by the gain. It was like a child on a swing: just as the swing started to slow, it was given another push to keep it going. Puthoff then concluded that without the ZPF inherent within the vacuum, every atom in the universe would undergo instantaneous collapse [4, 23]. In other words, the ZPE is maintaining all atomic structures throughout the entire cosmos.

*THE RAINBOW SPECTRUM*

Knowing that light itself is affected by the zero-point energy, phenomena associated with light need to be examined. When light from the sun is passed through a prism, it is split up into a spectrum of seven colours. Falling raindrops act in the same way, and the resulting spectrum is called a rainbow. Just like the sun and the other stars making up our own galaxy, distant galaxies each have a rainbow spectrum. From 1912 to 1922, Vesto Slipher at the Lowell Observatory in Arizona recorded accurate spectrographic measurements of light from 42 galaxies [24, 25]. When an electron drops from an outer atomic orbit to an inner orbit, it gives up its excess energy as a flash of light of a very specific wavelength. This causes a bright emission line in the colour spectrum. However, when an electron jumps to a higher orbit, energy is absorbed and, instead of a bright emission line, the reverse happens: a dark absorption line appears in the spectrum. Each element has a very specific set of spectral lines associated with it. Within the spectra of the sun, stars or distant galaxies these same spectral lines appear.

*THE REDSHIFT OF LIGHT FROM GALAXIES*

Slipher noted that in distant galaxies this familiar pattern
of lines was shifted systematically towards the red end of the
spectrum. He concluded that this redshift of light from these
galaxies was a ** Doppler effect** caused by these galaxies
moving away from us. The Doppler effect can be explained by what
happens to the pitch of a siren on a police car as it moves away
from you. The tone drops. Slipher concluded that the redshift
of the spectral lines to longer wavelengths was similarly due
to the galaxies receding from us. For that reason, this redshift
is usually expressed as a velocity, even though as late as 1960
some astronomers were seeking other explanations [25]. In 1929,
Edwin Hubble plotted the most recent distance measurements of
these galaxies on one axis, with their redshift recession velocity
on the other. He noted that the further away the galaxies were,
the higher were their redshifts [24].

It was concluded that if the redshift represented receding galaxies, and the redshift increased in direct proportion to the galaxies' distances from us, then the entire universe must be expanding [24]. The situation is likened to dots on the surface of a balloon being inflated. As the balloon expands, each dot appears to recede from every other dot. A slightly more complete picture was given by relativity theory. Here space itself is considered to be expanding, carrying the galaxies with it. According to this interpretation, light from distant objects has its wavelength stretched, or reddened, in transit because the space in which it is travelling is expanding.

*THE REDSHIFT GOES IN JUMPS*

This interpretation of the redshift is held by a majority of
astronomers. However, in 1976, William Tifft of the Steward Observatory
in Tucson, Arizona, published the first of a number of papers
analysing redshift measurements. He observed that the redshift
measurements did not change smoothly as distance increased, but
went in jumps: in other words they were ** quantised**
[26]. Between successive jumps, the redshift remained fixed at
the value it attained at the last jump. This first study was by
no means exhaustive, so Tifft investigated further. As he did
so, he discovered that the original observations that suggested
a quantised redshift were strongly supported wherever he looked
[27 - 34]. In 1981 the extensive Fisher-Tully redshift survey
was completed. Because redshift values in this survey were not
clustered in the way Tifft had noted earlier, it looked as if
redshift quantisation could be ruled out. However, in 1984 Tifft
and Cocke pointed out that the motion of the sun and its solar
system through space produces a genuine Doppler effect of its
own, which adds or subtracts a little to every redshift measurement.
When this true Doppler effect was subtracted from all the observed
redshifts, it produced strong evidence for the quantisation of
redshifts across the entire sky [35, 36].

The initial quantisation value that Tifft discovered was a redshift of 72 kilometres per second in the Coma cluster of galaxies. Subsequently it was discovered that quantisation figures of up to 13 multiples of 72 km/s existed. Later work established a smaller quantisation figure just half of this, namely 36 km/s. This was subsequently supported by Guthrie and Napier who concluded that 37.6 km/s was a more basic figure, with an error of 2 km/s [37-39]. After further observations, Tifft announced in 1991 that these and other redshift quantisations recorded earlier were simply higher multiples of a basic quantisation figure [40]. After statistical treatment, that figure turned out to be 7.997 km/s. However, Tifft noted that this 7.997 km/s was not in itself the most basic result as observations revealed a 7.997/3 km/s, or 2.67 km/s, quantisation, which was even more fundamental [40]. When multiplied by 14, this fundamental value gave a predicted redshift of 37.38 km/s in line with Guthrie and Napier's value. Furthermore, when the basic 2.67 km/s is multiplied by 27, it gives the 72.12 km/s initially picked up in the Coma cluster of galaxies. Accepting this result at face value suggests that the redshift is quantised in fundamental steps of 2.67 km/s across the cosmos.
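The arithmetic of these multiples is easy to verify. The small sketch below takes Tifft's 7.997/3 km/s as the fundamental step; the slight differences from the 37.38 and 72.12 km/s quoted above arise from rounding the base to 2.67 km/s:

```python
base = 7.997 / 3   # Tifft's proposed fundamental quantisation, km/s

print(f"basic step : {base:.3f} km/s")
print(f"14 x basic : {14 * base:.2f} km/s  (Guthrie and Napier ~37.4-37.6)")
print(f"27 x basic : {27 * base:.2f} km/s  (Coma cluster ~72)")
```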

*RE-EXAMINING THE REDSHIFT*

If redshifts were truly a result of an expanding universe, the measurements should be smoothly distributed, showing all values within the range measured. This is the sort of thing we see on a highway, where cars travel at many different speeds within the normal range. However, the redshift, being quantised, is more like those cars each travelling at some multiple of, say, 5 kilometres per hour. Cars do not do that, but the redshift does. This indicates that something other than the expansion of the universe may be responsible for these results.

We need to undertake a re-examination of what is actually being observed in order to find a solution to the problem. It is this solution to the redshift problem that introduces a new cosmological model. In this model, atomic behaviour and light-speed throughout the cosmos are linked with the ZPE and properties of the vacuum.

The prime definition of the redshift, 'z', involves two measured quantities. They comprise the observed change in wavelength 'D' of a given spectral line when compared with the laboratory standard wavelength 'W'. The ratio of these quantities [D / W = z] is a dimensionless number that measures the redshift [41]. However, it is customarily converted to a velocity by multiplying it by the current speed of light, 'c' [41]. The redshift so defined is then 'c z', and it is this c z that is changing in steps of 2.67 km/s. Since the laboratory standard wavelength 'W' is unaltered, it then follows that as [z = D/W] is systematically increasing in discrete jumps with distance, then D must be increasing in discrete jumps also. Now D is the difference between the observed wavelength of a given spectral line and the laboratory standard wavelength for that same spectral line [41]. This suggests that emitted wavelengths are becoming longer in quantum jumps with increasing distance (or with look-back time). During the time between jumps, the emitted wavelengths remain unchanged from the value attained at the last jump.

The basic observations therefore indicate that the wavelengths of all atomic spectral lines have changed in discrete jumps throughout the cosmos with time. This could imply that all atomic emitters within each galaxy may be responsible for the quantised redshift, rather than the recession of those galaxies or universal expansion. Importantly, the wavelengths of light emitted from atoms are entirely dependent upon the energy of each atomic orbit. According to this new way of interpreting the data, the redshift observations might indicate that the energy of every atomic orbit in the cosmos simultaneously undergoes a series of discrete jumps with time. How could this be possible?

*ATOMIC ORBITS AND THE REDSHIFT*

The explanation may well be found in the work of Hal Puthoff. Since the ZPE is sustaining every atom and maintaining the electrons in their orbits, it would then also be directly responsible for the energy of each atomic orbit. In view of this, it can be postulated that if the ZPE were lower in the past, then these orbital energies would probably be less as well. Therefore emitted wavelengths would be longer, and hence redder. Because the energy of atomic orbits is quantised or goes in steps [42], it may well be that any increase in atomic orbital energy can similarly only go in discrete steps. Between these steps atomic orbit energies would remain fixed at the value attained at the last step. In fact, this is the precise effect that Tifft's redshift data reveals.

The outcome of this is that atomic orbits would be unable to access energy from the smoothly increasing ZPF until a complete unit of additional energy became available. Between quantum jumps, therefore, all atomic processes proceed on the basis of energy conservation, operating within the framework of energy provided at the last jump. Any increase in energy from the ZPE will not affect the atom until a particular threshold is reached, at which time all the atoms in the universe react simultaneously.

*THE SIZE OF THE ELECTRON*

This new approach can be analysed further. Mathematically it is known that the strength of the electronic charge is one of several factors governing the orbital energies within the atom [42]. Therefore, for the orbital energy to change, a simultaneous change in the value of the charge of both the electron and the proton would be expected. Although we will only consider the electron here, the same argument holds for the proton as well.

Theoretically, the size of the spherical electron, and hence
its area, should appear to increase at each quantum jump, becoming
"larger" with time. The so-called **Compton radius**
of the electron is 3.86151 x 10^{-11} centimetres.

*THE ELECTRONIC CHARGE*

With this in mind, it might be anticipated, on the SED approach,
that if the energy density of the ZPF increased, the *"point-like
entity"* of the electron would be *"smeared out"*
even more, thus appearing larger. This would follow since the
** Zitterbewegung** would be more energetic, and vacuum
polarization around charges would be more extensive. In other
words, the spherical electron's apparent radius and hence its
area would increase at the quantum jump. Also important here is
the **fine structure constant**, 'a', given by the formula
[a = e^{2} / (2 Q h c)], a quantity which measurement shows to be
unchanging.

The QED model can explain this formula another way. There is
a cloud of virtual particles around the "bare" electron
interacting with it. When a full quantum increase in the vacuum
energy density occurs, the strength of the charge increases. With
a higher charge for the *"point-like entity"* of
the electron, it would be expected that the size of the particle
cloud would increase because of stronger vacuum polarisation and
a more energetic **Zitterbewegung**.
*THE BOHR ATOM*

Let us now be more specific about this new approach to orbit
energies and their association with the redshift. The** Bohr
model** of the atom has electrons going around the atomic
nucleus in miniature orbits, like planets around the sun. Although
more sophisticated models of the atom now exist, it has been acknowledged
in the past that the Bohr theory still gives substantially correct
results for hydrogen-like atoms, and its simplicity makes it well
suited to illustrating the present argument.

In the Bohr model of the atom, two equations describe orbital
energy [42]. In 1913, Niels Bohr quantised the first of these,
the angular momentum equation. The ** angular momentum**
of an orbit is described mathematically by 'mvr', where 'm' is
the mass of the electron, 'v' is its velocity in an orbit whose
radius is 'r'. Bohr pointed out that a close approximation to
the observed atomic behaviour is obtained if electrons are theoretically
restricted to those orbits whose angular momentum is an integral
multiple of h / (2π). Mathematically,
that is written as [m v r = n h / (2 π)]

where 'n' is a whole number such as 1, 2, 3, etc., and is called
the **quantum number**. As mentioned above, 'h' is
**Planck's constant**.

*BOHR'S SECOND EQUATION*

Bohr's second equation describes the kinetic energy of the
electron in an orbit of radius 'r'. **Kinetic energy**
is defined as [m v^{2} / 2]. Balancing the electron's motion against
the electrostatic pull of the nucleus, this second equation reads
[m v^{2} / 2 = e^{2} / (8 π Q r)] where 'e' is the charge on the electron, and 'Q' is the permittivity of the vacuum. This kinetic energy is equal in magnitude to the total energy of that closest orbit. When an electron falls from immediately outside the atom into that orbit, this energy is released as a photon of light. The energy 'E' of this photon has a wavelength 'W', and both the energy and the wavelength are linked by the standard equation [E = h c / W]

As shown later, observational evidence reveals the 'hc' component in this equation is an absolute constant at all times. The kinetic energy and the photon energy are thus equal. This much is standard physics [42]. Accordingly, we can write the following equality for the ground state orbit from Bohr's second equation: [e^{2} / (8 π Q r) = h c / W]

However, as A. P. French points out in his derivation of the relevant equations [42], the energy 'E' of the ground state orbit can also be written as [E = R h c]

where 'R' is the **Rydberg constant** and is equal
to 109737.3 cm^{-1}. Equivalently, this energy may be written
as [E = h c / K]

where 'K' is the **Rydberg wavelength** such that [K = 1 / R].
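As a numerical check on these relations, the sketch below computes 'K' from the Rydberg constant quoted above, and the ground state energy E = h c / K, using standard CGS constants (the constants themselves are assumed, not taken from the text):

```python
R = 109737.3            # Rydberg constant, cm^-1 (as quoted in the text)
h = 6.62607e-27         # Planck's constant, erg s
c = 2.99792458e10       # speed of light, cm/s
erg_per_eV = 1.60218e-12

K = 1 / R               # Rydberg wavelength, cm
E = h * c / K           # ground state photon energy, ergs

print(f"K = {K:.5e} cm")               # ~9.1127e-6 cm
print(f"E = {E / erg_per_eV:.2f} eV")  # ~13.6 eV, the ionisation energy of hydrogen
```

The wavelength K agrees with the 9.1127 x 10^{-6} centimetres used for hydrogen later in the discussion, and the energy is the familiar 13.6 electron-volts.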

*A NEW QUANTUM CONDITION*

If we now follow the lead of Bohr, and quantise his second equation, a solution to several difficulties is found. Observationally, the incremental increase of redshift with distance indicates that the wavelengths of light emitted from galaxies undergo a fractional increase. Therefore, for the ground state orbit of the Bohr atom, the wavelength 'K' must increment in steps of some set fraction of 'K', say K / z = R*. This means that K = z R*. Furthermore, the wavelength increment D can be defined as [D = __n__ R*]

Here, the term '__n__' is the **new quantum integer**
that fulfils the same function as Bohr's quantum number 'n'. Furthermore,
Planck's quantum constant 'h' finds its parallel in 'R*'. As a
consequence, 'R*' could be called the **Rydberg quantum wavelength**.

Under these circumstances, the Rydberg quantum wavelength 'R*' is defined as [R* = K / z = 8.12072 x 10^{-11} centimetres]

It therefore follows that wavelengths increment in steps that are integral multiples of the basic quantum 'R*'.

This new quantisation procedure means that the energy E of the first Bohr orbit will increment in steps of D E such that [D E = h c D / K^{2}]

This holds because of two factors. First, if '__n__' decreases
with time, it will mimic the behaviour of the redshift, which
also decreases with time. High redshift values from distant objects
necessarily mean high values for '__n__' as well. Second, all
atomic orbit radii 'r' can be shown to remain unchanged throughout
any quantum changes. If they were not, the abrupt change of size
of every atom at the quantum jump would cause obvious flaws in
crystals, which would be especially noticeable in ancient rocks.
This new quantisation procedure effectively allows every atom
in the cosmos to simultaneously acquire a new higher energy state
for each of its orbits in proportion as the ZPE increases with
time. In so doing, it opens the way for a solution to the redshift
problem.

*A QUANTUM REDSHIFT*

In the Bohr atom, all orbit energies are scaled according to
the energy of the orbit closest to the nucleus, the ground state
orbit. Therefore, if the ground state orbit has an energy change,
all other orbits will scale their energy proportionally. This
also means that wavelengths of emitted light will be scaled in
proportion to the energy of the ground state orbit of the atom.
Accordingly, if W_{0} is any arbitrary emitted wavelength
and W_{1} is the wavelength of the ground state orbit,
then the wavelength change at the quantum jump is given by [D = __n__ R* W_{0} / W_{1}]

Now the redshift is defined as the change in wavelength, given
by 'D', divided by the reference wavelength 'W'. For the purposes
of illustration, let us take the reference wavelength to be equal
to that emitted when an electron falls into the ground state orbit
for hydrogen. This wavelength is close to 9.1127 x 10^{-6 }centimetres.
For this orbit, the value of 'D' from the above equation is given
by 8.12072 x 10^{-11} centimetres since (__n__ = 1)
in this case and (W_{0} = W_{1}). Therefore, the redshift

z = D / W = (8.12072 x 10^{-11}) / (9.1127 x 10^{-6}) = 8.911 x 10^{-6}

and so the velocity change

cz = (8.911 x 10^{-6}) x (299,792 km/s) = 2.671 km/s
This compares favourably with Tifft's basic value of 2.67 km/sec
for the quantum jumps in the redshift velocity. Furthermore, when
the new quantum number takes the value (__n__ = 27), the redshift
velocity becomes cz = 72 km/s compared with the 72 km/s that
Tifft originally noticed. It may also be significant that for
(__n__ = 14), the redshift velocity is 37.39 km/s compared
with Tifft's 36.2 km/s and the 37.5 km/s subsequently established
by Guthrie and Napier.
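
This arithmetic can be checked directly. Below is a minimal sketch in plain Python using only the two wavelength values quoted in the text; the speed of light in km/s is supplied for the conversion from redshift to velocity:

```python
# Quantised-redshift arithmetic from the figures quoted in the text.
# W: reference wavelength for the hydrogen ground-state transition (cm).
# D: wavelength change at a single quantum jump for n = 1 (cm).
W = 9.1127e-6
D = 8.12072e-11
C = 299_792.458  # speed of light, km/s

z1 = D / W   # redshift for one quantum step
dv = z1 * C  # corresponding velocity change, km/s

print(f"z per step  = {z1:.4e}")       # about 8.91e-06
print(f"cz per step = {dv:.2f} km/s")  # about 2.67 km/s, Tifft's basic value

# Higher quantum numbers give integer multiples of the basic step:
for n in (14, 27):
    print(f"n = {n}: cz = {n * dv:.2f} km/s")
```

Running this reproduces the values in the text: the basic step of about 2.67 km/s, roughly 37.4 km/s for n = 14, and roughly 72.1 km/s for n = 27.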

Imposing a quantum condition on the second Bohr equation for the atom therefore produces quantum changes in orbit energies and emitted wavelengths that accord with the observational evidence. This result also implies the quantised redshift may not be an indicator of universal expansion. Rather, this new model suggests it may be evidence that the ZPE has increased with time allowing atomic orbits to take up successively higher energy states.

*RECONSIDERING LIGHT-SPEED*

It is at this point in the discussion that a consideration
of light-speed becomes important. It has already been mentioned
that an increase in vacuum energy density will result in an increase
in the electrical permittivity and the magnetic permeability of
space, since they are energy related. Since light-speed is inversely
linked to both these properties, if the energy density of the
vacuum increases, light-speed will decrease uniformly throughout
the cosmos. Indeed, in 1990 Scharnhorst [48] and Barton [20] demonstrated
that a lessening of the energy density of a vacuum would produce
a higher velocity for light. This is explicable in terms of the
QED approach. The virtual particles that make up the *"seething
vacuum"* can absorb a photon of light and then re-emit
it when they annihilate. This process, while fast, takes a finite
time. The lower the energy density of the vacuum, the fewer virtual
particles will be in the path of light photons in transit. As
a consequence, the fewer absorptions and re-emissions which take
place over a given distance, the faster light travels over that
distance [49, 50].

However, the converse is also true. The higher the energy density
of the vacuum, the more virtual particles will interact with the
light photons in a given distance, and so the slower light will
travel. Similarly, when light enters a transparent medium such
as glass, similar absorptions and re-emissions occur, but this
time it is the atoms in the glass that absorb and re-emit the
light photons. This is why light slows as it travels through a
denser medium. Indeed, the more closely packed the atoms, the
slower light will travel as a greater number of interactions occur
in a given distance. In a recent illustration of this, light-speed
was reduced to 17 metres/second as it passed through extremely
closely packed sodium atoms near absolute zero [51]. All this
is now known from experimental physics. This agrees with Barnett's
comments in **Nature** [11] that *"The vacuum is certainly
a most mysterious and elusive object...The suggestion that the
value of the speed of light is determined by its structure is
worthy of serious investigation by theoretical physicists."*

*THE BEHAVIOUR OF REDSHIFT AND LIGHT-SPEED*

One of the main points established in the major technical thesis currently undergoing review has been that redshift 'z' is proportional to light-speed 'c' [52]. This can be written as

z = k c

where 'k' is the constant of proportionality. This constant allows values of 'z' to be converted to values of 'c' and vice versa. This is an important key to the behaviour of 'c', because there exists a well-accepted graph of redshift 'z' of distant astronomical objects on the vertical axis, against distance 'd' on the horizontal axis. This graph describes the general behaviour of redshift with distance in a way that has been verified by recent Hubble Space Telescope observations.

A second clue to the behaviour of 'c' is obtained when it is
realized that by looking out into progressively greater astronomical
distances 'd', we are systematically looking further back in time
'T'. Thus distance and time are directly related and can be inter-converted.
Consequently, the graph of redshift 'z' against distance 'd' can
be converted to become a graph of light-speed 'c' against time
'T'. Essentially it is the same graph, only it has different scales
on both axes. Thus the behaviour of light-speed over astronomical
time is simply given by the accepted observations of redshift
behaviour with distance [53, 54]. This behaviour consists of a
rapid drop in 'c' initially, which then tapers down to a much
flatter decay rate. For each redshift quantum change, the speed
of light has apparently changed by a significant amount. The precise
quantity is dependent upon the value adopted for the ** Hubble
constant**, which links a galaxy's redshift with its distance.

*AN OBSERVED DECLINE IN LIGHT-SPEED*

The question then arises as to whether or not any other observational
evidence exists that the speed of light has diminished with time.
Surprisingly, some 40 articles about this very matter appeared
in the scientific literature from 1926 to 1944 [55]. Some important
points emerge from this literature. In 1944, despite a strong
preference for the constancy of atomic quantities, N. E. Dorsey
[56] was reluctantly forced to admit: *"As is well known
to those acquainted with the several determinations of the velocity
of light, the definitive values successively reported have, in
general, decreased monotonously from Cornu's 300.4 megametres
per second in 1874 to Anderson's 299.776 in 1940."*
Even Dorsey's own re-working of the data could not avoid that
conclusion.
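
The scale of the decline Dorsey describes can be illustrated with just the two endpoint values he quotes. This is only a sketch of the average rate between those two determinations, not the full analysis of the measurement record:

```python
# Average rate of decline between the two historical values quoted
# by Dorsey (this ignores all the intermediate measurements and the
# reported non-linearity of the trend; it is purely illustrative).
cornu_1874 = 300_400.0     # km/s (Cornu's 300.4 megametres per second)
anderson_1940 = 299_776.0  # km/s (Anderson's value)

drop = cornu_1874 - anderson_1940
years = 1940 - 1874
print(f"Total reported drop: {drop:.0f} km/s over {years} years")
print(f"Average rate: {drop / years:.2f} km/s per year")
```

This gives a reported drop of 624 km/s over 66 years, an average of roughly 9.5 km/s per year between those two determinations.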

However, the decline in the measured value of 'c' was noticed
much earlier. In 1886, Simon Newcomb reluctantly concluded that
the older results obtained around 1740 were in agreement with
each other, but they indicated 'c' was about 1% higher than in
his own time [57], the early 1880's. In 1941 history repeated
itself when Birge made a parallel statement while writing about
the 'c' values obtained by Newcomb, Michelson, and others around
1880. Birge was forced to concede that* " these older results
are entirely consistent among themselves, but their average is
nearly 100 km/s greater than that given by the eight more recent
results"* [58]. Each of these three eminent scientists
held to a belief in the absolute constancy of 'c'. This makes
their careful admissions about the experimentally declining values
of measured light speed more significant.

*EXAMINING THE DATA*

The data obtained over the last 320 years at least imply a decay in 'c' [55]. Over this period, all 163 measurements of light-speed by 16 methods reveal a non-linear decay trend. Evidence for this decay trend exists within each measurement technique as well as overall. Furthermore, an initial analysis of the behaviour of a number of other atomic constants was made in 1981 to see how they related to 'c' decay. On the basis of the measured value of these "constants", it became apparent that energy was being conserved throughout the process of 'c' variation. This conclusion was reached after an exhaustive study was made of all available alternatives. In all, confirmatory trends appear in 475 measurements of 11 other atomic quantities by 25 methods. Analysis of the most accurate atomic data reveals that the trend has a consistent magnitude in all the other atomic quantities that vary synchronously with light-speed [55].

All these measurements have been made during a period when
there have been no quantum increases in the energy of atomic orbits.
These observations reinforce the conclusion that, between any
proposed quantum jumps, energy is conserved in all relevant atomic
processes, as no extra energy is accessible to the atom from the
ZPF. Because energy is conserved, the c-associated atomic constants
vary synchronously with c, and the existing order in the cosmos
is not disrupted or intruded upon. Historically, it was this very
behaviour of the various constants, indicating that energy was
being conserved, which was a key factor in the development of
the 1987 Norman-Setterfield report, ** The Atomic Constants,
Light And Time** [55].

The mass of data supporting these conclusions comprises some 638 values measured by 43 methods. Montgomery and Dolphin did a further extensive statistical analysis on the data in 1993 and concluded that the results supported the 'c' decay proposition if energy was conserved [59]. The analysis was developed further and formally presented in August 1994 by Montgomery [60]. These papers answered questions related to the statistics involved and have not yet been refuted.

*ATOMIC QUANTITIES AND ENERGY CONSERVATION*

Planck's constant and mass are two of the quantities that vary synchronously with 'c'. Over the period when 'c' has been measured as declining, Planck's constant 'h' has been measured as increasing as documented in the 1987 Report. The most stringent data from astronomy reveal 'hc' must be a true constant [61 - 64]. Consequently, 'h' must be proportional to '1/c' exactly. This is explicable in terms of the SED approach since, as mentioned above, 'h' is essentially a measure of the strength of the zero-point fields (ZPF). If the ZPE is increasing, so, in direct proportion, must 'h'. As noted above, an increasing ZPE also means 'c' must drop. In other words, as the energy density of the ZPF increases, 'c' decreases in such a way that 'hc' is invariant. A similar analysis could be made for other time-varying "constants" that change synchronously with 'c'.

This analysis reveals some important consequences resulting
from Einstein's famous equation [E = m c^{2}], where 'E'
is energy, and 'm' is mass. Data listed in the Norman/Setterfield
Report confirm the analysis that 'm' is proportional to 1 / c^{2}
within a quantum interval, so that energy (E) is unaffected as
'c' varies. Haisch, Rueda and Puthoff independently verify that
when the energy density of the ZPF decreases, mass also decreases.
They confirm that 'E' in Einstein's equation remains unaffected
by these synchronous changes involving 'c' [16].

If we continue this analysis, the behaviour of mass 'm' is
found to be very closely related to the behaviour of the ** Gravitational
constant** 'G' and gravitational phenomena. In fact 'G'
can be shown to vary in such a way that 'Gm' remains invariant
at all times. This relationship between 'G' and 'm' is similar
to the relationship between Planck's constant and the speed of
light that leaves the quantity 'hc' unchanged. The quantity 'Gm'
always occurs as a united entity in the relevant gravitational
or orbital equations [65]. Therefore, gravitational and orbital
phenomena will be unchanged by varying light speed as will planetary
periods and distances [66]. In other words, acceleration due to
gravity, weight, and planetary orbital years, remain independent
of any variation of 'c'. As a result, astronomical orbital periods
of the earth, moon, and planets form an independent time-piece,
a dynamical clock, with which it is possible to compare atomic
processes.
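
The conservation claims of the last few paragraphs amount to a set of scaling rules, which can be checked numerically. The sketch below uses illustrative present-day values and an arbitrary scaling factor; 'h' and 'm' follow the stated proportionalities h ∝ 1/c and m ∝ 1/c², while the scaling of 'G' is inferred from the stated invariance of 'Gm':

```python
# Scaling check for the claimed invariants: hc, E = m c^2, and Gm.
# Illustrative starting values (SI units), scaled by an arbitrary factor f.
c0 = 2.99792458e8    # m/s
h0 = 6.62607015e-34  # J s
m0 = 9.1093837e-31   # kg (electron mass, for illustration)
G0 = 6.674e-11       # m^3 kg^-1 s^-2

f = 10.0  # suppose 'c' were f times higher at some earlier epoch

c = f * c0
h = h0 / f       # h proportional to 1/c    -> hc invariant
m = m0 / f**2    # m proportional to 1/c^2  -> E = m c^2 invariant
G = G0 * f**2    # inferred so that Gm stays invariant

assert abs(h * c / (h0 * c0) - 1.0) < 1e-12
assert abs(m * c**2 / (m0 * c0**2) - 1.0) < 1e-12
assert abs(G * m / (G0 * m0) - 1.0) < 1e-12
print("hc, mc^2 and Gm are all unchanged under the scaling")
```

Whatever factor is chosen for 'f', the three products remain fixed, which is the sense in which energy and gravitational phenomena are said to be unaffected by the variation in 'c'.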

*THE BEHAVIOUR OF ATOMIC CLOCKS*

This comparison between dynamical and atomic clocks leads to
another aspect of this discussion. Observations reveal that a
higher speed of light implies that some atomic processes are proportionally
faster. This includes atomic frequencies and the rate of ticking
of atomic clocks. In 1934 'c' was experimentally determined to
be varying, but measured wavelengths of light were experimentally
shown to be unchanged. Professor Raymond T. Birge, who did not
personally accept the idea that the speed of light could vary,
nevertheless stated that the observational data left only one
conclusion. He stated that if 'c' was actually varying and wavelengths
remained unchanged, this could only mean *"the value of
every atomic frequency must be changing"* [67].

Birge was able to make this statement because of an equation linking the wavelength 'W' of light, with frequency 'F', and light-speed 'c'. The equation reads 'c = FW.' If 'W' is constant and 'c' is varying, then 'F' must vary in proportion to 'c'. Furthermore, Birge knew that the frequency of light emitted from atoms is directly proportional to the frequency of the revolution of atomic particles in their orbits [42]. All atomic frequencies are therefore directly proportional to 'F', and so also directly proportional to 'c', just as Birge indicated.
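
Birge's reasoning can be shown in a few lines. In this sketch, the hydrogen ground-state wavelength quoted earlier (converted to metres) is held fixed while 'c' is varied; the implied frequency then scales with 'c', and a 1% change in 'c' (the size of the discrepancy Newcomb noted) forces exactly a 1% change in frequency:

```python
# c = F * W : with wavelength W held constant, frequency F must track c.
W = 9.1127e-8  # the hydrogen ground-state wavelength quoted earlier, in metres

def frequency(c, wavelength=W):
    """Frequency implied by c = F * W for a fixed wavelength."""
    return c / wavelength

c_now = 2.99792458e8   # m/s
c_past = 1.01 * c_now  # suppose 'c' were 1% higher, as in Newcomb's remark

ratio = frequency(c_past) / frequency(c_now)
print(f"F_past / F_now = {ratio:.4f}")  # frequencies scale in proportion to c
```

Since atomic clock run-rates are governed by these frequencies, the same proportionality carries straight over to the clocks discussed in the next section.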

The run-rate of atomic clocks is governed by atomic frequencies.
It therefore follows that these clocks, in all their various forms,
run at a rate proportional to c. The atomic clock is thereby c-dependent,
while the orbital or dynamical clock ticks independently at a
constant rate. In 1965, Kovalevsky pointed out the converse of
this. He stated that if the two clock rates were different, *"then
Planck's constant as well as atomic frequencies would drift"
*[68]. This is precisely what the observations reveal.

This has practical consequences in the measurements of 'c'.
In 1949 the frequency-dependent ammonia-quartz clock was introduced
and became standard in many scientific laboratories [69]. But
by 1967, atomic clocks had become uniformly adopted as timekeepers
around the world. Methods that use atomic clocks to measure 'c'
will always fail to detect any changes in light-speed, since their
run-rate varies directly as 'c' varies. This is evidenced by the
change in character of the 'c' data following the introduction
of these clocks. This is why the General Conference on Weights
and Measures meeting in Paris in October of 1983 declared 'c'
an absolute constant [70]. Since then, any change in the speed
of light would have to be inferred from measurements other than
those involving atomic clocks.

*COMPARING ATOMIC AND DYNAMIC CLOCKS*

However, this problem with frequencies and atomic clocks can
actually supply additional data to work with. It is possible in
principle to obtain evidence for speed of light variation by comparing
the run-rate of atomic clocks with that of dynamical clocks. When
this is done, a difference in run-rate is noted. Over a number
of years up to 1980, Dr. Thomas Van Flandern of the US Naval Observatory
in Washington examined data from lunar laser ranging using atomic
clocks, and compared their data with data from dynamical, or orbital,
clocks. From this comparison of data, he concluded that *"the
number of atomic seconds in a dynamical interval is becoming fewer.
Presumably, if the result has any generality to it, this means
that atomic phenomena are slowing down with respect to dynamical
phenomena"* [71]. Van Flandern has more recently been
involved in setting the parameters running the clocks in the Global
Positioning System of satellites used for navigation around the
world. His clock comparisons indicated that atomic phenomena were
slowing against the dynamical standard until about 1980. This
implies that 'c' was continuing to slow until at least 1980, regardless
of the results obtained using the frequency-dependent measurements
of recent atomic clocks.

*AN OSCILLATION IS INVOLVED*

These clock comparisons are useful in another way. The atomic dates of historical artifacts can be approximated via radiometric dating. These dates can then be compared with actual historical, or orbital, dates. This comparison of clocks allows us to examine the situation prior to 1678 when the Danish astronomer Roemer made the first measurement of the speed of light. When this comparison is done, light-speed behaviour is seen to include an oscillation, which seems to have had one minimum around 2570 BC, with an error of about ± 200 years, following which it climbed to a secondary maximum, and then started dropping again. Indeed, it is of interest to note that measurements of several atomic constants associated with 'c' seem to indicate that the 'c' decay curve apparently bottomed out around 1980 AD and may have started to increase again. More data are needed before a positive statement can be made.

Furthermore, the redshift observations themselves reveal this oscillation, which results in a steps-and-stairs pattern superimposed on the general trend of the main curve. At the 'flat points' in this pattern, the value of 'z' changes slowly over a large distance, so that many galaxies are involved. Consequently, significant numbers of galaxies appear to congregate at preferred, systematic redshifts [72]. By contrast, on the steeply rising part of the step, the value of 'z' changes rapidly over a relatively short distance, so relatively few galaxies are found with those redshifts. These redshift 'periodicities' form a precise mathematical sequence [73] and are distinct from the quantisation itself, since the periodicities depend on the numbers of galaxies counted at a given redshift, whereas the line of change in redshift value due to quantisation may often pass right through individual galaxies.

As both Close [74] and D'Azzo & Houpis [75] pointed out in 1966, this oscillation is typical of many physical systems. The complete response of a system to an input of energy comprises two parts: the forced response and the free or natural response. This can be illustrated by a number of mechanical or electrical systems. The forced response comes from the injection of energy into the system. The free response is the system's own natural period of oscillation. The two together describe the complete behaviour of the system. In this new model, the main trend of the curve represents the energy injection into the system, while the oscillation comes from the free response of the cosmos to this energy injection. This dual process has affected atomic behaviour and light-speed throughout the cosmos.
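
The 'forced plus free response' description can be sketched numerically. The curve below is purely illustrative (every parameter is invented for the demonstration, not fitted to any data): a decaying main trend representing the forced response, with a small natural oscillation superimposed on it:

```python
import math

# Illustrative composite response: forced (decay) plus free (oscillation).
# All parameter values here are invented for demonstration only.
def response(t, a=1.0, tau=5.0, amp=0.05, period=3.0):
    forced = a * math.exp(-t / tau)                   # energy-injection trend
    free = amp * math.cos(2 * math.pi * t / period)   # natural oscillation
    return forced + free

# Sample the curve: a rapid initial drop that tapers off, with a ripple on top.
samples = [round(response(t), 4) for t in range(10)]
print(samples)
```

The overall shape, a steep early decline flattening out with a steps-and-stairs ripple, is the qualitative behaviour the text attributes to light-speed and the redshift.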

*LIGHT-SPEED AND THE EARLY COSMOS*

The issue of light-speed in the early cosmos is one that has
received some attention recently in several peer-reviewed journals.
Starting in December 1987, the Russian physicist V. S. Troitskii
from the Radiophysical Research Institute in Gorky published a
twenty-two page analysis in ** Astrophysics and Space Science**
regarding the problems cosmologists faced with the early universe.
He looked at a possible solution if it was accepted that light-speed
continuously decreased over the lifetime of the cosmos, and the
associated atomic constants varied synchronously. He suggested
that, at the origin of the cosmos, light might have travelled
at 10

In 1993, J. W. Moffat of the University of Toronto, Canada,
had two articles published in the ** International Journal
of Modern Physics D **(see also [76]). He suggested that
there was a high value for 'c' during the earliest moments of
the formation of the cosmos, following which it rapidly dropped
to its present value. Then, in January 1999, a paper in **Physical
Review D** by Albrecht and Magueijo proposed a very high value
for light-speed in the earliest moments of the cosmos.

Like Moffat before them, Albrecht and Magueijo isolated their
high initial light-speed and its proposed dramatic drop to the
current speed to a very limited time during the formation of the
cosmos. However, in the same issue of ** Physical Review D
**there appeared a paper by John D. Barrow, Professor of
Mathematical Sciences at the University of Cambridge. He took
this concept one step further by proposing that the speed of light
has dropped from the value proposed by Albrecht and Magueijo down
to its current value over the lifetime of the universe.

An article in ** New Scientist** for July 24, 1999,
summarised these proposals in the Editor's introduction.

*EXPANDING THE COSMOS*

Given all these results, the key question then becomes, why
should the ZPE increase with time? One basic tenet of the Big
Bang and some other cosmologies is an initial rapid expansion
of the universe. That initial rapid expansion is accepted here.
However, the redshift can no longer be used as evidence that this
initial expansion has continued until the present. Indeed, if
space were continuing its uniform expansion, the precise quantisation
of spectral line shifts that Tifft has noted would be smeared
out and lost. The same argument applies to cosmological contraction.
This suggests that the initial expansion halted before redshifted
spectral lines were emitted by the most distant galaxies, and
that since then the universe has been essentially static. In 1993,
Jayant Narlikar and Halton Arp published a paper in ** Astrophysical
Journal** (vol. 405, p. 51) which revealed that a static
cosmos containing matter was indeed stable against collapse under
conditions that are fulfilled in this new model.

However, the initial expansion was important. As Paul S. Wesson
[77], Martin Harwit [78] and others have shown, the physical vacuum
initially acquired a potential energy in the form of an elasticity,
tension, or stress as a result of the inflationary expansion of
the cosmos. This might be considered to be akin to the tension,
stress, or elasticity in the fabric of a balloon that has been
inflated. In order to appreciate what is happening to the structure
of the vacuum under these conditions, the statement of Pipkin
and Ritter is again relevant, namely that *"the Planck
length is a length at which the smoothness of space breaks down,
and space assumes a granular structure"* [79]. Since this
granular structure of space is made up of Planck particle pairs,
whose dimensions are equal to the Planck length, then it is at
the level of these Planck particle pairs that the vacuum is likely
to respond to the expansion of the cosmos.

More specifically, such an expansion of the fabric of space is likely to cause an increased separation and spin of the Planck particle pairs. Because these Planck particle pairs have positive and negative charges, their separation will give rise to electric fields and their spin will give rise to magnetic fields. It is these electro-magnetic fields from the Planck particle pairs that comprise the all-pervasive ZPE. In that sense, then, the original expansion set the initial conditions governing the ZPE. However, once those parameters were set and the cosmos reached a static state, the energy density of the ZPE would depend upon the number of Planck particle pairs that manifested in a unit volume in any given dynamical interval. Anything that changes this number will also change the energy density of the ZPE, along with all the effects that have been discussed in this paper. In this way, the structure and behaviour of the vacuum at the Planck particle level is determining all the observed effects at the atomic level.

*AN INCREASING VACUUM ENERGY*

An important factor in the discussion then becomes the interval known as the Planck time, which is the length of time that Planck particle pairs exist before annihilating. This time interval is governed by the behaviour of Planck's constant 'h'. Since 'h' is increasing with the passing of dynamical time, as discussed above, this means that the Planck time interval is also increasing. In this sense it is rather like a cheap watch that slows down as its spring unwinds so that the period between its ticks increases. The function governing this rate of ticking is the same as the function governing light-speed behaviour. This effectively means that, for any given constant dynamical interval, more Planck particle pairs will be in existence per unit volume, as each particle pair will remain in existence for a longer time.

In order to illustrate this more effectively, consider a unit volume of space in which the conditions are such that a Planck particle pair manifests every dynamical second. Furthermore, let the Planck time interval also be one dynamical second. Thus, at any given observed interval of one dynamical second, only one particle pair will exist in that unit volume. Let the Planck time then be increased by a factor of 3, so that each particle pair exists for 3 dynamical seconds. Since other conditions remain unchanged, a new particle pair will still manifest every second. Thus 3 particle pairs will exist during any given dynamical second. First, there is the pair that originated at the beginning of that interval, just as the situation was before. Then there is also the pair that originated one second earlier, so that the observational interval is the middle second of their 3 second lifespan. Then in addition there is also the pair that originated two seconds earlier, so that the observational second is the 3rd second of their existence. It can therefore be demonstrated that if Planck's constant increases by a factor N, the Planck time interval is also increased by a factor N, and therefore the number of Planck particle pairs per unit volume in any given dynamical interval increased by a factor N. All the effects outlined in this summary then respond as a consequence.
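
The counting argument in the paragraph above can be checked with a small discrete simulation. In this sketch, one pair manifests at the start of each dynamical second and each pair lives for a fixed number of seconds; the number of pairs in existence during any later second then equals that lifetime, so scaling the lifetime by a factor N scales the population by N, exactly as claimed:

```python
def pairs_alive(lifetime, at_second, horizon=1000):
    """Count the pairs alive during a given second, assuming one pair is
    created at the start of each second and each lives `lifetime` seconds."""
    alive = 0
    for birth in range(horizon):
        if birth <= at_second < birth + lifetime:
            alive += 1
    return alive

# One-second lifetime: exactly one pair exists during any given second.
print(pairs_alive(lifetime=1, at_second=500))
# Tripling the lifetime (the factor of 3 in the text) triples the count,
# and the pattern holds for any factor N.
print(pairs_alive(lifetime=3, at_second=500))
print(pairs_alive(lifetime=7, at_second=500))
```

The three printed counts are 1, 3 and 7, matching the worked example in the text and its generalisation to a factor N.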

*IS THERE A BASIC CAUSE?*

The only issue remaining for examination is the basic
reason for the behaviour of the Planck particle pairs. Since light-speed
'c' is dependent upon the ZPE as outlined above, its behaviour
cannot be influencing the ZPE. In a similar way, it can be argued
that both mass and atomic time are dependent upon the ZPE for
their behaviour so that their performance does not constitute
the heart of the matter. On the SED approach, even the Newtonian
gravitational constant 'G' is a ZPE phenomenon, which removes
it from contention here. The one factor that does emerge from
the foregoing discussion is the increasing quantum uncertainty
that allows Planck particle pairs to manifest for an increasing
length of time. Thus, as the intrinsic potential energy of the
cosmos runs down, quantum uncertainty increases, so the Planck
time interval increases, in an analogous way to the behaviour
of some spring-driven clocks.

*IMPLICATIONS OF THIS PROPOSED MODEL*

**(1). Quantum "shells"**

This model assumes each quantum change occurs instantaneously throughout the cosmos. Yet a finite time is taken for light emitted by atomic processes to reach the observer. Consequently, the observed redshift will appear to be quantised in spherical shells centred about any observer anywhere in the universe. All objects that emit light within that shell will have the same redshift.

**(2). "Missing mass" in galaxy clusters**

The relative velocities of individual galaxies within clusters
of galaxies are measured by their redshift. From this redshift
measurement, it has been concluded that the velocities of galaxies
are too high for them to remain within the cluster for the assumed
age of the universe. Therefore astronomers have been looking for
the "missing mass" needed to hold such clusters together
by way of gravitational forces. However, if the redshift does
not actually represent velocity at all, then the problem disappears
since the quantised redshift largely explains the changing cz
values across the diameters of most clusters of galaxies. Indeed,
a large actual velocity component in these cz values would destroy
the quantisation effect. Recent work on galaxy clusters has revealed
the significant information that in the centre of the Virgo cluster,
galaxies *"were moving fast enough to wash out the [redshift]
periodicity"* [80]. As the actual relative velocities
of galaxies are therefore small, no mass is "missing."
(Note that this does not solve the problem of the "missing
mass" within spiral galaxies which is a separate issue.)

**(3). A uniform microwave background**

An initial very high value for light-speed means that the radiation in the very early moments of the cosmos would be rapidly homogenised by scattering processes. This means that the radiation we observe from that time will be both uniform and smooth. This is largely what is observed with the microwave background radiation coming from all parts of the sky [81]. This model therefore provides an answer to its smoothness without the necessity of secondary assumptions about matter distribution and galaxy formation that tend to be a problem for current theories.

**(4). Corrections to the atomic clock**

As a consequence of knowing how light-speed and atomic clocks have behaved from the redshift, atomic and radiometric clocks can now be corrected to read actual orbital time. As a result, geological eras can have a new orbital time-scale set beside them. This will necessitate a re-orientation in our current thinking on such matters.

**(5). Final note**

The effects of changing the vacuum energy density uniformly throughout the cosmos have been considered in this presentation. This in no way precludes the possibility that the vacuum energy density may vary on a local astronomical scale, perhaps due to energetic processes. In such cases, dramatically divergent redshifts may be expected when two neighbouring astronomical objects are compared. Arp has listed a number of potential instances where this explanation may be valid [82, 83].

*SUMMARY*

This model proposes that an initial small, hot, dense, highly energetic universe underwent rapid expansion to its current size, and remained static thereafter. The response of the fabric of space, through the behaviour of Planck particle pairs, gave rise to an increasing energy density for the ZPE. This had two results. First, there was a progressive decline in light-speed. Concurrently, atomic particle and orbital energies throughout the cosmos underwent a series of quantum increases, as more energy became available to them from the vacuum. Therefore, with increasing time, atoms emitted light that shifted in jumps towards the more energetic blue end of the spectrum. As a result, as we look back in time to progressively more distant astronomical objects, we see that process in reverse. That is to say the light of these galaxies is shifted in jumps towards the red end of the spectrum. The implications of this model solve some astronomical problems but, at the same time, challenge some current historical interpretations.

******************

*ACKNOWLEDGMENTS:*

My heartfelt thanks goes to Helen Fryman for the many hours
she spent in order to make this paper readable for a wide audience.
A debt of gratitude is owed to Dr. Michael Webb, Dr. Bernard Brandstater,
and Lambert Dolphin for their many helpful discussions and sound
advice. Finally, I must also acknowledge the pungent remarks of
'Lucas,' which resulted in some significant improvements to this
paper.

*REFERENCES:*

[1]. Timothy H. Boyer, *"The Classical Vacuum"*,
**Scientific American,** pp.70-78, August 1985.

[2]. Robert Matthews, *"Nothing like a Vacuum"*,
**New Scientist**, pp. 30-33, 25 February 1995.

[3]. Harold E. Puthoff, *"Can The Vacuum Be Engineered
For Spaceflight Applications? Overview Of Theory And Experiments"*,
**NASA Breakthrough Propulsion Physics Workshop**, August 12-14,
1997, NASA Lewis Research Center, Cleveland, Ohio.

[4]. Harold E. Puthoff, *"Everything for nothing"*,
**New Scientist**, pp.36-39, 28 July 1990.

[5]. Anonymous, *"Where does the zero-point energy come
from?"*, **New Scientist**, p.14, 2 December 1989.

[6]. Martin Harwit, *"Astrophysical Concepts"*,
p. 513, Second Edition, Springer-Verlag, 1988.

[7]. A. P. French, *"Principles of Modern Physics"*,
p. 176, John Wiley & Sons, New York, 1959.

[8]. P. W. Milonni, *"The Quantum Vacuum: An Introduction
to Quantum Electrodynamics"*, Academic Press, New York,
1994.

[9]. Timothy H. Boyer, *"Random Electrodynamics: The
theory of classical electrodynamics with classical electromagnetic
zero-point radiation"*, **Physical Review D**, Vol.
11:4, pp.790-808, 15 February, 1975.

[10]. L. de la Pena, and A. M. Cetto,* "The Quantum
Dice: An introduction to stochastic electrodynamics."*
Kluwer Academic Publisher, Dordrecht, 1996.

[11]. Stephen M. Barnett, *"Photons faster than light?"*,
**Nature**, Vol. 344, p. 289, 22 March, 1990.

[12]. Kenneth W. Ford, *"Classical and Modern Physics"*,
Vol. 3, p.1290, Wiley, New York, 1974.

[13]. I. Levine et al., *"Measurement of the Electromagnetic
Coupling at Large Momentum Transfer"*, **Physical Review
Letters**, Vol. 78:3, pp. 424-427, 20 Jan 1997.

[14]. K. Huang, *"On the Zitterbewegung of the Dirac
Electron"*, **American Journal of Physics**, Vol. 20,
pp. 479-484, 1952.

[15]. Bernard Haisch, Alfonso Rueda and H. E. Puthoff, *"Beyond
E = mc^2. A First Glimpse of a Universe Without Mass"*,
**The Sciences**, pp. 26-31, New York Academy of Sciences, November/December
1994.

[16]. B. Haisch, A. Rueda, and H. E. Puthoff, *"Physics
of the Zero-Point Field: Implications for Inertia, Gravitation
and Mass"*, **Speculations in Science and Technology,**
Vol. 20, pp. 99-114, 1997.

[17]. E. Nelson, *"Derivation of the Schroedinger Equation
from Newtonian Mechanics"*, **Physical Review**, Vol.
150, pp. 1079-1085, 1966.

[18]. Jack S. Greenberg and Walter Greiner, *"Search
for the sparking of the vacuum"*, **Physics Today**,
pp.24-32, August 1982.

[19]. Walter J. Moore, *"Physical Chemistry"*,
pp. 12-13, Longmans 1961.

[20]. G. Barton, *"Faster-Than-c Light Between Parallel
Mirrors"*, **Physics Letters B**, Vol. 237, No. 3-4,
pp. 559-562, 22 March, 1990.

[21]. B.I. Bleaney and B. Bleaney, *"Electricity and
Magnetism"*, p.242, Oxford, at the Clarendon Press, 1962.

[22]. R. L. Lehrman and C. Swartz, "Foundations of Physics", pp. 510-511, Holt, Rinehart and Winston Inc., 1969.

[23]. H. E. Puthoff, *"Ground state of hydrogen as a
zero-point-fluctuation-determined state"*, **Physical
Review D**, Vol. 35, No. 10, pp. 3266-3269, 15 May, 1987.

[24]. Donald Goldsmith, *"The Evolving Universe"*,
Second Edition, pp. 108-110, Addison-Wesley, 1985.

[25]. Paul Couderc, *"The Wider Universe"*,
p. 92, Arrow Science Series, Hutchinson, London, 1960.

[26]. William G. Tifft, *"Discrete States Of Redshift
And Galaxy Dynamics I"*, **Astrophysical Journal**,
Vol. 206:38-56, 15 May, 1976.

[27]. William G. Tifft, *"Discrete States Of Redshift
And Galaxy Dynamics II: Systems Of Galaxies"*, **Astrophysical
Journal**, Vol. 211:31-46, 1 Jan., 1977.

[28]. William G. Tifft, *"Discrete States Of Redshift
And Galaxy Dynamics III: Abnormal Galaxies"*, **Astrophysical
Journal**, 211:377-391, 15 January, 1977.

[29]. William G. Tifft, *"The Discrete Redshift And
Asymmetry In H I Profiles"*, **Astrophysical Journal**,
Vol. 221:449-455, 15 April, 1978.

[30]. William G. Tifft, *"The Absolute Solar Motion
And The Discrete Redshift"*, **Astrophysical Journal**,
Vol. 221:756-775, 1 May, 1978.

[31]. William G. Tifft, *"Periodicity In The Redshift
Intervals For Double Galaxies"*, **Astrophysical Journal**,
Vol. 236:70-74, 15 February, 1980.

[32]. William G. Tifft, *"Structure Within Redshift-Magnitude
Bands"*, **Astrophysical Journal**, Vol. 233:799-808,
1 November, 1979.

[33]. William G. Tifft, *"Quantum Effects In The Redshift
Intervals For Double Galaxies"*, **Astrophysical Journal**,
Vol. 257:442-499, 15 June, 1982.

[34]. William G. Tifft, *"Double Galaxy Investigations
II"*, **Astrophysical Journal**, Vol. 262:44-47, 1
November, 1982.

[35]. John Gribbin, *"Galaxy red shifts come in clumps"*,
**New Scientist**, pp. 20-21, 20 June, 1985.

[36]. W. J. Cocke and W. G. Tifft, *"Redshift Quantisation
In Compact Groups Of Galaxies"*, **Astrophysical Journal**,
Vol. 268:56-59, 1 May, 1983. Also Cocke and Tifft, **Astrophysical
Journal**, Vol. 287:492. Also Cocke, **Astrophysics Letters**,
Vol. 23, p. 239; **Astrophysical Journal**, Vol. 288, p. 22.

[37]. T. Beardsley, *"Quantum Dissidents"*, **Scientific
American**, December 1992.

[38]. John Gribbin, *"Riddle of the Red Shift"*,
**New Scientist**, p. 17, 9 July, 1994.

[39]. R. Matthews, *"Do Galaxies Fly Through The Universe
In Formation?"*, **Science**, Vol. 271:759, 1996.

[40]. W. G. Tifft, *"Properties Of The Redshift III:
Temporal Variation"*, **Astrophysical Journal**, Vol.
382:396-415, 1 December, 1991.

[41]. J. Audouze and G. Israel, *"Cambridge Atlas of
Astronomy"*, p. 382, Cambridge/Newnes, 1985.

[42]. A. P. French, *"Principles of Modern Physics"*,
pp. 103-121, John Wiley & Sons, New York, 1959.

[43]. H. E. Puthoff, *"Polarizable-Vacuum (PV) representation
of general relativity"*, published by the Institute For Advanced
Studies at Austin, Texas, September 1999.

[44]. John Gribbin, *"More to electrons than meets the
eye"*, **New Scientist**, p. 15, 25 January, 1997.

[45]. Robert M. Eisberg, *"Fundamentals Of Modern Physics"*,
p. 137, Wiley, New York, 1961.

[46]. M. Russell Wehr and James A. Richards Jr., *"Physics
Of The Atom"*, pp. 108, 196, Addison-Wesley, 1960.

[47]. Peter Fong, *"Elementary Quantum Mechanics"*,
p. 16, Addison-Wesley, 1962.

[48]. K. Scharnhorst, *"On Propagation Of Light In The
Vacuum Between Plates"*, **Physics Letters B**, Vol.
236:3, pp. 354-359, 22 February, 1990.

[49]. Marcus Chown, *"Can photons travel 'faster than
light'?"*

[50]. Anonymous, *"Secret of the vacuum: Speedier light"*,
**Science News**, Vol. 137, p. 303, 12 May, 1990.

[51]. Philip F. Schewe and Ben Stein, *"Light Has Been
Slowed To A Speed Of 17 m/s"*, **American Institute of
Physics, Bulletin of Physics News**, Number 415, 18 February,
1999.

[52]. B. Setterfield, *"Atomic Quantum States, Light,
and the Redshift"*, June 2001.

[53]. B. Lovell, I. O. Paine and P. Moore, *"The Reader's
Digest Atlas of the Universe"*, p. 214, Mitchell Beazley
Ltd., 1974.

[54]. J. Audouze and G. Israel, *op. cit.*, pp. 356, 382.

[55]. Trevor Norman and Barry Setterfield, *"Atomic
Constants, Light, and Time"*, SRI International, August
1987. See detailed list under their reference [360].

[56]. N. E. Dorsey, *"The Velocity Of Light"*, **Transactions
of the American Philosophical Society**, Vol. 34, Part 1, pp. 1-110,
October, 1944.

[57]. Simon Newcomb, *"The Velocity Of Light"*,
**Nature**, pp. 29-32, 13 May, 1886.

[58]. Raymond T. Birge, *"The General Physical Constants"*,
**Reports On Progress In Physics**, Vol. 8, pp. 90-101,
1941.

[59]. Alan Montgomery and Lambert Dolphin, *"Is The
Velocity Of Light Constant In Time?"*, **Galilean Electrodynamics**,
Vol. 4:5, pp. 93-97, Sept./Oct. 1993.

[60]. Alan Montgomery, *"A determination and analysis
of the appropriate values of the speed of light to test the Setterfield
hypothesis"*, **Proceedings of the Third International
Conference on Creationism**, pp. 369-386, Creation Science Fellowship
Inc., Pittsburgh, Pennsylvania, August 1994.

[61]. J. N. Bahcall and E. E. Salpeter, *"On the interaction
of radiation from distant sources with the intervening medium"*,
**Astrophysical Journal**, Vol. 142, pp. 1677-1681, 1965.

[62]. W. A. Baum and R. Florentin-Nielsen, *"Cosmological
evidence against time variation of the fundamental constants"*,
**Astrophysical Journal**, Vol. 209, pp. 319-329, 1976.

[63]. J. E. Solheim et al., *"Observational evidence
against a time variation in Planck's constant"*, **Astrophysical
Journal**, Vol. 209, pp. 330-334, 1976.

[64]. P. D. Noerdlinger, *"Primordial 2.7 degree radiation
as evidence against secular variation of Planck's constant"*,
**Physical Review Letters**, Vol. 30, pp. 761-762, 1973.

[65]. S. L. Martin and A. K. Connor, *"Basic Physics"*,
Vol. 1, Seventh Edition, pp. 207-209, Whitcombe & Tombs,
Melbourne, 1958.

[66]. V. Canuto and S. H. Hsieh, *"Cosmological Variation
Of G And The Solar Luminosity"*, **Astrophysical Journal**,
Vol. 237, pp. 613-615, April 15, 1980.

[67]. R. T. Birge, *"The Velocity Of Light"*, **Nature**,
Vol. 134, pp. 771-772, 1934.

[68]. J. Kovalevsky, *"Astronomical time"*, **Metrologia**,
Vol. 1:4, pp. 169-180, 1965.

[69]. Samuel A. Goudsmit and Robert Claiborne, *"Time"*,
p. 106, Life Science Library, Time-Life International, 1967.

[70]. T. Wilkie, *"Time to Re-measure the Metre"*,
**New Scientist**, p. 258, 27 October, 1983.

[71]. T. C. Van Flandern, *"Is the Gravitational
Constant Changing?"*, **Precision Measurements and Fundamental
Constants II**, NBS (US) Special Publication 617, B. N. Taylor
and W. D. Phillips eds., pp. 625-627, 1984.

[72]. D. Duari et al., *"Statistical Tests of Peaks
and Periodicities in the Observed Redshift"*, **Astrophysical
Journal**, Vol. 384:35-42, 1 January, 1992. Also, G. Burbidge
and A. Hewitt, *"The Redshift Peak At z = 0.06"*, **Astrophysical
Journal**, Vol. 359:L33-L36, 20 August, 1990.

[73]. Halton Arp, *"Seeing Red: Redshifts, Cosmology
and Academic Science"*, p. 203, Apeiron, Montreal, 1998.

[74]. C. M. Close, *"The Analysis of Linear Circuits"*,
p. 476, Harcourt, Brace, and World Inc., 1966.

[75]. J. J. D'Azzo and C. H. Houpis, *"Feedback Control
System Analysis and Synthesis"*, pp. 257-259, McGraw Hill
International Edition, 1966.

[76]. M. A. Clayton and J. W. Moffat, *"Dynamical mechanism
for varying light velocity
as a solution to cosmological problems"*

[77]. Paul S. Wesson, *"Cosmology and Geophysics"*,
pp. 64-66, Adam Hilger Ltd., Bristol, 1978.

[78]. Martin Harwit, *op. cit.*, pp. 514-517.

[79]. F. M. Pipkin and R. C. Ritter, **Science**, Vol.
219, No. 4587, 1983.

[80]. Halton Arp, *"Seeing Red: Redshifts, Cosmology
and Academic Science"*, p. 199, Apeiron, Montreal, 1998.

[81]. Martin Harwit, *op. cit.*, pp. 177-180.

[82]. Halton Arp, *"Quasars, Redshifts and Controversies"*,
Interstellar Media, Berkeley, California, 1987.

[83]. Halton Arp, *"Seeing Red: Redshifts, Cosmology
and Academic Science"*, Apeiron, Montreal, 1998.

*****************