## A SIMPLIFIED EXPLANATION OF THE SETTERFIELD HYPOTHESIS

Helen Fryman (tuppence@ns.net), with Barry Setterfield (barry4light2@yahoo.com)

February 26, 2001

The following is a simple explanation of the Setterfield cDK hypothesis intended for teens and undergraduate non-physics majors. Because it is a simplified explanation, there is no formal referencing. Referenced papers from which this is taken are available here: http://ldolphin.org/setterfield/index.html or http://setterfield.org/.

THE EXPANDED UNIVERSE

Both the secular 'Big Bang' (or Big Expansion) model and the Bible agree that the universe has been stretched out. Although there is good evidence that this expansion is no longer taking place, that is not what we will discuss at first here. What we need to review is the effect of that expansion.

Blow up a balloon, or stretch out a rubber band. At first the elasticity is strong and the amount of potential energy stored is at a maximum for that particular object. When the balloon pops, or the rubber band is released, all the potential energy stored in the stretching is immediately released as kinetic, or active, energy. But if that stretched rubber band or blown-up balloon is left for a while, the rubber will begin to relax, losing its elasticity and simply remaining in a stretched-out condition.

Where did the energy go in the latter case? As the rubber relaxed, the potential energy was actually transformed into active, or kinetic, energy, and released into the surrounding air. However, the amount of energy was so small compared to the volume of air into which it was dispersed that it was impossible to discern any difference in the air.

THE 'VACUUM' OF SPACE AND ZPE

We can apply this same idea to the universe itself. However first of all, we need to correct a common misconception. We have a tendency to think of space as an empty or true vacuum. As it turns out, however, that is not the case. Even 'empty' space is filled with all types of radiation, virtual particles, Planck particles, and perhaps even more energy or particles than we are aware of at the present time. This is one reason you will hear the phrase 'fabric of space' being used by some. Space - out there - is simply not an 'empty' thing.

With this in mind, it might be a little easier to understand the idea that when space itself was stretched out, there was a tremendous amount of potential energy locked up in the stretching. Through time, just as with the rubber band or the balloon, this stretching has gradually relaxed (which is different from 'going down'), thus releasing steady amounts of energy into space itself.

Is there any evidence of this? Yes, there is. It has been known for some time that electrons, and all atomic particles, 'jiggle,' or vibrate rapidly, even at absolute zero temperature. This vibration can be measured. The measurement does not take place directly, but takes place the same way we can measure Brownian motion, when a drop of colour is gently placed in a glass of cold water. Gradually the colour will disperse although every effort to keep the water still has been made. Motion exists down to the smallest levels and can be measured by its effects on other things. The measure of the motion of electrons is referred to as Planck's Constant, even though it has been shown not to be constant. It has been measured as systematically increasing over the last century, thus indicating that there has been an increase in energy affecting the electron. This energy is not coming from any 'known' source, but seems intrinsic to space itself. This energy, because it can be seen to be effective at absolute zero, is thus logically called 'zero point energy,' or ZPE. Setterfield's theory is that this is the energy being slowly but steadily released from the fabric of stretched space itself.

QUANTISATION

Now, we need to jump subjects for a second to provide another picture of something that is happening. Do this if you like, or simply imagine it, as you have probably done something like it before: put a glass full of water on a table top. Using as steady a push as you can manage, start gently pressing against that glass. At first it will not move. Then, as the pressure builds up, it will jerk forward a bit and stop. As the pressure from your hand continues, it will jerk forward again a bit. If you were to record where the glass is on the table at regular moments in time, your measurements would show that jerk - you would not have a smooth 'smear' of measurements across the table.

For instance, if you used enough force to move the glass smoothly across the table, your measurements would look something like this:

Table 1 - smooth motion:

| Time | Distance |
|---------------|---------------|
| 1 second | 1/2 inch |
| 2 seconds | 1 inch |
| 3 seconds | 1 1/2 inches |
| 4 seconds | 2 inches |
| 4 1/2 seconds | 2 1/4 inches |
| 5 seconds | 2 1/2 inches |
| 5 1/2 seconds | 2 3/4 inches |
| 6 seconds | 3 inches |
| 7 seconds | 3 1/2 inches |

However, if the force from your hand was gentle compared to the resistance of the glass, then the jerking would show up something like this:

Table 2 - jerky (quantised) motion:

| Time | Distance |
|---------------|---------------|
| 1 second | 1/2 inch |
| 2 seconds | 1/2 inch |
| 3 seconds | 1 1/2 inches |
| 4 seconds | 1 1/2 inches |
| 4 1/2 seconds | 2 1/4 inches |
| 5 seconds | 2 1/4 inches |
| 5 1/2 seconds | 2 3/4 inches |
| 6 seconds | 2 3/4 inches |
| 7 seconds | 3 1/4 inches |

This second table shows what are referred to as 'quantised' measurements: the measurements clump together in identifiable groups, unlike the first set. This will be important later on, too.
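The difference between the two kinds of measurement can be sketched in a few lines of code. This is only an illustration using the numbers from the two measurement tables above: smooth motion produces a new position at every reading, while quantised motion repeats positions in clumps.

```python
# Illustrative sketch: the position readings from the two tables above.
# Times are in seconds, distances in inches.
times = [1, 2, 3, 4, 4.5, 5, 5.5, 6, 7]
smooth = [0.5, 1.0, 1.5, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5]    # smooth motion
jerky = [0.5, 0.5, 1.5, 1.5, 2.25, 2.25, 2.75, 2.75, 3.25]  # quantised motion

def distinct_positions(readings):
    """Count how many different positions appear in the readings.

    Smooth motion gives a new position at every reading; quantised
    motion repeats positions, so fewer distinct values show up."""
    return len(set(readings))

print(distinct_positions(smooth))  # 9 - every reading is different
print(distinct_positions(jerky))   # 5 - the readings clump into groups
```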

Now, let's go back to outer space. We know that there are certain laws of motion that we can count on. Things remain the way they are until enough force is exerted on them to cause a change of speed, or direction, or both. So if there is energy actually being released into space from the potential energy caused by the expansion of the universe, then we should see two things: first, direct measurements of that energy should show a smooth increase; secondly, measurements of the effects of this increased energy should be quantised. That is like your hand and the glass. We could measure the output of energy/pressure from your hand, and the increase in energy released would be smooth. But the effect of that energy - the glass jerking across the table - will be quantised. We should, then, see this quantised effect in the mass of the universe itself, as any energy coming from the mass should show jerks in measurements.

It appears that this may be precisely what we are seeing.

QUANTISATION AND REDSHIFT

The glass on the table was a good picture of resistance to force and then the response when that force built up. With the glass, the resistance was friction. However, when we move down to the atomic scale, which we must do when considering the speed of light and the fabric of space, we are no longer dealing with friction. What we are dealing with is the kinetic, or expressed, energy of the electron itself. This kinetic energy is maintained until enough force or energy is applied to jerk the electron out of its previous pattern, forcing it into a new one. If this is truly happening, we should see evidence of it in the light emitted from atoms.

Here's a bit more foundation again, so you can understand what is happening. The light we see is part of the entire electromagnetic spectrum. This spectrum is made up of a series of wavelengths, and in the visible part of the spectrum, every colour we see is a different wavelength from every other colour. At one end of this rainbow, the red colour has relatively long wavelengths, and as we progress down the rainbow of colours to blue and then purple, the wavelengths get shorter and shorter. The longer the wavelength, the lower the energy of the electron which caused the light to be emitted.
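The wavelength-energy connection described here is the standard relation E = hc/wavelength, where h is Planck's constant and c is the speed of light. A quick sketch using present-day laboratory values confirms that red light carries less energy per photon than blue light:

```python
# Photon energy from wavelength: E = h * c / wavelength.
H = 6.626e-34  # Planck's constant, joule-seconds (present-day value)
C = 2.998e8    # speed of light, metres per second (present-day value)

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return H * C / wavelength_m

red = photon_energy(700e-9)   # red light, roughly 700 nanometres
blue = photon_energy(450e-9)  # blue light, roughly 450 nanometres
print(red < blue)  # True - the longer (red) wavelength has lower energy
```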

When astronomers look at a star, they see the light which was emitted by that star some time ago. It took time for that light to reach us. In other words, when we look at distant stars, we are looking back in time. There is no disagreement that the farther the star, the farther back in time we are looking. But there is something happening with the light that needs explanation. We see that the light from distant stars is 'redshifted.' What this means is that astronomers are not seeing the colour of light they expect to see from these stars. Every element has its own set of wavelengths when it emits or absorbs light. Scientists have developed a series of laboratory standards which show exactly what group of wavelengths is emitted or absorbed by each element. This is how scientists know what elements a star is made up of: they look at the signature colours. However, as they look at stars progressively more distant from the earth, the emitted light signatures increasingly differ from those used as laboratory standards for these particular elements. Instead, the signature colours are shifted more towards the red end of the spectrum. There are two possible explanations for this:

1. That the universe is expanding. This is the commonly accepted explanation. The idea here is that, as the universe expands, the fabric of space is getting stretched, including the light waves travelling in it from distant objects. This means the wavelengths will appear longer, or more red, than they were when they were originally emitted.

2. That something is happening to the electrons/atoms themselves to cause
them to emit more energetic, or bluer light now, so that when we look back
in time, the emitted light appears redder than our laboratory standards.

How would we be able to tell which is the right explanation? If the expanding universe idea is correct, then we should see the sort of smooth and constant change that was demonstrated in Table 1. If the second idea is correct, however, we should see the effect of the atomic activity resisting change for a while and then jerking to a new pattern or state. So the redshift measurements are very important to look at. Are they showing a smooth pattern, or a 'lumpy' pattern with clusters of measurements separated by some kind of interval?

An astronomer in Arizona named William Tifft has done about twenty years of redshift measurements. He has documented that there is a clumping effect in the measurements; in other words, they are quantised. Quantised redshift measurements present evidence against the expanding-universe explanation for the redshift. Tifft's work was challenged by a number of people, among them Drs. Guthrie and Napier, two astronomers. In both 1992 and 1994 they endeavoured to disprove Tifft's work, collecting an entirely new set of data for examination. Instead of disproving Tifft, they found, to their amazement, that they ended up agreeing with Tifft and substantiating his work.

So what does this redshift quantisation tell us? It may indicate that some kind of energy 'jumping' is occurring within atomic structures. If the universe has actually finished expanding and is releasing that potential energy into space, then the progressive increase of the energy available to each and every atom at the same time will result in the sudden jumping of the redshift measurements at specific intervals. This is because the pressure of the released energy would be building up simultaneously throughout the universe and thus every atom in the cosmos would also react simultaneously when the energy reached the threshold stage. Thus we would expect to see emitted light undergoing the quantised jumps that Tifft saw as we look back in time/distance into the universe.
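What 'quantised' redshift data would look like can be sketched numerically. The 72 km/s spacing and the galaxy velocities below are purely illustrative figures (an interval of roughly that size is associated with Tifft's early work, though the claimed intervals vary between studies):

```python
# Illustrative sketch of quantised redshift velocities. The interval and
# the 'measured' values below are hypothetical, for demonstration only.
INTERVAL = 72.0  # km/s, assumed spacing between allowed values

measured = [71.5, 144.3, 215.8, 288.1, 359.6]  # hypothetical velocities, km/s

def offset_from_nearest_multiple(v, step):
    """How far (in km/s) v sits from the nearest multiple of step."""
    return abs(v - round(v / step) * step)

# Quantised data should cluster close to multiples of the interval,
# rather than spreading smoothly across all possible values.
offsets = [offset_from_nearest_multiple(v, INTERVAL) for v in measured]
print(all(off < 2.0 for off in offsets))  # True for this clustered example
```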

At this point we see some of the possible evidence for substantiation of Setterfield's theory that the universe is no longer expanding but that energy is slowly being released into the cosmos to cause the redshift changes the way we are seeing them. To understand more, we have to look at the atom itself.

THE ATOM, LIGHT, AND MASS

The common idea of the atom, taught in most school science classes, is that it is composed of a nucleus made up of protons and neutrons and an outer series of electrons at various 'levels' around the nucleus. One common model of the atom, called the Bohr model, shows the electrons circling the nucleus like planets circle the sun. Although this is a model that is easy to work with theoretically, the actual positions and movements of the electrons are a matter of dispute. What we do know is that the electrons are not all equally close to the nucleus, but exist at certain definite levels, or distances, out from the nucleus. The level - sometimes called a 'shell' or even 'cloud' - at which chemical interactions happen is the outermost, or valence, level. The implied idea in school up until university is that all the other electrons stay nicely in place in their own little areas.

This is not what actually happens, though. First of all, light is emitted from an atom when some kind of incoming energy or particle pops an electron out of its customary place to one farther away from the nucleus. In returning, or popping back, to its original level, that electron then gives up the extra energy it received, and that energy is emitted as light. So the first thing to understand is that the emission of light depends on the amount of energy the electron received in the first place.

The next thing to understand is not nearly as simple. It has to do with the answer to the question, "What is mass?" The simplest explanation is that "no one knows for sure." The first and most common idea is that mass is something in and of itself, which is affected by energy. This idea considers mass to be, at some point, a solid 'something.' The concept of most physicists who hold this view is that each bit of mass is a kind of positively or negatively charged 'point' inside a cloud of energy. It is known that there is a good deal of vibration in all of the atomic structure, and that this vibration surrounds each atomic point of charge. What is in the middle of this vibration has been assumed to be something solid, something ever so tiny, but really there, which has a positive or negative charge.

There are some physicists who dispute this view, however. Since the mid-1990s there has been a group that has put forward the theory that there is nothing solid about matter, or mass, at all. This theory postulates that every electron, and every other subatomic 'particle', is really simply energy existing as positive or negative charges in a very compact form. Mass, and therefore matter, would really then only be specifically interacting charges of various configurations.

This can be a hard one to swallow -- it's kind of hard to think of this computer and the table it is sitting on, not to mention ourselves, as being conglomerations of pure energy! But if you look at Einstein's equation, it might help it make a bit more sense.

THAT FAMOUS EQUATION

Whichever view of mass one chooses to take, we need to take a closer look at Einstein's equation. Almost everyone knows Einstein's famous equation, E = mc². E is energy, m is mass, and c is the speed of light. The most basic fact about this equation is that it indicates that mass can become energy and energy can become mass. Actually, we can see this matter/energy conversion in our own lives when we burn wood and get heat and light energy along with ashes. Plants, on the other hand, take energy from sunlight and use it to manufacture carbohydrates, which we then eat and use for energy. So we can see both in theory and in life that mass and energy can be exchanged.

As can be seen, there is another factor involved -- 'c', or light speed. If the speed of light is changing, then, to keep Einstein's equation balanced, and therefore true, either energy or mass must also be changing, or perhaps both of them. How can one determine what might be going on here?

Through the years, various 'atomic constants' have been discovered and worked with mathematically and in physics. An atomic constant tells us how the atom behaves in response to the energy available to it. These constants are seen as determining the behaviour of the atom in the same way as the 'natural laws' we recognise in science (such as laws of gravity, laws of motion, etc.) govern the way other things behave. The mathematical formulas dealing with these atomic constants are recognised as being accurate. The 'constants' are called constant because it was originally believed they wouldn't vary. However, for some of them, that has not proved to be the case. There will be a little more about this later.

In determining effects associated with the speed of light, Setterfield spent a great deal of time working with the mathematical formulas associated with these constants. If light speed was not constant, then was it energy or mass which was being conserved, or, looking at it the other way, which one was changing? After working with all manner of possibilities regarding the various atomic constants and the equations involved, there ended up being only one possible answer: energy was being conserved and mass was changing.
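The conclusion that energy is conserved while mass changes follows directly from the shape of Einstein's equation: if E = mc² and E is held fixed, then m must vary as 1/c². A minimal sketch, in arbitrary units:

```python
# If energy E = m * c**2 is held constant while c changes,
# the mass m must scale as 1 / c**2. Units here are arbitrary.
def mass_for_fixed_energy(energy, light_speed):
    """Mass required so that m * c**2 equals the fixed energy."""
    return energy / light_speed**2

E = 1.0        # fixed energy, arbitrary units
c_now = 1.0    # present light speed, arbitrary units
c_past = 10.0  # hypothetical earlier, faster light speed

m_now = mass_for_fixed_energy(E, c_now)
m_past = mass_for_fixed_energy(E, c_past)
print(m_past / m_now)  # 0.01 - ten times the light speed, 1/100 the mass
```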

Yeah, right. So why aren't we all getting fatter or skinnier or something? It's because we are not talking about mass we can see and work with; we are talking about the mass of the atomic parts. With atomic parts, mass refers to, in very simple language, how much space, or volume, each atomic 'particle' takes up. And that depends on how much it is vibrating. The more it vibrates, the more space, or volume, it occupies. So whether mass is pure energy or whether there is some kind of a 'thing' inside of all that energy, it doesn't matter. The more it vibrates, the more volume it occupies, and thus the more mass it is considered to have atomically.

Thus, when Setterfield says the mass is increasing, he does not mean anything is gaining weight. He is saying that there is more 'jiggle' to each part of the atom's structure, thus meaning every sub-atomic unit has a slightly greater volume, thereby taking up a bit more space. This does not change the atom's structure or chemical interactions. What it does change is the amount of energy in the electron itself. So when an electron is 'popped' out of its level by incoming energy, and then 'pops' back, the energy difference between where it is and where it is returning to is higher after each quantum jump. It is this energy that is responsible for the emission of light, so the light itself will be emitted at a slightly higher energy, or shorter (bluer) wavelength, with each quantum jump.

Let's go one step further now.

In the picture used with the balloon, the energy released by the relaxing rubber was so minute compared to the surrounding air that no changes were noticed. However, the universe is enormously large, and as its fabric relaxes, the energy released into space is enormous. One of the things this energy does is flip back and forth between matter and energy on an incredibly tiny scale. The very tiny bits of matter that flash into and out of existence are referred to as virtual particles. They are the key to being able to measure the energy being released. The more virtual particles, the greater the energy in space.

So how do we count virtual particles? We don't. But we can measure something they are doing. When light travels, it gets absorbed by whatever it comes in contact with. In the case of a wall, it is absorbed or reflected and that's that. With glass, most is absorbed and then re-emitted on the other side, with only a small amount being lost or reflected. Virtual particles also absorb the light that comes in contact with them, and then they re-emit it, or pass it on. This takes a very short amount of time, but nevertheless, it does take SOME time. Therefore, the more virtual particles there are, the slower light will appear to travel, as it must be absorbed and re-emitted by more particles in any given distance.

Following this theory through, then, we should see a general drop in the speed of light measurements through time as the amount of usable energy in the fabric of space increases. In other words, as more energy is released into space, more virtual particles will be popping into and out of existence, and this will cause light to take a slightly longer time to travel from point A to point B. Because light speed is a direct result of the number of virtual particles in a given distance, we should see a somewhat smooth function in the change of the speed of light. In other words, like the first chart at the beginning of this article, the measurements would show a smooth, systematic change. What do we see historically?
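The absorb-and-re-emit picture can be turned into a toy model: give each virtual particle along the path a small fixed delay, and the apparent speed over the path drops as the particle count rises. All the numbers here are illustrative, not physical values:

```python
# Toy model: light crosses a path at a base speed, but each virtual
# particle it meets adds a small absorb-and-re-emit delay.
def apparent_speed(path_length, base_speed, n_particles, delay_each):
    """Average speed over the path once per-particle delays are included."""
    total_time = path_length / base_speed + n_particles * delay_each
    return path_length / total_time

LENGTH, BASE, DELAY = 1.0, 1.0, 1e-3  # arbitrary units

sparse = apparent_speed(LENGTH, BASE, 10, DELAY)   # few virtual particles
dense = apparent_speed(LENGTH, BASE, 1000, DELAY)  # many virtual particles
print(dense < sparse < BASE)  # True - more particles, slower apparent light
```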

Before the seventeenth century, it was thought that the speed of light was infinite. However, by the latter part of the seventeenth century, the first light-speed measurements had been made by timing eclipses of Jupiter's moons (Rømer's method, 1676). It became evident over the following centuries that something strange seemed to be happening. Not only was the speed of light NOT infinite, but, as more and more measurements of the speed of light were made, there seemed to be a general trend showing it was slowing. By the early twentieth century, this phenomenon was starting to be discussed and argued about in the scientific literature.

We can now start tying a few of the pieces above together. We know that the zero point energy is increasing, as "Planck's Constant" is increasing. With this increase of energy, we will have more of the virtual particles impeding the light as it travels, thus slowing it down. In other words, the more energy we see affecting atomic particles, the more we would expect to see light speed decreasing. Historic measurements bear this out. The evidence of the quantised redshift also supports this theory.

QUANTUM INTERVALS AND QUANTUM JUMPS

There is more to consider in this theory. The atom has two distinct 'times' in its existence. The first is when it is jumping to a new energy level, propelled by the increasing energy affecting it, and the second is all that time in between these quantum jumps, which is called the quantum interval.

During the quantum interval, the energy being released from the fabric of space continues to build, slowly but surely. Various atomic 'constants' give indication of this with their changing measurements. The interesting thing that has been found is that as some 'constants' go 'up', other related 'constants' go 'down.' Because of this, they cancel out each other's effect on the atom itself, so the atom stays the same during the quantum interval, even though some of the constants show continual, small, gradual change.

But, eventually, when the energy pressure has built up enough throughout the entire universe, every atom in it changes simultaneously with a little jerk. These 'jerks' are incredibly small, and although they happened rapidly at first, they have become very rare in the last four thousand years. The energy being released into the universe is now being released very slowly, as the pressure from the initial stretching has been progressively dissipated. But when these 'jerks' happen, and the atom finally responds to the build-up of energy, every single atomic 'particle' starts jiggling, or vibrating, a little more, taking up a little more volume for itself. This is the gain in mass mentioned earlier.

This gain in mass is a result of the same release of energy that is slowing down light speed. That is because the more energy is released into 'empty space', or the vacuum, the more virtual particles will be popping into and out of existence, as a result of that energy. And the more virtual particles, the more of them will be in the way of a beam of light, and the more often the beam of light will be absorbed and re-emitted by them. Thus it will take light more time to get from one point to another.

Einstein realised that mass and energy and light speed were all related. Thus we have the equation E = mc². Setterfield is trying to show us where and when this is true in the universe itself. There are a number of men who are studying the speed of light right now, and some of them have been writing articles and making the news. Studies which show man himself can slow down or speed up the speed of light are interesting, but they are not directly relevant to the idea that the speed of light has not stayed constant in the universe through time. Other studies and papers by men such as Albrecht, Magueijo, Barrow, and Troitskii, who are studying the idea of a changing speed of light in the universe through time, are mostly dealing with the subject on a purely theoretical basis. By contrast, Setterfield is primarily dealing with the data that has been collected and with the phenomena we have been able to note scientifically. As such, whether his work ends up being right or wrong, it deserves much closer attention than the mainstream scientific world has been willing to give it up to this point.

1. Why hasn't the mainstream scientific world paid attention to Mr. Setterfield?

Primarily for two reasons. First, Mr. Setterfield's analysis is a strong indicator that the entire universe is probably very young. This does not sit well with evolutionists, who require a very long time for their ideas to work. Secondly, Mr. Setterfield was forced to leave university training to take care of sick family members years ago and had to continue his studies on his own, and thus never got a degree. He has, however, continually subjected his work to the scrutiny of others who are highly qualified in the fields of math, statistics, and physics to make sure he is not mishandling data or miscalculating in his math.

2. Why hasn't Mr. Setterfield been published in peer-reviewed journals if his work is correct?

Setterfield's recent technical paper has been submitted to three different journals in the past two years. It was refused by the two physics journals because a) it was declared not sufficiently important or substantive; b) its conclusions would not be agreed to by a majority of scientists in the field; and c) one reviewer did not like the fact that one of his references (out of over 150) was a university text and not a peer-reviewed or other professional journal. The astronomy journal refused it, saying it looked very interesting but belonged in a physics journal. The paper is now being prepared for the web. None of the refusals Mr. Setterfield received criticised his physics or his math; they simply did not like the clear conclusions that had to be drawn from them.

3. Wouldn't a change in the speed of light upset biological processes?

No. This is a common misconception. Biological processes are basically chemical processes. Speed of light changes, as noted above, do not change the position of the valence electrons of each atom. It is these outer electrons which govern chemical reactions. Would the increased amount of charge (the greater volume taken up by each charge) not change the rate of these reactions? There are two things to consider here. First of all, the increased charges of the electrons would repel each other more forcefully, thus providing a slowing effect, which would tend to counterbalance the possibly more rapid reaction rate in the present. In the past, conversely, the atomic particles would have had a lower charge resulting in decreased energy for the atom. This would have slowed reaction rates in counterbalance to the decrease in repulsive force between electrons. In addition, if one were to consider simply one individual chemical reaction, there might still be a significant change in biological processes. However biological processes are not individual reactions, they are chains, or cascades of reactions. Each cascade is governed by the slowest part of the reaction. Thus there is a natural brake applied which protects life itself from the consequences of too fast a series of reactions.

4. What about radioactive decay? Wouldn't a faster light speed cause a much faster rate of decay in the past, releasing more heat, and burning up the planet, or at least all of life?

First of all, yes: a faster light speed is indicative of a much faster rate of radioactive decay. This is because the equivalent of 'c', the speed of light, is in the numerator of every reduced equation for every decay rate. Thus, the faster the speed of light, the higher the rate of decay. The first effect we see, then, is that we need to be very careful about the ages we assign to items which are analysed this way.
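The dating concern can be sketched with the standard exponential decay law, N(t) = N0 * e^(-kt). The decay constants below are hypothetical; the point is only that decay which happened quickly at a fast rate looks like a very long age when read off with today's slower rate:

```python
import math

# Hypothetical sketch: decay at a fast past rate, then the age that
# would be inferred by assuming today's slower rate had always applied.
def remaining_fraction(decay_constant, years):
    """Fraction of a radioactive sample left after the given time."""
    return math.exp(-decay_constant * years)

K_TODAY = 1e-4           # assumed present decay constant, per year
K_PAST = 1000 * K_TODAY  # hypothetical faster past rate

frac = remaining_fraction(K_PAST, 100)    # 100 years at the fast rate
apparent_age = -math.log(frac) / K_TODAY  # age inferred at today's rate
print(round(apparent_age))  # 100000 - one century reads as 100,000 years
```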

In addition, most radioactive elements were deep inside the earth's interior initially, so that life on the surface was safe. Because all the radioactive elements were decaying rapidly initially, those with short half-lives (which have since finished decaying) would have contributed to a very rapid build-up of heat in the interior. However, it would not be until the heat built up enough to break through the crust of the earth in some kind of explosive activity that the surface of the earth would have been affected. (What is interesting is that throughout the ancient cultures of man, we have stories and legends about just such activity happening.) We do not see this heating effect today, on the surface or underneath, partly because the speed of light has slowed significantly and also because the original short half-lived elements have finished decaying.

It should also be noted that the amount of heat radiation in a given volume from any given reaction would also be lower. This may be the end of the paper, but listen up here: this one is a bit complicated.

1. Space transmits electromagnetic waves, such as light. This means space itself must have both electric and magnetic properties. The electric property of space is referred to as 'permittivity' and the magnetic property is referred to as 'permeability.' These properties are governed by the number of virtual particles popping in and out of existence in a given volume. When there are fewer virtual particles per given volume, both the permittivity and the permeability of space are lower, which means that there is less resistance to the electric and magnetic elements of the photon ('packet' of light). Without this resistance, light travels more quickly.

2. In combination with the first point, when the speed of light was faster, a photon of light would travel farther in one second than it would travel now. That means that the same amount of light, or any radiation, would take up a greater volume at any one time. And THAT means that in any given, or defined, volume, the actual density of radiation from any given reaction would be less before than now.

3. Although faster radioactive decay rates mean that more radioactive atoms are decaying in a given time, the heat problem is offset by two factors: First that the amount of heat radiation in a given volume is lower, as explained in the previous two points. Secondly, as explained earlier in this paper, as we go back in time we are also going back to before so much energy was available to the atom. Before each quantum jump, the atom had lower energy than after. So the net effect here is that the earlier in time, the lower the energy of the atom, even though the light speed and therefore the actual rate of decay was faster. This lower energy in the atom thus somewhat reduced the amount of heat released by any given decay process.

Thus, the expected 'frying' effect of a higher radiodecay rate which would be part of a time of higher light speed was counteracted by several factors:
First, the initial depth in the earth of radioactive materials.
Second, the increased volume taken up by any given photon.
Third, the lower energy in the atom in the past.
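The electric and magnetic properties in point 1 are tied to light speed by the standard relation c = 1/sqrt(permittivity x permeability). With the present-day vacuum values this reproduces the measured speed of light; on the picture above, lower permittivity and permeability would mean a faster c:

```python
import math

# Light speed from the vacuum's electric and magnetic properties:
# c = 1 / sqrt(permittivity * permeability).
EPSILON_0 = 8.854e-12  # vacuum permittivity, farads per metre
MU_0 = 1.2566e-6       # vacuum permeability, henries per metre

c = 1.0 / math.sqrt(EPSILON_0 * MU_0)
print(f"{c:.3e}")  # about 2.998e+08 metres per second
```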

* * * * * * * * * * *

[Note from the authors: In an effort to provide the student with easily visualisable concepts, there has been a necessary simplification of some technical points. However we do have confidence that the basic concepts, as presented here, are correct.]

March 1, 2001