Archive for the ‘Quantum-Geometry Dynamics’ Category

LIGO: Gravitational Waves or Gravitational Tidal Effect?

General relativity correctly predicted the precession of the perihelion of Mercury and the angle of deflection of starlight by the sun, both of which Newton’s theory of universal gravitation had apparently failed to predict correctly.

Newton’s theory of universal gravity also fails to describe the orbital decay of binary systems such as the Hulse-Taylor binary, an observation that is consistent with general relativity. Favoring general relativity as the theory that correctly describes gravity is a clear-cut decision considering its successes. General relativity succeeded where Newton’s theory of gravity had failed. But is the matter really settled? Let’s take a closer look at how Newton’s theory of gravity has been applied to the observations cited above.

In order to describe the evolution of two gravitationally interacting bodies $a$ and $b$, the magnitude of the gravitational force is calculated using Newton’s equation for gravity $\vec{F}={{G}_{N}}\frac{{{m}_{a}}{{m}_{b}}}{{{d}^{2}}}\vec{x}$ where ${{m}_{a}}$ and ${{m}_{b}}$ are the masses of the bodies, then substituted into the equation for Newton’s second law of motion; the familiar $\vec{F}={{m}_{a}}\frac{\Delta {{{\vec{v}}}_{a}}}{\Delta t}$ where $\frac{\Delta {{{\vec{v}}}_{a}}}{\Delta t}$ is the acceleration of $a$. This is as straightforward a calculation as can be, but therein lies the problem.
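The two-step procedure described, computing the force from the current separation and then applying the second law over a finite interval, can be sketched as a minimal Euler integrator (illustrative only; `newton_step` is a hypothetical helper, not notation from the text):

```python
import math

G_N = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_step(m_a, m_b, pos_a, pos_b, vel_a, dt):
    """Advance body a by one time step dt under Newtonian gravity from b.

    The force is computed from the *current* separation, then applied over
    the finite interval dt via the second law -- the time lag the text
    refers to is exactly this use of a finite dt.
    """
    dx = [pb - pa for pa, pb in zip(pos_a, pos_b)]
    d = math.sqrt(sum(comp * comp for comp in dx))
    f_mag = G_N * m_a * m_b / d**2              # F = G m_a m_b / d^2
    acc = [f_mag * comp / (d * m_a) for comp in dx]  # a = F / m_a, toward b
    vel = [v + a * dt for v, a in zip(vel_a, acc)]   # second law over dt
    pos = [p + v * dt for p, v in zip(pos_a, vel)]
    return pos, vel

# One-second step for the Earth in the Sun's field (a ~ 5.9e-3 m/s^2):
pos, vel = newton_step(5.972e24, 1.989e30,
                       (0.0, 0.0, 0.0), (1.496e11, 0.0, 0.0),
                       (0.0, 0.0, 0.0), 1.0)
```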

Gravity, according to Newton’s law, is instantaneous, and if gravity is instantaneous, so must its action be. Applying the second law of motion (which is time dependent) to describe the effect of Newtonian gravity therefore introduces a lag in the action that is incompatible with instantaneous gravity. This lag, introduced by using the second law of motion, is precisely what caused the predictive errors in the Newtonian mechanical descriptions of the precession of the perihelion of Mercury, of the bending of starlight and of the orbital decay of binary systems. In fact, once the time dependency, and consequently the time lag, is eliminated from the gravitational action, we find that Newtonian gravity is in perfect agreement with observations (see Special and General Relativity Axiomatic Derivations).

The fact that Newtonian gravity (when correctly applied) and general relativity can predict with equal precision the behaviour of gravitationally interacting bodies for the above phenomena is problematic. It forces us to find other ways to answer the question of whether gravity is a force that acts instantaneously between bodies or the effect of the curvature of space due to the presence of matter. Clearly, the two explanations of the nature of gravity are foundationally incompatible.

It follows from QGD’s equation for gravity $G\left( a;b \right)={{m}_{a}}{{m}_{b}}\left( k-\frac{{{d}^{2}}+d}{2} \right)$ that gravity becomes repulsive when bodies are separated by distances such that $k\le \frac{{{d}^{2}}+d}{2}$. That is, there is a threshold distance ${{d}_{\Lambda }}\approx 10Mpc$ (from observations) beyond which gravity becomes repulsive and increases proportionally to the square of the distance. The effect of repulsive gravity as described by QGD is consistent with the observed expansion of the universe, which is currently attributed to dark energy, and allows for new predictions that are distinct from those of general relativity.
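The sign change can be sketched directly from the equation; `k_example` below is a made-up constant chosen purely for illustration, since the text fixes the physical threshold only observationally at about 10 Mpc:

```python
# QGD gravity magnitude: G(a;b) = m_a * m_b * (k - (d^2 + d)/2).
# Positive -> attractive; negative -> repulsive.

def qgd_gravity(m_a, m_b, d, k):
    """Attractive for k > (d^2 + d)/2, repulsive beyond that distance."""
    return m_a * m_b * (k - (d * d + d) / 2)

def threshold_distance(k):
    """Solve k = (d^2 + d)/2, i.e. d^2 + d - 2k = 0, for the positive root."""
    return (-1 + (1 + 8 * k) ** 0.5) / 2

k_example = 50.0                       # hypothetical constant, not a QGD value
d_lambda = threshold_distance(k_example)
assert qgd_gravity(1, 1, d_lambda * 0.5, k_example) > 0   # attractive inside
assert qgd_gravity(1, 1, d_lambda * 2.0, k_example) < 0   # repulsive outside
```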

If QGD is correct, the magnitude of the gravitational repulsion between the Earth and the black holes that caused the GW150914 event must be $2\times {{10}^{3}}$ times greater than the magnitude of the attractive gravitational force in close proximity to the binary system that caused the event. Such a gravitational effect is astronomically greater than the signal detected by LIGO in 2015. In fact, the repulsive force would be enough to tear our galaxy apart through the gravitational tidal force and accelerate it to speeds approaching the speed of light. And the repulsive force between the Earth and the recently observed GW170104 event, presumed to be at twice the distance, would be four times as great. The reason our galaxy (and others) is not torn apart is that the distribution of matter in the universe is nearly homogeneous, so the repulsive gravitational forces from distant massive systems acting on each individual particle that composes our galaxy are nearly cancelled out by the repulsions from systems in the opposite directions, resulting in a weak net gravitational effect. So, if the GW150914 and GW170104 events are gravitational, the detected signals would be tidal effects of the net gravitational forces acting on the detectors. That is, the signals are not gravitational waves but the measurement of the instantaneous gravitational tidal effect $\sum\limits_{i=1}^{n}{\vec{G}\left( a;{{b}_{i}} \right)}$ where $a$ is the detector and ${{b}_{i}}$ is one of a total of $n$ massive structures forming the universe. LIGO may thus be thought of as measuring the fluctuations of the gravitational tidal effect of the universe on its instruments.
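The near-cancellation argument can be illustrated with a toy model: sum equal-magnitude pulls from sources distributed isotropically and compare the net vector to the scalar total. This assumes equal magnitudes and random directions purely for illustration; it is not a QGD calculation:

```python
import math, random

random.seed(0)

def random_direction():
    """Uniform random unit vector on the sphere."""
    z = random.uniform(-1, 1)
    phi = random.uniform(0, 2 * math.pi)
    r = math.sqrt(1 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

# n distant sources exerting equal repulsive pulls from isotropic directions:
n = 20000
net = [0.0, 0.0, 0.0]
for _ in range(n):
    ux, uy, uz = random_direction()
    net[0] += ux; net[1] += uy; net[2] += uz

net_mag = math.sqrt(sum(comp * comp for comp in net))
# The net pull grows like sqrt(n), vastly smaller than the scalar sum n,
# so the residual per-unit-pull ratio is of order 1/sqrt(n) ~ 0.007 here:
print(net_mag / n)
```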

Some Distinctive Predictions of QGD That Are Now Being Tested (or Will Be in the Near Future)

If gravity is instantaneous as predicted by QGD and Newton’s law of universal gravity, then

• We will never detect multi-messenger signals from events predicted to simultaneously generate gravitational and electromagnetic signals. The electromagnetic signal from the merging, for example, of neutron stars would arrive up to billions of years after the gravitational signal.
• The gravitational signal from the merging of massive objects at distances close to the threshold distance ${{d}_{\Lambda }}\approx 10Mpc$ would be undetectable.
• There will be no loss in mass of the merging massive objects in the form of gravitational waves (in fact, there is no mechanism that could account for the conversion of mass into gravitational waves). The mass of the object resulting from the merger will be equal to the sum of the masses of the merged objects.
• The angular radius of the shadow of Sagittarius A* should be 10 times larger than predicted by general relativity.
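The scale of the lag in the first prediction is just the light-travel time: if gravity acts instantaneously, the electromagnetic counterpart of an event at distance $d$ arrives $d/c$ later. Using the roughly 410 Mpc distance published for GW150914:

```python
# Light-travel delay between an instantaneous gravitational signal and its
# electromagnetic counterpart, for a source at a given distance.

LY_PER_MPC = 3.262e6  # light-years per megaparsec

def em_lag_years(distance_mpc):
    """Light-travel delay, in years, for a source distance given in Mpc."""
    return distance_mpc * LY_PER_MPC

# GW150914 was placed at roughly 410 Mpc; under the instantaneous-gravity
# assumption its electromagnetic counterpart would lag by over a billion years:
print(em_lag_years(410))  # ~1.34e9 years
```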

(more can be found in different sections of this blog and in Introduction to Quantum-Geometry Dynamics)

New LIGO Announcement Tomorrow (Where’s the Fanfare?)

Last year was all about Advanced LIGO’s announcement that it had detected, for the first time, gravitational waves predicted to exist a hundred years earlier. Understandably, the press coverage was proportional to the importance of the discovery. The press conference, which was broadcast around the world, was, to my knowledge, among the events that received the widest press coverage ever for a scientific discovery.

In the field of astrophysics, the only comparable event was probably the detection of primordial gravitational waves by the BICEP2 experiment announced with great fanfare in 2014.

Immediately after the BICEP2 announcement, I predicted that the results would be refuted by further observations. It was not that I was a skeptic, nor was it a random opinion; it was a direct consequence of quantum-geometry dynamics. The level of confidence in the BICEP2 discovery was so high that very few doubted the validity of the results. I was one of the few people who immediately predicted that the results would not hold and, as we all know, the BICEP2 discovery was refuted later that year.

I made a similar prediction for the LIGO detections in the days prior to and following the announcement in February 2016. Since the announcement, the sensitivity of LIGO has been increased and the second run of observations started in November 2016. Tomorrow, the results of the second run will be released, but this time there is no press coverage except from two minor local news sources. The release is not even mentioned on the Facebook page of the LIGO collaboration. Why is the release so hush-hush? One would think that after last year’s announcement of the detection of gravitational waves (and the unrelenting news coverage since then) any news from LIGO would be treated as a highest priority by the media if the LIGO collaboration made the slightest effort to publicize it. But the lack of any attempt to draw attention to the results is probably, as I predicted, because the earlier detections have not been corroborated by new detections.

Good science requires that, before being considered a discovery, the results of any observation or experiment be reproducible. Considering its higher sensitivity, the duration of the second run and the theoretical probability of more detections, Advanced LIGO should have made more detections in its second run than it had in its first. Because of that, null results are even more significant than the detection announced last year, as they cast doubt on the validity of the discovery.

My prediction is that no new detections of black hole mergers will be announced tomorrow. But not to worry: that only provides new constraints on the frequency of events capable of producing detectable gravitational waves, right?

[UPDATE] It seems that they are announcing the detection of one black hole merger (see article here).

From the article:

“Normally, an event like this would trigger an alert to the astronomy community, which could then attempt observations in the area of the sky where the event took place. But, in this case, a recent period of maintenance had left one of the two detectors set in a calibration mode.”

That is disappointing, since simultaneous independent detections of the non-gravitational signals would test the predicted speed of propagation of gravitational waves and settle QGD’s prediction that gravity is instantaneous and that the signals detected by LIGO are due to the tidal effect of gravity.

If QGD’s equation for gravity is correct, gravity becomes repulsive at distances greater than 10Mpc and the magnitude of the repulsion increases as a function of distance (this would account for the expansion of the universe we attribute to dark energy). That means that the greater the distance, the greater the tidal effect of gravity.

QGD’s Prediction of the Density and Size of Black Holes

QGD predicts that black holes are extremely dense but not infinitely so. Considering that $preon{{s}^{\left( + \right)}}$ are strictly kinetic and that no two can simultaneously occupy any given $preo{{n}^{\left( - \right)}}$, then $\max densit{{y}_{BH}}=\frac{1\,preo{{n}^{\left( + \right)}}}{2\,preon{{s}^{\left( - \right)}}}$ or $\frac{1}{2}$. It follows that $\min Vo{{l}_{BH}}=2{{m}_{BH}}\,preon{{s}^{\left( - \right)}}$ or, since the $preo{{n}^{\left( - \right)}}$ is the fundamental unit of space, we can simply write $\min Vo{{l}_{BH}}=2{{m}_{BH}}$, with the corresponding minimum radius $\min {{r}_{BH}}=\left\lfloor \sqrt[3]{\frac{3{{m}_{BH}}}{2\pi }} \right\rfloor$.

For the radius of the black hole predicted to be at the center of our galaxy, ${{m}_{BH}}\approx 4\times {{10}^{6}}{{M}_{\odot }}$ and $\min {{r}_{BH}}=\left\lfloor \sqrt[3]{\frac{3{{m}_{BH}}}{2\pi }} \right\rfloor \approx 1.24\times {{10}^{2}}$, where the mass is expressed in $preon{{s}^{\left( + \right)}}$ and the radius in $preon{{s}^{\left( - \right)}}$. Though converting this into conventional units requires observations to determine the values of the QGD constants $k$ and $c$, using the relation between QGD and Newtonian gravity we also predict that the radius within which light cannot escape a massive structure is $\displaystyle {{r}_{qgd}}=\sqrt{{{G}_{const}}\frac{M}{c}}$ where $\displaystyle {{G}_{const}}$ represents the gravitational constant. Since the Schwarzschild radius for a black hole of mass ${{M}_{BH}}$ is ${{r}_{s}}={{G}_{const}}\frac{{{M}_{BH}}}{{{c}^{2}}}$, then $\displaystyle {{r}_{qgd}}=\sqrt{c{{r}_{s}}}$.
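Since $\min Vo{{l}_{BH}}=2{{m}_{BH}}$ and a Euclidean ball has volume $\frac{4}{3}\pi {{r}^{3}}$, the minimum radius is $\left( \frac{3{{m}_{BH}}}{2\pi } \right)^{1/3}$, floored. A quick sketch checking the quoted figure (`min_bh_radius` is a hypothetical helper name):

```python
import math

def min_bh_radius(m_bh):
    """QGD minimum black-hole radius in preons(-): from min Vol = 2*m and
    Vol = (4/3)*pi*r^3, so r = (3*m/(2*pi))**(1/3), floored."""
    return math.floor((3 * m_bh / (2 * math.pi)) ** (1 / 3))

# The text's example value, m_BH ~ 4e6 in the mass units used there:
print(min_bh_radius(4e6))  # 124, i.e. ~1.24e2
```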

Using ${{r}_{qgd}}$ to calculate ${{\delta }_{{{r}_{qgd}}}}$, the angular radius of the shadow of Sagittarius A*, the black hole at the center of our galaxy, we get ${{\delta }_{{{r}_{qgd}}}}\approx 26.64\times {{10}^{-5}}$ arcsecond as a minimum value, which is about 10 times the angular radius calculated using the Schwarzschild radius, ${{\delta }_{{{r}_{s}}}}=27.6\times {{10}^{-6}}$ arcsecond. This prediction will be tested in the near future by upcoming observations by the Event Horizon Telescope.
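For comparison, a standard general-relativity estimate of the shadow’s angular radius, assuming the textbook photon-capture radius $\frac{\sqrt{27}}{2}{{r}_{s}}$ with ${{r}_{s}}=2GM/{{c}^{2}}$ and round values for the mass and distance of Sagittarius A*, lands near the quoted ${{\delta }_{{{r}_{s}}}}$:

```python
import math

# GR-side estimate of the angular radius of the Sagittarius A* shadow.
G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
M_SUN = 1.989e30      # kg
M_PER_KPC = 3.086e19  # meters per kiloparsec

m_bh = 4e6 * M_SUN            # Sagittarius A* mass
d = 8.0 * M_PER_KPC           # distance to the galactic center
r_s = 2 * G * m_bh / c**2     # Schwarzschild radius
shadow = (math.sqrt(27) / 2) * r_s   # photon-capture (shadow) radius

theta_arcsec = (shadow / d) * (180 / math.pi) * 3600
print(theta_arcsec)  # ~2.6e-5 arcsec, close to the quoted 27.6e-6
```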

The Concept of Time

The single most misleading concept in physics is that of time.

Although time is a concept that has proven useful for studying and predicting the behaviour of physical systems (not to mention how, on the human level, it has become essential for organizing, synchronizing and regulating our activities and interactions), it remains just that: a concept.

Time is a relational concept that allows us to compare events with periodic systems; in other words, clocks. But time has no more effect on reality than the clocks that are used to measure it. In fact, when you think of it, clocks don’t really measure time. Clocks count the number of recurrences of a particular state; for instance, the number of times the pendulum of a clock returns to a given initial position following a series of causally linked internal states. So clocks do not measure time; they count recurrent states or events.

If clocks do not measure time, what does?

The answer is that nothing can. There has never been a measurement of time and none will ever be possible, since time is non-physical. Neither has there been, nor will there ever be, a measurement of a physical effect of time on reality. Experiments have shown that the rates of atomic clocks are affected by speed and gravity, but these are slowings of clocks, not slowings of time.

Yet, as useful as the concept of time may be, it is not, as generally believed, essential to modeling reality. In fact, taking the concept of time out of our descriptions of reality solves a number of problems.

For instance, mass, momentum, speed and energy are intrinsic properties, so different observers will measure the same mass, speed, momentum and energy regardless of the frame of reference they use.

And if time does not exist, neither does time dilation. Time dilation and the implied assumption of a space continuum are essential to explain the constancy of the speed of light in special relativity. But neither is necessary in QGD, since the constancy of the speed of light follows naturally from the discreteness of space.

Finally, if time does not exist, then although the unification of space (a representation of space, to be precise) and time (which is a relational concept) into mathematical space-time provides a useful framework in which to study the evolution of a system, physical space-time makes no sense.

Time Distance Equivalence

A simpler and physical way to measure the duration of an event is to mark its start and end and measure the absolute distance a photon simultaneously travels. This provides a measurement of duration that is based on actual physical quantities.

Note: The above is an excerpt from Introduction to Quantum-Geometry Dynamics

Extraordinary Claims Require Extraordinary Evidence

Carl Sagan used to say, “Extraordinary claims require extraordinary evidence.” What are extraordinary claims and what is extraordinary evidence? How does this motto apply to quantum-geometry dynamics?

Assuming that by “extraordinary claims” Carl Sagan meant predictions that are not only in contradiction with current understanding (which is part of the normal evolution of science) but threaten to overturn our most fundamental understanding of nature, then yes, one can say that QGD makes extraordinary claims.

Take for instance QGD’s explanation of the redshift effect. If QGD is correct, then a star could be speeding on a collision course towards the Earth and its light would still be redshifted as long as the Earth moves in the same direction. Current understanding predicts that its light would be blueshifted. QGD challenges the redshift-distance relation that follows from the accepted interpretation of the redshift effect, and if QGD is correct then all maps of the universe generated from the redshift-distance relation would be wrong. That is without doubt an outrageously extraordinary claim.

But does such an extraordinary claim as the one QGD makes about the redshift require extraordinary evidence?

The answer to this question depends on one’s definition of what constitutes “extraordinary evidence.” If extraordinary evidence means observations or experimental results of a kind never seen before that contradict current observations, then no, QGD does not require extraordinary evidence. If “extraordinary evidence” is what results from extraordinary experimental or observational means, then again no.

What is required is that QGD’s descriptions be consistent with nature (and that includes the data from the body of experiments and observations up to this point). Additionally, it must make new and original predictions that can be tested through experiments or observations. Much of the evidence QGD requires is quite possibly hidden in the data we have already collected or within the data from new observations such as the GAIA mission.

In conclusion, even the most extraordinary claims of quantum-geometry dynamics require quite ordinary evidence. But maybe, ordinary evidence becomes extraordinary when it is found to support extraordinary claims. In which case, Carl Sagan is right.

Dark Matter’s Two Types of Interactions

Quantum-geometry dynamics (QGD) is a theory derived from a minimal axiom set necessary to describe dynamic systems in a fundamentally discrete universe.

According to QGD, all matter in the universe is composed of $preon{{s}^{\left( + \right)}}$, the fundamental unit of matter. $Preon{{s}^{\left( + \right)}}$, being fundamental, do not decay or transmute into other particles, but they combine to form all that we know, from photons and neutrinos to more massive and complex structures.

Most $preon{{s}^{\left( + \right)}}$ are still free and permeate space, and they interact in only two ways: gravitationally and through the electromagnetic effect.

We have explained in an earlier article that in its initial state the universe contained only free $preon{{s}^{\left( + \right)}}$ distributed homogeneously throughout the entire space. The cosmic microwave background was formed when $preon{{s}^{\left( + \right)}}$ combined to form photons. Thus QGD explains the isotropy of the CMBR with few physical assumptions, all of them testable using present technology. $Preon{{s}^{\left( + \right)}}$ account for all other large-scale effects attributed to dark matter (gravitational lensing, for example), but there are also local effects at our scale that we observe or make use of every day.

QGD explains that magnetic fields result from the interaction of charged particles or structures with the free $preon{{s}^{\left( + \right)}}$ of their neighbouring regions. And the changes in momentum induced by magnetic fields are simply the momentums imparted by their polarized $preon{{s}^{\left( + \right)}}$.

If QGD is correct, there is nothing mysterious or unusual about dark matter. We encounter it every day but just don’t call it that.

See the chapter on the laws of momentum in Introduction to Quantum-Geometry Dynamics.

There is a lot of advice out there for outsider scientists from allegedly well-intentioned members of the scientific community, and much of it makes a lot of sense: doing your homework, understanding your subject, acquiring the mathematical skills necessary to express ideas in a way that can be understood and that allows not only quantitative descriptions of physical phenomena but testable predictions. All of this is essential if one wants to contribute to whatever subject one chooses.

But a lot of that advice is purely sociological and has little to do with science. Much of the well-meaning, well-intentioned advice is really about doing whatever is necessary to be recognized and accepted by the institutionalized scientific community, and it ignores one essential fact: the goal of science is to try to make sense of nature as it is revealed through experiments and observations. To many respected members of academia, acceptance by the scientific community implies scientific validity and vice versa, but does it really?

I have read quite a few articles aimed at helping outsiders (they pretty much all say the same thing). The last one I came across is an article written in 2007 by physics professor Sean Carroll. He correctly describes what is required for an outsider scientist to be taken seriously by academia, but he never really discusses what is required of a scientific theory to be considered valid.

A theory is required to describe, explain and predict. And by predictions, I mean testable predictions. But even that is unclear and subject to interpretation, so let me clarify.

For a theory to be scientific, it must answer positively to the following questions:

1. Do its axioms form an internally consistent set?
2. Is the theory rigorously derived from the axiom set?
3. Are all descriptions derived from the theory consistent with observations?
4. Can we derive explanations from its axiom set that are consistent with observations?
5. Can we derive from the axiom set unique and testable predictions?

Questions 1 to 5 allow us to determine if a theory is scientific, but to be valid, a theory must also answer positively to the question:

1. Have the predictions derived from the theory been observationally or experimentally confirmed?

Carroll also paraphrases an argument proposed by pretty much everyone who provides advice to outsiders, that is: a new theory must agree with theories that have been well tested. But what does that mean exactly?

Does it mean that a new theory must agree with an established theory’s explanations or interpretations of observations? If that were the case, then relativity’s interpretations of observations would need to agree with Newtonian interpretations, which they don’t and can’t, since the two theories are based on mutually exclusive axiom sets.

What agreeing with an established theory means, then, is that the explanations, descriptions and predictions derived from a new theory must minimally agree with the observations that have been found to support established theories. This requirement appears simple enough, yet it is misunderstood by the majority of insiders and outsiders alike. So further clarification is required.

Agreeing with observations is not the same as agreeing with theory-dependent interpretations of observations. Both Newtonian gravity and general relativity agree on many observations for which they have completely different interpretations. So a number of theories can be in contradiction with one another yet correctly describe the very same observations. Because of that, only by testing predictions unique to each theory can we determine which ones are valid.

If I may offer a piece of advice to members of the scientific community who are willing to bestow their wisdom upon us lowly outsiders: I agree that we should respect all the hard work, the sacrifices, the dedication and the passion of professional researchers. That said, respect is a two-way street.

There is not much respect in underestimating the intellects of outsiders, nor in choosing the most idiotic outsider theories as one’s only examples. There are outsiders who, though they may have unorthodox approaches, have contributed and can contribute to our understanding of nature. They are no less dedicated, no less hardworking and no less capable of mathematical rigor.

Condescension may get a few laughs from your peers, but it only shows arrogance and contempt for those you seek respect from. If you really do want to help, then step down from your academic ivory tower and do just that: help. Filter out the crackpots and cranks (you already have experience dealing with plenty of those in your own ranks) and open some channels. Outreach programs are important, but “inreach” programs are the only thing that would help, if help is what you sincerely want to offer.

Determining the Intrinsic Luminosities of Distant Supernovas

The current methods of determining the intrinsic luminosities of supernovas require correcting their apparent luminosities using their redshifts.

These methods are consistent with our current understanding of the redshift effects, but this understanding may be called into question if the data collected from the GAIA mission confirm that the motion of stars in our galaxy does not show the flatness of angular speed found in similar galaxies when determined from their redshifts.

Observational confirmation of the above prediction would support QGD’s explanation of the redshift effects. The relation between redshifts and apparent luminosities that follows from QGD would allow for more precise determinations of the intrinsic luminosities of supernovas, hence their distances, provided that the distances of the reference supernovas are determined through their parallaxes so as to avoid model-dependent physical assumptions.

And should QGD’s description of the redshift effect be confirmed, it would be possible to determine the intrinsic speed of the Earth using supernovas, as explained here.

The Measurement of the Rotation of Galaxies and Redshifts

We have shown (see QGD and the Redshift Effect) that the redshift effect depends on the speed of the detector relative to the intrinsic speed of the photon. This provides a very different interpretation of the redshift observations from distant galaxies. The usual theoretical interpretation of the redshift, as dependent on the motion of the source relative to the detector, is used to measure the speed of distant objects, including the rotation speed of galaxies.

The classical interpretation of the redshift gives speeds of rotation that are not in agreement with our best theories of gravity, which predict that the nearer stars are to their galactic center, the greater their speeds should be. But that is not what was observed.

The orbital speeds of stars, estimated from their redshifts, are about the same regardless of their distance from their galactic centre. This led to the introduction of dark matter models to explain the discrepancy between predictions and observations. QGD does not dispute the existence of dark matter, whose existence it predicts and which is supported by a number of observations that do not depend on redshift measurements. However, QGD shows that the redshifts from all stars in a galaxy will be the same independently of their speeds. In other words, even if their actual orbital speeds are in agreement with our theories of gravity, their redshifts will be the same. Hence the orbital speeds of stars derived from the accepted redshift interpretation will give similar speeds, in agreement with observations.
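The discrepancy described above can be made concrete with a toy Keplerian model: a point-mass galaxy with illustrative numbers, not a fit to any data:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_PER_KPC = 3.086e19 # meters per kiloparsec

def keplerian_speed(enclosed_mass_kg, r_m):
    """Orbital speed v = sqrt(G*M/r) for a star orbiting a centrally
    concentrated mass -- the fall-off our theories of gravity predict."""
    return math.sqrt(G * enclosed_mass_kg / r_m)

# Toy galaxy: 1e11 solar masses concentrated near the center.
M = 1e11 * M_SUN
v_5kpc = keplerian_speed(M, 5 * M_PER_KPC)
v_20kpc = keplerian_speed(M, 20 * M_PER_KPC)

# Gravity alone predicts the outer star moves at half the inner star's speed,
# whereas redshift-derived rotation curves come out roughly flat:
print(v_5kpc / v_20kpc)  # 2.0
```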

Prediction

QGD predicts that the angular and axial speeds of stars estimated through their parallaxes will show them to be dependent on their distance from the galactic center. GAIA, which is underway, will be making such observations, which could confirm QGD’s prediction.

The Measurement of Physical Properties and Frames of Reference

Note: the following is a section of Introduction to Quantum-Geometry Dynamics

According to QGD:

• ${{m}_{a}}$, the mass of an object $a$, is equal to the number of $preon{{s}^{\left( + \right)}}$ that compose it;
• ${{E}_{a}}$, its energy, is equal to its mass multiplied by the fundamental momentum of the $preo{{n}^{\left( + \right)}}$; that is, if ${{\vec{c}}_{i}}$ is the momentum vector of a $preo{{n}^{\left( + \right)}}$ and $c=\left\| {{{\vec{c}}}_{i}} \right\|$ is the fundamental momentum, then ${{E}_{a}}=\sum\limits_{i=1}^{{{m}_{a}}}{\left\| {{{\vec{c}}}_{i}} \right\|}={{m}_{a}}c$;
• ${{\vec{P}}_{a}}$ , the momentum vector of an object, is equal to the vector sum of all the momentum vectors of its component $preon{{s}^{\left( + \right)}}$ or ${{\vec{P}}_{a}}=\sum\limits_{i=1}^{{{m}_{a}}}{{{{\vec{c}}}_{i}}}$ and ${{P}_{a}}$ , its momentum, is the magnitude of its momentum vector. That is: ${{P}_{a}}=\left\| {{{\vec{P}}}_{a}} \right\|=\left\| \sum\limits_{i=1}^{{{m}_{a}}}{{{{\vec{c}}}_{i}}} \right\|$ and finally
• ${{v}_{a}}$ , its speed, is the ratio of its momentum over its mass or ${{v}_{a}}=\frac{{{P}_{a}}}{{{m}_{a}}}=\frac{\left\| \sum\limits_{i=1}^{{{m}_{a}}}{{{{\vec{c}}}_{i}}} \right\|}{{{m}_{a}}}$.
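A minimal numeric sketch of these definitions, in natural units with the fundamental momentum $c=1$ (`intrinsic_properties` is a hypothetical helper name, not QGD notation):

```python
import math

c = 1.0  # fundamental momentum of a preon(+), in natural units

def intrinsic_properties(directions):
    """Given the unit direction of each component preon(+)'s momentum
    vector (each of magnitude c), return (mass, energy, momentum, speed)
    per the QGD definitions: m = preon count, E = m*c, P = ||sum c_i||,
    v = P/m."""
    m = len(directions)                       # mass = number of preons(+)
    E = m * c                                 # E_a = m_a * c
    P_vec = [c * sum(d[i] for d in directions) for i in range(3)]
    P = math.sqrt(sum(x * x for x in P_vec))  # magnitude of the vector sum
    v = P / m                                 # v_a = P_a / m_a
    return m, E, P, v

# A "photon": all component momenta aligned -> v = c exactly.
print(intrinsic_properties([(1.0, 0.0, 0.0)] * 8))  # (8, 8.0, 8.0, 1.0)

# Opposed momenta partially cancel -> v < c.
print(intrinsic_properties([(1.0, 0.0, 0.0)] * 5
                           + [(-1.0, 0.0, 0.0)] * 3))  # (8, 8.0, 2.0, 0.25)
```

Note how the photon case reproduces the derivation in the text: when all momentum vectors align, $P_a = E_a$ and the speed is exactly $c$.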

All the properties above are intrinsic which implies that they are qualitatively and quantitatively independent of the frame of reference against which they are measured. We must however make the essential distinction between the measurement of a property of an object and its actual intrinsic property.

Take for instance the speed of light, which we have derived from the fundamental description of the properties of mass and momentum and shown to be constant. That is, ${{v}_{\gamma }}=\frac{{{P}_{\gamma }}}{{{m}_{\gamma }}}$ and since the momentum vectors of a photon’s component $preon{{s}^{\left( + \right)}}$ all point in the same direction, we have ${{P}_{\gamma }}={{E}_{\gamma }}$ and $\displaystyle {{v}_{\gamma }}=\frac{{{P}_{\gamma }}}{{{m}_{\gamma }}}=\frac{{{E}_{\gamma }}}{{{m}_{\gamma }}}=\frac{{{m}_{\gamma }}c}{{{m}_{\gamma }}}=c$.

If we were to experimentally measure the speed of light, or more precisely the speed of photons, we would set up instruments within an agreed-upon frame of reference. We would map the space in which the measurement apparatus is set, and though the property of speed is intrinsic, thus independent of the frame of reference, the measurement of the property is dependent on the frame of reference. But if, as we know, the speed of light has been observed to be independent of the frame of reference, how can this be reconciled with QGD’s intrinsic speed?

Before moving forward with the experiment, it is important to consider what it is that our apparatus actually measures. Speed is conventionally defined as the ratio of displacement over time, that is, $v=\frac{d}{t}$ where $d$ is the distance and $t$ is the time. Space and time here are considered physical dimensions, and as a consequence the conventional definition of speed is never questioned.

Distance can be measured by something as primitive as a yardstick and its physicality is hard to argue with. Time and its physicality pose serious problems. Time is assumed to be measurable using a clock of some sort but, as is easily shown, clocks are simply cyclic and periodic systems linked to counting devices; they do not measure time but merely count the number of repetitions of arbitrarily chosen states of these systems.

So conventional speed in general, and that of light in particular, is simply the distance in conventional units something travels divided by the number of cycles a clock goes through during its travel. The conventional definition of speed is therefore not a measure of the object’s speed, but of the distance it travels per clock cycle. That goes for the speed of photons as well.

There is a relation between conventional speed and intrinsic speed, and we find that the conventional speed of a photon is proportional to its intrinsic speed, that is, $\frac{d}{t}\propto {{v}_{\gamma }}$. But while conventional speed is relational (and not physical, since time itself is not physical), intrinsic speed is physical since it is derived from momentum and mass, both of which are measurable, hence physical.

Now, going back to frames of reference, let us assume a room moving at an intrinsic speed ${{v}_{a}}$. A source of photons is placed at the very centre of the room and the photons are detected by detectors placed on the walls, floor and ceiling. The source and detectors are in turn linked to a clock by wires of the same length. The clock registers the emission and the reception of the photons in such a way that we can calculate the conventional speed of the photons. For now, we will assume that the direction of motion of the room is along the $x$ axis. QGD predicts that even though the intrinsic speed of photons is reference frame independent, their one-way conventional speed at detector ${{D}_{{{x}_{1}}}}$ will be larger than their one-way conventional speed at detector ${{D}_{{{x}_{2}}}}$. The relativity theory predicts that the conventional speed of photons will be the same at both detectors independently of ${{v}_{a}}$. So all that is needed to test which theory gives the correct prediction is to make one-way measurements of the conventional speed of photons. The problem is that all measurements of the speed of light are two-way measurements, and any possible contribution of ${{v}_{a}}$ to the conventional speed of photons traveling in one direction is cancelled out when they are reflected in the other direction. In other words, since both QGD and the relativity theory predict that the two-way measurements will be equal at ${{D}_{{{x}_{1}}}}$ and ${{D}_{{{x}_{2}}}}$, such experiments cannot distinguish between QGD and the relativity theory.
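A toy Galilean calculation (not a full QGD treatment) shows why the reflection cancels the effect: the one-way speeds differ from $c$ at first order in ${{v}_{a}}/c$, while the two-way average differs only at second order:

```python
# One-way vs two-way light-speed measurement in a frame moving at v_a,
# in a naive Galilean picture (illustrative only).

c = 1.0
v_a = 1e-3          # frame's intrinsic speed, in units of c
d = 1.0             # one-way distance

t_forward = d / (c - v_a)   # leg against the motion
t_back = d / (c + v_a)      # leg with the motion
two_way_speed = 2 * d / (t_forward + t_back)   # = c * (1 - v_a**2 / c**2)

one_way_dev = abs((c - v_a) - c) / c   # 1e-3: first order in v_a/c
two_way_dev = abs(two_way_speed - c) / c   # 1e-6: second order only
print(one_way_dev, two_way_dev)
```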

However, a similar experiment which measures not speed but momentum can distinguish between the theories. The photons at detector ${{D}_{{{x}_{2}}}}$ will be redshifted while those at ${{D}_{{{x}_{1}}}}$ will be blueshifted. Both theories predict ${{P}_{{{D}_{{{x}_{1}}}}}}>{{P}_{{{D}_{{{x}_{2}}}}}}$, but their predictions for the other detectors are different.

Assuming that the room’s motion is aligned with the $x$ axis*, the relativity theory predicts that ${{P}_{{{D}_{{{x}_{1}}}}}}>{{P}_{{{D}_{{{y}_{1}}}}}}={{P}_{{{D}_{{{y}_{2}}}}}}={{P}_{{{D}_{{{z}_{1}}}}}}={{P}_{{{D}_{{{z}_{2}}}}}}>{{P}_{{{D}_{{{x}_{2}}}}}}$. For the same experiment, QGD predicts ${{P}_{{{D}_{{{x}_{1}}}}}}={{P}_{{{D}_{{{y}_{1}}}}}}={{P}_{{{D}_{{{y}_{2}}}}}}={{P}_{{{D}_{{{z}_{1}}}}}}={{P}_{{{D}_{{{z}_{2}}}}}}>{{P}_{{{D}_{{{x}_{2}}}}}}$.

If QGD’s prediction is verified, then the intrinsic speed of the frame of reference can be calculated using the equations we introduced earlier to describe the redshift effect. That is, from our description of the redshift effect, we know that $\displaystyle {{P}_{\gamma }}=\Delta {{P}_{{{D}_{{{x}_{1}}}}}}$, and then $\displaystyle \frac{c-{{v}_{a}}}{c}{{m}_{\gamma }}={{P}_{\gamma }}-\frac{{{v}_{a}}}{c}{{m}_{\gamma }}={{P}_{{{D}_{{{x}_{1}}}}}}-\frac{{{v}_{a}}}{c}{{m}_{\gamma }}={{P}_{{{D}_{{{x}_{2}}}}}}$ and $\displaystyle {{v}_{a}}=\left( {{P}_{{{D}_{{{x}_{1}}}}}}-{{P}_{{{D}_{{{x}_{2}}}}}} \right)c$.

Once the intrinsic speed of a reference system is known, it can be taken into account when estimating the physical properties of light-emitting objects from within it.

QGD’s description of the redshift effect implies distinct predictions for all observations based on redshift measurements, but I would like to bring attention to one direct consequence which has been confirmed by observations: the observed flatness of the orbital speed of stars around their galactic centers.

* The alignment with the $x$ axis is found by rotating the detector assembly so that the ${{D}_{{{x}_{2}}}}$ detector measures the lowest momentum (largest redshift).
