In his article *The Good Reasons Scientists Are So Hostile to New Ideas*, Ethan Siegel writes:

*“Proposals that attempt to revolutionize one (or more) of our accepted theories have a large suite of hurdles to overcome. In particular, they must:*

- *reproduce all the successes of the prevailing theory,*
- *explain a phenomenon more successfully than the current theory can,*
- *and make novel predictions that can be tested that differ from the theory it’s attempting to supersede.”*

Satisfying the above criteria requires that a new theory or proposal be sufficiently developed. It is unlikely that a new proposal or theory will be capable of meeting those criteria early on. Unlike accepted theories, new theories have not benefited from a century of contributions by generations of scientists. So the consequences of new ideas must be fully explored before they are shot down for failing to meet the criteria.

That said, I agree with pretty much every point Siegel makes in his article: at some point a new idea, once sufficiently developed into a theory, must meet those criteria, but one should avoid putting it to the test prematurely. Not only do I agree with the criteria he enumerates, I think a theory should be held to an even more rigorous set. From *Quantum-Geometry Dynamics; an Axiomatic Approach to Physics*:

*Any theory that is rigorously developed from a given consistent set of axioms will itself be internally consistent. That said, since any number of such axiom sets can be constructed, an equal number of internally consistent theories can be derived. To be a valid axiomatic physics theory, a theory must answer positively to the following questions.*

- Do its axioms form an internally consistent set?
- Is the theory rigorously derived from the axiom set?
- Are all descriptions derived from the theory consistent with observations?
- Can we derive explanations from the axiom set that are consistent with observations?
- Can we derive from the axiom set unique and testable predictions?

*And if an axiom set is consistent and complete, then:*

*Does the theory derived from the axiom set describe physical reality at all scales?*

Considering that the theory has existed for just a little more than a decade, I have set the bar even higher than what Siegel proposes.

There is a need, however, to clarify what Siegel means by “reproduce all the successes of the prevailing theory.” I take it to mean that the new theory must make it possible to derive predictions that are consistent with the predictions of accepted theories. That said, predictions may be consistent yet derived from different sets of axioms, which implies that even when observations are consistent with both the new theory and the accepted one, the interpretations of those observations will differ.

He concludes:

*“In the end, the Universe will always be the ultimate arbiter of what is real and what theories best describe our reality. But it’s up to us — the intelligent beings that conduct the enterprise of science — to rigorously uncover those truths. Unless we do it responsibly, we run the risk of fooling ourselves into believing what we want to be true. In science, integrity and intellectual honesty are the ideals to which we must aspire.”*

This is the very definition of the scientific method. Ethan Siegel understands and explains it very well in this and other articles, so it is not for lack of understanding that he forgoes the scientific method in his article titled **This Is Why Space Needs to Be Continuous**, in which he argues against the idea of space being discrete.

The validity of a theory (or premise) can only be determined by the scientific method described above, which implies descriptions, explanations, and predictions that have been or can be tested experimentally. Theoretical arguments are important, but they cannot substitute for experiments.

The validity of a theory cannot be determined from within the framework of a second theory when their axiom sets are mutually exclusive. There is a simple reason for this. Using a premise in an argument based on a theory that axiomatically excludes that premise will inevitably make the argument internally inconsistent, rendering its conclusion inconsistent with both the premise and the theory on which the argument is based. We can certainly determine whether an idea is consistent with a theory, but being found inconsistent with an accepted theory does not invalidate the idea.

For example, space continuity is an implicit axiom of the theories of relativity. Space continuity and space discreteness are mutually exclusive. It follows that the theories of relativity cannot be used to describe physics in discrete space or to make consistent predictions about it.

Siegel argues that space discreteness is also inconsistent with the principle of relativity, but that is a foregone conclusion considering that the two are axiomatically mutually exclusive.

Several strong theoretical arguments can be made against space discreteness. If space were discrete rather than continuous, then gravitational waves would not exist, time would not be physical, and the universe would be strictly deterministic, to give only a few examples, all of which have overwhelming supporting evidence. But Siegel’s entire argument against space discreteness is based on an application of the principle of relativity, a postulate (another word for axiom) of special relativity. The principle of relativity precludes the possibility of measuring absolute velocity, distance, momentum, and so on.

From the impossibility of making an absolute measurement, Siegel concludes that space cannot be discrete. His argument is that space discreteness implies the existence of a fundamental unit of distance (a smallest possible distance), yet the measurement of this distance depends on the observer. Hence, two observers in constant motion relative to one another would not agree on its length, contradicting the existence of a fundamental unit of distance. It follows that space discreteness implies that the principle of relativity is wrong, and consequently so is special relativity. But how could special relativity be wrong, considering that its predictions have been shown to be consistent with observations to a very high degree of accuracy?
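The contraction argument can be sketched in one line using the standard Lorentz length-contraction formula (this is the textbook relation Siegel’s argument relies on, not a result of QGD):

```latex
L = L_0\,\sqrt{1 - \frac{v^2}{c^2}} \;<\; L_0 \qquad \text{for } 0 < v < c
```

If $L_0$ were the postulated fundamental (smallest possible) length, any observer moving at speed $v$ relative to it would measure a length $L$ shorter than the supposed minimum, which is the contradiction the argument invokes.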

One must keep in mind that special relativity is a measurement theory which accurately predicts the relative measurements of physical properties. Special relativity (though not necessarily nature) precludes absolute measurements. But special relativity cannot handle space discreteness or make any predictions about the physics of discrete space. Only a theory derived from a self-consistent axiom set that has space discreteness as one of its axioms can make predictions about physics in discrete space. Such a theory would not only predict the absolute measurements of all physical properties (all observers would make the same measurements), but would also predict relative measurements as accurately as, or more accurately than, special relativity.

It stands to reason that any such theory should possess explanatory powers comparable to those of the theories of relativity and, beyond that, should make unique testable predictions that set it apart. These would be the necessary and sufficient criteria to validate the theory. But Siegel ignores the criteria of the scientific method and replaces them with a single theoretically biased criterion: space discreteness must be consistent with the principle of relativity.

On its own, the idea of space discreteness has nothing to say about reality; it has zero descriptive, explanatory, or predictive power. Only a theory based on a self-consistent axiom set that contains an axiom of space discreteness can be tested using the criteria we have set forth as necessary and sufficient. Only by comparing such a theory’s descriptions, explanations, and predictions to observations can it be validated or refuted. This brings me to quantum-geometry dynamics.

Quantum-geometry dynamics (QGD) is derived from a consistent axiom set containing an axiom of space discreteness. In terms of state of development, there is no comparison between what a single individual can accomplish in a decade and what generations of researchers can accomplish over a century, yet QGD already explains and describes what we know, and makes a number of testable predictions. So let us hope that some physicists will do what Siegel suggests but failed to do in the case of space discreteness, and rigorously subject QGD to the scientific method. The idea of space discreteness certainly deserves further exploration.

There are several articles on my blog that discuss different fields of application, but reading the opening chapters of *Quantum-Geometry Dynamics; an Axiomatic Approach to Physics* is a minimal prerequisite to understanding them. Also, QGD has evolved in important ways since 2010 as I have gained a better understanding of it. I have kept the older articles for reference, but only the latest articles and the book reflect the current state of QGD.