Possible Explanation of the Quark Confinement and Asymptotic Freedom, and Possible Solution to the Yang-Mills Mass Gap Problem

With this work, we try to answer three fundamental questions that have puzzled mathematicians and physicists for several decades. As is known, spontaneous symmetry breaking (SSB) and the Brout-Englert-Higgs Mechanism (BEH-M) solved the Yang-Mills Mass Gap Problem. However, various mathematicians, even prestigious ones, consider the basic assumptions of the gauge theories to be wrong, in conflict with the experimental evidence, and in clear disagreement with the facts, distorting physical reality itself. Likewise, Quantum Field Theory (QFT) is mathematically inconsistent, adopting a mathematical structure that is somewhat complicated and arbitrary, and that does not satisfy strong demands for coherence. The weakest point of the gauge theories, in our opinion, consists in imposing that all particles must be free of intrinsic mass (massless). On the contrary, even for the particle universally considered massless, i.e. the photon (P), our calculations show a dynamic mass, a push-momentum (p) of 1.325⋅10⁻²² [g⋅cm/s]. That is, an optical P hits a particle with an energy-mass greater than the rest mass of 100 protons. It is clear that if we inserted this full value of the P into the equations of the Perturbation Theory, QFT and the Yang-Mills theories, all divergences, that is, all zeroes and infinities, would suddenly disappear. Consequently, the limits imposed by the SSB disappear, so that there is no longer any need to deny a mass to the bosons of the Nuclear Forces, including the Yang-Mills b quantum. Moreover, photons (Ps) are the basis of the quantum vacuum energy, which is distributed ubiquitously, including within intra-atomic spaces. It is likely that many Ps were trapped in atomic nuclei (at the time of nucleosynthesis) and among quarks (Qs) at the time of the primordial nucleonic synthesis.
We believe that when Qs get too close to each other, to the point of repelling each other (Asymptotic Freedom of Qs), this may depend on the presence of a multitude of Ps that, no longer compressible, begin to exert an anti-gravitational repulsive force, just like a Dark Energy. This limit to the Compressibility (C) of radiation is shown in the equation PV^(4/3) = C, where V is the volume and P is the pressure of the photonic gas. Quantum Mechanics plays a crucial role, through the Uncertainty Principle, in the spatial Confinement of Qs, which have remained eternally confined in an extremely narrow space by the Strong Interaction, but first of all by the very short range (likely ≈ 8.44 [±1.44]⋅10⁻¹⁶ cm) and lifetime of the gluon (G) which, from our calculations, is ≈ 2.73 [±0.564]⋅10⁻²⁶ s. Therefore, a new parameter may be added to the spatial Confinement of Qs and Gs: the Temporal Confinement of the b quantum, or G (and of their Colours and anti-Colours).
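The two figures quoted above can be cross-checked with elementary CGS arithmetic. The sketch below is only a plausibility check under assumed inputs (a 500 nm optical wavelength for the photon, and the gluon range estimated as r = c⋅τ from the quoted lifetime); it is not part of the original derivation:

```python
h = 6.626e-27   # Planck constant [erg*s], CGS
c = 2.998e10    # speed of light [cm/s]

# photon push-momentum, assuming p = h/lambda for an optical photon
lam = 5.0e-5    # 500 nm expressed in cm (assumed wavelength)
p = h / lam
print(f"p = {p:.3e} g*cm/s")   # ~1.325e-22, matching the quoted value

# gluon range estimated from the quoted lifetime, assuming r = c*tau
tau = 2.73e-26  # quoted lifetime [s]
r = c * tau
print(f"r = {r:.2e} cm")       # ~8.2e-16 cm, close to the quoted range
```

The agreement of both numbers suggests the quoted momentum corresponds to p = h/λ at optical wavelengths, and the quoted range to a light-crossing distance.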


1. INTRODUCTION
It is well known that "the observation that the laws of physics are not invariant under gauge transformations dates back to Galileo Galilei" [1]. The Newtonian physical-mathematical interpretation of the Universe is based on this basic concept of Galileo's. Later, Einstein followed with his famous proposal of 1905, now known as the Special Theory of Relativity [2]. This theory is a systematic continuation of the idea, central in all of Physics from Newton onwards, according to which the Laws of Physics must be written in the same way, i.e., they must be formally the same, in a vast class of reference systems. Galileo had already named it the Principle of Relativity. Both Newtonian Mechanics and Special Relativity identify this vast class with the class of inertial reference systems: the Laws of Physics are formulated in exactly the same terms in every inertial reference system [3].
To this fundamental concept, elaborated by Galileo and supported by Newton, Einstein adds another axiom: light, and with it every electro-magnetic (EM) radiation, moves with the same speed in every inertial reference system. The speed of a body depends on which reference system is used to observe it [3]. "The real Einsteinian revolution is essentially based on the assumption that the speed of light is an exception: anyone who measures it will always find the notorious 300,000 km/s. In order to make such a hypothesis consistent with the rest of Mechanics, Einstein is forced to submit the concept of simultaneity to an old criticism, arriving at the known results of Special Relativity: mass-energy equivalence; contraction of lengths and time dilation for moving objects. But there are two points that still strongly displease Einstein: the main one is perhaps the limitation, still intolerable for him, of the whole theory to inertial reference systems only. From this point of view, Special Relativity is no more relativistic than Newtonian Relativity. The other point is the difficulty of dealing with gravitational phenomena within Special Relativity" [3].
And so, under the pressure of this dissatisfaction, in the decade 1905-1915, Einstein manages to jointly resolve these two difficulties with the Theory of General Relativity [4]. It is indisputably evident that the invariance of physical laws (under arbitrary coordinate transformations) and the insuperability of the speed of light remove every trace of the action at a distance that characterized Newton's gravitation. Thus, the observer at a point in space-time, x, is influenced only by what happens in its immediate vicinity: in its proximity, space-time appears flat (gravity disappears if we are in free fall, like the astronauts in orbit) and the general coordinate transformations reduce to the Lorentz Transformations of Special Relativity. We can see General Relativity as the theory of the invariance of the laws under Lorentz transformations that depend on the point. The particles travel along the geodesics of space-time, so that the dynamics of gravitational forces is completely determined by geometry. These concepts are elegantly summarized in Einstein's field equation: R_ab − ½R g_ab = −8πG T_ab (1), where R_ab expresses the Ricci-Curbastro tensor (the curvature tensor of space-time, built starting from the Riemann tensor); R indicates the scalar curvature (or Ricci scalar); g_ab is the metric tensor; the factor 8π derives from the fact that we are dealing with a density (it originates from the 4π of Newtonian gravitation), while the sign − expresses the contraction, or reduction in volume (in the deviation of the geodesics), consequent to the inward acceleration (induced by the R_ab tensor); G is Newton's gravitational constant, and T_ab is the energy-momentum tensor (it measures the mass density of matter) [5]. Maiani states: according to Einstein's field equation, the Riemann tensor (geometry) must be equal to the energy-momentum tensor (forces), up to a factor determined by Newton's gravitational constant [1]. Penrose adds: "The quantities in the left member of Eq.
(1) refer to certain measurements of this mysterious space-time curvature; those in the right member concern the energy density of matter. The famous Einstein equation E=mc² tells us that energy is essentially equivalent to mass, so the terms in the right member also refer to mass density. We also remember that mass is the source of gravity. Einstein's field equation tells us, therefore, how the curvature of space-time (left member) is directly connected to the distribution of mass in the Universe (right member)" [5]. According to Quantum Field Theory (QFT), a Dirac field (Ψ), with mass m and non-interacting, is described by the (Dirac) Lagrangian, Ł_D: Ł_D = Ψ̄(iγ^μ∂_μ − m)Ψ (2), where the γ^μ are the Dirac matrices, which satisfy the Clifford algebra {γ^μ, γ^ν} = 2g^{μν}I; i is the imaginary unit, and Ψ̄ is the conjugate (anti-particle) field. The quantity Ł_D is, by construction, invariant under the action of the Poincaré group. However, it presents a further symmetry, not associated with space-time transformations. In fact, the form of Ł_D remains unchanged if we rotate the phase of the field Ψ by a real angle θ, that is, if we perform the gauge transformation: Ψ → Ψ' = e^{ieθ}Ψ (3). The real quantity e that appears is the so-called gauge coupling parameter; it depends on the particular field Ψ on which the transformation acts and coincides, up to the sign, with its electric charge. The value of the continuous parameter θ univocally identifies each particular gauge transformation, as shown in Eq. (3) [6]. The gauge connection can now be established. As we know, the set of these transformations forms a Lie group, namely the symmetry group U(1), or group of Unitary transformations (U) of one complex variable (1). From a geometric point of view, these transformations are analogous to the continuous rotations of the circle.
With the difference that, while the circle is drawn in an ordinary plane with two real dimensions, the transformations of the group U(1) concern rotations in the 2-dimensional complex plane: the latter is formed by two real dimensions, one of which is multiplied by the imaginary number i. The group U(1) can also be represented in terms of continuous transformations of the phase angle (θ, for example) of a sinusoidal wave [7].
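The statement that U(1) acts as rotations in the complex plane can be illustrated in a few lines of code; this is a generic numerical illustration (with arbitrary sample angles), not taken from the text:

```python
import numpy as np

# Multiplying by e^{i*theta} rotates a complex number without changing
# its modulus: the "observable" |z| is invariant under the U(1) action.
z = 3.0 + 4.0j
theta1, theta2 = 0.7, 1.9

rotated = np.exp(1j * theta1) * z
assert np.isclose(abs(rotated), abs(z))     # modulus unchanged by the phase

# Closure: two successive phase rotations compose by adding the angles,
# exactly like rotations of the circle.
assert np.isclose(np.exp(1j * theta1) * np.exp(1j * theta2),
                  np.exp(1j * (theta1 + theta2)))
print("U(1) phase rotations preserve the modulus and compose additively")
```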

GAUGE TRANSFORMATIONS and GAUGE THEORIES
As is known, Maxwell's equations do not change their form; that is, they are covariant, so Weyl believed that it was possible to extend this covariance to the gravitational field too, as well as to General Relativity, thus trying to unify electromagnetism and gravity. In fact, working on the theory of continuous symmetry groups (the so-called Lie Groups) and bearing in mind Noether's theorem [8], Weyl was convinced that the Conservation Laws are related to local symmetry transformations, to which he gave the generic name of Eichinvarianz, or gauge invariance (eich = gauge), under a change of the measurement scale: the gauge, precisely; a term unfortunately rather obscure. Eich means scale or caliber, that is, a measure of length. In fact, Weyl sought invariance under dilatations, i.e. with a real scale factor; invariance was instead found later with a phase factor with imaginary exponent. Thus, Weyl conjectured that a gauge symmetry might also be a local symmetry of General Relativity. Then, in 1918, Weyl formulated a gauge theory [9] to be applied to General Relativity. Maiani adds: "In this framework, Weyl formulates his gauge theory of the Electro-Magnetic Interactions (EMI). Weyl postulates that the invariance under local coordinate transformations also extends to the calibration of physical lengths" [1]: dx → e^{λ(x)} dx (4), with λ a real function of the coordinates. Equation (4) shows that the scale factor λ(x) (the gauge factor, as it were) is determined by the coefficients of a differential form, A_μ(x): λ(x) = ∫^x A_μ(y) dy^μ (5).
The integral must be performed on a path that, starting from the origin, arrives at the point x. The path is arbitrary, and the same result is or is not obtained for all paths, depending on whether the integrability condition holds: F_{μν}(x) = ∂_ν A_μ(x) − ∂_μ A_ν(x) = 0 (6).
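The role of the integrability condition (6) can be illustrated numerically in two dimensions: when A is a pure gradient, the field F vanishes and the line integral defining λ(x) is path-independent, so every closed-loop integral is (numerically) zero. This is an illustrative sketch with an arbitrarily chosen potential, not part of the original text:

```python
import numpy as np

phi = lambda x, y: x**2 * y + np.sin(y)   # arbitrary scalar function (illustrative)
Ax = lambda x, y: 2.0 * x * y             # A = grad(phi), so F vanishes identically
Ay = lambda x, y: x**2 + np.cos(y)

def line_integral(path):
    """Midpoint-rule approximation of the line integral of A along a polyline."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        total += Ax(xm, ym) * (x1 - x0) + Ay(xm, ym) * (y1 - y0)
    return total

t = np.linspace(0.0, 2.0 * np.pi, 2001)
loop = list(zip(np.cos(t), np.sin(t)))    # closed unit circle
print(abs(line_integral(loop)))           # ~0: integrability holds, no "new forces"
```

If a non-gradient piece were added to A, the loop integral would no longer vanish, which is exactly the situation of the next equation, where F measures the new forces.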
When this does not happen, there are new forces, determined by F_{μν}, and the corresponding equation of motion takes the form: ∂^ν F_{μν}(x) = e J_μ(x) (7).
Maiani points out: "It is an equation of the type of Einstein's equation, a geometric entity equated to a dynamical entity, which determines the forces exerted on the matter that carries the quality associated with the current J_μ. Weyl identifies, of course, J_μ with the electromagnetic current, the constant e with the elementary electric charge, and F_{μν} with the Maxwell tensor, which sums up the electric and magnetic fields derived from A_μ(x)" [1]. However, Einstein immediately replied that the laws of physics are not invariant under gauge transformations and that Weyl's gauge theory was in conflict with the Theory of Relativity [1].
Nevertheless, after the development of Quantum Mechanics (QM), Fock and London modified the gauge by replacing the scale factor with a complex quantity, thus turning the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the effect of the electromagnetic field on the wave function of a charged quantum-mechanical particle. In fact, both Vladimir Fock, 1926 [10], in relation to the Schrödinger equation for the electron (generalizing the Klein-Gordon equation), and Fritz London, 1927 [11], in formulating the theory of superconductivity, observed that the minimal substitution of classical electromagnetism, H → H − qA⁰ (8), p → p − qA (9), leads, in the Schrödinger equation, to the substitution i∂/∂t → i∂/∂t + eφ(x,t) (10), −i∇ → −i∇ + eA(x,t) (11), which allows us to make the Schrödinger equation invariant under the simultaneous substitutions Ψ(x) → e^{ieθ(x)}Ψ(x) (12), A_μ(x) → A_μ(x) − ∂_μθ(x) (13), on the wave function of the electron, Ψ(x), and on the EM field, A_μ(x).
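The invariance under a joint phase change of the wave function and shift of the potential can be checked numerically in one dimension. The sketch below uses one common sign convention, with the covariant operator D = −i d/dx + eA, under which Ψ → e^{ieθ}Ψ together with A → A − dθ/dx reproduces the operator up to the overall phase; the sample fields are arbitrary choices, not from the text:

```python
import numpy as np

e = 0.5
psi = lambda x: np.exp(-x**2)          # sample wave function (illustrative)
A = lambda x: np.cos(x)                # sample potential (illustrative)
theta = lambda x: x**3 / 3.0           # arbitrary local phase; theta'(x) = x^2

def d(f, x, h=1e-5):                   # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

def D(f, Af, x):                       # covariant operator D = -i d/dx + e*A
    return -1j * d(f, x) + e * Af(x) * f(x)

x0 = 0.7
psi_t = lambda x: np.exp(1j * e * theta(x)) * psi(x)   # transformed wave function
A_t = lambda x: A(x) - x**2                            # transformed potential A - theta'

lhs = D(psi_t, A_t, x0)
rhs = np.exp(1j * e * theta(x0)) * D(psi, A, x0)
print(abs(lhs - rhs))                  # ~0: the equation is form-invariant
```

The residual is at the level of the finite-difference error, confirming that the phase change of Ψ is exactly compensated by the shift of A.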
In 1929, Weyl accepted the crucial introduction of the imaginary unit (i) in the exponent and proposed, indeed, that the invariance under transformations (12) and (13) should be the principle from which to derive the laws of electrodynamics, the principle to which he gave the name of gauge principle, or minimal principle. Weyl's 1929 work marks the beginning of modern gauge theories [1]. So, with his 1929 article [12], Weyl associates the transformation on the potentials with a gauge transformation of the wave function (WF) [13]. In fact, Weyl had not completely surrendered to the severe objections raised by Einstein against his 1918 article on gauge theories [9].

The MASS BREAKS the SYMMETRY
Therefore, taking inspiration from Fock's work (1926) on Schrödinger's equation for the electron's Wave Function, and from London's work (1927) on superconductivity, in 1929 Weyl published another work in which he attributed great importance to the Gauge Theories [12]. This article also fully preserves the same parameters and mathematical procedures previously contested by Einstein and Pauli, such as the assumption that "in an invariant gauge theory, all the particles should have zero mass like the photon" [1].
In fact, the downside of the Gauge Symmetry Theories, in our opinion, lies in the fact, really paradoxical from a logical point of view, that the introduction of a simple mass parameter, necessary to describe the intrinsic mass of a particle, is in contradiction with the existence of this symmetry: it is said, that is, that the mass breaks the gauge symmetry. According to the Standard Model, the problem can be solved by assuming (as a dogma) that all particles have a null intrinsic mass and by postulating the existence of a complex scalar field permeating space. The re-introduction of the mass parameter then causes the gauge symmetry to be no longer explicit, but spontaneously broken: Spontaneous Symmetry Breaking (SSB) [14] [15] [16].
It is, in this case, a symmetry hidden by the mass. In this regard, in 1956 Nambu attended a seminar by Schrieffer on Superconductivity. This theory, developed by Bardeen, Cooper, and Schrieffer (BCS theory), explained how certain crystalline materials, when cooled below a critical temperature, lose all electrical resistance, becoming superconductors. Although charges of the same sign repel each other, the electrons of a superconductor experience a weak mutual attraction. This happens because a free electron, passing close to a positive ion of the crystal lattice, moves it slightly from its position, distorting the lattice. The electron continues on its way, but the lattice continues to vibrate, and this vibration produces a slight excess of positive charge, which attracts a second electron. The result is pairs of electrons with opposite spin and momentum (called Cooper pairs). These pairs of electrons behave like bosons, so they can accumulate in large numbers and condense into a single state, traveling through the lattice without any resistance. For Nambu, as Baggott reminds us, this theory did not seem to respect the gauge invariance of the electro-magnetic (EM) field, since it did not respect the conservation of the electric charge. According to Nambu, the BCS theory of superconductivity was an example of SSB applied to the gauge field of electromagnetism. That is, the SSB is due to tiny fluctuations in the surrounding environment, which are part of the background noise. The SSB regards the minimum energy state of a system, called the vacuum state. However, the possibility of coordinated motions of Cooper pairs, mediated by lattice vibrations, creates a lower-energy vacuum state. In this case, the gauge symmetry of electromagnetism, U(1)_Q, is broken by the presence of another field, whose quanta are represented by the Cooper pairs.
The laws describing the dynamics of electrons in the material remain invariant with respect to the local gauge symmetry U(1), but the vacuum state is no longer so. Nambu realized that, since the Cooper pairs exist in a lower energy state, it is necessary to supply energy to break them up: the free electrons thus created would have an additional energy, equal to half of that necessary to separate a pair. This additional energy would appear as a mass. Is it then enough, Nambu wondered, to break the symmetry in order to have massive particles? [7]
The SSB, however, implied the appearance of new massless particles, though it was argued that these particles could acquire a small mass, so as to identify them with the pions. These new massless particles are the so-called Nambu-Goldstone bosons.
These massless bosons, as Baggott writes, were subject to the same objections that weighed on the massless particles of Quantum Field Theory (QFT): any new massless particle predicted by the theory should have been ubiquitous, like the photon. Instead, these additional particles had never been observed. The SSB promised a solution to the problem of massless particles in Yang-Mills field theories, but at the price of introducing once again new massless particles, never observed. One problem was solved, and a new one was created [7].

BROUT-ENGLERT-HIGGS MECHANISM
So it was conjectured, more or less at the same time and independently, by Englert and Brout [17], by Higgs [18], and by Guralnik, Hagen and Kibble [19], that particles would tend to interact, to couple, with this complex scalar field, now known as the Higgs field (HF), acquiring a non-null rest energy which in almost all respects is analogous to a rest-mass value, and is then describable as a mass parameter. As is well known, the mechanism just described is the so-called Brout-Englert-Higgs Mechanism (BEH-M). The BEH-M requires the intervention of the quantum of the permeating HF, i.e. the Higgs Boson (HB) [18] [20].
To this purpose, it is interesting to note that the coupling between the various particles ("among bosons only those bearers of weak charge" [21]) and HF (steeped in weak charge) complies with the gauge symmetry and explains the presence of non-null rest masses.

Gauge Theories in Conflict with Relativity Theory
As is known, three years after Einstein's introduction of his 1915 General Theory of Relativity, Weyl suggested a generalization in which the notion of length itself became dependent on the path [9]. However, "When Einstein got to know of the gauge theory, he informed Weyl that he had a fundamental physical objection. The spectral frequencies, for example, are not at all influenced by the history of an atom, as predicted by Weyl's theory. And even more fundamental, Weyl's theory is in conflict with the necessarily exact identity between particles of the same kind. There is, in particular, a direct relationship between the rhythms of clocks and the masses of particles. A particle with rest mass m has a natural frequency mc²h⁻¹, where h is the Planck constant and c is the speed of light. In this way, in Weyl's geometry, not only the rhythms of the clocks but also the mass of a particle would depend on its history. Consequently, two protons with different histories would almost certainly have different masses, according to Weyl's theory. This would violate another quantum principle, namely that all particles of the same type must be exactly identical. Indeed, Einstein's objection to Weyl's original gauge idea was based on the fact that the mass of a particle, and therefore its natural frequency, is directly measurable, so that it cannot be used as a gauge field in the required sense. This matter gets muddy in some modern uses of the gauge idea" [22]. As Penrose points out, Noether's theorem shows various limitations in the case of Gravitational Theory: when gravity is included, there must be the gauge invariance appropriate to gravity, i.e. the invariance with respect to the coordinates, using the mathematical formalism of tensors [22].
In fact, in Weyl's theory the null cones retain the fundamental role they play in Einstein's theory (they define the boundary velocities for massive particles and give us the local Lorentz group that must act in the vicinity of each point), so that a Lorentzian metric g (e.g., + − − −) is still locally required in order to define these cones. There are, however, some structures additional to this null-cone structure (that is to say, the conformal structure), and precisely a gauge connection, which Weyl introduced so that its curvature would be Maxwell's tensor F (i.e., F_ab). This curvature measures the discrepancy of the clocks' rhythms [22].
Penrose wonders: "What is the geometric nature of the bundle on which this connection acts? It is appropriate to think that the bundle is the vector bundle of the possible values of the complex field Ψ at each point, where the freedom of phase (gauge) multiplication makes the bundle a U(1) bundle on the space-time M (Minkowski's curved space). In order for this to make sense, Ψ must be a complex field whose physical interpretation is, in a certain appropriate sense, insensitive to the substitution Ψ ↦ e^{iθ}Ψ, where e^{iθ} is a complex unit number (with θ real), expressing a pure phase (a rotation in the complex plane, rather than a stretch), and i is the imaginary unit. This unobservable transformation is the famous gauge transformation, where Ψ represents the wave function of an electrically charged particle (such as the electron). If the wave function describes a charged particle, then we can make gauge transformations of the form Ψ ↦ e^{iθ(x)}Ψ (14), where θ is an arbitrary real function of position, allowing us to change the way the phase varies!" [22]. The condition shown in equation (14) is called an electromagnetic gauge transformation, and the fact that the physical interpretation does not depend on that substitution is known as gauge invariance. Thus, "the curvature of our bundle connection is the Maxwell field tensor F_ab. This gauge transformation Ψ ↦ e^{iθ}Ψ is physically 'unobservable'" [22], which represents, however, a remarkable demerit for these gauge transformations and gauge theories.
Note, in fact, that the idea of a gauge connection should depend on the existence of a symmetry (which for electromagnetism should be the symmetry Ψ ↦ e^{iθ}Ψ) which, like a dogma, "is supposed to be exact" [22] although "it is not directly observable" [22].
There is, however, no absolute scale for time and space measurements in the Weyl scheme. To this purpose, we read: "According to Weyl's theory, the way a clock measures time does not depend solely on its current position, but also on its previous positions. Likewise, the emission frequencies of a hydrogen atom will depend both on its current and on its past positions. It is like saying: the behavior of the atom will depend on its history, in contradiction with the experimental evidence. However, Weyl's idea contained a fatal mistake, which Einstein clearly saw from the beginning" [23].
In fact, Einstein explained that the laws of physics are not invariant under gauge transformations, and the elegant electromagnetic field theory had to be abandoned [1].
Einstein had shown that the mathematical formalism introduced by Weyl was excessively incoherent and incongruous, as well as blatantly clashing with the experimental evidence. Initially, in fact, Weyl attributed the gauge invariance to space itself. But, as Einstein soon pointed out, "this implied that the measure of the length of a ruler, or the hour marked by a clock, depended on their recent history. Thus, a clock, moved from one point to another of a room, would no longer mark the correct time" [7].
In short, the mathematics advanced by Weyl contradicted the basic principles of the Theory of Relativity! This was really unacceptable for Einstein.
Pauli, too, was in full disagreement with Weyl's gauge theory. In this regard, he immediately published two articles. In the first [24], as Sparzani tells us, Pauli pointed out a sign error ("a little oversight" [24]) in one of Weyl's formulas. In the second article, however, there is a pitiless and dry criticism [3]. In fact, Pauli wrote: "In Weyl's theory, we continuously work with the intensity of the field in the interior of the electron. However, for a physicist, the latter is defined as a force acting on a test body, and since there are no test bodies smaller than an electron, the notion of the electric field at a mathematical point in its interior appears to be an empty fiction, with no content. It would be preferable to reaffirm that in physics we must introduce only quantities that are observable in principle. Thus: would we not be completely off track if we pursued a theory of the continuum within the electron?" [25].
Incidentally, as Sparzani reminds us, "The mathematics used by Pauli refers to the tensor calculus developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita. It is the same mathematical formalism suggested to Einstein by Marcel Grossmann. Modern textbooks use a more general and abstract context, that of the theory of differentiable manifolds, in which some passages and formulations are more direct. On the contrary, the calculations elaborated by Pauli are rarely found in the most modern manuals" [3].

Gauge Theories in Conflict with Causality Principle
With regard to Quantum Field Theory (QFT), moreover, with reference to a non-interacting Dirac field Ψ of mass m, as shown in Eq. (2), it is not clear whether the value of the continuous parameter θ uniquely identifies each particular gauge transformation represented by the aforementioned Eq. (3). In fact, if the angle θ had a physical meaning, the global character of the gauge transformations represented there, that is, the fact that θ is constant in space-time, would entail a violation of the Causality Principle: it would, in fact, be necessary for the information of a phase change at one point to spread instantaneously throughout space! [6]. Thus, it is necessary to understand whether it is possible to promote the examined symmetry to a local gauge invariance, that is, to have the opportunity to choose, point by point, which phase to assign to the field. To do this, let us consider the generalization of Eq. (3) to the case where θ is a smooth function of the variable x: Ψ(x) → e^{ieθ(x)}Ψ(x) (15). It is immediate to note that the second term of Eq. (2) is locally gauge invariant and does not need any modification. The first term of Eq. (2), on the other hand, requires particular attention because, under the transformation (15), it transforms in a non-trivial way: iΨ̄γ^μ∂_μΨ → iΨ̄e^{−ieθ}γ^μ∂_μ(e^{ieθ}Ψ) = iΨ̄γ^μ∂_μΨ − eΨ̄γ^μ(∂_μθ)Ψ (16). Note that the extra term appearing in Eq. (16), which breaks the gauge invariance of the Dirac Lagrangian (Eq. 2), is generated by the presence of the derivative operator (∂) [6].
We read: "The physical meaning of this gauge invariance formulated by Weyl lies in the possibility of assigning the phase to the fields in an arbitrary way, without changing the observable quantities. This way of thinking contradicts the Causality Principle, since it requires assigning the phase of the fields simultaneously at all space-time points. It looks more physical to require the possibility of assigning the phase in an arbitrary way at each space-time point" [26].

Gauge Theories in Conflict with Baryonic Number Conservation Law
Furthermore, it seems interesting to quote Maiani: "The conservation of the Electric Charge finds its theoretical basis in the gauge invariance of Maxwell's equations" [27], while "the conservation of the Baryonic Number is not associated with any gauge invariance and has always appeared as an artificial rule. However, it applies with great precision" [27].
Actually, this leaves us perplexed, because the gauge invariance does not account for one of the fundamental laws of Physics: the Law of Conservation of the Baryon Number. Yet this law is always preserved: "it applies with great precision" [27]. It is even possible to consider that maybe something "artificial" lies in the "rules", or dogmas, which are the basis of the gauge theories; after all, according to Einstein and Pauli, that mathematics is not up to standard. In fact, with regard to the Standard Model, Maiani underlines: "Unfortunately, the approximate calculation methods available (the Perturbation Theory) are not completely reliable" [28].

DIVERGENCES in PERTURBATION THEORY
An approximation method is useful for finding the changes in the discrete energies, and in the associated wave functions, of a system subject to a small disturbance, or perturbation, provided the energies and the wave functions of the undisturbed system are known. In this method, usually referred to as Rayleigh-Schrödinger Perturbation Theory, the changes in the energies and in the wave functions are expressed as an infinite power series in the perturbation parameter. The approximation then consists in neglecting the terms of the infinite series after the first few. Truncating the series after the first n terms gives the nth-order approximation [29].
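As a concrete illustration of the method (a generic textbook-style example with invented numbers, not taken from the cited source), the first two Rayleigh-Schrödinger corrections for a two-level system can be compared with the exact eigenvalue:

```python
import numpy as np

# Hypothetical two-level system: H = H0 + lam*V, with illustrative numbers.
H0 = np.diag([0.0, 1.0])
V = np.array([[0.3, 0.2],
              [0.2, -0.1]])
lam = 0.1

# Rayleigh-Schrodinger corrections for the ground state:
E0 = H0[0, 0]
E1_corr = lam * V[0, 0]                                # first order: <0|V|0>
E2_corr = lam**2 * V[0, 1]**2 / (H0[0, 0] - H0[1, 1])  # second order
approx = E0 + E1_corr + E2_corr

exact = np.linalg.eigvalsh(H0 + lam * V)[0]            # exact ground-state energy
print(abs(approx - exact))                             # small residual, O(lam^3)
```

Each added order shrinks the residual, which is the behavior the text describes; the divergences discussed below arise precisely when some of these corrections fail to stay small.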
Later, in the 1930s, scientists began to notice that divergences, considered ineliminable, emerged in the equations of the perturbative development of Quantum Electro-Dynamics (QED). In fact, these equations resulted in zeroes and infinities! More precisely, the Perturbation Theory is a mathematical technique for finding an approximate solution to a problem that cannot be solved exactly, starting from the exact solution of a related problem; it is used in modelling physical interactions between particles, etc. At that time, in fact, there was a widespread belief that the infinities arising from the equations of the Perturbative Calculation in QED were absolutely ineliminable.
As we know, QED is a Quantum Theory of the Electromagnetic Field (EMF), i.e., a QFT. As is known, the first formulation of a QFT describing the interaction between radiation and matter (i.e. between photons and electrons) is Dirac's [30]. Likewise, Heisenberg and Pauli also formulated one of the first QFTs, and even they could not solve the relative field equations, which tended to infinite values [31]. Therefore, it was not possible to write a solution of the equations in the form of a single, mathematically compact expression, applicable in all circumstances. They had to resort to an alternative solution method: the so-called Perturbative Development (or Perturbation Theory). Adopting this technique, the equation is rewritten as a potentially infinite sum of a series of terms: x₀+x₁+x₂+x₃+… That is, the series begins with an expression of 'order zero', x₀, or unperturbed term, corresponding to the total absence of interactions, for which the equation is perfectly solvable. The other terms of the series, on the other hand, are perturbative: they represent corrections to the first order, as in the case of x₁, or corrections to the second order (x₂), or to the third (x₃), and so on. The subsequent terms of the Perturbative Development make ever smaller corrections to the zero-order result, progressively bringing the calculation closer to the exact solution. Thus, the accuracy of the final result depends on the number of perturbative terms included in the calculation. However, instead of ever-smaller corrections, Heisenberg and Pauli found that some terms of the Perturbative Development 'exploded', tending to infinite values [7]. Applied to QED, these terms were identified with the so-called self-energy of the electron, due to the self-interaction of the electron with the quanta (i.e., the photons) of the EMF generated by the electron itself.
In our opinion, this represents a fundamental crossroads. In short, the common electron-photon interaction generates infinities. Why?
Because the photon is considered massless; therefore, the most elementary algebra teaches that multiplying a value by zero we get 0, and dividing by zero we get ∞. The same occurs with the radius of the electron, considered null, i.e. equal to zero.
The QED describes all phenomena relating to electrically charged particles interacting through the EM Interaction. It seems interesting to note that, mathematically, QED presents the structure of an Abelian gauge theory with the symmetry group U(1), which physically means that charged particles interact with each other by the exchange of null-mass particles: the photons.
In fact, the gauge field, which mediates the interaction between the charged spin-½ fields, is the EMF. The spinorial QED Lagrangian (LQED), for a spin-½ field interacting with the EMF, is represented as follows: LQED = Ψ̄ (iγμ∂μ − eγμAμ − M) Ψ − ¼ FμνFμν (17), where Ψ and its conjugate Ψ̄ are the fields that represent the charged particles (Dirac spinors: e.g. the electron-positron field); i is the imaginary unit; M indicates the mass of the electron or positron; e is the coupling constant, equal to the electric charge of the bispinor field; Aμ is the covariant four-potential of the EMF generated by the electron itself; Fμν is the EMF tensor, which represents the evolution of the free field, that is, in the absence of additional potentials.
In brief, equation (17) describes the interactions between a quantized material spinorial field (i.e. the electron-positron field) and a non-massive vector field that describes the EM radiation, i.e. the EMF mediated by the photons, considered massless [32].
Thus, Oppenheimer, in 1930 (at the time in the team of Pauli), demonstrated that at the origin of the infinities there was the term expressing the interaction between the electric current and the EMF produced by the electron [33]. Namely, the self-interaction of the electron, considering at second order the processes in which the electron emits and reabsorbs a photon, causes an infinite shift (with quadratic divergence) of the hydrogen spectral lines. Obviously, this occurs because a point value for the radius of the electron (a) is introduced into the equations, thus a → 0 (which is as to give the value a = 0). Consequently, the calculation results in an infinite shift: for a → 0 it diverges as 1/a² [34], where a is the radius of the electron, considered point-like, therefore equal to 0. Precisely, the EM energy of an electron (Eem), thought of as a charged sphere, Eem = e²/4πa = ∞ (e is the electron's charge), is divergent in the limit a → 0 [26].
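The 1/a behaviour is easy to verify numerically. A minimal sketch in arbitrary units (setting e = 1; the numbers are illustrative, not physical values), showing that each tenfold shrinkage of the assumed electron radius multiplies Eem = e²/4πa by ten:

```python
import math

def self_energy(a, e=1.0):
    """Classical EM self-energy of a charge of radius a (arbitrary
    units): E_em = e^2 / (4*pi*a). It diverges in the limit a -> 0."""
    return e**2 / (4.0 * math.pi * a)

radii = [10.0**-k for k in range(1, 6)]        # a shrinking toward 0
energies = [self_energy(a) for a in radii]

# Each tenfold reduction of a multiplies the self-energy by ten.
for e_prev, e_next in zip(energies, energies[1:]):
    assert abs(e_next / e_prev - 10.0) < 1e-9
```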
Oppenheimer writes: "The paper develops a method for the systematic integration of the relativistic wave equations for the coupling of electrons and protons with each other and with the EMF. It is shown that, when the velocity of light is made infinite, these equations reduce to the Schrödinger equation in configuration space for the many-body problems" [35].
Of course, there is something wrong: the speed of light will never be able to reach an infinite value! Waller reviews Oppenheimer's calculations, finding the same abnormal results, and writes: "Using the example of a free electron, we discuss whether, according to the methods of the Dirac radiation theory, it is possible to calculate the interaction of a free electron with its own field. Also, in the non-relativistic theory, fundamental difficulties seem to hinder them" [36].
The literature of the time is full of results similar to those found by Oppenheimer. At the 7th Solvay Congress (Paris, 1933), the polarization of the vacuum was explored, among other things. As is known, in this typical quantum phenomenon, the vacuum continuously generates pairs of particles, such as electron-positron. What happens is that positrons, surrounding the electron, create an asymmetry in the electron charge distribution. That is, as Barrow reminds us, a virtual cloud of positrons reduces the charge of the electron [37], and the calculation of this effect highlights a new infinity, which is added to the infinity generated by the electron self-interaction. Dirac proposes to mutually subtract these 2 infinities [38].
After the Solvay Congress, Pauli instructed Weisskopf to recalculate the electron self-energy (cause of the first infinity highlighted by Oppenheimer), taking into account the production of electron-positron pairs (generated by quantum vacuum fluctuations) and correlated to the polarization of the vacuum: another source of infinities. However, the result was depressing: an infinity was always obtained. In fact, the divergence always existed, even if it was only logarithmic: E ≈ (3/2π)(e²/ħc) mc² ln[ħ/(mca)] (30), where E is the electron self-energy, m its mass, and a its radius, considered as a point (thus 0). In equation (30), the null value of a appears in a denominator: we shouldn't marvel at the infinities! Obviously, this occurs because a point value for the radius of the electron (a) is introduced into the equations, thus a → 0 (which is as to give the value a = 0).
Thus, the calculation results in an infinite shift: for a → 0 it diverges as 1/a². This is not possible, and it is clear that there is an error, which certainly does not lie in the values of m or c; therefore it must lie in the value given to a, that is, to the radius of the electron, considered equal to a point, i.e. equal to zero. At the same time, as equation (18) shows, the energy of the electron tends to ∞.
On the contrary, as everyone knows, the electron rest energy is only 0.511 MeV! Moreover, being massive particles, electrons can in no way occupy a void or point-like volume of space, that is, a volume equal to 0. To this purpose, Feynman comforts us: "Maybe the idea that two points may be infinitely close is incorrect; it is false the assumption that geometry will continue to be invariably unchanged" [39]. He adds: "But if instead of including all the possible points of interaction down to a distance of 0, the calculation is cut off when the distance between the points is very small, there exist defined values of the mass of the electron and of its charge, such that the calculated mass coincides with the value of the mass of the electron measured experimentally, and the calculated charge coincides with the experimental value of the electric charge of the electron" [39].
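Feynman's cutoff remark can be sketched with a toy integral (our illustration, not an actual QED expression): an "interaction density" that behaves like 1/x² gives a finite, defined result for any small but non-zero cutoff distance, and diverges only when the cutoff is pushed all the way to zero.

```python
def cutoff_integral(cutoff):
    """Toy regularization: the integral of 1/x^2 from `cutoff` to 1,
    evaluated in closed form as 1/cutoff - 1. It is finite for any
    cutoff > 0 and diverges only in the limit cutoff -> 0."""
    return 1.0 / cutoff - 1.0

# A finite cutoff yields a defined value...
assert cutoff_integral(0.5) == 1.0

# ...but the value grows without bound as the cutoff shrinks.
values = [cutoff_integral(0.5**k) for k in range(1, 6)]
assert all(v_next > v_prev for v_prev, v_next in zip(values, values[1:]))
```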
Moreover, in Eq. (18) the mass of the electron and the speed of light each appear twice and, multiplied by zero (the point electron), they cancel each other out! No! That's no good. So, in 1934, the rumor began to spread that something was definitely wrong in QED since, on the one hand, the Dirac equation could not explain the experimental data while, on the other, the QFT even produced infinite results.
Therefore, various alternative proposals to the QFT were elaborated in those years. Some authors think that the mistake lies in the mathematical formulation of the classical theory that we want to quantize, namely electrodynamics. Oppenheimer and Furry are convinced that the difficulties and divergences in the calculations concerning the self-energy of electrons "are based on the illegitimate application of the methods of Quantum Mechanics (QM) to the EMF" [40] and show that a QFT manages to incorporate antiparticles (and thus the creation of positron-electron pairs), without resorting to the Dirac sea of infinite negative energy solutions.
At the same time, Heisenberg arrived at the first complete formulation of QED, containing the quantized field equations for the electron, for the EMF, and for the interaction of the electron with its EMF, i.e. the electron-photon interaction. Similarly to Dirac, Heisenberg also adopted the technique of subtraction of the infinities; nevertheless, the infinities remained. Even the electrons, surrounded by the virtual cloud infused with positrons, which attenuates their electric charge, tend to lose the ability to react with other particles [37].
Oppenheimer added: "It is further shown that it is impossible on the present theory to eliminate the interaction of a charge with its own field and that the theory leads to false predictions when it is applied to compute the energy levels and the frequency of the absorption and emission lines of an atom" [33]. Perhaps, unconsciously, Oppenheimer had been prophetic.
Later, Dirac confirms the point-like character attributed to the electron radius: a concept widely shared by researchers. Dirac writes: "One of the most attractive ideas in the Lorentz model of the electron, the idea that all mass is of electromagnetic (EM) origin, appears at present to be wrong, for two separate reasons. First, the discovery of the neutron has provided us with a form of mass, which it is very hard to believe could be of EM nature. Secondly, we have the theory of the positron, a theory in agreement with experiment so far as is known, in which positive and negative values for the mass of an electron play symmetrical roles. This cannot be fitted in with the EM idea of mass, which insists on all mass being positive, even in abstract theory. The departure from the EM theory of the nature of mass removes the main reason we have for believing in the finite size of the electron. It seems now an unnecessary complication not to have the field equations holding all the way up to the electron's centre, which would then appear as a point of singularity. In this way we are led to consider a point model for the electron.
Further reasons for preferring the point-electron have been given by Frenkel (1925). We are now faced with the difficulty that, if we accept Maxwell's theory, the field in the immediate neighbourhood of the electron has an infinite mass. This difficulty has recently received much prominence in quantum mechanics (which uses a point model of the electron), where it appears as a divergence in the solution of the equations that describe the interaction of an electron with an EMF and prevents one from applying QM to high-energy radiative processes. However, some new physical idea is now required, an idea which should be intelligible both in the classical theory and in the quantum theory. A possible line of attack is to modify Maxwell's theory so as to make the energy of the field around the singularity that represents an electron finite" [41].
In short, the method proposed by Dirac consists of a procedure of subtraction of the infinite, similar to the one used to subtract the infinities emerged from the calculations related to the vacuum polarization.

RENORMALIZATION
As you will be aware, in the perturbative development of QED other divergences emerged from Feynman's diagrams. In fact, an integral over a loop (a closed path in a Feynman diagram) leads to clearly divergent expressions. These divergences are due to the non-integrable behavior of the integrand at high momenta: these are the ultraviolet divergences, correlated to vacuum polarization. Other types of divergence, due to singularities in the expressions, emerge in theories which, like QED, contain non-massive particles: the photons [42].
In this case, infrared divergences (correlated to low-energy photons) appear for momenta tending to zero. Obviously, to give mathematical and predictive meaning to QFT, these problematic terms had to be removed. To this end, the so-called renormalization techniques were studied. Meanwhile, resigned to the impossibility of eliminating the infinities emerging from the calculations of Perturbation Theory, in 1938 Heisenberg introduced a fundamental length into the equations, in order to make the electron no longer point-like [43].
In short, the infinities problem drove some physicists to reconsider the classical theories of the electron, hoping that, once a "correct" version of the classical theory was obtained, it would then be possible to proceed consistently with its quantization.
The solution to the problem, known as the renormalization method, was reached only at the end of the 1940s. With Renormalization, in fact, we try to "give a meaning to the results of some theoretical calculations that at first sight seem to be unusable, since they consist of divergent integrals, therefore: infinite" [44]. A decisive step towards renormalization techniques came from the congress held at Shelter Island (New York, 2-4 June 1947), where the main topic was the presentation of the Lamb and Retherford experiment, now known as the "Lamb shift" [45], in which, unlike what was still predicted by the Dirac theory, the s1/2 and p1/2 orbitals are not degenerate.
The following year, at a Conference in Pocono Manor, Pennsylvania (March 30, 1948), Schwinger showed how every infinity can be made to disappear into the mass and the charge of the electron, through calculations that must be relativistically invariant at every step [46]. Schwinger first calculated the anomalous magnetic moment of the electron and then the Lamb Shift, comparing his deductions with Feynman's. He had discovered another Renormalization technique. Schwinger's report lasted for almost 5 hours, supported by a mathematical formalism so complex and difficult that, as Baggott reminds us [7], only Fermi and Bethe managed to follow him to the end.
Tomonaga also described a Renormalization technique, similar to Schwinger's, published as early as 1943 in Japanese, and then in 1946 in English [47].
According to Tomonaga and Schwinger, a flaw in the QFT is the lack of relativistic covariance, both in the relations that connect the fields at different instants and in the field commutators that are assigned at equal times, when it is well known from Special Relativity that the concept of simultaneity is no longer absolute. The transformation function that links the coordinates at different instants must instead depend on two three-dimensional space-like sections (i.e. between any two points of a section the interval is space-like), which replace the non-covariant sections assigned at constant time. Also, the commutators between the fields can be assigned on these sections. We can then write the covariant reformulation of the evolution equation for a field state Ψ[σ], assigned on a space-like surface σ and specified by means of a set of commuting field observables on that surface: iħc δΨ[σ]/δσ(x) = H(x) Ψ[σ], where i is the imaginary unit, ħ is the rationalized Planck constant, c is the speed of light in a vacuum, and H(x) is the interaction energy density of the fields at the space-time point x.
In short, the covariant reformulation of QED makes it possible to eliminate the various infinities, which for so many years had plagued theorists, and to obtain predictions in perfect agreement with the measurements. Since 1941, Wheeler and Feynman had also been trying to solve the problem of the infinities emerging from the calculations on perturbed systems, until they proposed a formulation of the classical point-electron theory based on action-at-a-distance (1949). Their proposal goes through a critical review of Dirac's electron theory, considering as problematic, in the context of classical field theory, the requirement that a particle act on itself: which leads to the well-known divergences of the electron mass, hence infinite. They write: "We do not know enough about QFT and its possibilities to be able, on a quantum basis, to answer the question of whether such a direct self-interaction should exist. Quantum theory defines those measurement possibilities that are consistent with the complementarity principle, but the measuring devices themselves necessarily make use of classical concepts to specify the measured quantities. It is, therefore, appropriate to begin a reanalysis of the field concept by returning to classical electrodynamics" [48].
In this sense, the fundamental problem of classical physics to be retested is, just as the two authors explicitly state, that of the motion of a system of charged particles under the influence of electromagnetic forces. The starting point of the proposal by Wheeler and Feynman is the works of Schwarzschild (1903), Tetrode (1922), and Fokker (1929-32), which are part of the tradition of the nineteenth-century theories (Ampère, Weber) of action-at-a-distance [34]. These works provide a "description of nature", as Wheeler and Feynman call it, in which no direct use of the notion of a field is introduced (fields are only derived quantities; the field does not exist as an independent entity with its own degrees of freedom). Each particle moves in accordance with a principle of stationary action, which encompasses all mechanics and electrodynamics, and, in this sense, constitutes the "natural and self-consistent generalization of Newtonian mechanics to the four-dimensional space of Lorentz and Einstein" [48].
With some surprise, Wheeler and Feynman discover that the attempt could succeed, if one takes into account an anticipated action next to the delayed one, with consequent apparent violation of the principle of causality.
To solve the problem, the two physicists believe that the asymmetry in the initial cosmological conditions causes the absence of anticipated actions. It is important to bear in mind that the technique adopted by Feynman for Renormalization is based on the relativistic QM and not on the QFT [49].

ISOSPIN SYMMETRY
The isotopic spin symmetry (or isospin symmetry) had been introduced by Heisenberg in relation to the surprising similarity of the masses of the proton and neutron (called "nucleons" by Heisenberg [50]) and consisted in supposing that the nuclear forces were symmetrical under the substitution of the proton and neutron with arbitrary linear superpositions of these two states. Of course, this symmetry is not respected by the EM Interaction (EMI), which distinguishes the proton (positive electric charge) from the neutron (zero electric charge). In analogy with what happens for particle spin (hence the name of the symmetry), the symmetry implied that the nuclei occurred in multiplets of isotopic spin I, with 2I+1 states and electrical charges one unit apart, according to the rule: Q = I3 + ½B, I3 = -I, -I+1, …, +I, (20), where Q is the electric charge in units of the proton charge, B is the Baryonic Number, and I3 is the third component of the isotopic spin, analogous to the magnetic quantum number of the angular momentum [1]. The surprise was that even hadrons respected isospin symmetry and presented themselves in multiplets, each characterized by a value I of the isotopic spin and by electric charges given by a formula analogous to Eq. (20): Q = I3 + ½(B+S), I3 = -I, -I+1, …, +I, (21), where S is a new quantum number introduced by Gell-Mann to characterize the strange particles (S=0 for nucleons and pions, S=+1 for K+, K0, S=-1 for the hyperon Λ, etc.).
Equation (21) is known as the Gell-Mann and Nishijima formula. At the beginning of the fifties, attention was focused on Nuclear Interactions. In the first pion beam experiments at the Chicago Cyclotron, Fermi had observed the first baryonic "resonance" and obtained the surprising confirmation of the isospin symmetry, which characterizes the particles sensitive to the Strong Nuclear Interaction (SI). Fermi's observations showed convincingly that the symmetry was not something accidental, but referred to a fundamental property of the SI, of general validity.
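The Gell-Mann and Nishijima rule of Eq. (21) can be checked directly against the quantum numbers quoted in the text (a small verification sketch; the particle table below is our own illustrative selection):

```python
def charge(I3, B, S):
    """Gell-Mann-Nishijima formula, Eq. (21): Q = I3 + (B + S)/2,
    in units of the proton charge."""
    return I3 + 0.5 * (B + S)

# name: (I3, B, S, expected charge Q)
particles = {
    "proton":  (+0.5, 1,  0, +1),
    "neutron": (-0.5, 1,  0,  0),
    "pi+":     (+1.0, 0,  0, +1),
    "K+":      (+0.5, 0, +1, +1),
    "K0":      (-0.5, 0, +1,  0),
    "Lambda":  ( 0.0, 1, -1,  0),
}

for name, (I3, B, S, Q) in particles.items():
    assert charge(I3, B, S) == Q, name
```

With S = 0 the rule reduces to Eq. (20) for ordinary nucleons, as the text notes.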
The basic rule of isospin symmetry can be summarized in the substitution: (p, n) → U(p, n), (22), where p is the proton, n the neutron, and U is a complex 2×2 matrix [1].
It is important to bear in mind that, as Maiani reminds us, this last substitution is similar to (15), but with 2 substantial differences: 1) The product of the matrices does not have the commutative property, which the product of the gauge factors in Eq. (15) has.
2) The transformations illustrated in Eq. (22) are global transformations, unlike Eq. (15), which shows local transformations, where the phase attributed to Ψ(x) is different from one point to another, but without significant variations of the examined physical system. Transformation (15), then, is invariant, which implies exclusively massless particles! In transformation (22), instead, the states refer to a definition of proton and neutron which must be shared, at a given instant of time, by all observers of the Universe and which is transformed by the U matrix at all points of the Universe simultaneously [1].

YANG-MILLS' ISOSPIN SYMMETRY THEORY
Taking inspiration from the Isospin Symmetry introduced by Heisenberg [50], Yang and Mills propose to formulate an Isospin Symmetry Theory that does not suggest, to put it as Einstein did, any "spooky action-at-a-distance" [1]. We learn from the A.A.: "We wish to explore the possibility of requiring all interactions to be invariant under independent rotations of the isotopic spin at all space-time points, so that the relative orientation of the isotopic spin at two space-time points becomes a physically meaningless quantity: the EM field (EMF) being neglected. We define isotopic gauge as an arbitrary way of choosing the orientation of the isotopic spin axes at all space-time points, in analogy with the electromagnetic gauge, which represents an arbitrary way of choosing the complex phase factor of a charged field at all space-time points. We then propose that all physical processes (not involving the EMF) be invariant under an isotopic gauge transformation, Ψ → Ψ′, Ψ′ = S⁻¹Ψ, where S represents a space-time dependent isotopic spin rotation. Let Ψ be a two-component wave function describing a field with isotopic spin ½. Under an isotopic gauge transformation it transforms by Ψ′ = S⁻¹Ψ, where S is a 2×2 unitary matrix with determinant unity" [51]. In other words, the synthesis of the construction of the Abelian gauge theory of London and Weyl is extended to a non-Abelian gauge theory [1]. To do this, Yang and Mills replace the one-dimensional unitary symmetry group U(1), to be considered as the set of rotations on the plane, with a compact Lie group, expression of a set of rigid movements in a multi-dimensional space. However, while U(1) is Abelian, or commutative (a series of rotations add up), the compact Lie group is not Abelian, giving rise to a much more complicated gauge theory. Yang and Mills suggest that even the Nuclear Interactions can be described by a gauge theory: a false step, in our opinion.
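The key algebraic difference can be seen with a few lines of arithmetic. U(1) gauge factors are plain complex phases, which always commute; SU(2) elements (built here from the Pauli matrices σx and σz, with illustrative rotation angles of our own choosing) generally do not:

```python
import cmath, math

def matmul(A, B):
    """Product of two 2x2 complex matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, phi = 0.4, 0.9   # arbitrary rotation angles

# U(1): gauge factors are plain phases, and phases always commute.
u, v = cmath.exp(1j * theta), cmath.exp(1j * phi)
assert abs(u * v - v * u) < 1e-12

# SU(2): exp(i*theta*sigma_x) and exp(i*phi*sigma_z) as 2x2 matrices.
A = [[math.cos(theta), 1j * math.sin(theta)],
     [1j * math.sin(theta), math.cos(theta)]]
B = [[cmath.exp(1j * phi), 0], [0, cmath.exp(-1j * phi)]]

# The group is non-Abelian: the order of the "rotations" matters.
AB, BA = matmul(A, B), matmul(B, A)
assert any(abs(AB[i][j] - BA[i][j]) > 1e-6
           for i in range(2) for j in range(2))
```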
In fact, we read: "The main problem with this model is that the gauge symmetry prohibits the presence of mass terms for the vector bosons mediating the interaction. However, an interaction mediated by null-mass particles has to produce long-range effects, which, however, are completely absent in the phenomenology of Nuclear Interactions" [6]. In fact, the Yang-Mills Theory can reproduce Maxwell's EM theory in both a classical and a quantum version. In the classical version, the waves described are of zero mass, propagating at the speed of light. Whereas, if we try to unify electromagnetism and the Nuclear Interactions with the quantum version of the Yang-Mills theory, a problem arises: every particle is a wave, which absolutely cannot be massless! In fact, the experimental evidence associates the Nuclear Interactions with massive particles.

YANG-MILLS EQUATION
Yang and Mills introduce the reader to the new field they highlighted and called the b field. Bold-face letters denote three-component vectors in isotopic space, not in space-time [51]. We then get to the equations describing the b field: "To write down the field equations for the b field, we clearly only want to use isotopic gauge-invariant quantities. In analogy with the electromagnetic case we, therefore, write down the following Lagrangian density: −¼ fμν·fμν. We shall use the following total Lagrangian density: L = −¼ fμν·fμν − Ψ̄ γμ(∂μ − iε τ·bμ)Ψ − m Ψ̄Ψ. One obtains from this the following equations of motion" [51]: ∂fμν/∂xν + 2ε(bν × fμν) + Jμ = 0 (25), and γμ(∂μ − iε τ·bμ)Ψ + mΨ = 0 (26), where: Jμ = iε Ψ̄ γμ τ Ψ (27). As is known, (25) is the famous Yang-Mills equation: it represents the equation of motion of the b field, or the Yang-Mills field, that is, the nucleonic strong field, which today we can call the gluon field (or color field). "The Yang-Mills equation is the equivalent of Maxwell's equations, or Newton's equations of motion" [52]. Regarding equation (25), it can be useful to remember that fμν describes the intensity of the Yang-Mills field; ∂/∂xν specifies that this equation depends on the way the intensity of the field changes with space and time. In fact, since the derivatives with respect to the spatial coordinates appear, as the distance increases the intensity of the field decreases proportionately. The parameter ε represents the charge; Jμ is the current of "the spin-½ field" [51]; bμ is the potential of the b field, whose quanta are the Yang-Mills b quanta, i.e. the quanta going through the Yang-Mills field.
Maiani says: "In Yang-Mills theory, as in electrodynamics and in General Relativity, the symmetry (invariance under local non-Abelian transformations) determines the interaction of the vector fields with matter (the nucleons). The intensity of the interaction is fixed by a constant, g, completely analogous to the electric charge (e) that appears in equation (12). Unlike electrodynamics, however, the vector fields (bμ and fμν) are themselves sensitive to the non-Abelian transformations and therefore interact with each other in a way also completely determined by the symmetry and the interaction constant g" [1]. In fact: "the term bν × fμν provides an important difference compared to Maxwell's equations, since it emphasizes the dependence of the Yang-Mills field on itself" [52]. Sutton adds: "The remarkable paper that Yang wrote with Mills demonstrates for the first time how the symmetry of gauge invariance could indeed specify the behavior of a fundamental force" [52].
However, doubts began to arise. "Such doubts" as Maiani reminds us "were due in large part to the problem of the symmetry of the quarks (Qs) wave function in baryons and to the failure of attempts made up to then to observe Qs in high-energy collisions or in Nature, like stable particles, remnant of the Big Bang originating in our universe" [1].
As regards the first point, to describe the overall structure of spin and charge of baryons, it is necessary that the state of three Qs be completely symmetrical under the exchange of the Qs themselves, in contrast with the spin-statistics relation, which requires that spin-½ particles obey the Fermi-Dirac statistics and therefore have a completely antisymmetric wave function. The problem of the symmetry of the wave function of baryons, by spin and flavor, finds a natural solution if we assume that a Q of a given flavor has an additional quantum number, or color, which takes three values. It is possible to satisfy the Pauli principle if we assume that the baryons are in the completely antisymmetric state in the new quantum numbers, an invariant configuration for color transformations (color singlet). In 1965, Han and Nambu [53] gave an elegant formulation of this hypothesis, introducing an SU(3) symmetry that operates on the color indices and hypothesizing that the color symmetry was a gauge symmetry and the gluons (Gs) were the Yang-Mills fields associated with color itself. This theory was called Quantum Chromo-Dynamics by its proponents, Fritzsch and Gell-Mann [54], i.e. the Yang-Mills quantum theory based on color [1].
Yang and Mills specify the total isotopic spin current (𝔍μ): "We define: 𝔍μ = Jμ + 2ε bν × fνμ (28). Equation (28) shows that the isotopic spin current arises both from the spin-½ field (Jμ) and from the b field itself. Inasmuch as the isotopic spin is the source of the b field, this fact makes the field equations for the b field nonlinear, even in the absence of the spin-½ field. This is different from the case of the EMF, which is itself chargeless and consequently satisfies linear equations in the absence of a charged field" [51]. "The Yang-Mills principle can be summarized in a slogan: force is curvature. Curvature is associated with surfaces. It represents the turning or rotating that takes place when some object is transported around the closed curve bounding a surface. An example of curvature is a non-zero magnetic field. The Yang-Mills equation constrains the curvature: it says that the divergence of the field F is the current J. There is an operator ∂A that plays the role of a divergence, and the equation is:

∂A F = J (29).

This scheme is a generalization of electromagnetism. In electromagnetism, the current consists of the ordinary current and charge. The curvature is a combination of electric and magnetic fields. The connection is made of electric and magnetic potentials. The electromagnetic (EM) example has one special feature: the vector space has complex dimension N = 1, and so the matrices commute. This means that the non-linear term is not present in the EM case. The striking feature of the general Yang-Mills equation is that the matrices do not commute; the non-linear term is a necessary feature, imposed by the geometry. A solution is then called a non-Abelian gauge field" [55].
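The "non-linear term" described above is the commutator of the matrix-valued potentials. A short sketch (with Pauli matrices standing in for two components of a non-Abelian potential; the values are chosen purely for illustration) shows that it vanishes in the N = 1 Abelian case but not in general:

```python
def commutator(A, B):
    """[A, B] = AB - BA for 2x2 complex matrices as nested lists."""
    AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    BA = [[sum(B[i][k] * A[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# Abelian (EM) case: the potentials are plain numbers, so the
# non-linear term vanishes identically.
a_mu, a_nu = 0.7, -1.3
assert a_mu * a_nu - a_nu * a_mu == 0.0

# Non-Abelian case: potentials proportional to Pauli matrices.
A_mu = [[0, 1], [1, 0]]      # sigma_x
A_nu = [[0, -1j], [1j, 0]]   # sigma_y
C = commutator(A_mu, A_nu)   # equals 2i * sigma_z: nonzero
assert any(abs(C[i][j]) > 0 for i in range(2) for j in range(2))
```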

YANG-MILLS EQUATION IN CONFLICT WITH NATURE
Yang and Mills write: "The quanta of the b field clearly have spin unity and isotopic spin unity. We know their electric charge too because all the interactions that we proposed must satisfy the law of conservation of electric charge, which is exact. The two states of the nucleon, namely proton and neutron, differ by charge unity. Since they can transform into each other through the emission or absorption of a b quantum, the latter must have three charge states with charges ±e and 0" [51].
Again, as Sutton tells us: "Since 1949 Yang tried several times to apply the gauge-invariance procedures of electromagnetism to the isospin. These attempts, according to Yang, always led to a state of confusion, and he got bogged down in the calculations at the same point: when trying to define the intensity of the corresponding field. Yang returned once more to these ideas in the summer of 1953, with Robert Mills. Together they crossed the barrier that had stopped Yang and discovered the equations for the field associated with the isospin gauge symmetry. Yang and Mills then knew the charge and the isospin of the new field particles, but they had no idea of their mass, and acknowledged that this was a weakness in their theory" [52].
To this purpose, Maiani points out: "The problem is that of the mass of the particles associated with the vector fields. In the approximation g = 0, these particles all have zero mass, just as for the photon in electrodynamics" [1]. However, this is "in conflict with the observation that in Nature there are no other particles of zero mass besides the photon" [1].
And yet we read from the INFN (National Institute of Nuclear Physics) handouts: "What is the mass? The concept of mass has evolved over time hand in hand with the deepening of knowledge of Nature. 1) Classical mechanics (Newton 1687): mass = quantity of matter. 2) Relativistic Mechanics: mass = energy. 3) For us, today, mass is an intrinsic property of particles: mass = energy of a particle at rest" [56]. As well: "Mass is measured in terms of energy because mass is a form of energy. Remember: E = mc2" [57]. Thus, a particle endowed with energy could never be considered massless. Yet, even highly energetic particles, such as the gluon (G), for example, are considered massless. Indeed, Maiani adds: "In electrodynamics the mass of the photon is zero in the limit e = 0, and this is maintained even in the presence of interaction, precisely because of the gauge invariance. Yang and Mills state that this argument cannot be extended to their theory, thus keeping open the possibility that the corrections of order g2 can push the mass of vector fields to values other than zero" [1].
And this seems like the right way to go! In fact, in the final part of their work, Yang and Mills highlight the well-known mass gap problem inherent in the b quantum: "We next come to the question of the mass of the b quantum, to which we do not have a satisfactory answer. One may argue that without a nucleon field, the Lagrangian would contain no quantity of the dimension of a mass and that therefore, the mass of the b quantum in such a case is zero" [51].
Instead, with current knowledge, the new field proposed by Yang and Mills, or b field, identifiable with a nucleonic field, does exist. It is the strong field: it permeates every atomic nucleus, as well as the interior of each nucleon, containing the 3 valence Qs, the Gs, and the partonic sea. Obviously, it is not a well-defined, fenced space, but the dynamic result of delicate balances. Thus, the existence of the nucleonic field implies that the Lagrangian to which Yang and Mills refer contains quantities of the dimension of a mass, and that therefore the mass of the b quantum need not be zero. Just referring to this last concept, the A.A. conclude: "This argument is, however, subject to the criticism that, like all field theories, the b field is beset with divergences, and dimensional arguments are not satisfactory. A conclusion about the mass of the b quantum is, of course, very important in deciding whether the proposal of the existence of the b field is consistent with experimental information. For example, it is inconsistent with present experiments to have their mass less than that of the pions, because among other reasons they would then be created abundantly at high energies, and the charged ones should live long enough to be seen. If they have a mass greater than that of the pions, on the other hand, they would have a short lifetime (say, less than 10⁻²⁰ sec) for decay into pions and photons and would so far have escaped detection" [51].
That is to say, the conditio sine qua non for "the proposal of the existence of the b field" advanced by Yang and Mills to be valid and acceptable imposes that the b quantum is massive! Today we know that the b field proposed by Yang and Mills is a real, universally recognized field.
Nevertheless, the gauge boson of this field, namely the b quantum of Yang-Mills, now identifiable with the Gs, is still considered to be massless. Penrose says: "Yang and Mills had been anticipated in the years following the Second World War by Pauli, but his argument was rejected because gauge particles had to be massless" [22].
"Even before publishing their work, Yang presented the theory in a seminar at Princeton in February 1954 and was attacked by Wolfgang Pauli. As soon as Yang had written on the blackboard an expression designating the new field, Pauli asked him: what is the mass of this field? When Yang explained that it was a complicated problem and that he and Mills had not yet reached precise conclusions, Pauli replied bitterly: this is not a sufficient excuse" [52].
In this respect, Casalbuoni says: "Still in a letter to Yang, Pauli said: I was and still am disgusted and discouraged by these zero mass vector fields!" [26].
Yang tells: "the idea was beautiful and had to be published. But what is the mass of the gauge particle? We did not have any firm conclusion, but only frustrating experiences, which showed that this case is much more complicated than electromagnetism. We tended to believe, for physical reasons, that the gauge particles having charge could not be massless" [58], and indeed their mass should be even "greater than the mass of pions" [59].
Therefore, Yang and Mills themselves were convinced, precisely for "physical reasons" [58], that the gauge particle of the b field, represented by the b quantum and today described as the G, could not be massless. Yang and Mills were convinced that the mass of the b quantum is at least greater than that of the pions, although they were not able to quantify its value [59]. "We must acknowledge that Yang and Mills realized that there were some dark sides in the problem of the mass of vector fields, such as to justify further investigations. It is said that Pauli discovered non-Abelian gauge theories on his own, but he had not published the results because he thought the question of mass was an insuperable obstacle. The success of isotopic spin symmetry suggested that the Strong Interactions (SI) theory was its natural field of application, and the spin 1 mesons discovered in 1961 were for some time identified with the SI gauge fields. However, some others, like J. Schwinger, indicated the weak interactions (WI) and electromagnetic interactions (EMI) as the natural field of application of the ideas of Yang and Mills" [1]. Thus, Gershtein and Zel'dovič in 1957 [60] and Feynman and Gell-Mann in 1958 [61] proposed that WIs are of a vector nature, therefore due to the exchange of spin 1 electrically charged particles, or intermediate vector bosons carrying the WI: the W+ and W− particles. Thus, the isospin symmetry, so prominent in nuclear phenomena, represents the basis for a WI and EMI gauge theory, unifying them in an Electro-Weak Interaction (EWI).
The scheme on which to adapt these ideas is different from that of equation (22) and is based on matter doublets (hadronic matter and leptonic matter), which all transform at the same time under the transformations of a new group SU(2), or weak isospin:

N = (p, n);  Le = (νe, e);  Lμ = (νμ, μ);  N → UN,  L → UL (30)

where: N = nucleon; p = proton; n = neutron; Le,μ = electronic or muonic lepton; e = electron; μ = muon; ν = neutrino; νe = electronic neutrino; νμ = muonic neutrino, and U is a 2×2 complex matrix.
As Maiani points out "The main obstacle on this line is represented by the mass of intermediate bosons which, far from being null as the Yang-Mills theory suspected, must be big enough so that these particles do not give visible effects in the weak decays of the neutron, muon, and other particles" [1]. Therefore, in 1961, Schwinger proposed to Glashow, as the theme of his thesis, to try to elaborate a theory that unifies the WI with the EMI, based on the Yang-Mills theory.
In this regard, Weinberg tells us: "The history of attempts to unify WI and EMI is very long. Possibly the earliest reference is E. Fermi (1934). A model similar to ours was discussed by S. Glashow (1961); the chief difference is that Glashow introduces symmetry-breaking terms into the Lagrangian, and therefore gets less definite predictions" [62]. In fact, Glashow proposed to limit his considerations to lepton doublets only, and to insert ad hoc mass terms for the vector bosons in the Lagrangian, assuming that the gauge symmetry could be explicitly violated by these masses (as the isotopic spin symmetry is violated by the proton-neutron mass difference) without losing its main virtues [63].
We read again: "Even if this hope turned out to be unfounded (the theory is not renormalizable and therefore mathematically inconsistent) the theory of Yang-Mills with ad hoc mass terms has been an important tool to explore the phenomenological properties of electro-weak unification. Glashow first identified the appropriate gauge group to describe the electroweak interactions, the SU(2) ⊗ U(1) group, with the consequent need for the existence of a neutral intermediate boson, the Z, in addition to the charged W bosons and the photon (the latter hypothesized by Schwinger, as EWI bosons). It has long remained mysterious how one could "give a mass" to the vector bosons in a theory with weak coupling (g ≈ e) " [1].

QUARK and CABIBBO ANGLE
As is known, Gell-Mann [64] and Zweig [65] independently proposed that hadrons are simply aggregates of more fundamental constituents, to which Gell-Mann gave the name of quark (Q) [64]. Three types of Q are enough to reproduce the hadrons observed until then: the up Q (uQ), the down Q (dQ), and the strange Q (sQ). Therefore, the classification under the weak isospin represented in (30) is updated to the scheme:

(u, d), with s a weak isospin singlet (32)

However, the negative beta decay (βd−) of the strange particles, such as

Λ → p + e− + ῡe (34)

(where ῡe is the electronic anti-neutrino), corresponds to the transition uds → uud, or sQ → uQ, which could not occur in this scheme, because the sQ would have isospin 0 and would not be coupled to the W−, that is, the boson carrying the WI that governs the βd−. To this purpose, Cabibbo [66] observed that the WI may not respect the scheme (32), but requires that the dQ with defined weak isospin be a superposition of the quarks d and s with a mixing angle θC, since then known as the Cabibbo angle [1]. In this case, the weak isospin scheme is:

(u, dC), with dC = d cos θC + s sin θC (35)

Comparing the decays of the baryons having strangeness with the neutron βd−, the value we obtain is: sin θC = 0.225 (36).
As Maiani reminds us, "The classification in Eq. (35) is not yet satisfactory to extend the Cabibbo theory to a unified Yang-Mills theory. If we do this, the neutral boson, Z°, would produce processes with change of strangeness, of the type K° → μ+μ− (37), which are observed to proceed with much lower probabilities than the processes mediated by the W, for example the βd− shown in eq. (34)" [1]. In 1970 Glashow, Iliopoulos, and Maiani (GIM) hypothesized the existence of a fourth Q flavor, the charm Q (cQ), which would avoid the conflict just reported [67]. Thus, the Cabibbo scheme should be changed to a more symmetrical pattern:

(u, dC), (c, sC), with sC = s cos θC − d sin θC; (νe, e), (νμ, μ) (38)

The structure of (38) indicates that quarks (Q) and leptons (L) must be arranged in two identical "generations", with perfect symmetry between quarks and leptons. Moreover, the 3 A.A. (GIM) show that the existence of the 4th Q flavor, or cQ, suppresses the processes with exchange of the Z° boson and change of strangeness (GIM mechanism), also deducing the probable mass of the cQ: ≈2 GeV [1]. Finally, the mass-gap problem of the Yang-Mills theory was solved with the Brout-Englert-Higgs Mechanism (BEH-M), linked to the spontaneous breaking of the gauge symmetry [17] [18]. The theory formulated by Weinberg [62] and Salam [68] adopts the symmetry introduced by Glashow in 1961 and incorporates the BEH-M, solving the problem of the mass of the vector bosons, but it is still limited to lepton doublets, leaving unsolved the problem of the strange matter. On a formal level, the problem was closed in 1972 by 't Hooft and Veltman [69], who showed that the Weinberg and Salam theory, with the mass of the intermediate bosons associated with spontaneous symmetry breaking (SSB), can be renormalized like electrodynamics.
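The GIM suppression just described can be verified with a two-line calculation (our illustrative sketch, using the standard rotation notation; the derivation below is not taken verbatim from [67]): rotating the pair (d, s) by the Cabibbo angle leaves the neutral-current combination diagonal, so no strangeness-changing term survives for the Z° to couple to.

```latex
\begin{aligned}
d_C &= d\cos\theta_C + s\sin\theta_C, \qquad
s_C = -\,d\sin\theta_C + s\cos\theta_C \\
\bar d_C d_C + \bar s_C s_C
  &= (\cos^2\theta_C + \sin^2\theta_C)\,(\bar d d + \bar s s)
   + (\cos\theta_C\sin\theta_C - \sin\theta_C\cos\theta_C)\,(\bar d s + \bar s d) \\
  &= \bar d d + \bar s s .
\end{aligned}
```

The cross terms proportional to (d̄s + s̄d), which would let the Z° mediate K° → μ+μ−, cancel exactly: this is the GIM mechanism.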
However, "something is still missing for physics, since it does not allow us to introduce any violation of the symmetry between matter and antimatter, or CP violation, in the WI, where by C we mean the charge Conjugation (which allows us to change every particle into its antiparticle) and by P the Parity (which reverses the orientation of the coordinate axes). In this way, we must postpone the explanation of the CP violation observed in the decays of the K° mesons to a new interaction, to be introduced ad hoc. Then, extending the construction of Cabibbo, in 1973 Kobayashi and Maskawa [70] showed that the existence of a further generation of Qs and leptons allows the introduction of complex coefficients in the weak current, besides confirming the violation of CP" [1]. Thus, the scheme (38) should be extended to 3 particle families: (u, d′), (c, s′), (t, b′); (νe, e), (νμ, μ), (ντ, τ).

GLASHOW THEORY
The Yang-Mills Lagrangian reduces, in the limit of null coupling constant (g = 0), to a Maxwell Lagrangian for each gauge field [71]. At this point Glashow, to avoid the presence of zero-mass bosons, adds a mass term (M) [63]. In the case of the charged fields (i = 1, 2), it is possible to define W±μ = (A¹μ ∓ iA²μ)/√2, and we get equation (44). Maiani describes equation (44): "The first line of (44) defines two spin 1 bosons, with electric charge ±1 and mass M. As for the neutral fields, the second and third lines of (44) define the mass matrix (45). This matrix is not completely arbitrary, because it must have a zero eigenvalue, corresponding to the zero mass of the photon. We must therefore impose: det M = 0 ⇒ (M₀₃²)² = M²M₀² (46). We write the eigenvectors of the matrix (M) illustrated in equation (45) as combinations of the neutral fields, where Aμ is the electromagnetic field and Zμ is a new, electrically neutral vector field. The non-zero eigenvalue of M is simply given by its trace" [71], from which the mass of the Zμ boson follows. In short, Glashow knew that the bosons of a Nuclear Force cannot be massless; otherwise, their range of action would extend to infinity! Thus Glashow, in trying to unify the EMI with the WI (as suggested by Schwinger), had to solve a very complicated problem. On the one hand, the QFT equations related to the gauge theories categorically impose that all the particles be massless. On the other hand, there emerges an absurd complication: in order not to collapse the whole theoretical construction of gauge invariance, and with it the QFT, in obvious opposition to the Yukawa Principle [72], the bosons of a Nuclear Force are also considered to be massless; thus, like the photon, they should exert their action over unlimited distances. Obviously, Glashow could not accept these absurd concepts, in full conflict with physical reality and experimental events, so he introduced the massive bosons by hand, as illustrated in equation (44).
This equation, however, violates the symmetry, a fundamental presupposition for gauge theories, and furthermore, it cannot be renormalized.
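The role of the condition det M = 0 in (46) can be illustrated numerically (a minimal sketch with arbitrary example values for the matrix entries, not Glashow's actual parameters): a symmetric 2×2 mass matrix with vanishing determinant always has one zero eigenvalue, the massless photon, while the other eigenvalue equals the trace, giving the massive neutral boson.

```python
import numpy as np

# Illustrative 2x2 neutral-sector mass matrix (arbitrary units).
# We impose det M = 0, i.e. (M03^2)^2 = M^2 * M0^2, by construction:
m3sq, m0sq = 4.0, 9.0                 # diagonal entries M^2, M0^2
m03sq = np.sqrt(m3sq * m0sq)          # off-diagonal fixed by det M = 0
M = np.array([[m3sq, m03sq],
              [m03sq, m0sq]])

eigenvalues = np.linalg.eigvalsh(M)   # returned in ascending order
print(eigenvalues)  # one zero eigenvalue (photon), one equal to tr M
```

Whatever example values are chosen for the diagonal entries, the zero-determinant constraint pins one eigenvalue to exactly zero, which is why Maiani calls the matrix "not completely arbitrary".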

WEINBERG-SALAM THEORY
As we all know, three years after Glashow formulated his theory, the BEH-M was invented ad hoc: a cumbersome mechanism, curiously asymmetrical. As Randall reminds us, "it lavishes mass only on WI-sensitive particles; thus, among the vector bosons of the fundamental forces, only the particles carrying the WI acquire mass, while the photon and the gluon remain massless" [21]. This is how "the message of Brout, Englert and Higgs is readily taken up. In fact, in 1967, both Weinberg and Salam, independently, arrive at the same solution. The scheme is the one outlined by Glashow in 1961, starting from the Yang-Mills theory of 1954.
In the Weinberg and Salam scheme, the symmetry group is the same as Glashow's, but the Action is perfectly symmetrical" [73]. That is, in the Weinberg-Salam model some additional scalar fields are introduced, whose condensate breaks the symmetry, providing at the same time the required masses. In this model, as is known, "the starting point is the theory based on the symmetry SU(2)L ⊗ U(1)Y in its perfectly symmetrical version, i.e. without ad hoc mass terms for the vector fields and for the electron field" [71]. The symmetry SU(2)L ⊗ U(1)Y indicates the symmetry group of weak isospin that unifies the EMI and the WI. The subscript Y distinguishes this copy of U(1) from that of electromagnetism, indicated with U(1)Q. To be precise, SU(2)L represents the weak isospin, while U(1)Y is the weak hypercharge.
Maiani adds: "The Lagrangian follows from the classification under SU(2)L ⊗ U(1)Y of the lepton fields. The corresponding Yang-Mills Lagrangian of the Electro-Weak Interaction (LeW) then follows, together with the covariant derivatives and field tensors" [71]. As we know, the Yang-Mills Lagrangian describes fermions and vector fields, all massless. The novelty proposed by Weinberg and Salam was to introduce into this Lagrangian a scalar field capable, in turn, of inducing the symmetry breaking, while leaving the gauge symmetry of electromagnetism unchanged, as shown in the diagram: SU(2)L ⊗ U(1)Y → U(1)em (54). To this purpose, Maiani points out: "On the scalar field we have little information and different possibilities. The choice of Weinberg and Salam allows the spontaneous breaking mechanism to also generate the mass of the electron and, subsequently, of the quarks (in the extension to the other nuclear particles), so as to take us to a completely realistic theory. The choice in question consists in introducing a doublet of SU(2)L, with Y = +1" [71]: φ = (φ+, φ°), where φ represents the Higgs doublet, which is equivalent to 4 real fields. At this point Weinberg and Salam add to the Yang-Mills electro-weak Lagrangian (LeW) "the BEH-M in order to give a mass to the gauge bosons (well, not really to all of them) and to the fermions. The added field, with 4 components, must be a multiplet of SU(2)L ⊗ U(1)Y in order to preserve the gauge invariance of the LeW (59): the minimal choice. The potential chosen for the LeW (58) is the usual one (μ² < 0, λ > 0), whose minimum gives the field a vacuum expectation value of zero electric charge, Q0 = 0. This vacuum breaks the symmetry SU(2)L ⊗ U(1)Y but preserves the invariance under U(1)em (if Q0, the charge of the Higgs vacuum, is 0). This guarantees the presence of a neutral boson without mass (the photon), and of 3 other gauge bosons with mass: the particles W+, W− and Z°" [74].
Weinberg adds: "The spontaneous breakdown of SU(2)⊗U(1) to the U(1) of ordinary electromagnetic gauge invariance would give masses to three of the four vector gauge bosons: the charged bosons W±, and a neutral boson that I called the Z°. The fourth boson would automatically remain massless, and could be identified as the photon. Knowing the strength of the ordinary charged-current weak interactions, like beta decay, which are mediated by W±, the mass of the W was determined as about 40 GeV/sin θ, where θ is the γ–Z° mixing angle. To go further, one had to make some hypothesis about the mechanism for the breakdown of SU(2)⊗U(1). The only kind of field in a renormalizable SU(2)⊗U(1) theory whose vacuum expectation values could give the electron a mass is a spin-zero SU(2) doublet (φ+, φ°), so for simplicity I assumed that these were the only scalar fields in the theory. The mass of the Z° was then determined as about 80 GeV/sin 2θ" [75].
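Weinberg's estimates can be reproduced with a quick tree-level computation (a sketch assuming today's measured values of the fine-structure constant, the Fermi constant, and the weak mixing angle, with radiative corrections ignored): M_W ≈ 37.3 GeV/sin θ and M_Z = M_W/cos θ, consistent with the 40 GeV/sin θ and 80 GeV/sin 2θ estimates quoted above.

```python
import math

# Tree-level electroweak boson masses.
# Inputs are illustrative present-day values (no radiative corrections).
alpha = 1.0 / 137.036          # fine-structure constant
G_F = 1.1664e-5                # Fermi constant [GeV^-2]
sin2_theta = 0.231             # sin^2 of the weak mixing angle

# M_W^2 = pi*alpha / (sqrt(2)*G_F*sin^2(theta))  ->  ~ (37.3 GeV / sin(theta))^2
M_W = math.sqrt(math.pi * alpha / (math.sqrt(2) * G_F * sin2_theta))
M_Z = M_W / math.sqrt(1.0 - sin2_theta)   # M_Z = M_W / cos(theta)

print(round(M_W, 1), round(M_Z, 1))  # -> 77.6 88.5 (GeV, tree level)
```

The measured values (about 80.4 and 91.2 GeV) differ from this tree-level estimate by a few percent, the size of the neglected radiative corrections.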

WEINBERG CONSIDERATIONS
Weinberg points out: "One of the essential elements of the Standard Model (SM) is the symmetry between 2 of the 3 Forces included in the SM: the Electro-Magnetic Interaction (EMI) and the Weak Nuclear Interaction (WI). This symmetry unites the two Forces in a single electro-weak (EW) structure. One of the consequences of the EW symmetry is that, if no other ingredients are added to the theory, all elementary particles, including electrons and quarks, are massless, and this is patently false. Therefore it is necessary to add something new to the theory: some new type of field or matter. Somehow the EW symmetry, an exact property of the fundamental equations underlying particle physics, had to be broken: in other words, it is not directly applicable to the particles and forces we observe in reality" [76]. In fact, Hawking specifies: "The Weinberg-Salam theory has a property known as spontaneous symmetry breaking (SSB). This means that particles which at low energies (the common energies) seem to be completely different all behave in a similar way at high energies. Thus the gauge bosons of the WI (the W+, W− and Z° particles) and of the EMI (the photon), in the Weinberg and Salam theory, at energies much higher than 100 GeV, all behave in a similar way: there is a symmetry between the particles. Whereas, when the Universe began to cool down to lower energies, like the current ones, this symmetry between the particles was destroyed (SSB), so the WI bosons acquired a large mass, so that the forces they carry have a very small range" [77].
Weinberg goes on: "as early as 1960-61, with Nambu and Goldstone, it was known that a symmetry breaking of this kind (SSB) is possible in several theories: this implied the existence of new massless particles (the Nambu-Goldstone bosons) which, it was known, do not exist. It was the independent studies of Brout and Englert, Higgs, and Guralnik, Hagen and Kibble, all of 1964, that showed that in some theories these massless Nambu-Goldstone bosons disappear, giving mass instead to the force-mediating particles (gauge bosons): this is what happens in the EW theory proposed by Salam and myself in 1967-68. What kind of matter or field breaks the EW symmetry? There were two possibilities: 1) the existence of fields, never observed, which pervade the vacuum and which (as the Earth's magnetic field distinguishes the north from the other directions) distinguish the EMI from the WI, giving mass to the mediating particles of the WI and to other particles, but leaving the photons (the mediators of the EMI) massless. These fields are called scalars because, unlike the EM field, they do not identify any direction in ordinary space. Scalar fields of this type were introduced in the illustrative examples of symmetry breaking (SB) used by Goldstone and then, in 1964, by the various A.A. just mentioned. Salam and I used this SSB to elaborate the EW theory, assuming that the breaking was due to scalar fields of the type described, pervading all space (an SSB of this kind had already been hypothesized by Glashow, Salam, and Ward, but not as an exact property of the equations of the theory, for which reason they were not induced to introduce scalar fields).
One of the consequences of the theories in which symmetries are broken by scalar fields (including the models considered by Goldstone, or in the cited 1964 articles, as well as the EW theory of Salam and myself) is that, although some of these fields serve only to give mass to the force-mediating particles, other fields should appear in nature as physical particles, observable in accelerators and particle colliders. Three of these scalar fields were used to give mass to the W+, W− and Z° particles, i.e. the heavy photons that in our theory carry the WI. A 4th scalar field remained, which shows up as a physical particle, that is, a concentration of energy and momentum of the field itself: the Higgs particle.
But there was always a 2nd possibility: 2) there could be nothing new: no pervasive scalar field, nor any Higgs particle. The symmetry could be broken by strong forces, called Technicolor Forces, acting on particles of a new type, never seen so far because they are too heavy" [76].
What if these very heavy particles were identified with the gluon (G), the boson carrying the Strong Interaction (SI)? There is an incomprehensible and unjustifiable asymmetry, in our opinion, between the two different known nuclear forces. They have bosons with antipodal masses: on one side the WI, carried by very heavy gauge bosons, between 80 and 91 GeV, and on the other the SI, conveyed by bosons considered massless, although it too operates in the very restricted space of a nucleus or nucleon. It is absolutely unjustified, an apparent contradiction, physically and mathematically unacceptable, that a massless particle shows such a limited range of action. On the contrary, as everyone knows, a massless particle should have an infinite range of action, like the photon. And instead, the experimental evidence shows that this is not the case at all. Moreover, a massless G is in open and unacceptable contrast with the Yukawa Principle [72], according to which the mass (m) of the boson carrying a fundamental force must be inversely proportional to the range (R) of the force it conveys: m = h/(2πRc), where h is Planck's constant and c is the speed of light in vacuum. Moreover, the BEH-M itself would be incongruously asymmetric with respect to these two different nuclear forces. In fact, this mechanism gives a considerable mass to the WI bosons, leaving the G massless! Why? Because the Higgs field (HF) would give mass only to WI-sensitive bosons. That is why the G and the photon (P) would remain massless.
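The Yukawa relation invoked here can be checked numerically (a minimal sketch, assuming the nuclear-scale range of 10⁻¹³ cm = 1 fm discussed in the text): the implied rest energy of the exchanged boson lands at the pion scale, just as Yukawa originally estimated.

```python
# Yukawa relation m = h/(2*pi*R*c): rest energy E = m*c^2 = (hbar*c)/R.
hbar_c = 197.327    # hbar*c in MeV*fm  (1 fm = 1e-13 cm)
R = 1.0             # assumed range of the force, in fm (= 1e-13 cm)

E_MeV = hbar_c / R  # rest energy of the force-carrying boson
print(round(E_MeV)) # -> 197 (MeV): the pion mass scale (m_pi*c^2 ~ 140 MeV)
```

Running the same estimate with the much shorter gluon range quoted in the introduction would give a correspondingly larger mass, which is the proportionality the text is appealing to.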
Weinberg adds: "So these were the alternatives we had in front of us: scalar fields or Technicolor? The discovery of the new particle represents a point in favor of the scalar fields as the origin of the symmetry breaking, and against the technicolour Forces. However, much remains to be done" [76].
What if, on the other hand, the decay products detected at CERN in 2012 were not those of the hypothesized Higgs boson (HB), but those of another particle?
A particle of that weight, between 125 and 126.5 GeV [78], could very well be the G, if it were not considered massless. Moreover, from the literature we learn that many decay products and channels are in common among the W and Z° bosons, the G, and the HB.
In short, it is clearly unacceptable, from the physical and mathematical points of view as well as the logical one, to consider massless a particle as energetic as the G, which moreover operates in the very narrow range of 10⁻¹³ [cm]. Among other things, numerous A.A. scrupulously say that we cannot yet be sure that the particle detected at CERN is indeed the HB. For example, Maiani writes: "The possible discovery of the Higgs boson (obligatory to underline possible)" [79].
It is known that experiments carried out at PETRA in Hamburg in 1979 [80] indicated that the mass of the G is zero, in line with the requirements of the Standard Model, according to which in gauge theories the bosons are massless. However, we believe that the zero mass attributed to the G is patently incongruous and inconsistent according to the simplest and most basic concepts of arithmetic. In fact, a massless G would deny one of the basic principles of Special Relativity: the Mass-Energy Equivalence Principle (MEEP): E = mc² (62). According to the MEEP, a massless G implies an energyless G! In fact, considering the gluon mass as zero, we would have E = 0⋅c², and therefore E = 0; as if to say that the boson of one nuclear force, considered the most energetic boson, is massless and energyless, whereas the bosons of another nuclear force, the WI, are highly massive! It is really against the reality of the facts: Einstein's MEEP categorically forbids that the G be massless: in that case, ex abrupto, its energy (which is enormous) would instantly vanish. Anyone who claims that the G is massless states at the same time that the most famous equation in the world is not true, but misleading, wrong.
On the contrary, Yang and Mills, and so many A.A., knew that the bosons of a Nuclear Force could not for any reason be massless (in that case, their range of action would extend to infinity). For this reason, in fact, Glashow, since he could not find a mathematically congruent solution, in contrast with the gauge theories and the Quantum Fields Theory (QFT), forced the issue and introduced ad hoc massive WI bosons (see equation 44). Weinberg's activity was based on the same principles; however, at the beginning he tried to assign a mass to the bosons of the other nuclear force, the SI. We read: "Weinberg had spent a couple of years studying the effects of the Spontaneous Symmetry Breaking (SSB) in the SI described by an SU(2) ⊗ SU(2) gauge theory. As Nambu and Jona-Lasinio had discovered a few years earlier, the result of the symmetry breaking was that protons and neutrons acquire a mass. Weinberg was convinced that the Nambu-Goldstone bosons so created could be identified, to a certain approximation, with the pions" [7]. In fact, Sutton writes: "Steven Weinberg thought it promising to use the ideas of symmetry breaking in a Yang-Mills theory to describe the SI. In the beginning, as he tried to match the particles with and without mass which appeared in his theory with the particles of the strong interaction, his efforts seemed in vain" [52].
Baggott adds: "Weinberg had tried to apply the Higgs Mechanism to the SI, and now he realized that the mathematical structures he had tried to use for the SI were precisely what was needed to solve the problems of the WI and its heavy bosons" [7]. The mathematical difficulty encountered by Glashow, and then by Weinberg and Salam, in including hadrons in their unified theories emerges when one tries to extend Cabibbo's theory, see equation (35), to a unified Yang-Mills theory. To this purpose, Maiani points out: "If we did so, the neutral boson, Z°, would produce processes with strangeness change, of the type K° → μ+μ− (equation 37), which are much less frequent than the processes mediated by the boson W. Moreover, this was the reason that had prevented Glashow and, subsequently, Weinberg and Salam from including hadrons in their unifying theories" [1]. Namely, Weinberg, formerly Glashow's fellow student, "knew well that, if the masses of the W and Z° particles were added by hand, as in Glashow's SU(2) ⊗ U(1) Electro-Weak Theory, the result was a nonrenormalizable theory. He therefore wondered whether, by breaking the symmetry with the Higgs Mechanism, besides giving mass to the particles and eliminating the unwanted Nambu-Goldstone bosons, a renormalizable theory could result. There still remained the problem of the neutral currents, that is, the interactions due to the Z° particle, of which there was no experimental proof. Weinberg decided to avoid the problem by restricting his theory to leptons. Weinberg no longer trusted hadrons (the particles subject to the SI), nor the strange particles, which had become the main terrain of exploration of the WI. Then, as Weinberg recounts: 'one day in the fall of 1967, I realized, while I was driving the car, that I had applied the right ideas to the wrong problem'. He realized that the massless particle he needed was the photon, and that the particles with mass were the particles of the weak field" [7].

OPPOSING VIEWS on the SPONTANEOUS SYMMETRY BREAKING
Although the SSB is the prevailing theory, various physicists and mathematicians, even authoritative ones, do not approve of it. To this purpose, we read that in the Standard Model it is assumed that the WI and the EMI are unified in the electro-weak (EW) theory, where there is a special symmetry that connects the particles W+, W− and Z° to the photon (P). It seems that this EW symmetry is very odd and thin, since pure electromagnetism is invariant under reflection, involving both left- and right-handed components. In contrast, the WI involves only the left-handed parts of the particles. Moreover, it seems that the P is clearly distinct among all the bosons of the theory, since it is a massless particle [22].
Penrose points out: "Actually, the mass of the P, if not 0, should be < 10⁻²⁰ electron masses for good observational reasons; thus it is < 5·10⁻²⁶ of the measured mass of the W and Z bosons. In addition, the W bosons have an electric charge, while the P does not have a weak charge. The impossibility of a complete symmetry between all the gauge bosons would thus seem to emerge. Moreover, the first point to understand is that in Feynman diagrams there is much more hidden symmetry than what is immediately apparent; in fact, if viewed appropriately, they exhibit the symmetry U(2), i.e. the EW symmetry. The asymmetry we see in the real world, compared with these particles, arises in the EW theory just because Nature chooses that certain particular combinations are realized as real free particles. But what about the other asymmetry, related to Feynman diagrams, whereby the W and Z particles can only attach to the left-handed lines of the particles, whereas the P attaches to both left- and right-handed ones? What criterion does Nature adopt in allowing us to find certain particles as free particles, and not others? In the case of a free particle, it must be a mass eigenstate, so we need to know what determines the mass of the particles. In this case, we cannot expect complete symmetry under U(2).
In other words, the mass implies some sort of symmetry breaking (SB). Such asymmetry is the result of a spontaneous SB (SSB), which is supposed to have occurred at the very first stages of the Universe. According to the EW theory, at the very high temperatures of the Universe immediately after the Big Bang, the EW symmetry, as a U(2) symmetry, was exactly valid, so that the W, Z, and P particles were completely equivalent" [22]. At those temperatures, definitely > 10¹⁶ K, the kinetic energy and momentum of the P were very high [81], so in the relativistic sense the P might have gained a considerable mass! "But already at ≤ 10¹⁶ K, at ≈10⁻¹² seconds after the Big Bang, the W, Z, and P were frozen by this SSB process, so that only the P remains massless while the others gain mass. Maybe it is the Higgs Boson (HB) that gives mass to these particles, as well as to itself and to the quarks. And how? Really great and ingenious ideas" [22], Penrose comments.
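The temperatures quoted by Penrose can be translated into particle energies with the rough estimate E ≈ k_B·T (a sketch; numerical factors of order one are ignored): at T = 10¹⁶ K the typical energy is nearly 10³ GeV, safely above the ≈100 GeV electroweak scale at which the text places the freeze-out.

```python
# Rough thermal energy scale of the early Universe: E ~ k_B * T.
k_B = 8.617e-5      # Boltzmann constant in eV/K
T = 1e16            # temperature quoted for ~1e-12 s after the Big Bang [K]

E_GeV = k_B * T / 1e9   # convert eV -> GeV
print(round(E_GeV))     # -> 862 (GeV), well above the ~100 GeV electroweak scale
```

Conversely, the 100 GeV scale itself corresponds to roughly 10¹⁵ K, consistent with the statement that the symmetry was destroyed as the Universe cooled through ≈10¹⁶ K.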
Witten adds: "This proposal of the spontaneous breaking of the electro-weak symmetry, or SSB, though simple and falsifiable against known facts, probably does not tell us the whole story" [82].

PROBLEMS of the SYMMETRY BREAKING in the PRIMORDIAL UNIVERSE
Penrose continues: "I question the reality of SSB! There are various difficulties in this idea of SSB. So, about 10⁻¹² seconds after the Big Bang, throughout the Universe the temperature fell just below the critical value; at this point, a special choice was made (W⁺, W⁻, Z⁰, and P) from the whole G-manifold, with U(2) symmetry, of possible sets of gauge bosons. We do not expect this to happen in exactly the same way throughout space, at the same time throughout the Universe; rather, in some regions a particular choice will be made, whereas in others there will be different choices. The G-space of the possible gauge bosons is, at each point of space-time, completely U(2)-symmetrical before the symmetry reduction occurs. As implied in the fibre-bundle concept, there isn't any particular way to make an identification between the G-space at a certain point and the G-space at another, completely different point. Therefore, there isn't a rule that tells us which element of G at one point is the 'same' as some element of G at another point. It seems that this gives us the freedom to regard the notion of 'same' as the one provided by the particular choice that SSB offers us. According to this point of view, the particular set (W⁺, W⁻, Z⁰, and P), which is frozen at one point, can be identified with the corresponding (W⁺, W⁻, Z⁰, and P) at any other point. Thus, it seems that we should not have that kind of 'inconsistency' between symmetry breakings at different points, which occurs with the iron magnetization domains. However, this point is in open contrast with the idea behind gauge theory, according to which not only are the G-spaces the fibres of a fibre bundle (BG), whose base space is space-time, but the particular gauge theory, in this case the unbroken EW theory, is defined in terms of a connection on this bundle. This connection defines the locally significant identification (parallelism) between the various G-spaces when we move along any space-time curve.
In general, this identification is not globally consistent when we move around closed circuits (due to the curvature of the connection, which expresses the presence of a non-trivial gauge field). In any case, the randomness involved in the symmetry breaking (SB) at different points implies that the local parallelism between the G-spaces will not, in general, be consistent with the choices made in the SSB" [22].
In short, following the description of the Standard Model (SM), we find that the breaking of the EW symmetry is totally asymmetric, since the SSB (related to the "phase transition" triggered by the lowering of the temperature of the primordial universe) also alters the symmetry of the Higgs Field (HF). That is, the breaking of the EW symmetry means that only the W and Z⁰ bosons acquire mass, while the P will remain massless forever.

ASYMMETRIC BEHAVIOR of the HIGGS FIELD
Why do we have such a dichotomous and asymmetric behavior, in a model based primarily on symmetries? According to the SM, the more a particle interacts with the HF, the greater its mass. The photon (P), on the other hand, does not interact with the HF at all, so that it will remain massless. But how is it possible to state this with such certainty? Based on what pre-existing phenomenon, or assumption? How is it possible to confirm and prove this particular behavior of the HF in favor of some particles, compared to others closely related? Why can't we apply the mathematical formalism used in favor of the bosons W and Z⁰ [17][18] to the P too?
Unless we suppose that there may be another type of BEH Mechanism (BEH-M), likely working in that portion of the HF which is asymmetric compared with the portion that gives mass to the bosons of the WI. This asymmetric portion of the HF might interact with the Ps so that even these can gain mass (though a very small one), and without breaking the symmetry. It could be assumed that, in such circumstances, the temporary acquisition of mass by the Ps would overshadow the symmetry.
In short, following SM criteria, before the phase transition (resulting in SSB), the bosons of the EMI and WI were equivalent, the two forces were unified, and the HF behaved ubiquitously and homogeneously, without asymmetry. Then, with the primordial phase transition and the consequent SSB, the symmetry of the HF is also altered, and it starts to behave differently, i.e. asymmetrically, so that it gives mass only to the bosons of the WI and not to the Ps.
In integration with the SM, and to try to justify the massive-particle behavior shown many times by the P, we dare to think that, through a BEH-M, the asymmetric portion of the HF may succeed in giving mass to the P. In this case, it would be necessary to understand whether the P and the W and Z⁰ particles gain mass through a single HB, or whether two distinct HBs exist: one interacting with particles that have no weak charge, nor electric charge, nor color charge, such as the P, whereas the other is the well-known one [20]. In this regard, Randall states: "We have no certainty about the precise set of particles involved in the BEH-M. For example, the breaking of the EW symmetry might be attributed to 2 HFs, rather than to one" [21].
This may be in accordance with our assumption (if we considered the SSB as real), as well as providing a consistent and congruent (symmetrical) application of the BEH-M to the SM, so as to also explain the mass of particles such as the P as a result of the SSB. In conclusion, why this diversity of behavior, so that the BEH-M would interact with the weak field and not with the EM field (EMF)? As is known, the EMF is a quantum field capable of preserving a local gauge symmetry, which persists even after partial transformations of the field itself. Likewise, it seems more appropriate to assume that, with the lowering of the primordial universe temperature and the subsequent phase transition, the HF behaved symmetrically with respect to the pre-existing electro-weak Interaction, so as to induce the SSB of the EMF as well, thereby giving a mass parameter to the P (though of very modest entity), just as the SSB of the electro-weak field gives that big mass to the bosons W and Z⁰.
Therefore, it should not be surprising that the P can carry a mass, a dynamic mass, given by the HF, using the same mechanisms described by the SM in order to explain the remarkable mass the bosons of the WI acquire [83]. In addition, as for the mathematical description of the SSB of the electro-weak fields, also in the case of the EMF's SSB, just separated from the electro-weak field, there is a similar mathematical formalism, in which the Lagrangian (or Hamiltonian) defining the physical system is invariant with respect to a group transformation, such as a rotation or a translation. In this regard, we report the globally invariant Lagrangian (L):

L = ½ (∂_μφ)ᵀ(∂^μφ) − ½ m² φᵀφ − ¼ λ (φᵀφ)²,

where φ is a scalar field vector, invariant under the transformations φ → exp(iθᵃTᵃ)φ, and the Tᵃ are the matrices that indicate the generators of the group O(n), that is, the n-dimensional orthogonal group. Randall adds: "However, there are other models that hypothesize more complex Higgs sectors, with even more articulated consequences. For example, Supersymmetric Models provide a higher number of particles in the Higgs sector. In that case we would always expect to find a Higgs Boson (HB), but its interactions should be different from those deducible from the model that includes only one Higgs particle" [21]. Therefore, it is not possible to exclude a priori that another HB, other than that found at CERN, may possibly allow the P and/or hadrons to gain mass, according to a BEH-M analogous to that proposed by the SM.
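The global O(n) invariance just mentioned can be illustrated numerically: a potential built only from the invariant combination φᵀφ is unchanged under any orthogonal rotation of the field vector. A minimal sketch (the mass and coupling values are arbitrary illustrative choices, not taken from the text):

```python
import numpy as np

def potential(phi, m2=1.0, lam=0.1):
    """V(phi) = 1/2 m^2 (phi.phi) + 1/4 lam (phi.phi)^2:
    it depends on phi only through the O(n)-invariant phi.phi."""
    s = phi @ phi
    return 0.5 * m2 * s + 0.25 * lam * s**2

rng = np.random.default_rng(0)
n = 3
phi = rng.normal(size=n)

# A generic element of O(n): random orthogonal matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

# The potential is unchanged under phi -> Q phi: global O(n) symmetry.
print(abs(potential(Q @ phi) - potential(phi)) < 1e-12)  # True
```

Any orthogonal Q works here; the invariance would be broken only by terms that single out a direction in field space, which is exactly what an SSB vacuum does.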
To this purpose, according to the SM, the HF is a weak-isospin doublet; that is, it has two components, which appear indistinguishable to the WI. These two components have, respectively, electric charge +1 and 0: the HB is associated with the real part of the neutral field which, by coupling with itself, i.e. with its own field (HF), acquires mass. On the contrary, the P, considered insensitive to the WI, is massless [84]. Therefore, if the concept of the SSB is true, and in agreement with Randall (who suggests the possibility of 2 different Higgs sectors), we could imagine that the P can also acquire mass. In fact, according to the symmetries on which the SM is based, the elementary particles should all have zero mass and travel at the speed of light. Due to the so-called spontaneous breaking of this symmetry (SSB), they interact with the HF, acquire mass and move at a finite speed [84].
On the other hand, Ugo Amaldi is also rather puzzled and writes: "Even if the HB identified at LHC had all the intended properties, physicists would never say that the SM is entirely satisfactory" [85]. The SM is not, in fact, able to explain why the HF's interactions with the matter fields (which determine the great mass differences between the particles) are so different from one case to another [85][86].
Even Feynman was very upset by the problem of particle masses, and so he wrote in 1985, that is, 23 years after the theory proposed by the SM: "I am convinced that at the fundamental level the origin of the mass values is a very serious and interesting problem, to which an adequate solution has not been found yet" [39]. Witten adds: "Solving the riddle of how this electro-weak symmetry breaks can determine the future direction of particle physics" [82].
In short, along with Witten and many other authors, it seems that there is a need for a new Physics, yet to be understood, able to describe in what ways and by what precise mechanisms the particles can gain mass; though, in our opinion, in accordance with Einstein [2] and Maiani [1], each particle has its own intrinsic mass, whose value is proportionally equivalent to the energy quantity of the particle itself, expressed by its Zero Point Energy.

ON the MASSLESS PHOTON and PHOTON's MOMENTUM
We learn from Feynman: "That light carries energy we already know. We now understand that it also carries momentum and, further, that the momentum carried is always 1/c times the energy. The energy (E) of a light-particle is h (Planck's constant) times the frequency (ν):

E = hν (64).

We now appreciate that light also carries a momentum equal to the energy divided by c (the speed of light in vacuum), so it is also true that these effective particles, these photons, carry a momentum (p):

p = E/c = hν/c.

The direction of the momentum is, of course, the direction of propagation of the light. So, to put it in vector form:

p = ħk, where p and k are, respectively, the momentum vector and the wave vector.

We also know, of course, that the energy and the momentum of a particle should form a four-vector. Therefore it is a good thing that the latter equation has the same constant (h) in both cases; it means that Quantum Theory and the theory of Relativity are mutually consistent" [87]. The latter, in our opinion, is an important clarification made by Feynman. On the contrary, it is just the gauge theories and QFT that are mutually inconsistent with Relativity Theory! Therefore, Eq. (64) shows the energy (E) of a light-particle. The energetic value of each photon (P), without considering its oscillating frequency, corresponds to Planck's constant (h), which is just an energetic value, corresponding to 6.626·10⁻²⁷ [erg·s]. The P, of course, travels at the speed of light; this value (c) is well known too: it is 299792.458 (±0.4) km/s [88].
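Feynman's relations E = hν and p = E/c can be checked numerically for a photon in the optical band; a minimal sketch in CGS units (the wavelength 5·10⁻⁵ cm is the mean optical value this work uses elsewhere):

```python
h = 6.626e-27      # Planck's constant [erg*s]
c = 2.99792458e10  # speed of light in vacuum [cm/s]

lam = 5e-5         # mean optical wavelength [cm], as quoted in the text
nu = c / lam       # frequency [1/s]
E = h * nu         # photon energy, E = h*nu [erg]
p = E / c          # photon momentum, p = E/c = h/lam [g*cm/s]

print(f"E = {E:.3e} erg, p = {p:.3e} g*cm/s")
# p = h/lam = 6.626e-27 / 5e-5 ≈ 1.325e-22 g*cm/s, the push-momentum value
# quoted in this work for an optical photon.
```

Note that E/c = h/λ follows immediately from ν = c/λ, so the two routes to p agree by construction.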
Let's now consider the Eq.(62), related to the MEEP. That's how Einstein commented upon his MEEP: "The value of the considered mass refers to the value of an inertial mass" [89].
Let's apply Eq. (62) to the P, keeping in mind that one of the three parameters is well known, that is, c, the speed of the P in the vacuum. The second parameter is the Energy of the P, which is described by Eq. (64). Besides, as Chandrasekhar reminds us, "it is useful to consider a fundamental consequence of the quantum nature of matter: the lowest energy possible for a system cannot be null, that is, zero, but it needs to have a value different from zero; it is called the Zero Point Energy (ZPE)" [90]. On the other hand, still by the MEEP, to an "energetic" particle, carrying energy, forces, etc., there should correspond a mass equivalent to the energy carried, divided by c² [91]. To this purpose, Feynman writes: "Energy and mass differ just by a factor c², which is merely a question of units, so we can say the energy is the mass. Instead of having to write the c², we put E = m" [87]. Since there is no zero energy, by the ZPE, there should not be any particle carrying energy with a zero mass. Thus, there should not be real particles, having any energy, with a zero mass. If there are, they should "subtend" a tiny mass, a Zero Point Mass [92]. Consider, then, the case of a P in the inertial state, that is, when it interacts with another particle and stops running. At least for that infinitesimal moment, it will oscillate much less. We will never be able to know how much! We will never be able to know with accuracy how much an interacting P can oscillate, that is, what the number of its oscillations [c/s] could be at that moment. Let's indicate this unknown value with 10ⁿ [c/s], where n is an uncertainty factor. The P stops running when hitting another particle, as happens during a measurement, so it will not oscillate as when it was running, though it never stops oscillating completely: it is the Heisenberg Uncertainty Principle that denies it, since in that case we would know simultaneously the position and the momentum of the particle [93][94].
Thus, even in the inertial state, the oscillating frequency (ν) of the P can never be 0, but is at least one oscillation per second (if not even ½ an oscillation per second, or a fraction of it). Thus, if we want to consider the Energy of the P in its inertial state, indicated with E₀, we should have:

E₀ = hν₀ = h·10ⁿ [erg].

What we get is that the inertial mass of the P, m₀ = E₀/c², corresponds to ≈10⁻⁴⁸⁺ⁿ grams. Thus, if the value of n were 0 (10⁰ = 1 oscillation per second), m₀ would be ≈10⁻⁴⁸ [g], whereas if n were 3 (10³ oscillations per second), we would have m₀ ≈ 10⁻⁴⁵ [g]. Of course, in all cases it is an extreme value, but it is ≠ 0. Besides, as we know, one of the characteristics of the P is to travel most of the time, so it also carries a momentum (p). To this purpose, Fermi writes: "The P too, as other particles, is a corpuscle, a light quantum, and has its own momentum (p), through which it transfers all its energy to the hit particle" [96]. Feynman continues: "Each P has an energy and a momentum (p)" [87]. This p is represented by de Broglie's formula [97]:

p = h/λ,

where λ is the wavelength of the considered P (or other particle). The mean wavelength of a P in the optical band corresponds to ≈5·10⁻⁵ [cm] [98], and its p is:

p = h/λ ≈ 6.626·10⁻²⁷ / 5·10⁻⁵ ≈ 1.325·10⁻²² [g·cm/s].

Let's see how heavy an electron is: its mass corresponds to 9.1·10⁻²⁸ [g]; comparing these values, it emerges that a running P is much "heavier" than an electron [99]. In short, other than a massless P! It is the opposite: with these momenta carried by the Ps, we can better understand and justify the light-pressure action, or "photonic pressure", or radiant pressure. In this respect, Feynman adds: "An Electro-Magnetic Field has waves, which we call light; it turns out that light also carries a momentum (p) with it, so when light impinges on an object it carries in a certain amount of p per second; this is equivalent to a force, because if the illuminated object is picking up a certain amount of p per second, its p is changing and the situation is exactly the same as if there were a force on it. Light makes a pressure when it collides with an object; this pressure is very small, but with sufficiently delicate apparatus it is measurable" [87]. This phenomenon is usually interpreted as a purely "energetic" phenomenon of the Ps (as if light were only energy without mass): we are talking about the light's pressure action, the pressure effect.

However, as is known, the concept of Radiation Pressure was first pointed out by Johannes Kepler in 1619, to explain the observation that the tail of a comet always points away from the Sun [100]. In fact, Feynman writes: "I want to emphasize that light comes in this form: particles. It is very important to know that light behaves like particles, especially for those of you who have gone to school, where you were probably told something about light behaving like waves. I'm telling you the way it DOES behave: like particles. Light is made of particles" [87].

He adds: "When light is shining on a charge, and it is oscillating in response to that charge, there is a driving Force in the direction of the light beam. This Force is called Radiation Pressure or Light Pressure (F). Let us determine how strong the Radiation Pressure is. Evidently, the force (F) on a particle of charge q, moving with velocity v in a magnetic field (B), is given by:

F = qvB (81),

and it is at right angles both to the field and to the velocity (v). Since everything is oscillating, it is the time average of this force F that matters" [87].

That is, in these cases, the intimate mechanism of light acts through a "push effect" on electrons. This push effect can be interpreted as a real mechanical effect, rather than an energetic one [101].

ON the POINT MODEL of the ELECTRON

As mentioned in the paragraph on 'Divergences in Perturbation Theory' (2.2), one of the causes of these divergences, notoriously characterized by the infinities emerging from the equations related to the Perturbative Calculus, lies in having considered, according to Quantum Electro-Dynamics (QED) and Quantum Fields Theory (QFT), "the electron's centre as a point of singularity" [41]. Hence, as Dirac reminds us, "using this point model of the electron, divergences appear in the solution of the equations describing the interaction of an electron with an electromagnetic field" (EMF) [41]. As shown in equation (18), in fact, in these equations the point-size of the electron's radius (a) corresponds to a zero value (a → 0); hence, with a in the denominator, infinities will always emerge. In that case, quoting Feynman, "the equations will burst in our hands" [39].

In this regard, the problem of infinities first arose in the classical electro-dynamics of point particles in the early 20th century. As is known, the mass of a charged particle should include the mass-energy in its electrostatic field: the EM mass. Now, assume that the considered particle, or quantum object (QO), is a spherical shell of radius a: for example, an electron. To this purpose, Casalbuoni reminds us: "The size of the electron is a serious problem in classical physics (partially addressed by Einstein and Lorentz, and more seriously by Mie), but Bohr's atomic theory treated the electron only in terms of its coordinates, as if the electron were point-like. This facilitated the transition to the idea that the electron could be thought of as an object without spatial dimensions. In 1925 Frenkel adopted this point of view explicitly: 'the internal equilibrium of an extended electron becomes an insoluble puzzle from the point of view of electrodynamics. I really think that such a puzzle (and all related issues) is an academic problem. It comes from an uncritical application to the elementary parts of matter (the electron) of a division principle which, when applied to compound systems such as atoms, eventually leads to smaller particles. Not only are electrons indivisible physically, but geometrically too. They have no extension. Internal forces between the elements of an electron do not exist, because the electron has no such elements. The electromagnetic interpretation of the mass must be eliminated' [13]." In fact, Frenkel writes: "The electron will thus be treated simply as a point" [102]. "This interpretation was also carried into the new Quantum Mechanics (QM), where the electron was simply described by its coordinates. On the other hand, it was possible to study the analogue of the electromagnetic energy through the expectation value of the corresponding operator. This was done by Jordan and Klein (1927) and by Heisenberg and Pauli (1929), who obtained the same divergent result as in the classical case. In 1930 Oppenheimer used Dirac's perturbation theory to calculate the electromagnetic energy of an electron (Eem) of radius a, thought of as a charged sphere:

Eem ≈ e²/2a (82).

In this equation, in fact, the electron Eem is divergent in the limit a→0" [13]. It is a strange result: unacceptable, in our opinion, since a point electron would acquire an infinite energy, as well as an infinite equivalent mass! Which it absolutely does not have. To this purpose, Casalbuoni says: "This result casts sinister shadows on Perturbation Theory" [13]. Unfortunately, "the problem of infinities in the classical electrodynamics of point particles" [103] comes up in all equations like (82), where the radius (a) of the point electron, placed in the denominator, is considered null (a→0).
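The divergence of the electromagnetic self-energy as a → 0 can be made concrete with a short numerical sketch (Gaussian units; e²/2a is the classical self-energy of a charged spherical shell, assumed here as the form behind Eq. (82)):

```python
e = 4.803e-10  # electron charge [esu] (Gaussian/CGS units)

def self_energy(a):
    """Classical EM self-energy of a charged spherical shell of radius a:
    E_em = e^2 / (2a), which grows without bound as a -> 0."""
    return e**2 / (2.0 * a)

# Shrinking the shell radius [cm] by factors of 1000:
for a in (1e-13, 1e-16, 1e-19):
    print(f"a = {a:.0e} cm  ->  E_em = {self_energy(a):.3e} erg")
# Each step multiplies E_em by 1000: the point-electron limit diverges.
```

At a ≈ 10⁻¹³ cm (the classical electron radius discussed below in the text) the self-energy is already of the order of the electron rest energy; pushing a to zero makes it arbitrarily large, which is the divergence the section describes.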
In fact, "the mass of a charged particle should include the mass-energy in its electrostatic field (EM mass)" [103], i.e. the Eem represented in Eq. (82). In these circumstances, "the particle is a charged spherical shell of radius (a). The mass-energy in the field becomes infinite as a→0. This implies that the point particle would have infinite inertia, making it unable to be accelerated" [103]. In the literature, we find: "Elementary particles are considered to be point-like, but not point particles" [104]. The latter seems an interesting clarification. Steinmann adds: "QED, or relativistic quantum field theory in general, is not based on the notion of point particles, as one sees stated so often and yet so erroneously. A point particle is the idealization of a real particle seen from so far away that scattering off other particles is as if the given particle were a point. Specifically, a relativistic charged particle is considered to be a point particle at the energies of interest if its interaction with an external EMF can be accurately described by the Dirac equation. Both electrons and neutrinos are considered to be point-like because of the way they appear in the standard model. Point-like means that the associated bare particles are points. But these bare particles are very strange objects. According to renormalization theory, the basis of modern QED and other relativistic field theories, bare electrons have no associated EMF, although they have an infinite charge (and an infinite mass), something inconsistent with real physics. They do not exist. The bare particles are points = structureless formal building blocks of the theories with which (after renormalization = dressing) the physical = real = dressed = observable particles are described. The latter have a nontrivial EM structure encoded in their form factors. Physical, measurable particles are not points, but have extension.
By definition, an electron without extension would be described exactly by the 1-particle Dirac equation, which has a degenerate spectrum. But the real electron is described by a modified Dirac equation, in which the so-called form factors figure. These are computable from QED, resulting in an anomalous magnetic moment and a non-zero Lamb shift removing the degeneracy of the spectrum. Both are measurable to high accuracy, and are not present for point particles, which by definition satisfy the Dirac equation exactly. The size of a particle is determined by how the particle responds to scattering experiments, and is therefore (like the size of a balloon) somewhat context-dependent. The context is given by a wave function and determines the detailed state of the particle" [104]. Scharf writes: "It is often said that the electron is a point particle without structure, in contrast to the proton, for example. We will see in this section that this is not true. The EM structure of the electron is contained in the form factors" [105]. An electron has two form factors, a magnetic and an electric one. The electric form factor is the Fourier transform of the electronic charge distribution in space; it determines the charge radius of the particle. The relations between the form factors for spin-1/2 particles and the terms in a modified Dirac equation, describing the covariant dynamics in an EMF of a particle deviating from a point particle, are given by Foldy.
According to Foldy, the EM form factors of an elementary or compound particle characterize the charge and current distribution of the particle (in the nonrelativistic case, simply via a Fourier transform) and its response to an external EMF. The anomalous magnetic moment shows up directly as a coefficient in the modified Dirac equation. On the other hand, the spectral resolution of the single-particle problem in a constant electric field leads to a spectrum from which the Lamb shift can be read off [106]. An intuitive argument for the extendedness of relativistic particles (which is difficult to make precise, though) is the fact that their localization to a region significantly smaller than the de Broglie wavelength would need energies larger than that needed to create particle-antiparticle pairs, which changes the nature of the system. Note, however, that even point particles (which by definition satisfy the Dirac equation exactly) have a non-zero Compton wavelength. The terminology "particle size" is not used in a well-defined way. The form factor of the electron can be computed perturbatively from QED, and it can be indirectly determined by experiment, e.g., through the observation of the anomalous magnetic moment and the Lamb shift. (A point particle has no anomalous magnetic moment and no Lamb shift, since it satisfies the Dirac equation exactly.) "The information contained in the form factor is only about the free particle in the rest system, defined by a state in which momentum and orbital angular momentum vanish identically. In an external potential, or in a state where the momentum (or orbital angular momentum) doesn't vanish, the charge density (and the resulting charge radius) can differ arbitrarily much from the charge density (and charge radius) at rest.
For example, for a hydrogen electron in the ground state, the charge density (which must essentially cancel the proton's Coulomb field far away from the atom) is significant in a region of diameter about 10⁻¹¹ cm (a small multiple of the Bohr radius), while the charge radius at rest is probably (in view of the above partial results) < 10⁻¹³ cm" [107]. Other authors write: "According to QED, an electron continuously emits and absorbs virtual photons and, as a result, its electric charge is spread over a finite volume instead of being point-like. Thus, because of the binding-energy-dependent cutoff used to regularize the divergence, the electron charge radius depends on its surroundings. Thus the electron appears to be a compressible substance" [108].
We can also read: "If you're a physicist, and someone asks you 'how big is an electron?', then the most canonically correct thing to say is: 'There is no concept of an electron size other than the spatial extent of the electron wave function. The size of the electron wave function is the electron size'. So, if such a person were pressed to give a numerical value for the size of the electron, they might say something like: 'Well, most electrons in the universe are bound into atoms. So the typical size of an electron is about the same as the typical size of an atom'. The typical size of an atom is given by the Bohr radius (a_B):

a_B = 4πε₀ℏ²/(m e²) (83),

where ℏ is Planck's constant, ε₀ is the vacuum permittivity, and m is the electron mass. Just to remind you how this length scale appears, you can figure out the rough size of an atom by remembering that an electron bound to an atom exists in a balance between two competing energies. First, there is the attractive potential energy, ≈ e²/(4πε₀r), that pulls the electron toward the nucleus, and that gets stronger as the electron size r (which is about the same as the average distance between the electron and the nucleus) gets smaller. Second, there is the kinetic energy of the electron. The typical momentum p of the electron gets larger as r gets smaller, as dictated by the Heisenberg uncertainty principle, p ≈ ℏ/r, which means that the electron kinetic energy gets larger as the atom size shrinks: KE ≈ p²/2m ≈ ℏ²/(2mr²)" [109].
"In the balanced state, that is, an atom, the potential and kinetic energies are about the same, which means that e²/(4πε₀r) ≈ ℏ²/(mr²). Solving for r, you get that the electron size (r) is about equal to the Bohr Radius (a_B), which numerically works out to about 1 Angstrom, or 10⁻⁸ cm:

r ≈ a_B ≈ 10⁻⁸ cm (84).

Furthermore, if you're going to think about the electron as a tiny charged ball, then there is one thing that should bother you: that ball will have a lot of energy. To see why this is true, imagine the hypothetical process of assembling your tiny charged ball from a bunch of smaller pieces, each with a fraction of the total charge. Since the pieces have an electric repulsion from each other, and since you are bringing them very close to each other, the ball will be very hard to put together. In fact, the energy required to 'build' the electron is ≈ e²/(4πε₀r), where r is the electron size" [109].
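The dimensional balance leading to Eq. (84) can be checked numerically; a minimal sketch in Gaussian (CGS) units, where the competing energies reduce to e²/r and ℏ²/(mr²):

```python
hbar = 1.0546e-27  # reduced Planck constant [erg*s]
m    = 9.109e-28   # electron mass [g]
e    = 4.803e-10   # electron charge [esu] (Gaussian units)

# Balancing the Coulomb energy e^2/r against the kinetic energy hbar^2/(m r^2)
# gives r ~ hbar^2 / (m e^2): the Bohr radius.
a_B = hbar**2 / (m * e**2)
print(f"a_B ≈ {a_B:.2e} cm")  # ≈ 5.3e-9 cm ≈ 0.53 Angstrom, i.e. ~10^-8 cm
```

The result, ≈0.5 Å, matches the "about 1 Angstrom, or 10⁻⁸ cm" order-of-magnitude estimate of Eq. (84).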
"So the smaller the electron is, the harder it is to build, and the more energy gets stored in the form of electric repulsion between all the pieces. The large self-energy of the electron also means that if the electron were to get 'broken', then all the tiny pieces would fly apart from each other and release a tremendous amount of energy. But, in fact, by the end of the first decade of the 20th century, people already knew that a single electron stored a lot of energy. Einstein's famous equation, E = mc², suggested that even a very small piece of matter was really an intensely concentrated form of energy. One electron represents about 8.2·10⁻¹⁴ Joules (511,000 eV). So one of the early attempts to estimate a size for the electron was to equate these two ideas. Maybe, the logic goes, the large 'mass-energy' of an electron is actually the same as the energy stored in the electric repulsion of its constituent pieces. Equating those two expressions for the energy gives an estimate for r that is called the classical radius of the electron (r_e):

r_e = e²/(4πε₀mc²) ≈ 2.8·10⁻¹⁵ m (85).
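Equation (85) can be verified directly from the constants; a minimal sketch in Gaussian (CGS) units, where the classical radius reduces to e²/(mc²):

```python
m = 9.109e-28      # electron mass [g]
e = 4.803e-10      # electron charge [esu] (Gaussian units)
c = 2.99792458e10  # speed of light [cm/s]

# Equating the rest energy m c^2 with the electrostatic self-energy ~ e^2/r
# gives the classical electron radius:
r_e = e**2 / (m * c**2)
print(f"r_e ≈ {r_e:.2e} cm")  # ≈ 2.8e-13 cm, i.e. ≈ 2.8e-15 m
```

This is the ≈10⁻¹³ cm figure the text compares to the size of an atomic nucleus.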
"So that's our first estimate for the intrinsic electron size: about 10⁻¹³ cm. This classical radius, by the way, works out to be about the same size as the typical size of an atomic nucleus [109]. As is known, the electron also has what we now call a 'spin'. Basically, the concept of spin comes down to the fact that the electron is magnetic: it has a north pole and a south pole, and it creates a magnetic field around itself that is as large as about 1 Tesla at a distance of 1 Angstrom away, and that decays in strength as 1/r³. Another way of saying this same thing is that the electron has a 'magnetic moment' whose value is equal to the Bohr magneton, μ_B = eℏ/(2m) (as it turns out, this magnetic moment is essentially the same as the magnetic moment created by the orbit of an electron around a nucleus). At first sight, this magnetic-ness of the electron does not seem like a problem. A charged sphere can create a magnetic field around itself, as long as the sphere is spinning. A problem arises, though: namely, if the electron is very small, then in order for it to create a noticeable magnetic field, it has to be spinning really quickly. In particular, the magnetic moment of a spinning sphere with charge e and radius r is something like μ ≈ eωr², where ω is the sphere's rotation frequency. If the sphere is spinning very quickly, then ω is large, and the equator of the sphere is moving at a very fast speed, v ≈ ωr.
"We know, however, that nothing can move faster than the speed of light (including, presumably, the waistline of an electron). This puts a limit on how fast the electron can spin, which translates into a limit on how small the radius of the electron can be if we have any hope of explaining the observed electron magnetic field in terms of a physical rotation. Thus, if you set eωr² ≈ μ_B and require that ωr ≤ c, then you get that the size of the electron (r) must be bigger than:

r ≳ h/(mc) ≈ 2.4·10⁻¹² m (86).

In other words, if you hope to explain the electron magnetic field as coming from an actual spinning motion, then the size of the electron (r) needs to be at least ≈10⁻¹⁰ cm. This is about a thousand times larger than the classical electron radius (r_e), as shown in equation (85). Coincidentally, this value is called the 'Compton wavelength', and it has another important meaning. The Compton wavelength is more or less the smallest distance to which you can confine an electron. In fact, if you try to squeeze the electron into an even smaller distance, then its momentum will become so large (via the uncertainty principle) that its kinetic energy will be larger than mc². In this case, there will be enough energy to create (from the vacuum) a new electron-positron pair, and the newly-created positron can just annihilate the trapped electron while the newly-created electron flies away. Another set of experiments looks for corrections to the way that the electron interacts with the vacuum. At a very conceptual level, an electron can absorb and re-emit photons from the vacuum, and this slightly alters its magnetic moment. If the electron were to have a finite size, then this would alter its interaction with the vacuum a little bit, and the magnetic moment would very slightly change relative to our theories based on size-less electrons. So far, however, experiments have seen no evidence of such an effect.
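The bound in Eq. (86) and the "about a thousand times" comparison with Eq. (85) can be checked numerically; a sketch in CGS units:

```python
h = 6.626e-27      # Planck constant [erg*s]
m = 9.109e-28      # electron mass [g]
c = 2.99792458e10  # speed of light [cm/s]
e = 4.803e-10      # electron charge [esu] (Gaussian units)

# Compton wavelength h/(m c): per the text's argument, the rough lower bound
# on the radius of a classically spinning electron (equator speed <= c).
lam_C = h / (m * c)
print(f"lam_C ≈ {lam_C:.2e} cm")  # ≈ 2.4e-10 cm, i.e. ≈ 2.4e-12 m

# Ratio to the classical electron radius e^2/(m c^2): "about a thousand".
r_e = e**2 / (m * c**2)
print(f"ratio ≈ {lam_C / r_e:.0f}")
```

The ratio comes out in the high hundreds, consistent with the text's order-of-magnitude statement that the Compton wavelength is about a thousand times the classical radius.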
The accuracy of the experimental observations places an apparent upper limit on the electron size of about 10⁻¹⁸ meters [109].
In this regard, Gabrielse writes: "The standard model of particle physics, which includes QED theory, predicts the relationship between the value of the electron magnetic moment that we measure in Bohr magnetons (called g/2), and the measured value of the fine structure constant, α. The most accurate calculated values for the constants are tabulated in a recent review [110] (of determinations of α). A very small value of the additional standard model correction a_hadronic must be included, but it is small enough that this relation between g/2 and α comes essentially from QED theory: g/2 = 1 + C₂(α/π) + C₄(α/π)² + C₆(α/π)³ + C₈(α/π)⁴ + C₁₀(α/π)⁵ + … + a_μτ + a_hadronic + a_weak (87).
The difference between the g/2 that we measure and the g(α)/2 value that we calculate, using an independently measured α in the QED formula, is very small. The most up-to-date result, from a review of electron g/2 measurements, is: |δa| < 8.3 ⋅ 10⁻¹² (88), at the one-standard-deviation level. The standard model predicts the electron magnetic moment in Bohr magnetons to a remarkable level of precision.
If the electron is composed of constituent particles bound together by some unknown attraction, then we would expect that the standard model formula displayed above would not accurately predict the measured magnetic moment. Antiprotons and protons, for example, are not at all well described by this equation. As is well known, this is because antiprotons and protons are not the point particles with no size that are assumed in deriving the formula. They instead have a measurable size as a consequence of being the spatially extended bound state of three quarks" [111]. To this purpose, in the remote possibility that the electron may not be an elementary particle, there could be the possibility that the electron is made up of 3 sub-particles, or lepto-quarks (probably each provided with a different colour), also with a fractional electric charge of 1/3, closely joined together by an extremely intense leptonic force, completely analogous to the Strong Interaction (SI), or Color Force, according to which the lepto-quarks, too, are eternally confined within the electron. Furthermore, the odd number of lepto-quarks also respects the Angular Momentum Conservation Law and the Nuclear Spin Statistics [112].
Gabrielse adds: "The established limit on δa given above sets a limit on the size of the electron (radius R) and upon the rest energy (m*) of the particles out of which the electron is made. A 'chirally invariant model' [113], in which the electron mass (m) is made smaller than the typical nuclear mass by suppressing the lowest order, suggests any remaining difference between g/2 and g(α)/2 would be of second order in the small ratio m/m*. The result is shown in equations (89): m* > 177 GeV/c²; R < 1 ⋅ 10⁻¹⁸ m (89), whereby the mass of the constituents of the electron, if there are any, must be remarkably large compared to the 0.0005 GeV/c² rest mass of the electron. The binding energy would need to be spectacularly large. Equivalently, the limit can be written as an extremely small limit upon the radius R of the electron. The electron is smaller than the extremely small size that this measurement would be able to detect. If the measurement accuracy with which we measure the electron magnetic moment were the limit on the electron radius and constituent mass, then we could set a much more stringent limit, namely that m* > 1 TeV/c². This limit precision will only be attained, of course, if someone figures out how to independently measure the fine structure constant as accurately as we do. Hopefully, this will happen over the next years. These are surprising limits for a measurement done with no large accelerator and carried out at a temperature that is only 100 mK above zero. However, a search for a contact interaction at the LEP storage ring [114] probes for electron structure at the 10 TeV energy scale, in which case R < 2 ⋅ 10⁻²⁰ m. The electron is a remarkable particle indeed! The electron's ingredients, if any, must be unbelievably massive, and the electron's incredibly small size still remains undetectable" [111].
But let's keep going and see if there are any other concepts of an "electron size" [108]. Perhaps you don't like the answer that "the size of the electron wavefunction is the electron size, because the electron wavefunction can be different in different situations, which means that this definition of size isn't really an immutable property of all electrons" [108].

REMOVAL of the MASSLESS PHOTON
As reported in paragraph (2.11), from our simple calculations, based on the Planck-Einstein formula E = hν (shown in equation 64) and Einstein's MEEP (equation 62), it emerges that the photon (P) is not completely massless since, even in its minimum energy state, or zero-point energy (ZPE), or inertial mass (m₀), it shows a mass value which is not null, but corresponds to: m₀ = 7.372⋅10⁻⁴⁸ ⋅ n [g] (as shown in equation 76), where n indicates the number of oscillations per second of the considered P. This is certainly a very small value, of no consequence in our macroscopic world and without the slightest meaning in our daily life. However, although it is infinitesimal, it is still ≠ 0, so it can assume, in our opinion, its value, its role, both in the sub-atomic world and in the mathematical formalism.
Moreover, if we take into account the value of the energy charge of the P, we must calculate its momentum (p), obtainable from de Broglie's formula (p = h/λ), as shown in Eq. (77). Considering the mean wavelength of a P of the optical band, we have p = 1.325⋅10⁻²² [g⋅cm/s], as shown in equation (80). It is really surprising! A common luminous P carries a mass-energy value over 5 orders of magnitude greater than the mass-energy carried by an electron [115]: hardly a massless P! It is obvious, therefore, that we are going to replace the massless P with this last value of the P in all the equations of the Perturbation Theory and of the Quantum Fields Theory (QFT), including the Yang-Mills equations. What do we expect? It is obvious: the disappearance of divergences and infinities. Likewise, even the calculation of the electron self-energy will no longer give null results. No! With a P-value no longer massless, the zeros disappear. They appeared whenever one tried to multiply the electron mass-energy by the quanta of its field, i.e. by the Ps! In short, with this value of the P other than zero, all divergences emerging from the equations of the Perturbative Calculus, QED and QFT disappear.
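The two photon values above (Eq. 76 and Eq. 80) follow from simple arithmetic in cgs units. A minimal sketch of our own, assuming a mean optical wavelength of 500 nm (our choice for illustration, since the text does not fix it):

```python
# Sketch (our own check, cgs units) of the paper's two photon numbers:
# m0 = (h/c^2) * n grams (Eq. 76) and p = h/lambda (Eqs. 77, 80).
H = 6.62607e-27   # erg*s, Planck constant
C = 2.99792e10    # cm/s, speed of light

mass_per_oscillation = H / C**2    # grams per unit frequency n
lam_optical = 5.0e-5               # cm: assumed mean optical wavelength (~500 nm)
p_optical = H / lam_optical        # g*cm/s, de Broglie momentum

print(f"m0/n ~ {mass_per_oscillation:.3e} g*s")   # ~7.372e-48, as in Eq. (76)
print(f"p    ~ {p_optical:.3e} g*cm/s")           # ~1.325e-22, as in Eq. (80)
```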

REMOVAL of the POINT SIZE of the ELECTRON
The second cause of divergences and infinities emerging from the equations of the Perturbation Theory, QED and QFT lies in having considered, starting from Frenkel (1925) [102], the electron size reduced to a point, so that its radius (a) is zero: a→0. However, the hypothesis of a point-like electron (zero radius) generates serious mathematical difficulties, since the self-energy of the electron tends to infinity. As is well known, the issue of the radius of the electron is a challenging problem of modern theoretical physics. As the electron has no known substructure, it is assumed to be a point particle with a point charge and no spatial extent.
It is obvious that, when this null value of the electron radius appears in the equations (see equations 18 and 82), the common arithmetic operations must inexorably give, as results, values equal to zero or to infinity! That is, the energy of a point-electron becomes infinite! Actually, the state of things and the most elementary physics categorically prohibit the electron from acquiring infinite energy. As we all know, the electron inertial energy, or electron Zero Point Energy (ZPE) [90][92], corresponds to 0.511 MeV. This value can increase, up to double and more, when the electron is particularly accelerated. For example, the Stanford linear accelerator (SLAC) can accelerate an electron to roughly 51 GeV, giving the electron a mass-energy value 5 orders of magnitude greater than its inertial mass-energy. In this case, the relativistic momentum acquired by the electron will also be 100,000 times greater than in its ZPE state, but its energy (and thus its mass) will never be able to tend towards infinity, as imposed by equation (82) if we consider a→0.
Logic and mathematical consequences: Eq. (82) is not valid and is physically incongruous, as it distorts the state of things, the reality of the world. The cause of this, of course, is the null (point-like) value attributed forcibly and improperly to the physical dimensions of the electron. So, let's stop giving, as a dogma, this null value of the electron radius (which does not hold up against real and experimental evidence), and let's calculate the value of a (the electron radius) from Eq. (82). For convenience and completeness, in respect of Einstein's MEEP, we replace E_em with mc², obtaining: a = e²/mc² ≈ 2.8⋅10⁻¹³ cm (90).
With reference to equation (90), according to Casalbuoni, "a is called the classical electron radius, to be compared with the Compton wavelength (λ_com)" [13], since a is also defined as the Compton radius or Lorentz radius: λ_com = ℏ/mc ≈ 3.8 ⋅ 10⁻¹¹ cm (91).
As we all know, in fact, an inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The magnitude scale of this wavelength shift is ℏ/mc, which is known as the Compton wavelength or Compton radius. The values expressed by equations (90) and (91) differ by two orders of magnitude but, certainly, neither shows null values for the electron, its radius, or λ_com.
In short, the experimental evidence shows that the electron is not structureless, dimensionless, i.e. a point particle with a null radius (a). On the contrary, like all fermions, the electron has its own radius and its wavelength (λ) which, like the cross section, varies in inverse proportion to the acceleration applied to the electron.
In fact, since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. As follows, for the 51 GeV electron above (SLAC), the wavelength is very small, down to 2.4⋅10⁻¹⁵ cm: small enough to explore structures well below the size of an atomic nucleus. If, instead, the electron is accelerated so as to reach an energy of 177 GeV, as shown in equation (89), the electron radius will be ≈10⁻¹⁶ cm [111]. Should we accelerate the electron up to 10 TeV, as in the LEP storage ring [114], according to Gabrielse the electron radius (R) reduces to: R < 2⋅10⁻¹⁸ cm [111]. In short: far from a point-like electron radius, as if to say null (= 0)! In such a case, a point electron would also have a wavelength (λ) of approximately zero.
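The wavelengths quoted here follow from the ultrarelativistic limit λ ≈ hc/E. A small sketch of our own (energies in eV; the formula neglects the electron rest mass, which is negligible at these energies):

```python
# Sketch: ultrarelativistic de Broglie wavelength, lambda ~ h*c/E.
H_EV = 4.13567e-15   # eV*s, Planck constant
C    = 2.99792e10    # cm/s, speed of light

def de_broglie_cm(energy_ev: float) -> float:
    """Wavelength (cm) of an electron whose energy dwarfs its rest energy."""
    return H_EV * C / energy_ev

print(f"51 GeV (SLAC) -> {de_broglie_cm(51e9):.2e} cm")   # ~2.4e-15 cm
print(f"177 GeV       -> {de_broglie_cm(177e9):.1e} cm")  # order 1e-16 cm
```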
It is well known that de Broglie proposed, without experimental data, to give all particles the same property as waves. He gave each particle its own wavelength (λ) depending only on the momentum (p) of the particle itself [97]. It should be borne in mind, in fact, that to indicate the dimensions of a particle, it is enough to describe its diameter or its wavelength. In this regard, we read: "Conversely, an electron is an elementary particle, which is typically thought of as pointlike: at most the Planck Length (1.62×10⁻³⁵ meters). But wait! Particles aren't really small round objects. A particle's "size" is best thought of as its wavelength, which is given by the de Broglie equation: λ = h/mv, where h is Planck's constant, m is the electron's mass, and v is its velocity. This can also be written: λ = hc/pc, where c is the speed of light, and p is the electron's momentum. It turns out that for electrons in the hydrogen atom, for example, λ ≈ 10⁻¹⁰ meters: about 100,000 times the diameter of a proton" [116].
The change of the electron wavelength (Δλ) depends on the recoil angle (θ), as follows: Δλ = (h/mc)(1 − cos θ) (92). Equation (92) easily shows that the value of the electron wavelength shift (Δλ) can never be zero, that is Δλ → 0, since in that case the electron's mass itself would acquire infinite values [117]. It is another simple response, in our opinion, that also from an arithmetic point of view the electron radius cannot be considered zero, that is null, as point-like.
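The Compton-shift relation can be evaluated directly; a minimal sketch of our own in cgs units, which also makes visible that Δλ scales as 1/m, so a finite shift presupposes a finite electron mass:

```python
# Sketch of the standard Compton shift, delta_lambda = (h/(m_e c))*(1 - cos(theta)).
import math

H   = 6.62607e-27   # erg*s, Planck constant
M_E = 9.10938e-28   # g, electron mass
C   = 2.99792e10    # cm/s, speed of light

LAMBDA_C = H / (M_E * C)   # Compton wavelength, ~2.43e-10 cm

def compton_shift_cm(theta_rad: float) -> float:
    """Wavelength shift of a photon scattered through angle theta."""
    return LAMBDA_C * (1.0 - math.cos(theta_rad))

print(f"lambda_C        = {LAMBDA_C:.3e} cm")
print(f"theta = 90 deg  -> {compton_shift_cm(math.pi / 2):.3e} cm")
print(f"theta = 180 deg -> {compton_shift_cm(math.pi):.3e} cm")  # maximum: 2*lambda_C
```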
We read from Chandrasekhar that "the wave-particle dualism has been demonstrated a number of times, not only for the electron but also for protons, neutrons, atoms, and molecules. This dualism is a universal and fundamental property of matter" [90]. Lloyd states: "A consequence of the wave nature of Quantum Mechanics (QM) is that each (quantum) state corresponds to a wave, and waves can be superimposed" [118]. In fact, the QM equations imply a universal presence of superimposed states. This wave-particle dualism has also been demonstrated for the nature of light. This question can be solved by QM giving the particles, or rather the quantum objects (QO), a wave function of their own, indicated with Ψ, which correctly describes both their wave and particle character [119].
As known, the wave function (Ψ) is a mathematical function which depends on the time (t) and on the position (x) of the particle it refers to. "The function Ψ(x) is usually called the wave function because it more often than not has the form of a complex wave in its variables" [120]. Feynman adds: "The Ψ for a single particle is a 'field', in the sense that it is a function of position" [120]. It is obvious, therefore, that if a QO, i.e. a particle like an electron, represented in QM by its Ψ, occupies a field, it is a contradiction to consider this same field zero.
The Ψ has all the properties of the de Broglie wave associated with the particle itself. In fact, it can also be indicated as the de Broglie wave. As de Broglie reminds us [97], like all particles, the electron too is an oscillating QO, provided with its own λ. That is, like all QOs, the electron is both a particle and an oscillating wave at the same time, the latter represented by λ. It follows that the electron, just because of its continuous oscillation, must necessarily occupy a space in which to oscillate; thus it is incongruous and inconsistent, both mathematically and physically, to continue to attribute a null space to the electron, that is, a radius (a) tending to zero: a→0.
To this purpose, "The Heisenberg Uncertainty Principle states that no particle can be completely motionless (since it is not possible to know two complementary parameters of a particle at the same time); it will at least oscillate around a plane: in this case we will talk about Zero Point Motion" [92]. So, QM forbids that the space occupied by a particle be null, that is, of a value equal to zero, as instead the gauge theories would require: a→0.
According to de Broglie, any particle with a momentum (p) seems to be something periodic, oscillating as a wave, with a universal relation between the λ of the particle and the modulus p of its momentum: p = h/λ, as shown in Eq. (77). Thus, it is clearly and unquestionably deduced that the electron can never have null dimensions, as a point, since in this case even its wavelength would have a null value (λ→0). But this is not possible. It is conspicuously denied by Eq. (77), since the momentum of the electron would acquire infinite values! A simple consequence: the wavelength of the electron, and thus its radius, can never be considered point-like, that is, null.
We wonder: is there a physical-mathematical solution that allows us to avoid such divergences, as an alternative to the Renormalization process?
Probably, we can avoid the emergence of infinities as a result of the various equations treated by QFT if we attribute to the electron, albeit approximately, its real radius value. What is this value? It is not at all easy to determine, first of all because it is not a fixed, constant value, since the electron wavelength (λ), i.e. the extension of the particle in the available space, varies in inverse proportion to its acceleration.
As is known, in the context of classical physics the radius of the electron is 2.8⋅10⁻¹³ cm, supposing that its negative electric charge is evenly distributed in a spherical volume. A very recent publication now reveals that it is not just a hypothesis: very accurate measurements, carried out by the Advanced Cold Molecule Electron Electric Dipole Moment (ACME) experiment, in collaboration with Harvard University, indicate that the electron is indeed spherical [110].
It should also be mentioned the value suggested by de Broglie, who calculated "the radius of the electron superimposable on the wavelength (λ) of an X-ray, equal to 10⁻⁸-10⁻⁹ cm" [121].
We must also keep in mind that the observation of a single electron in a Penning trap suggests an upper limit of 10⁻²⁰ cm for the particle's radius [122]. An upper bound of 10⁻¹⁶ cm for the electron radius [111] can be derived using the uncertainty relation in energy. However, this estimate comes from a simplistic calculation that ignores the effects of QM.
Moreover, from QM we learn that there is no object in Nature, no sub-atomic particle, that is no quantum object (QO), and no length (and hence no particle radius) that may be less than the Planck length (ℓ_P): ℓ_P = √(ℏG/c³) ≈ 1.616⋅10⁻³⁵ m, where G is the gravitational constant. The ℓ_P, that is ≈1.6⋅10⁻³³ cm, represents the smallest possible length, for which even the smallest electron radius (R) can never have a completely zero dimension. It must always be, in any circumstance, R ≥ 10⁻³³ cm! Certainly, we do not believe at all that a particle like the electron, carrying an electric charge and an energy-mass equal to 511 keV, can occupy a null dimension, a null space, that is zero. No! Not at all. It would be like saying that the electron energy-mass had vanished into thin air. No! We would make the same mistake as for the neutrino: a massive particle proposed by Pauli [123] in order to compensate for the mass gap problem emerging from neutron decay [124], but then considered massless (according to the massless Weyl spinor and the Standard Model), until the demonstration of neutrino oscillation (carried out at Super-Kamiokande [125]) imposed the acceptance of a mass (although minimal) also for the neutrino.
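The Planck length invoked above is fixed by three constants; a one-line check of our own, evaluated in cgs units:

```python
# Sketch: the Planck length, l_P = sqrt(hbar*G/c^3), in cgs units.
import math

HBAR = 1.05457e-27   # erg*s, reduced Planck constant
G    = 6.674e-8      # cm^3 g^-1 s^-2, gravitational constant
C    = 2.99792e10    # cm/s, speed of light

l_planck_cm = math.sqrt(HBAR * G / C**3)
print(f"l_P ~ {l_planck_cm:.3e} cm")   # ~1.6e-33 cm, i.e. ~1.6e-35 m
```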
Besides, according to QM, the wave function (Ψ), that is, the quantum state of the particle, represents the way in which we can find the particle when it does not interact, when it is not disturbed, measured, observed. Thus, indicating with t the time, and with x₁,…,x_N the possible positions or space coordinates of the considered particle, we have the formula: Ψ = Ψ(x₁,…,x_N; t) (93).
Hence, before we search for the particle, that is before we measure it, the particle is spread throughout the available space, as if to each point there were associated a precise value of the probability density of finding it. According to QM, before the measurement the wave and particle aspects are not at all defined: the square of the modulus of Ψ, that is |Ψ|², has to be interpreted as a distribution, as the density of probability of finding the particle, its quantum state, in one of the several possible positions. In other words, the square modulus of the Ψ of a QO is a measure of the probability (and only the probability) that the QO is, at a specific moment, in a certain position in space [22]. QOs do not have defined properties until we observe them, until we make a measurement. We can just presume their structure and behavior approximately, but we have no certainty.
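The statement that |Ψ|² behaves as a probability density can be illustrated numerically. The following is a toy sketch of our own (the one-dimensional Gaussian wave packet is an assumption chosen for illustration, not taken from the source): once Ψ is normalized, the discretized densities integrate to 1.

```python
# Toy illustration: Born's rule in discrete form. We sample an assumed
# 1-D Gaussian wave packet, normalize it, and check that the probability
# densities |psi|^2 integrate to unity over the grid.
import math

dx = 0.01
xs = [i * dx for i in range(-500, 501)]
psi = [math.exp(-x * x / 2.0) for x in xs]        # real-valued for simplicity

norm = math.sqrt(sum(p * p for p in psi) * dx)    # L2 normalization
density = [(p / norm) ** 2 for p in psi]          # |psi|^2: a probability density

total = sum(d * dx for d in density)
print(f"integral of |psi|^2 dx = {total:.6f}")    # ~1.000000
```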
Before the measurement, the phase of Ψ gives the QO its "wave-like character", since the Ψ is diffused in the space occupied by the particle it refers to. This condition of the Ψ, indicated as the unitary linear phase U, or U Process, has been brilliantly described by Schrödinger through his famous wave function equation for the electron [126][127]. This U Phase corresponds to the "Process 2" described by von Neumann [128].
Therefore, QM does not provide us with any useful information on a QO during its phase of linear evolution U, that is, during the whole time it lives and travels undisturbed, delocalized, propagating as a superposition of quantum states. The U Phase persists until one tries to interact with the QO, i.e. with the particle, i.e. when one tries to measure it. In this case, the measured QO immediately undergoes a quantum jump, resulting in the collapse of its Ψ (|Ψ⟩): the Wave Function Collapse, whereby the particle undergoes the Reduction of the state vector (R Process). With the R Process the state vector, represented by |Ψ⟩, jumps to another state vector, say |φ⟩, which represents one out of two or more orthogonal alternative possibilities: the others can be |q⟩, |X⟩, etc., depending on the kind of observation, the kind of measurement carried out. The jump of the wave function, or Wave Function Collapse, induced by any measurement, occurs with probability: |⟨φ|Ψ⟩|² (94). Therefore, only during the R Process, which is extremely short, can we pick up the particle. Only during the R Process does the QO, its Ψ, concentrate in a well-specified point in space, appearing as a corpuscle: now it is localized, just as a corpuscle. In short, it is particularly important to note that whenever a subatomic particle interacts with the surrounding matter, its Ψ faces a quantum jump, passing immediately from the U Phase to the R Process: this creates a drastic change in the QO's behavior (from wave-like to corpuscle-like). In other words, according to QM we will never be able to have information about the aspect and the properties of a QO until it is observed. It is thought that, before the measurement, the electron could be found potentially in any one of the several points of its wave volume, each corresponding to a probability amplitude, to a probability density.
This is in clear and blatant conflict with continuing to consider that an electron can occupy a null volume, that is, that its radius (a) is zero (a→0), which causes the known divergences in the gauge theories and in the QFT equations.
It is likely that, before the measurement, the electron is not determined and, in agreement with Lloyd [118], that it should be characterized by a superposition of quantum states. Every time a measurement is carried out, the observed particle undergoes a probabilistic reduction of the state vector, indicated as the Reduction Process, or R Process, which corresponds to the "Process 1" described by von Neumann [128].
With the measurement, the collapse of the particle's wave function (Ψ) takes place, so now our particle will be detectable at a precise point, and at the same time the other probability amplitudes, according to which the particle could be spread over other points of the space it could occupy, will disappear. In fact, the Wave Function Collapse is also called Reduction of the Amplitudes. When the Ψ of the electron collapses, it is delimited to a specific point: the particle is localized, its position is detected. The electron will now show itself completely as a particle, and it is in fact observed in its corpuscular aspect. A corpuscle is, indeed, something concentrated at a precise point of space. As Penrose reminds us: "It is clear that the Ψ is something more real than a simple probability wave. The Schrödinger equation gives this entity (for both charged and non-charged particles) a precise evolution in time, an evolution that depends critically on how the phase changes from one point to another. If we ask a Ψ where the particle is, carrying out a position measurement, we have to expect to lose this information on the phase distribution. After the measurement, we have to restart with a new Ψ. If the result of the measurement says that the particle is here, the new Ψ has to be a very high crest in that position, but then it disperses quickly according to the Schrödinger equation" [22]. Thus, the measurement induces the collapse of the Ψ of the particle we want to examine, so that it will pass from a wave-like behavior to a corpuscular aspect [129]. The measurement, in fact, produces significant changes in the physical properties of the observed particle, of the measured QO, as well as in its morphological configuration.
Before being hit by EM radiation, according to QM the particle is a mathematical quantity known as a quantum state, or |Ψ⟩, that should contain all the information necessary to describe the considered quantum system. When it exists in this phase (U Phase), undisturbed, the particle will not give any information concerning its appearance and contents. To this purpose, Prigogine asks himself: "Is unobserved nature different from observed nature?" [130]. It seems so! In fact, as soon as we try to see it, the observed particle immediately changes its appearance, its quantum configuration, and its trajectory [131]. Therefore we can only try to imagine it: we can say that the particle occupies a volume, it travels like a wave, in a combination of several overlapping and widespread quantum states, spread in the whole space it can occupy, a space that according to Penrose [22] should be the Hilbert Space. Therefore the Hilbert Space should be a real, objective space: the space to be occupied by a QO. Furthermore, the principle on which the operation of electron microscopes is based should not be overlooked.
As we know, "an ordinary light microscope uses photons of light, which are equivalent to waves with a wavelength of roughly 400-700 nanometers. But if you want to see finely detailed things that are "smaller than light" (smaller than the wavelength of photons), you need to use particles that have an even shorter wavelength than photons: in other words, you need to use electrons. In an electron microscope, a stream of electrons takes the place of a beam of light. An electron has an equivalent wavelength of just over 1 nanometer, which allows us to see things smaller even than light itself (smaller than the wavelength of light's photons)" [132].
Yet, according to the gauge theories and the QFT, the electron, if it were not for the providential Higgs Mechanism, would be massless and would occupy a null volume, since the electron radius r→0. In this regard, we read: "However, one might think that no electrical charge is actually point-like and that the problem is simply due to a mathematical abstraction" [133]. Passera adds: "With reference to the problem of infinities, just think about the energy of the electric field of a charged sphere whose radius (r) tends to zero: r→0; i.e. the energy →∞, diverges, as 1/r. For the theory of Special Relativity, part of the mass of the sphere comes from the (divergent!) energy contained in the surrounding electromagnetic field (EMF)" [133].
Finally, let's try to calculate mathematically and physically the actual value of the electron radius. In this regard, we consider the value of the electron energy-mass density in its state of minimal energy, or inertial mass (m₀), which in the cgs metric system corresponds to 9.109383⋅10⁻²⁸ [g]. This is the value that, in our opinion, should be inserted in the equations of the Perturbative Calculus and of QFT (QED included) to represent the radius of the electron (a), replacing the null value that has been considered so far. It is obvious that, no longer dividing by a zero value, the infinities and divergences will disappear.
It is clear that, as with all material particles, the more the electron is accelerated, the more its wavelength will shrink, but without ever reaching zero or values close to zero!

On the MASSLESS YANG-MILLS b Quantum
Yang and Mills were convinced, just for "physical reasons" [51], that the gauge particle of the b field, represented by the b quantum and today described as the gluon (G), could not be massless. This is in agreement with our line of thought and our calculations, which involve a non-massless G. Also with regard to the value of this mass we are not in disagreement with Yang and Mills, who consider it at least greater than the mass of the pions, which, as Yang himself says, corresponds to "134.97 MeV for the π⁰, and to 139.58 MeV for the π±" [59]. As is known, with the introduction of the concepts of Gs and quarks (Qs), the problem of the Qs' symmetry in baryons arose, for spin and flavor.
This problem finds a natural solution if we assume that a Q of a given flavor has an additional quantum number (the so-called color), which takes three values. It is possible to satisfy the Pauli Exclusion Principle if we assume that the baryons are in the state completely antisymmetric in the new quantum numbers, an invariant configuration under color transformations (color singlet) [1].
Maiani adds: "In 1965, Han and Nambu gave an elegant formulation of this hypothesis, introducing an SU(3) symmetry that operates on the color indexes and hypothesizing that the color symmetry was a gauge symmetry and the gluons were Yang-Mills fields associated with the color itself" [1][134]. Han and Nambu state: "It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically under the constraint that the Nishijima-Gell-Mann relation remains intact" [134].
In short, even if Yang and Mills did not quantify the mass of the b quantum, they were still convinced that it has a mass at least higher than the pions'. Otherwise, a b quantum with a mass similar to or lower than the pions' would have already been easily detected [51]. This is another crucial point which, in our opinion, further contributes to refuting those who want to continue to consider massless the G, i.e. the b quantum of Yang and Mills, as well as all the other "particles having charge" [58] which, as Yang states, "could not be massless" [58].

YANG-MILLS MASS GAP PROBLEM
At this point, one may wonder: what, then, is the mass of the Yang-Mills b quantum, now identifiable with the G? Well, the Yang-Mills b quantum, following their reasoning, must have a mass-energy density certainly higher than the pions'. In fact, being the vector of a nuclear force, and therefore acting exclusively within the very restricted nuclear space, the mass of the b quantum cannot be much lower than the values found for the Weak Nuclear Interaction (WI) bosons, corresponding to 80.4 and 91 GeV/c².
But reflecting further, if it had similar values, the G, or Yang-Mills b quantum, would have already been detected. The next step, among experimentally detected massive particles, corresponds to the values given, at CERN, to the Higgs Boson (HB): 125-126.5 GeV/c² [78]. In this case, as previously mentioned, there can be no absolute certainty, in the Galilean scientific sense, that the decay products highlighted at the CMS and ATLAS detectors are not attributable to a possible massive G, that is, to a massive Yang-Mills b quantum. Otherwise, by accepting the existence of the HB and, therefore, the BEH Mechanism, in agreement with Randall [21] we could hypothesize the existence of another HB, capable in this case of giving mass to the particles sensitive to the Strong Nuclear Interaction (SI). Thus, we could also imagine a slightly larger mass, conveyed by the b quantum. In such an eventuality, we must wait for experiments to be performed at even higher energies. The next step, among the particle surveys carried out, corresponds to that of the top Quark (tQ): 177 GeV/c². Even in this case, are we 100% sure that those decay products belong to the tQ?! Considering a massive b quantum, it cannot be completely excluded that those decay products are referable to the G. Therefore, it is likely that a possible massive Yang-Mills b quantum can carry an energy-mass density between 125-126 and 177 GeV/c² (maybe even a little beyond). What would be its radius of action, i.e. its range?

1)
At this point, we start from the alleged lower value, which should correspond to the range of the decay products detected at the LHC [78]. In this regard, we made some calculations to evaluate the possible radius of action of the HB. It was very simple, as we knew the mass. Therefore we applied the well-known Yukawa Principle, represented in Eq. (61). In this respect, one wonders: where does the HB take all this mass-energy from? From its field, that is the field in which it is immersed: the Higgs field (HF). According to QFT, the higher the value of the mass of the particle, i.e. the more energy (ΔE) taken from the field, the sooner (Δt) the energy must be returned to the field itself [135]. This is an inviolable rule of Quantum Mechanics, dictated by the Heisenberg Uncertainty Principle (HUP): ΔE ⋅ Δt ≥ ℏ/2 (98). Eq. (98) shows that time and energy are inversely proportional. That is why the higher the energy borrowed, that is subtracted from the field, the sooner this energy must be returned. At this point, we take into account the principle of mass-energy equivalence (MEEP), represented in Eq. (62). Therefore, by replacing the value of E in Eq. (62) with that of Eq. (98), we obtain: Δt ≈ ℏ/mc² (99). As Fermi reminds us, this "is the time in which the boson issued may remain in free space. If then it is assumed that its speed is the maximum speed at which a particle can move, that is the speed of light (c), it is seen that the maximum distance (d) it can reach, before being recalled to settle the debt, is given, as an order of magnitude, by the product of the time (t) and the maximum speed at which the particle can move" [136], namely: d ≈ c ⋅ Δt ≈ ℏ/mc (107). So, the value expressed by Eq. (107) represents the maximum limit of the HB range, i.e. the maximum distance (d) passable by the HB before it returns the energy to the field in which it is immersed, namely the HF.
Our calculations reveal a really very small range for the HB, slightly smaller than 10−15 [cm], but this value is justified by the considerable mass that the HB acquires. This is certainly a very small value, which shows a very marked spatial limitation of this boson, but these are the rules imposed by Quantum Mechanics (QM) through one of its most profound concepts: the HUP. Therefore the range of the HB will never exceed the distance expressed by Eq. (107); otherwise, the HUP would be violated [135].
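The arithmetic behind this estimate can be sketched numerically. Since Eqs. (98)-(107) are not reproduced here, the sketch assumes, as the quoted figures suggest, that the range is read as d = h/(mc) (the full, non-reduced Compton wavelength), in cgs units:

```python
# Sketch of the range estimate via the Yukawa/HUP argument, in cgs units.
# Assumption: the ranges quoted in the text correspond to d = h/(m*c),
# i.e. the full (non-reduced) Compton wavelength of the boson.
h = 6.62607e-27         # Planck constant [erg*s]
c = 2.99792458e10       # speed of light [cm/s]
GEV_TO_G = 1.78266e-24  # 1 GeV/c^2 expressed in grams

def yukawa_range_cm(mass_gev):
    """Maximum distance d = h/(m*c) a virtual boson of given mass can travel."""
    m = mass_gev * GEV_TO_G
    return h / (m * c)

d_HB = yukawa_range_cm(125.0)      # Higgs boson, lower end of the mass window
print(f"HB range: {d_HB:.3e} cm")  # ~9.9e-16 cm, i.e. slightly below 1e-15 cm
```

With this convention the heavier tQ (177 GeV/c2) gives a proportionally shorter range, in line with the inverse-mass behavior discussed below.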
This very narrow range of the HB (a particle with a considerable mass) is perfectly congruent and in full agreement with the Yukawa Principle, according to which the range of a fundamental force must be inversely proportional to the mass of its bosons. We have further confirmation of these concepts from the bosons of the WI, whose mass is notoriously lower than that of the HB [83]. As it is easy to see, even with nuanced mass differences, the range changes. Thus the most massive particle, the Z°, has a barely narrower range [137].
And yet the SI boson, i.e. the G, which also operates exclusively at the intra-nuclear and intra-nucleonic level, i.e. in the same spaces in which the WI operates, is considered massless! Indeed, in all university Physics books we find that the G, operating in the very narrow space of a nucleon, is massless. Fermi and Yukawa would turn in their graves.
Likewise the Yang-Mills b quantum, now identifiable with the G, is considered massless, although Yang and Mills themselves, like so many authoritative authors, were firmly convinced of the massiveness of this particle [51] [58] [1] [63] [76]. Then why did they accept this compromise?
Because the mathematical formalism of gauge invariance is used, a formalism in which the mass of the particles upsets the equations: the mass breaks the symmetry. In order to deal with these problems, as is known, the gauge theories require that all the particles be massless. Then, since 1964, with the invention of the BEH Mechanism and the HF, various particles can acquire mass by interacting with this field, but not all: only those sensitive to the WI. Therefore the G, or b quantum, being sensitive to the SI but not to the WI, remains massless! It is indeed unacceptable, in our opinion, to continue to support the notion that a particle such as the G, operating exclusively within a space ≤ 1 fermi, can be massless. Thus, a massless b quantum is also in open conflict with the HUP.
To this purpose, Feynman says: "No one has ever found (or even thought of) a way around the HUP. So we must assume that it describes a basic characteristic of nature" [120]. Besides, as Hawking adds: "HUP is a fundamental, inescapable property of the world" [77].

2)
As reported, the next step among the detected particles more massive than the HB is the top Quark (tQ): 177.16 GeV/c2. A very high value, which really left the researchers baffled, since they did not expect such high values attributable to a Q.
Even in this case, knowing the mass of the particle, it is very easy to obtain the action radius: dtQ ≈ 7⋅10−16 [cm] (108). This is the maximum distance that a particle with such a mass can travel before having to return the energy loan to the field in which it is immersed. If a possible massive b quantum (or G) were to have values corresponding to those detected for the tQ, it would operate in really very small spaces, barely above the size of the quarks, considered to be around ≈10−16 [cm].
Furthermore, comparing the distances (d) that can be traveled by particles of different mass, i.e. particles with mass corresponding to that of the HB or of the tQ, we can note that the Yukawa Principle is perfectly respected. The heavier particle, in fact, shows a smaller radius of action.
In short, we believe that the most likely and appropriate solution is to consider the range of the b quantum as included between the values calculated for the HB and the tQ. At most, we can consider the possibility that the G, that is to say the b quantum [134], may have a mass slightly higher than that detected for the tQ, but not too much higher. Otherwise, its range would be narrowed beyond acceptable limits. Indeed, it is difficult to imagine that the range of the b quantum can be < 6-6.5⋅10−16 [cm], that is, close to the size of the Q.
Thus, if the mass of the G corresponds to ≈190.786 GeV/c2, its radius of action (dbq) is: dbq = 6.5⋅10−16 [cm] (110). It is less likely that this range is even shorter; however, if the b quantum had a mass of ≈206.7 GeV/c2, its range of action would drop to: dbq = 6⋅10−16 [cm] (111). Consider that this range corresponds to the space occupied by 6 Qs arranged in a row and contiguous, which is not possible, since between the Qs, as among other fermions, there is always a space, something interposed between them, which separates them. We find it incongruous, we repeat, to go down to even lower ranges, since too-narrow spaces would not be sufficient, in our opinion, for the SI to have the space and time to carry out its various tasks. This is because, in compliance with QM and the Yukawa Principle, as the boson mass increases, its range and lifetime decrease in parallel [72][136].
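The two figures just quoted, Eqs. (110) and (111), can be checked term by term under the same assumed convention (range d = h/(mc) with the full Planck constant, cgs units; the conversion 1 GeV/c2 ≈ 1.783⋅10−24 g):

```latex
d_{bq}(190.786\,\mathrm{GeV}/c^{2}) = \frac{h}{mc}
  = \frac{6.626\times10^{-27}\,\mathrm{erg\,s}}
         {(190.786)(1.783\times10^{-24}\,\mathrm{g})(3\times10^{10}\,\mathrm{cm/s})}
  \approx 6.5\times10^{-16}\,\mathrm{cm} \quad (110)
\\[4pt]
d_{bq}(206.7\,\mathrm{GeV}/c^{2})
  = \frac{6.626\times10^{-27}}
         {(206.7)(1.783\times10^{-24})(3\times10^{10})}
  \approx 6.0\times10^{-16}\,\mathrm{cm} \quad (111)
```

Both results scale inversely with the mass, as the Yukawa Principle requires.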

LIFETIME of a MASSIVE b quantum
Also, let us see if our theory and our calculations are still in agreement with Yang and Mills regarding the lifetime of the b quantum, i.e. their "less than 10−20 sec" [51]. We believe that the most likely mass for a massive b quantum is between 125 and 177 GeV/c2. However, even considering for the b quantum a higher mass, roughly equal to ≈191 GeV/c2, in the cgs metric system it corresponds to ≈3.4⋅10−22 [g]. At this point we insert this value into Eq. (99) and get the possible value of the lifetime (t) of the G: t ≈ 2.168⋅10−26 [sec] (115). This value, inherent to the lifetime of a G (i.e. the Yang-Mills b quantum), corresponds exactly to the maximum term of the loan granted under the HUP, as shown by Eq. (98). Thus, the life of the G corresponds to an infinitely short time, about 3 orders of magnitude shorter than the common decay times governed by the SI, and 3 orders of magnitude shorter than the time it takes light to cross an atomic nucleus. Therefore, because of its extremely short existence, even with the most powerful particle accelerators it is really difficult to detect the trace of a G! However, although in agreement with the forecasts of Yang and Mills, it is a really very short time.
In fact, about these bosons, Fermi points out: "its speed is the maximum speed at which a particle can move, that is the speed of light" [136]. On the contrary, a less massive G would have a slightly longer lifetime. Therefore, it seems more reasonable to expect a massive b quantum between 125 and 177 GeV/c2. Thus, we take the intermediate value between those of Eqs. (115) and (116), obtaining: t = 2.7323(±0.5643)⋅10−26 sec (117). Therefore, these should be the lifetimes of a b quantum provided with a mass likely between the weight of the HB and that of a particle barely heavier than the tQ. The HUP does not allow the lifetime of the b quantum to go beyond these limits, since the G must immediately return the energy loan. Consequently, even the distances (dbq) this particle travels, as represented in Eqs. (107)(108)(109)(110), do not exceed the very limited space of 10−16 [cm]! At this point, we need to make a consideration: very probably, it is precisely the very short lifetime of the b quantum (probably among the shortest found for a particle) that determines the Qs Confinement, the Gs Confinement, and the Colors Confinement! And why? It is obvious: because with the very short lifetime of the b quantum comes, in parallel, the reduction beyond measure of the space granted to and practicable by the Qs and Gs, since the HUP imposes that the energy debt must be repaid in the shortest possible time.
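The lifetime figures can be reproduced under a matching assumption about the convention of Eq. (99), namely t = h/(mc2) with the full Planck constant. Averaging over the mass window from 125 to ≈191 GeV/c2 recovers the quoted midpoint:

```python
# Sketch of the HUP 'loan term' t = h/(m*c^2) in cgs units.
# Assumption: the text's Eqs. (115)-(117) use the full Planck constant h.
h = 6.62607e-27          # Planck constant [erg*s]
GEV_TO_ERG = 1.60218e-3  # 1 GeV expressed in erg

def lifetime_s(mass_gev):
    """Maximum lifetime allowed by the HUP for a virtual boson of rest energy m*c^2."""
    return h / (mass_gev * GEV_TO_ERG)

t_low, t_high = lifetime_s(190.786), lifetime_s(125.0)
t_mid = 0.5 * (t_low + t_high)
print(f"t in [{t_low:.3e}, {t_high:.3e}] s, midpoint {t_mid:.4e} s")
# midpoint ~2.7e-26 s, consistent with the t = 2.7323(+/-0.5643)e-26 s quoted above
```

Note also that the heavier candidate has the shorter lifetime, as the HUP demands.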
On the contrary, a massless b quantum (or G) would exert its action over infinite distances! This is imposed by the mathematical formalism adopted by the gauge theories and QFT: it is really paradoxical.
Therefore the thesis of a massive G (supported, inter alia, by the presented calculations) should not provoke denial and/or surprise: a concept that, through the methods indicated above, is not in conflict with the Standard Model, just as no surprise arose from the finding that the gauge bosons of the electroweak force were of considerable mass. That is, we consider it more than adequate and likely to expect a more symmetrical and equal behavior of the BEH-M towards the weak field and the strong field. On the other hand, the possibility has often been considered of extending the BEH-M also to the Yukawa potential and therefore to the strong field, so as to be able to explain the mass of the fermions too, without breaking the gauge symmetry, but through a spontaneous symmetry breaking (SSB).
In this way, the HF could also be responsible for the mass of the fermions and related bosons, through the extension of the BEH-M to the strong field and to the SI. That is, when the HF acquires a non-zero Vacuum Expectation Value, while continuing to maintain gauge compatibility, it induces the SSB of chiral symmetry, with the consequent appearance in the Lagrangian of parameters describing, in field mode, the mass of the corresponding fermions and of the bosons carrying the SI, i.e. the mass of the G. This should not be considered an unrealistic or science-fiction hypothesis. In fact, "Steven Weinberg considered it promising to use the ideas of symmetry breaking in a Yang-Mills theory to describe the SI.
At first, while trying to match the particles with and without mass, which appeared in his theory, with known strong-interaction particles, his efforts seemed in vain. Then, as Weinberg recounts: 'I realized that I had applied the right ideas to the wrong problem.' He realized that the massless particle he needed was the photon, and that the particles with mass were the particles of the weak field.
Thanks to Weinberg, the Electromagnetic and Weak Interactions could be described in a unified way in terms of an exact gauge symmetry, but spontaneously broken" [52].
Yet we believe that the path initially undertaken by Weinberg, that is, the application of an SSB to the Yang-Mills Theory to describe the SI, was not at all "wrong", but equally valid. In other words, it would be much more natural, simple, and symmetrical if the SSB were applicable to both the WI bosons and the SI bosons. In this case it would be enough for the HF to behave in a more symmetrical and equal manner, interacting both with the WI-sensitive particles and with those sensitive to the SI: either through a single BEH-M or through two different BEH Mechanisms.
In this way, the possible contrasts between the conjectures of the gauge theories and Einstein's MEEP would disappear, as well as the manifest contradictions between a massless G and the Yukawa Principle. A massless G, in fact, would give the SI an infinite range of action! Likewise, as occurs with the SSB of the electroweak field, in the case of a probable SSB of the strong field the Lagrangian (or the Hamiltonian), which defines the considered physical system, is invariant with respect to a possible transformation group, such as a rotation or a translation of the system itself. Thus, the geometric figure represented by the likely equilateral triangle (with the 3 valence Qs at the vertices) or the plausible sphere-like aspect of each Q are to be considered perfectly symmetrical, since, if they are subjected to a spatial transformation, their shape will not change: they have a geometric symmetry, perfectly in agreement with the Standard Model.

QUARK CONFINEMENT
Our calculations concerning the possible range and lifetime of the Yang-Mills b quantum may help us try to answer the question of the confinement of Quarks (Qs). Since the Qs are kept together by Gs, the distance between one Q and another can never exceed the action radius of the G! It is categorically prohibited by Quantum Mechanics (QM) through the HUP: the energy debt must be repaid in due time. We find it very important to emphasize that the G's range should represent the keystone of the Qs Confinement phenomenon.
In order not to exceed the distance of that radius, and having to stay more or less equidistant from one another, the 3 Qs contained in a nucleon should tend to draw, very roughly, a geometric figure in the shape of a triangle, rotating on itself together with the nucleon, and tending to shrink, in a more or less uniform way, in those brief moments in which the Qs are a little closer to one another. Obviously the 3 Qs would occupy the vertices of that triangle, and the radius of the G would constitute the sides. Since the maximum value of that radius is the same among the various Qs, we infer that the geometric figure drawn should approximately correspond to an equilateral triangle, obviously characterized by a continuously deformable perimeter. In this way, each Q can exchange the G in the same way with the other 2 Qs. As a result of the rotation of the hosting nucleon and its intrinsic angular momentum, and similarly to the centrally arranged atomic nucleus, it seems more justified to imagine this triangle of Qs rotating quite centrally within the hadron, thus leaving a huge space in the rest of the nucleon. It is as if this triangle constituted the core of the nucleon, with the difference, compared to the atom, that there are no orbiting particles such as electrons: likely this space is filled by the so-called protonic sea [21], i.e. a sea of Gs and virtual Qs. It is a very rough equilateral triangle, whose sides, as mentioned above, should match the G's, or b quantum's, range. It is obvious that we do not know this value (dbq). We can try to deduce it from the experimental evidence, according to which this range could correspond to the one calculated for the HB, or for the tQ, or to an intermediate value which, in this case, would be: dbq = 8.4414(±1.4414)⋅10−16 [cm], as shown in Eq. (109).
In fact, we think it is very likely that the b quantum's range is included between these values. However, just to leave some wider margins, we can consider that the maximum G's range, acceptably congruous, should correspond to that of a massive b quantum equal to almost 207 GeV/c2, as expressed in Eq. (111). Therefore, we consider the intermediate value between that of Eq. (110) and that of Eq. (107).
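As a purely illustrative geometry sketch (not a claim from the source), if the three Qs sit at the vertices of an equilateral triangle whose side equals the G range assumed in Eq. (109), elementary geometry gives the radius of the circle on which they would rotate:

```python
import math

# Illustrative sketch: three quarks at the vertices of an equilateral
# triangle whose side equals the assumed gluon range d_bq (Eq. (109)).
d_bq = 8.4414e-16  # assumed G range [cm]

# Distance of each Q from the triangle's center (circumradius = side/sqrt(3)).
circumradius = d_bq / math.sqrt(3)
# Area swept by the triangle [cm^2].
area = (math.sqrt(3) / 4) * d_bq**2

print(f"circumradius ~ {circumradius:.3e} cm")
print(f"triangle area ~ {area:.3e} cm^2")
```

The circumradius, about 4.9⋅10−16 cm, gives a rough measure of how far each Q sits from the center of the rotating triangle described above.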
In short, it really seems to us that it is the action radius of the G that confines the Qs in a very narrow space, more or less decentralized within their nucleon, arranging them probably at the vertices of an equilateral triangle. A highly deformable triangle, both because of its rotational motion and because of the varying intensity of the force exerted by the SI, in relation to the continuous variation of the distance between the Qs, which are provided with a relative autonomy of motion (exclusively within that space). A very small space, and one that always remains such, at least as far as its maximum limit is concerned, which can never exceed the G range, as illustrated in Eq. (110).
If we compare this equation with Eq. (127), it immediately emerges that the space in which the Qs are confined corresponds to just about 1.6% of the space they occupy within the nucleon. So there is an analogy with the atom, even if the nucleus occupies about 0.0000001% of the atomic space, a space composed of >99.99% vacuum (excluding the orbiting electrons). Inside the nucleons, on the other hand, the space in which the Qs do not operate, and which should correspond approximately to 98.4% of the nucleonic cavity, is not really empty but, as reported above, is filled by "a sea of Gs and virtual Qs" [21].

ASYMPTOTIC FREEDOM
We know that the Qs interact only weakly when they are very close, yet they cannot be separated: no Q has the ability to break away from its hadron. Thus, the 3 Qs confined within the nucleon have a certain autonomy of movement. This independence of the Qs at very short distances, within a very small area, defined asymptotic freedom, was described simultaneously by Gross and Wilczek [138][139] and Politzer [140]. As is known, these authors' calculation concerns the determination of the trend, at high transferred momentum q2, of the effective coupling constant of a non-Abelian theory such as Quantum Chromodynamics (QCD). The result shows that the effective coupling, g2, asymptotically tends to zero as q2 increases, Eq. (129).
The interactions between Qs, extremely intense at energies and transferred momenta of the order of q2 ~ 1 GeV2, decrease in intensity and tend asymptotically to a situation in which the Qs behave as if they were free: a property admirably defined as the asymptotic freedom of the Strong Interactions (SIs).
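Since Eq. (129) is not reproduced here, a minimal numerical sketch of this logarithmic fall-off can be given with the standard one-loop running of the strong coupling, αs(q2) = 12π / ((33 − 2nf) ln(q2/Λ2)), used as an assumed stand-in; the values of ΛQCD and nf below are likewise assumptions of the sketch:

```python
import math

# One-loop running of the strong coupling: a standard stand-in for the
# logarithmic behavior of Eq. (129). Lambda_QCD and n_f are assumed values.
LAMBDA_QCD_GEV = 0.2   # assumed QCD scale [GeV]
N_F = 5                # assumed number of active quark flavors

def alpha_s(q2_gev2):
    """One-loop effective coupling; tends to zero as q^2 grows (asymptotic freedom)."""
    return 12 * math.pi / ((33 - 2 * N_F) * math.log(q2_gev2 / LAMBDA_QCD_GEV**2))

for q2 in (1.0, 100.0, 10000.0):
    print(f"q^2 = {q2:>8.1f} GeV^2 -> alpha_s = {alpha_s(q2):.3f}")
```

Running the loop shows the coupling shrinking monotonically as q2 grows, which is exactly the behavior described in the text: intense interactions at q2 ~ 1 GeV2, quasi-free Qs at large q2.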
The result, at the time completely unexpected, justifies the scaling relations in the cross-sections observed at the end of the 1960s and in the early 1970s in the electron-proton and neutrino-proton diffusion processes at high transferred momentum [1].
In this regard, Maiani points out that the scaling laws, already identified by Bjorken [141], were interpreted by Feynman as indicating that, in the diffusion of the electron or neutrino, the Qs behave as if they were free from the interaction. Feynman, in fact, had observed that about 50% of the proton momentum is carried by electrically neutral particles, which in QCD are precisely associated with the Gs [134]. The experimentation on the electron and neutrino diffusion processes at large transferred momenta (deep inelastic scattering) has led to the conclusion that the proton (and every other hadron) participates in processes with great transferred momentum as if it were made of an incoherent ensemble of Gs, Qs and Q̅s of the different flavors, each characterized by a function, Pi(x), which gives the probability of finding in the proton a parton of a given type i = u, d, s,…, g, with a fraction x of the momentum of the proton itself [1].
As known, the Pi(x) are called the structure functions of the proton. The logarithmic approach to asymptotic freedom, Eq. (129), implies that the scaling relations are affected by logarithmic corrections, which have been accurately calculated by different authors, including Altarelli and Parisi [142] and Dokshitzer [143], and then compared with experimental data at gradually increasing energies.
Quigg adds: "We can think of a hadron as a bubble in which Qs are imprisoned. Inside the bubble, Qs move freely, but they cannot escape. It is believed, that is, that the SI, or Color Force (CF), undergoes a shielding action, since the virtual color charges (represented essentially by QQ̅ pairs) fill the intra-hadronic vacuum.
In other words, a Q, which also carries a color charge, attracts the opposite colors, thus surrounding itself with a shield, which reduces its effective charge with distance. However, this shielding effect is counteracted by the so-called masking effect, whereby a Q radiates and continuously reabsorbs Gs, which take its color charge to considerable distances and change its color. The charge exerts its complete action only outside the space it occupies. Therefore the masking increases the force acting on a real Q as it moves away from the first Q towards the border of the region with color charge. The net result of shielding and masking is that at short distances the SI, based on the color charge, is weaker, while at longer distances it is more intense" [144].
The QCD equations [145] state that Qs are perennially confined and asymptotically free. QCD gives us a model of the SIs described by a non-Abelian gauge Lagrangian, very similar to that describing the electromagnetic field:
ℒQCD = −(1/4) F(a)μν F(a)μν + Σq Ψ̅qi (iγμ (Dμ)ij − mq δij) Ψqj (131)
where F(a)μν is the non-Abelian gauge-covariant curl, F(a)μν = ∂μAaν − ∂νAaμ + gs fabc Abμ Acν. It represents the tensor (curl) describing the intensity of the strong field (it is an antisymmetric tensor). The Ψqi fields are the Dirac (Ψ) and anti-Dirac (Ψ̅) spinors, indicating respectively the Q and Q̅ color multiplets, of which i characterizes the color (the new parameter introduced with QCD) and q the flavor; the γμ are Dirac matrices; m is the Q-mass matrix; the index μ runs over all the possible space-time positions occupied by the particles, while (Dμ)ij is the gauge-covariant derivative:
(Dμ)ij = δij ∂μ − igs Σa (λaij/2) Aaμ (132)
where gs is the coupling constant of the SI, or CF, and fabc are the structure constants of the symmetry group, called SU(3). The coefficients λaij are coupling constants in matrix form, the Gell-Mann matrices. These matrices obey the following commutation relation:
[λa, λb] = 2i fabc λc (133).
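As a quick numerical sanity check of the commutation relation of Eq. (133), written here in the standard normalization [λa, λb] = 2i fabc λc (an assumption about the text's convention), one can verify the classic case f123 = 1 with plain 3×3 complex matrices:

```python
# Numerical check of the Gell-Mann commutation relation [la, lb] = 2i f_abc lc
# for the classic case f_123 = 1, using plain 3x3 complex matrices.
I = 1j

l1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l2 = [[0, -I, 0], [I, 0, 0], [0, 0, 0]]
l3 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(3)] for i in range(3)]

def trace(a):
    return sum(a[i][i] for i in range(3))

# [l1, l2] should equal 2i * l3, i.e. f_123 = 1.
c12 = commutator(l1, l2)
ok = all(abs(c12[i][j] - 2 * I * l3[i][j]) < 1e-12
         for i in range(3) for j in range(3))
print("[l1, l2] == 2i*l3:", ok)
# Standard normalization tr(la*lb) = 2*delta_ab, e.g.:
print("tr(l1*l1) =", trace(matmul(l1, l1)))  # 2
```

The same helper functions can be used to check any other fabc entry by substituting the corresponding pair of matrices.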
In this regard, it seems useful to point out that the strong field, placed inside the hadrons, is a quantum field (in itself symmetrical, like all matters pertaining to the Fundamental Interactions) capable of preserving a local gauge symmetry, which persists even after a partial transformation of the field itself.
As it is known, leptons do not carry any color, that is, they have a null color charge; thus they are not sensitive to the Color Force (CF), or SI. The color is a label for the states of an exact SU(3) representation [149]. Let us consider, for instance, a red Q scattering on a green Q: what happens is that the color is exchanged and always conserved. As for the Gs, we expect 9 of them (3 colors × 3 anti-colors). However, the Gs are orthogonal combinations of the 9 states. The combination (rr̅ + gg̅ + bb̅)/√3 is colorless (it is a color singlet), so it is not sensitive to the SI, or CF: we are left with 8 Gs [149]. In fact, according to QCD, a gauge theory is applied with symmetry group SU(3), in which the Qs are triplets belonging to this group, and in which the Isospin and the strangeness of the considered particles are also taken into account; that hypothesis, due to Gell-Mann, was also called the Eightfold Way, since it includes 8 independent generators (the 8 Gs). These Gs, or bosons of the SI or CF, are indicated with Aa in Eqs. (131) (132) and correspond to the Yang-Mills fields. We now know that even the color is confined and that only the color singlet states are able to exist as free particles. A massless G, color singlet, would give rise to a long-range SI [149]. But such forces have never been observed, so we have to admit that there are only 8 physical Gs, color carriers, and thus permanently confined! Furthermore, since the Qs also carry color, the Qs are permanently confined too! Thus, free particles can only be made as color singlets, such as qq̅ (mesons) and qqq (baryons) [149]. Therefore, as can be seen from the Mathematics, asymptotic freedom is characterized by a certain independence of movement of the Qs, but only at very short distances. This derives from the fact that when the Qs are very close together, the SI, or CF, almost completely loses its strength. Why?
It could be assumed that this is a consequence of the shielding and masking effects, previously described, in their turn supported by the very solid mathematical formalism of QCD, evidenced by Eqs. (129) and (130).
These are all satisfactory explanations. Yet it is as if something were missing. In short, on the one hand we have the asymptotic freedom phenomenon, illustrated by a congruous and elegant mathematical formalism, sufficient on its own to explain the phenomenon; on the other hand, we wonder: what is the objective, physical, concrete reality that underlies it? What is the exact physical mechanism by which the SI strength is not homogeneous throughout its action radius? Or: how can the Qs, alone, without any help, have enough strength and power to get rid of the extremely intense grip exercised by the SI?
One could reply: it is due to the Qs which, if too close to each other, cannot be gathered further. Or one could answer: it is an intrinsic characteristic of the SI, so that as the inter-nucleonic distance, i.e. the space in which the SI operates, decreases, the strength of the SI likewise decreases. And yet these are unsatisfactory answers: they seem more like a description of the asymptotic freedom effect than an explanation of its cause. Otherwise, a more elaborate explanation could consist in the decrease of the so-called antiscreening effect.
As it is known, in fact, each G carries both a color charge and an anti-color charge. The net effect of the polarization of virtual Gs in the vacuum is not to hide the field, but to increase it and to influence its color: the antiscreening effect. Thus, on approaching a Q further, the antiscreening effect of the surrounding virtual Gs decreases, so the contribution of this effect can determine a weakening of the effective charge as the distance decreases. This description is persuasive and appropriate. However, it does not seem to provide us with an exhaustive explanation of this peculiar behavior of the SI, since it does not outline and does not clarify what the physical cause is, unless we admit the intervention of some other phenomenon, currently unknown, completely unrelated to the SI.
Summing up, from the analysis of the Qs Confinement it clearly emerges that the main limitation of the SI is its action radius, since this nuclear force cannot do otherwise, having to immediately settle the energy debt. Therefore, even if the too-narrow space between contiguous particles (Qs or nucleons) should, more or less casually, re-enter the field of action of the SI, with great surprise we will see that the strength of the SI is overwhelmed, since it fails to gather the particles further together. This seems to us to be the keystone; this is the real enigma! At this point, one might wonder: is it possible that there is something between the nucleons, or between the Qs, which at first goes unnoticed, but when the particles approach each other excessively this something begins to be felt, showing a clear and energetic repulsive action? But if so, what is it?

LEVY INTERACTION
To this purpose, we would like to quote the so-called N-N Force, or Levy Interaction [150]. It is a repulsive force, which prevents the excessive approach of two nucleons, indicated as N-N. It is known, in fact, that the particles cannot approach each other beyond a given distance (do), below which a repulsive force appears: the Levy Interaction (LI) [150]. In this regard, Wigner and Eisenbud point out: "There is experimental evidence that the SI is repulsive at very small distances among the nucleons. A particular potential, which was originally proposed on the basis of the mesonic theory of nuclear forces, and which gives a fairly good description of two-body systems, is the LI. This force is intensely repulsive at very short distances. Between two nucleons, the LI is strongly repulsive from distances (do) equal to" [151].
As known, a very similar phenomenon occurs within the nucleons, between the Qs. In this respect, therefore, according to Gross-Wilczek [139] and Politzer [140], we cannot exclude that there is a repulsive effect, analogous to the one hypothesized for the LI (probably operating within the intra-nuclear space, between nucleons too close together), even in the intra-hadronic area. It starts being detected when the particles present in this field, i.e. the Qs, subjected to the very intense attractive action exerted by the intra-nucleonic SI, or Color Force (CF), come too close, until it prevails over the SI, repelling them.
In this respect, Pacini adds: "Among the nucleons, regardless of their charge, there is a very powerful attractive force, the SI, which prevails over the Coulomb Force (repulsive between protons) when the distance between the two interacting nucleons is ≤10−13 [cm], that is 1 fermi. But by compressing the nucleons enough, the force becomes repulsive again! In fact, the intervention of this force places a limit on the further reciprocal approach of the nucleons, a limit corresponding to ~0.30 fermi, beyond which there is a saturation barrier" [152], where Rf indicates a Repulsive force whose radius of action appears to be superimposable on (just a little shorter than) the LI's. Similarly to Levy's, also what Pacini described seems able to occur, with the same modalities, within the nucleons, among the Qs, according to the description of the asymptotic freedom phenomenon by Gross, Wilczek, and Politzer. Therefore, both Levy's description [150] and Gross, Wilczek [139] and Politzer's [140] use an elegant mathematical formalism, which shows the disappearance of the SI when the nucleons, or the Qs, are too close together.
This represents the crucial point, in our opinion, in trying to deepen this peculiar phenomenon, able to override, eventually, the most powerful known force: the SI! We feel that the Mathematics which describes asymptotic freedom so brilliantly does not, however, fully clarify the real, objective, physical cause that generates such a phenomenon; it just shows us the effects.
In our opinion, in the asymptotic freedom it is not the CF (i.e. the SI) that loses strength; rather, it is overwhelmed by another force, a Repulsive Force (Rf), quite distinct from it, which, in certain circumstances, proves to be even more powerful than the SI itself. We reiterate: in our opinion, this is the heart of the phenomenon.
To this purpose, Pacini goes on: "But there is more: to be convinced of this Rf, which acts as a 'repulsive' element, as with train buffers, between the two particles, we should think that without it the atomic nucleus would not hold up and would tend to shrink more and more" [152].
This statement gives a primary and absolute value to this Rf: without its very strong repulsive action, the world would not be as it is! Obviously, one wonders: where does this Rf come from? What is it?
In this respect, Pacini can help us. He writes that this Rf "is a saturation barrier; it acts as a 'repulsive' element, like a train buffer, between the two particles" [152], be they nucleons or Qs.
But this raises another question, which is fundamental in our opinion: what is the nature of this saturation barrier, acting as an Rf? Let us try to build its profile.

IDENTIKIT of the INTRA-NUCLEAR and INTRA-NUCLEONIC REPULSIVE FORCE
First of all, this Rf manifests its presence both when the inter-nucleonic space shrinks excessively, according to Pacini (so it is present inside the atomic nuclei, or Heisenberg Isospin field [50]), and when the Qs are too close together (so it is also present within the intra-nucleonic space).
In this regard, we read: "This mysterious repulsive antigravity energy, or 5th Force, should also act against the Gs, thus succeeding in overriding the SI when the Qs tend to get too close to each other, that is, when they almost touch each other, but not really: that is, there is always some space between the Qs. The space is apparently empty, but actually it is occupied by the 'thickness' of the 5th Force" [153].
Therefore, the profile of our intra-atomic Rf is enriched: it is associated with the mysterious, repulsive, antigravitational energy, or 5th Force, Quintessence, better known as Dark Energy (DE).
At this point, one may object: this is wrong! The DE is not confined within atoms. The DE was discovered in the infinite intergalactic spaces, identified with that energy of repulsive, antigravitational action responsible for the accelerated expansion of the Universe [154] [155].
That is true. However, the DE has been considered to be, above all, in close correlation with the vacuum energy. Let us try to better understand what "vacuum" and vacuum energy mean.

VACUUM ENERGY
Margherita Hack reminds us: "Let us imagine considering a region of space and taking away all matter, radiation and every other kind of substance. The resulting state is called the vacuum, which is something different from nothing. The vacuum has a lower energy than any other state, but not necessarily zero. According to the Theory of Relativity, every form of energy influences the gravitational field, and therefore the energy of the vacuum becomes an important ingredient. It is believed that the vacuum is the same throughout the universe, and consequently the energy density of the vacuum is called the cosmological constant (Λ). While matter can be thickened or dispersed during the evolution of the universe, Λ is a property of space-time" [156]. "In the Universe there is another mysterious force, never directly observed, called 'vacuum energy', or 'negative pressure', or simply 'strange energy': this force is opposed to the attractive force of gravity, accelerating the expansion of the Universe" [157].
This antigravitational counter-pressure, exerted by the energy of the vacuum, is, for many other physicists as well, equivalent to the DE, which Hack points to as a 5th Fundamental Force, corresponding to the cosmological density [156]. Hawking writes: "Besides matter, the universe can contain vacuum energy, the energy present even in apparently empty space" [77].
In fact, in agreement with Chandrasekhar, in the description of physical systems the vacuum is considered as the minimum-energy state, or Zero Point Energy (ZPE) [90] [92], which only in some cases corresponds to the almost total absence of particles or waves. "It was thought that the interstellar and intergalactic spaces were expanses of vacuum, but then with quantum field theory (QFT) it was stated that space is never really empty: it is pervaded by quantum fields present everywhere, and the various particles are, in fact, excited states of these fields. Space appears empty when the fields are at the lowest energy level, whereas space comes alive with visible matter and energy when the fields are excited. Wheeler said: empty space is not empty, but is the kingdom of the richest and most surprising Physics. For Hawking, DE is the central problem of Physics" [158].
Barrow states: "The quantum revolution has shown us why the old concept of the vacuum as an empty box was unsustainable. From then on, the vacuum was simply the state that remained when everything that could be removed from the box had been removed. This state was by no means the absence of anything: it was only the lowest possible energy state. There was always something remaining: an energy of emptiness that permeated every fiber of the universe. It is never possible to achieve a perfect vacuum. Any small perturbation or attempt to intervene on the vacuum would increase its energy. The omnipresent, ineliminable vacuum energy was revealed and proved to have a tangible physical presence. Einstein showed that the universe could contain a mysterious form of vacuum energy. The Uncertainty Principle (HUP) and quantum theory have revolutionized the concept of the vacuum. Saying that in a box there are no particles, that it is completely free of any mass and energy, is in contrast with the HUP, as it presumes complete information on the motion at any point and on the energy of the system at a given instant of time. The magnitude of this uncertainty is precisely the so-called ZPE" [37].
Still, with regard to the vacuum energy, it should be specified that "in classical physics the vacuum is identifiable with the absence of energy. In contrast, in Quantum Mechanics (QM) the HUP prevents a measurement of the vacuum-state energy from giving exactly a zero value. Because of the HUP, the number of particles contained in the vacuum state cannot be null, but is forced to undergo random fluctuations: the quantum vacuum must, therefore, be imagined as a dynamic state, rich in particles of all kinds -called virtual (that is, very short-lived)- which are produced by unavoidable quantum fluctuations" [158]. Barrow points out: "The quantum vacuum can be conceived as a sea made up of elementary particles of all types and their antiparticles, which appear and disappear continuously. Let us focus our attention only on the Electro-Magnetic Interactions: there will be a great ferment of electrons and positrons. Electron-positron pairs materialize from the quantum vacuum, and then immediately annihilate each other, disappearing. If the electron and positron have mass (m), Einstein's formula (E = mc²) tells us that their 'creation' requires an energy (E) of 2mc², which must be borrowed from the vacuum" [37]. This implies that the vacuum, the quantum vacuum, contains a fair amount of energy, since it is able to lend it! This is a salient point, in our opinion. That is, it is agreed that the quantum vacuum teems with energy, including DE.
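Barrow's "borrowing" argument can be made quantitative. The following is a minimal numeric sketch (in CGS units, with standard values for the electron mass and the speed of light, not taken from the text) of the energy the vacuum must "lend" for one electron-positron pair, E = 2mc²:

```python
# Energy "borrowed" from the quantum vacuum to create one electron-positron pair
# (CGS units; constants are standard reference values).
m_e = 9.109e-28          # electron rest mass [g]
c = 2.998e10             # speed of light [cm/s]
erg_per_MeV = 1.602e-6   # conversion factor [erg/MeV]

E_pair = 2 * m_e * c**2  # E = 2*m*c^2 [erg]
print(f"E_pair = {E_pair:.3e} erg = {E_pair / erg_per_MeV:.3f} MeV")
# about 1.64e-6 erg, i.e. ~1.022 MeV (twice the electron rest energy of 0.511 MeV)
```

Any vacuum fluctuation producing such a pair must therefore supply about a million electron-volts, returned when the pair annihilates.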
Moreover, among the known energies, as reported by numerous authors, the one widely present in the vacuum, or quantum vacuum, is Electromagnetic Radiation (EMR), that is to say: photons (Ps).
Indeed, the EMR is qualified to be part of the quantum vacuum energy; EMR is a quantum energy par excellence: it is conveyed by light quanta! To this purpose, Randall comforts us: "The intra-atomic space is crawling with EMR" [21].
Moreover, as Hack reminds us: "The primordial plasma was subject to two opposing forces: 1) Gravity (GI) and 2) Radiation Pressure (or Photonic Pressure). The former tends to compress the gas until the Photonic Pressure reverses its motion, producing elastic oscillations. Since compression heats the plasma, this results in the warmer and colder regions observable in the cosmic microwave background (CMB), shown by the presence of two peaks. Since both the baryonic matter and the Dark Matter are subject to the Gravity Interaction (GI), while only the baryonic matter is subject to radiation pressure, it is possible to determine from the properties of the peaks that the baryonic matter is ≈4% of the critical density and the Dark Matter ≈23%.
On the other hand, the fact that the universe is flat means that the total density parameter must be equal to 1. It follows that the remaining ≈73% consists of energy density" [156]: the so-called DE. "But which energy? It is supposed to be an energy, discovered in recent years, called vacuum energy, which causes an accelerated expansion of space, whereas it had always been expected that the GI would decelerate the expansion. This means that there is a force acting against gravity. The energy that causes the acceleration of the expansion is the vacuum energy and, since energy and matter are equivalent, it probably provides that 73% of the density necessary to bring the density of the universe to the critical value, compatible with the observations establishing that the universe is flat" [156].
In short, the laws of QM tell us that apparently empty space is full of particles of every kind, which appear and immediately disappear, generating a repulsive force very similar to the one that would be generated by the cosmological constant (Λ), that is to say: DE.

DARK ENERGY
Dark Energy (DE) is a sort of intrinsic energy of space. DE is a form of invisible energy, unknown and repulsive, that does not dilute with the expansion of the Universe and does not interact with ordinary matter. DE is a hypothetical form of energy that exerts a negative, repulsive pressure, behaving like the opposite of gravity. "DE is a field of energy, or an unknown property of space, capable of opposing gravity" [159]. We do not know the origin of DE.
We read from CERN, "DE makes up approximately 68% of the universe and appears to be associated with the vacuum in space. It is distributed evenly throughout the universe, not only in space but also in time -in other words, its effect is not diluted as the universe expands. The even distribution means that DE does not have any local gravitational effects, but rather a global effect on the universe as a whole. This leads to a repulsive force, which tends to accelerate the expansion of the universe. The rate of expansion and its acceleration can be measured by observations based on the Hubble law. These measurements, together with other scientific data, have confirmed the existence of DE and provide an estimate of just how much of this mysterious substance exists" [160].
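The CERN passage notes that the expansion rate is measured via the Hubble law, v = H₀·d. A minimal illustrative sketch follows; the value H₀ ≈ 70 km/s/Mpc is an assumed round figure, not taken from the text:

```python
# Hubble law: recession velocity v = H0 * d.
# H0 ~ 70 km/s per Mpc is a commonly quoted round value (our assumption).
H0 = 70.0  # [km/s per Mpc]

def recession_velocity(distance_mpc):
    """Recession velocity [km/s] of a galaxy at the given distance [Mpc]."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at ~7000 km/s, roughly 2.3% of c.
v = recession_velocity(100.0)
print(f"v = {v:.0f} km/s (~{100 * v / 2.998e5:.1f}% of c)")
```

The observed acceleration of the expansion corresponds to H₀·d growing faster with time than this simple linear law alone would suggest.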
In the NASA captions, we find: "DE is a truly bizarre form of matter, or perhaps a property of the vacuum itself, that is characterized by a large, negative pressure. DE is the only form of matter that can cause the expansion of the universe to accelerate or speed up" [161].
In short, among the various proposals put forward by physicists and cosmologists in order to identify the DE, two hypotheses are the most followed: 1) The Cosmological Constant (Λ), which represents a constant energy density that fills the whole of space homogeneously. 2) Scalar Fields, such as Quintessence and moduli, i.e. dynamic quantities: Quintessence is a dynamic field whose energy density can vary in time and space (the contributions of Scalar Fields that are constant in space are usually included in Λ as well). Λ can be formulated to be equivalent to the radiation of empty space, or the vacuum energy. Scalar Fields that change in space can be difficult to distinguish from Λ, since their change can be extremely slow.

Contexts and Operating Modes of DE
The widespread opinion is that Λ represents a force of repulsion among the masses, able to act only between huge masses and over very great distances. However, we do not think it always works this way. We have clear evidence that the DE also acts over very short, intra-atomic distances, since it is considered to coincide with the vacuum energy, a vacuum that is also present inside the atom [21] and represented by the electromagnetic field (Randall [21]).
In short, we have various examples of probable operating contexts of the DE, often very different from each other: in the extent of the space in which it operates, in the intensity of the energy with which it operates, and in the methods and times in which it carries out its action.

1)
The best-known context in which the DE is supposed to carry out its repulsive action is represented by the boundless sidereal spaces in which, with deep surprise, in 1998, an acceleration of the expansion of the Universe was found [154] [155]. This acceleration has been attributed to a repulsive, anti-gravity action most likely carried out by a mysterious, elusive, impalpable form of energy, called precisely DE. In this case, according to the calculations of Perlmutter's team [155], the energy density of this repulsive force, or DE, is ~10⁻²⁹ g/cm³.
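For orientation, the quoted mass density of ~10⁻²⁹ g/cm³ can be converted into an energy density via E = mc². A minimal sketch in CGS units (the constants are standard reference values, not from the text):

```python
# Convert the DE mass density quoted from Perlmutter's team (~1e-29 g/cm^3)
# into an energy density via E = m*c^2 (CGS units).
rho_DE = 1e-29           # mass density [g/cm^3], order of magnitude from the text
c = 2.998e10             # speed of light [cm/s]
m_p = 1.673e-24          # proton mass [g], standard value

u_DE = rho_DE * c**2     # energy density [erg/cm^3]
print(f"u_DE ~ {u_DE:.1e} erg/cm^3")
# ~9e-9 erg/cm^3: equivalent to about 6 proton rest-masses per cubic meter
print(f"~{rho_DE * 1e6 / m_p:.0f} proton masses per m^3")
```

This makes vivid how extraordinarily dilute the DE is, despite dominating the cosmic energy budget.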
We find it very important to emphasize that this repulsive force, or DE, operating in the immense cosmic spaces, coincides, in our opinion, with the repulsive force (Rf) reported by Levy, Eisenbud, Wigner, and Pacini as operating at the intra-nuclear level.
Yet, despite its very small value, the DE comes to represent as much as 68.3% of the entire mass-energy mixture that permeates the cosmos [162] [163]. It is interesting to point out that, in this context, DE has carried out its action over boundless distances and ever since the Big Bang (BB).

2)
Rovelli adds: "What happened before the BB? In the Loop Theory, which combines QM and General Relativity, following the proposal of Martin Bojowald, who applied the Loop-theory equations to cosmology, we come across a surprising result: the history of the Universe continues backward over time and does not stop at the BB. It goes further back: the BB was a rebound (bounce) from a previous contraction (or Big Crunch). This 'bounce', says Bojowald, is due to the density of the contracting matter which, when it becomes high, brings QM into play, producing a kind of Repulsive Force (not entirely dissimilar from the repulsive force of quantum origin that prevents electrons from falling onto the atomic nucleus) which makes the contracting universe bounce, thus giving rise to expansion, to the BB. In fact, the universe expands from a central region, from a very limited space, at very high density. Proof of this is the CMB, which is spread throughout the universe and is a direct trace of the great initial warmth of when the cosmos was very compressed. Near the BB the matter is so dense that we enter a region where QM cannot be neglected" [164]. In line with this concept, Ashtekar described with an elegant mathematical formalism how the quantum properties of space-time bring out something new: a Repulsive Force, which would have produced the rebound (bounce) of our universe, manifested with the BB, consequent to the violent Big Crunch of the previous universe [165].
Therefore, the BB represents the oldest context of the anti-gravitational, repulsive action exerted by the DE, which, in our opinion, is completely identifiable with the Repulsive Force mentioned by Rovelli, Bojowald, and Ashtekar. That is, the BB may be the effect of a bounce from a previous contraction (Big Crunch) [164] [165]: a bounce due to the progressive increase in the density of the contracting matter-energy, driven by an overwhelming Gravity Interaction (GI), such as to reach a compression and density limit, until QM intervenes and triggers a real explosion. In this context, the situation is completely reversed (compared to context 1): at the time of the BB, the space in which the DE operates is not the entire Universe, but a very limited space, even less than a point according to Lemaître. Also regarding time, we are at the antipodes.
In the first example, the DE has been operative for ~13.82 billion years. At the time of the BB, the action of the DE lasted only fractions of billionths of a second. Moreover, the energy intensity of the DE shows abysmal differences: compared to the modest one of the first example (7 orders of magnitude lower than the energy of visible light), the energy with which the DE triggered the BB must have been far greater than that carried by the most energetic γ Ps [81].

3)
Another context in which the DE operates is represented by a trial of strength that goes on uninterruptedly in the depths of the stellar cores between GI and DE. Gravity (GI) and the DE, represented in this context by the Radiation Pressure of the photons, can fight for a long time, as happens in the star's core. In this regard, Feynman points out: "When light is shining on a charge, and the charge is oscillating in response to it, there is a driving force in the direction of the light beam. This force is called Radiation Pressure or Light Pressure (F). Let us determine how strong the Radiation Pressure is. Evidently, the light's force (F) on a charge q moving with velocity v in a magnetic field (B) is given by F = qv×B, and it is at right angles both to the field and to the velocity. Since everything is oscillating, what matters is the time average of this force, ⟨F⟩. We know that the strength of the magnetic field is the same as the strength of the electric field (E) divided by c (the velocity of light in vacuum), so we need to find the average of the electric field, times the velocity, times the charge, times 1/c: ⟨F⟩ = ⟨qEv⟩/c. But the charge q times the field E is the electric force on the charge, and the force on the charge times the velocity is the work dW/dt being done on the charge! Therefore the force, the pushing momentum delivered per second by the light, is equal to 1/c times the energy absorbed from the light per second. That is a general rule, since we did not say how strong the oscillator was, or whether some of the charges cancel out.
In any circumstance where light is being absorbed, there is a pressure. The momentum that the light delivers is always equal to the energy that is absorbed divided by c: p = E/c. That light carries energy we already know. We now understand that it also carries momentum and, further, that the momentum carried is always 1/c times the energy. The energy (E) of a light-particle is h (Planck's constant) times the frequency (ν): E = hν" [87].
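Feynman's rule p = E/c = hν/c can be checked numerically for a single optical photon. A minimal sketch in CGS units (the 500 nm wavelength is our illustrative choice of a green photon):

```python
# Momentum of one optical photon: p = E/c = h*nu/c = h/lambda (CGS units).
h = 6.626e-27            # Planck's constant [erg*s]
c = 2.998e10             # speed of light [cm/s]
lam = 500e-7             # wavelength of green light [cm] (500 nm, our choice)

nu = c / lam             # frequency [Hz]
E = h * nu               # photon energy [erg]
p = E / c                # photon momentum [g*cm/s]
print(f"nu = {nu:.3e} Hz, E = {E:.3e} erg, p = {p:.3e} g*cm/s")
# p ~ 1.325e-22 g*cm/s, the "push-momentum" figure quoted in this work
```

Note that p = h/λ directly, so the result does not depend on the adopted value of c.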
From an authoritative source, we read: "In ordinary stars such as our Sun, the inward force of gravity is balanced by the outward hydrodynamic pressure of the hot gasses and, to a lesser extent, by the radiation pressure of photons" [166]. Thus, the photons (Ps) contribute to counter-balancing the huge gravitational pressure, which pushes from the outer, external layers of the star toward the internal layers. In order to perform this action, this compression, Ps have to "base it on something", as though they had an equivalent mass (equivalent to the energy of Planck's grain, the light quantum, divided by c²) [95]. That is, it could be the equivalent mass of billions of billions of Ps which, summed up, may contribute, together with the "hydrodynamic pressure of the hot gases", to preventing the collapse of the Sun and of the other stars, at least for a long time [119].
Ps, therefore, have a mechanical effect, probably a mass effect, acting as a "counter-pressure" against the considerable GI exerted by the star's remarkable gravitational mass, which inexorably pushes towards the inside of the star [126].

4)
As for the Inflationary Phase, Randall adds: "The measurement of certain gravitational effects indicates the presence of something that is even more mysterious than Dark Matter: it is what is called DE. This DE that permeates the Universe is very similar to the energy that precipitated inflation, but today its density is much smaller than the energy that long ago presided over inflation" [21]: we pass, in fact, from γ Ps of unimaginable energy to very weak microwaves.
This is in perfect agreement with our hypothesis, regarding both inflation and DE [81]: it is a very significant confirmation that DE can be constituted by Ps! The concepts just reported by Randall are in full agreement with what was proposed by Alan Guth. With his Inflationary theory, Guth hypothesized that a negative-pressure field, similar in concept to DE, could have driven an Inflationary Phase in the primordial Universe [167]. Inflation postulates that a repulsive force, qualitatively similar to DE, caused a huge, exponential expansion of the Universe immediately after the Big Bang (BB). However, inflation must have taken place at a much higher energy density than the energy density of the DE we observe today.
It has not been established whether there is a relationship between DE and Inflation. However, in our opinion, the relation does exist: they are both conveyed by electromagnetic radiation (EMR), but with extremely different energies, as we discussed in a Symposium held in Cambridge (MA) [81].
These concepts are not in disagreement with what Amendola reported: "just as primordial cosmic inflation may have been induced by a 'particle', or rather by a field, called the inflaton, so the recent acceleration could be due, instead of to Λ, to the hidden work of a field/particle called DE or Quintessence (again Aristotle!) or simply: scalar field. Like all fields, it extends and spreads throughout space and has its own dynamics. Like all particles, DE has a mass, too" [168].
This is just what we are stating: the particle that should carry the DE, i.e. the DE Particle (DEP), must have a mass, corresponding, in our opinion, precisely to the dynamic-mass conveyed by the momentum of the photon involved, as shown in Eq. (80) and as we discussed in a Symposium held in Suzhou (China) [131]. Let us now come to the short and very short distances at which, we believe, the DE, or Photonic counter-Pressure, should also operate.

5)
According to Randall and Barrow, just to mention a few authors, another operating context of the DE is the intra-atomic space. As Randall reminds us: "As for the world of the atom, probably the most amazing thing is that the atom essentially consists of empty space. The atomic nucleus has a radius more than 4 orders of magnitude smaller than that of the electronic orbits. The volume of the nucleus is ≈10⁻¹² of the volume of the whole atom. An atom is mostly empty, but within this vacuum there is of course an Electro-Magnetic (EM) Field, although virtually no real matter is present" [21]. Yet there is energy: the so-called vacuum energy, which is none other than DE and which, following Randall, is actually represented by the Ps continuously exchanged between electrons and nucleus.
The statement of Randall, who was awarded, among other things, Honorary Citizenship of Padova, just like Hawking, Weinberg, and Witten [21], can provide authoritative and winning support to our hypothesis. We believe, in fact, that the so-called vacuum energy, that is DE, is nothing transcendental or mysterious: nothing but a form of Photonic Pressure, namely a Photonic counter-Pressure, and the particle that carries this DE is probably the photon (P).

6)
Other sites where the action of a repulsive force (Rf) emerges, with all the characteristics and operational modalities of the ubiquitous DE, are represented by the intra-nuclear spaces. In agreement with Levy, Eisenbud, Wigner, and Pacini, in fact, this Rf is likely represented by the Levy Interaction and/or by the barrier mentioned by Pacini. In our opinion, it is an electromagnetic radiation (EMR) barrier that represents the DE, that is, a Radiative counter-Pressure. In other words, we believe that this barrier consists of a multitude of Ps thickened and crammed together, without however exceeding the limit of the 'compressibility of the radiation': although the Ps are bosons, and thus not subject to the Pauli Exclusion Principle (PEP) [169], in extreme conditions of pressure and concentration they cannot be compressed indefinitely.
To this purpose, let us analyze with Feynman, one of the greatest experts in the secrets of light, the Compressibility of the EMR: "We may give one example of the kinetic theory of a gas, one which is not used in chemistry so much, but is used in astronomy. We have a large number of photons (Ps) in a box in which the temperature is very high. The box is, of course, the gas of a very hot star. The sun is not hot enough; there are still many atoms, but at still higher temperatures, in certain very hot stars, we may neglect the atoms and suppose that the only objects that we have in the box are Ps. Now then, a photon has a certain momentum p, which is a vector; its x-component, p_x, generates the kick on the wall, and twice the x-component, 2p_x, is the momentum delivered in each kick. Counting the collisions per second, we find that the Pressure (P) is P = n⟨p_x·v_x⟩, where n = N/V is the number of particles per unit volume, N being the total number (the factor of 2 from each kick cancels against counting only the particles moving toward the wall). Finally, putting in the other two directions, we find PV = N⟨p·v⟩/3. That is, the pressure times the volume is the total number of particles times 1/3 of p·v, averaged. Now, for photons, what is p·v? The momentum (p) and the velocity (v) are in the same direction, and v is the speed of light, so p·v is the momentum of each object times the speed of light. The momentum times the speed of light of every photon is its energy (E): E = pc, so these terms are the energies of each of the photons, and we should, of course, take the average energy times the number of photons. So we have 1/3 of the energy inside the gas: PV = U/3 (photon gas) (140), where U is the total energy of the photon gas (for a monatomic gas, by comparison, U is the number of atoms times the average kinetic energy of each, and PV = 2U/3). Compressing the radiation adiabatically (dU = −P dV), we thus discover that the radiation in a box obeys the law: PV^(4/3) = C (141), where V is the volume and P is the Pressure of the photonic gas.
So we know the Compressibility (C) of the radiation! That is what is used in analyzing the contribution of radiation pressure in a star: that is how we calculate it, and how it changes when we compress it" [87].
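The step from PV = U/3 (Eq. 140) to the adiabatic law PV^(4/3) = C (Eq. 141) can be verified numerically: compressing the photon gas with no heat exchange, dU = −P dV, the product P·V^(4/3) stays constant. A minimal sketch (arbitrary starting units and a simple Euler discretization, our own choices):

```python
# Adiabatic compression of a photon gas: integrate dU = -P dV with P = U/(3V)
# and check that P * V^(4/3) remains constant (Eq. 141).
U, V = 1.0, 1.0                  # initial energy and volume (arbitrary units)
dV = -1e-6                       # small compression step
C0 = (U / (3 * V)) * V**(4 / 3)  # initial value of P*V^(4/3)

while V > 0.5:                   # compress to half the volume
    P = U / (3 * V)              # photon-gas pressure, from Eq. (140)
    U -= P * dV                  # first law with no heat exchange: dU = -P dV
    V += dV

C1 = (U / (3 * V)) * V**(4 / 3)
print(f"P*V^(4/3): start {C0:.5f}, end {C1:.5f}")  # both ~0.3333
```

Analytically, U ∝ V^(−1/3) along this path, so P = U/(3V) ∝ V^(−4/3), which is exactly Eq. (141); the numerical integration reproduces this to the accuracy of the step size.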
As known, this phenomenon is interpreted as an "energetic" phenomenon of the Ps (which would be pure energy without mass). However, we are talking about a pressure action, so it should not be unreasonable to think that there is something "real", material, concrete, producing the pressure effect. That is, in these cases, the intimate mechanism of light acts through a "push effect" on electrons. This push effect, first reported by Johannes Kepler (studying the tails of comets [100]), should be interpreted as a real mechanical effect, rather than a merely energetic one.
A reflection is in order: Eq. (141) gives us a limit beyond which the radiation cannot be further compressed. Therefore, the fundamental notion of the Compressibility limit of the EMR may help us to understand the possible nature of the barrier mentioned by Pacini [152]. We believe that the secret of the consistency of this barrier, which raises a wall so compact as to be able to hold off the intense Strong Interaction (which would otherwise inexorably tend to join the nucleons), resides in the minuscule, yet non-zero, mass-energy conferred to the P by Planck's Constant (h).
Whatever the frequency of the P involved, there still remains its h, which is not zero but 6.626·10⁻²⁷ [erg·s]. Of course, it is a very small, infinitesimal value. However, if we consider that billions and billions of Ps can be crammed into a very small space (being bosons, the PEP does not forbid it), then over time, under the continuous and inexorable compressive action of the intra-nucleonic Strong Interaction (SI), or Gluonic Interaction, a buffer of Ps is formed that, in our opinion, becomes progressively more and more incompressible. In this regard, Eq. (141) shows with extreme precision the limit value of the radiation Compressibility.
Therefore, when the density becomes excessive and the spaces between the particles are extremely reduced, the incompressibility of light and the consistency of the Ps come out. Thus the repulsive action takes over: that repulsive, anti-gravitational force represented, governed and managed by Radiation Pressure, that is to say by the Photonic counter-Pressure.
However, at this point, one might ask: how is the presence of the Ps within the atom justified? They should be the re-emitted Ps trapped during the 'recombination' phase, which occurred ~380,000 years after the Big Bang [156], when the Ps' energy fell below 13.6 eV. A confirmation of this concept is provided by atomic explosions, which emit into the atmosphere an amazing quantity of visible light, truly blinding (whose average energy is 2.48 eV), in addition to other EMRs. To our comfort, Randall states: "The intra-atomic space is swarming with Ps" [21]. In short, with 'recombination', that is, with the formation of atoms, a large number of Ps were probably incorporated too, no longer able to break the link between the electron and the proton in a hydrogen atom (whose binding energy is 13.6 eV).
Thus, this repulsive force (Rf) that acts within the atom, already signaled by Levy, could probably represent another mode of action, and another operational site, of the DE.
Therefore, in this different modus operandi, the DE carries out its action conveyed by sufficiently energetic Ps, demonstrating that the energy density of the DE varies according to the context in which it operates.
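The two energy figures quoted above (13.6 eV for hydrogen's binding energy and 2.48 eV for average visible light) can both be reproduced from the photon-energy relation E = hc/λ. A minimal sketch; the wavelengths 500 nm and 91.2 nm are standard reference values, our illustrative choices:

```python
# Photon energy in eV from wavelength: E = h*c/lambda, with h*c ~ 1239.8 eV*nm.
HC_EV_NM = 1239.84  # h*c in convenient units [eV*nm], standard value

def photon_energy_ev(wavelength_nm):
    """Energy [eV] of a photon of the given wavelength [nm]."""
    return HC_EV_NM / wavelength_nm

print(f"{photon_energy_ev(500.0):.2f} eV")  # green light: ~2.48 eV, the text's value
print(f"{photon_energy_ev(91.2):.1f} eV")   # Lyman limit: ~13.6 eV, hydrogen's binding energy
```

Photons below the 13.6 eV threshold can no longer ionize hydrogen, which is the text's criterion for their being trapped at recombination.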

7)
What happens inside the atomic and nuclear space, as previously described, can also occur inside a nucleon, that is, in the intra-nucleonic space. We read, in fact, that "this mysterious repulsive energy, antigravity, or 5th Force, should also act against the gluons, thus succeeding in overriding the SI when the Qs tend to get too close to each other, that is, when they almost touch each other, but not quite: there is always some space between the Qs. The space is apparently empty, but actually it is occupied by the 'thickness' of the 5th Force" [153]. We think that this thickness is represented by a large number of Ps, probably crowded too close to each other, crushed by the Qs approaching progressively (under the SI, or gluon force).
This thickness of the repulsive force, probably interposed between the Qs, may represent precisely the physical substrate responsible for the peculiar asymptotic freedom phenomenon. Thus, in the end, the Ps can no longer be compressed further, nor confined in an increasingly narrow space. This is in disagreement with the standard expectation for bosons which, not being subject to the PEP, should be able to thicken in infinite quantities.
At least for the Ps, we must think that there is a limit to this thickening, a limit imposed by Eq. (141).
In this context, the presence of Ps even within the nucleons dates back to the primordial nucleosynthesis, which started 3 minutes and 46 seconds after the Big Bang [98]. In fact, with this process, many highly energetic Ps were trapped inside the nucleons.
A demonstration of what we maintain is provided, this time, by nuclear explosions, which free a great deal of light, similarly to atomic explosions, as well as an abundant emission of highly energetic radiation.

CONCLUSIONS
Although they have been studied for over 50 years, QCD and the other Yang-Mills theories, in the absence of SSB, still present two important aspects that are not completely clarified [170]: 1) It is necessary that the spectrum of the theory exhibits a mass gap, that is, that the difference between the energy of the vacuum state and that of the first excited state be non-zero. In other words, the lightest of the particles predicted by the theory must have a strictly positive mass, to explain the short range of the strong nuclear forces [170].
2) The theory must exhibit Q Confinement, i.e. the force between Qs must not diminish with their distance. This means that the separation of two Qs would require an infinite amount of energy, and therefore we cannot observe free Qs, but only their bound states, called hadrons, which are all color singlets [170].
"At present, there is no mathematically rigorous proof that the structure of a pure Yang-Mills theory possesses a mass gap and the Color Confinement. The importance of this problem is underlined by the fact that in 2000, a prestigious scientific institution, the Clay Mathematics Institute of Cambridge (Massachusetts), included it among the seven most important mathematical challenges of the third millennium"[171].

POSSIBLE PHYSICAL EXPLANATION of the ASYMPTOTIC FREEDOM
In short, when the distance between the Qs becomes too small, in our view it may be the thickness of the force (a sort of 5th Force) interposed between the Qs [153] that acts as a buffer, triggering, like a spring (hence we also speak of the DE as an elastic force), a repulsive action of mutual removal of the Qs.
In our opinion, this thickness behaves in a similar way to the Pacini's saturation barrier [152], which very likely is interposed between nucleons within the intra-nuclear space. We believe that this repulsive force, or Quintessence, is represented by a multitude of Ps that, crammed into an increasingly narrow space, and not further compressible, begin to exert an expansive counter-pressure.
It is interesting to note that, in such circumstances, the repulsive action of the DE, that is, the Photonic counter-Pressure, in our opinion, performs those tasks attributed to asymptotic freedom.
Moreover, also from this context we deduce that, without the work and intervention of the DE, the structure of ordinary matter would not have been as it is, or would not have existed at all! It is as if there were something in the intra-nucleonic space which reveals the effects of its presence only when the Qs gather excessively. It is as if this something could not be further compressed among Qs too close together, and started to exert a counter-pressure, like a repulsive force. Moreover, the hypothesis that there may be something else inside the gluonic field is not pure fantasy. As Barrow reminds us, the seemingly empty space is full of EMR [37], which is to say that the so-called vacuum always contains, in any case, an electromagnetic field (EMF), that is, Maxwell's field. To this purpose, Penrose adds: "Maxwell's EMF delivers energy. By E = mc², it must also have a mass. Maxwell's EMF is, therefore, also matter! Now we must certainly accept this notion" [172].
It is pleonastic to specify that Maxwell's EMF is constituted and operated by Ps. We also know that the intra-hadronic space is not completely empty; it contains the protonic sea [21]. Maybe, in addition to the repulsive photonic barrier, these virtual Qs and Q̅s (that is to say, very short-lived ones), with which the protonic sea swarms, contribute to the repulsive action emerging among Qs too close together, thereby counterbalancing the SI action at excessively short distances. It is like saying that the so-called shielding effect is not mediated by immaterial, short-lived particles, but by real multiplets, by real quantum objects. We reiterate: it may appear an unlikely hypothesis, but we cannot exclude it with certainty. If we integrate this phenomenon, the explanation becomes more exhaustive and complete, helping to understand why the action of the gluon (G) weakens almost completely at too-short distances between Qs (despite its very intense strength).
For all the above reasons, we believe that the Dark Energy (DE) is conveyed by Ps, also of different energies, engaged in various tasks, sometimes peculiar and/or unusual, whose common denominator is represented by the impossibility of being compressed and thickened beyond a determined limit [173].
In short, the counter-pressure triggered by the DE most likely represents the most immediate physical and real manifestation of an (auxiliary) force or potential energy that appears when circumstances require it. That is, contrary to the 4 Fundamental Forces, it is as if this potential 5th Force, initially present as vacuum energy, represented essentially by electro-magnetic fields swarming with Ps (in accordance with Barrow, continuously exchanged by electrons and ephemeral positrons generated by the quantum vacuum [37]), took shape and structure in case of necessity, when the compressive action exerted by the Gravity Interaction (GI) becomes excessive, particularly intense, we could say overwhelming, until a counter-reaction takes place, i.e. the Photonic counter-Pressure. What triggers this counter-pressure, for us, is the mass-energy density of a very compact wall of Ps, no further compressible, compressed up to the limit point dictated by the mathematical formalism expressed in Eq. (141), after which the repulsive action is immediately triggered. It is like saying that, in the end, the Ps are saved in time: what saves them is the limit to their compressibility, elegantly illustrated by Eq. (141). In this way, we try to identify the possible real physical structure that could lie at the bottom of the peculiar asymptotic freedom phenomenon, already masterfully illustrated, from a mathematical point of view, by Gross, Wilczek [139] and Politzer [140].
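The compressibility limit of Eq. (141) can be sketched numerically. The snippet below is a minimal illustration, assuming only the stated radiation adiabat P·V^(4/3) = C with an arbitrary reference state (P0, V0); the function name and reference values are ours, not the paper's:

```python
def photon_pressure(V, P0=1.0, V0=1.0):
    """Pressure of a photonic gas compressed along the radiation adiabat
    P * V**(4/3) = C, starting from the reference state (P0, V0)."""
    C = P0 * V0 ** (4.0 / 3.0)
    return C / V ** (4.0 / 3.0)

# Halving the volume multiplies the pressure by 2**(4/3) ≈ 2.52, so the
# counter-pressure grows faster than the compression itself:
for V in (1.0, 0.5, 0.1, 0.01):
    print(f"V = {V:5.2f}  ->  P = {photon_pressure(V):10.2f}")
```

The qualitative point is simply that the pressure diverges faster than 1/V as V shrinks, which is how the text pictures the photonic counter-pressure switching on.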

POSSIBLE PHYSICAL EXPLANATION of the CONFINEMENT of Qs, Gs and Colors
Quantum Mechanics (QM), through the Heisenberg Uncertainty Principle (HUP), plays a crucial role in the phenomenon represented by the confinement of Qs: in our opinion, these particles have remained eternally confined, since the dawn of time, in an extremely narrow space. It is precisely because of the enormous amount of energy that the Q has borrowed that the emitted G is not able to cover a long way: the energy debt has to be paid immediately, and the flight path of the G, i.e. its range, is determined by the short existence that QM grants to the G.
However, we shouldn't be surprised: it is not the only case of significant energy loans. The Weak Interaction (WI) runs on credit too. In that case, too, the Qs borrow energy on a large scale. The W boson, in fact, whose mass corresponds to 80.4 GeV/c2, is much heavier than the Qs which emit it: to be precise, it is ≈16000 times heavier than the down Q and ≈40000 times heavier than the up Q! Obviously, according to the rules of QM, to a gauge boson of such a mass there corresponds a very short life and, as a consequence, an extremely small range, in accordance with Yukawa [72], Fermi [136], Hawking [77] and Quigg [144] (just to mention a few authors). In fact, our calculations show that the action radius of the W particle corresponds to 1.542475⋅10−15 [cm] [137].
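The W range quoted above can be cross-checked in a few lines. We assume the "action radius" meant in the text is the full Compton wavelength h/(m_W c); the constants are standard CODATA-style values, and the quark masses used for the ratios (m_d ≈ 5 MeV, m_u ≈ 2 MeV) are assumed ballpark figures chosen to reproduce the text's ≈16000 and ≈40000:

```python
H = 6.62607015e-27            # Planck constant [erg*s]
C = 2.99792458e10             # speed of light [cm/s]
ERG_PER_GEV = 1.602176634e-3  # 1 GeV expressed in erg

def compton_range_cm(mass_gev):
    """Full Compton wavelength h/(m c), in cm, for a particle of rest energy mass_gev [GeV]."""
    return H * C / (mass_gev * ERG_PER_GEV)

r_W = compton_range_cm(80.4)
print(f"W range ≈ {r_W:.6e} cm")   # lands very close to the quoted 1.542475e-15 cm

# Mass ratios quoted in the text, with the assumed light-quark masses:
print(f"m_W/m_d ≈ {80.4 / 5.0e-3:.0f},  m_W/m_u ≈ {80.4 / 2.0e-3:.0f}")
```

That the quoted figure matches h/(m_W c) rather than the reduced wavelength ħ/(m_W c) is itself an inference from the numbers, not something the text states.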
However, while the WI performs other functions, the SI, or Color Force (CF), seems to be there just to keep the Qs and Gs (and their colors) chained and confined inside the hadrons, giving shape and substance to matter. It has been so from time immemorial. In fact, ever since the first protons and neutrons were formed, that is ≈1/100 of a second after the Big Bang, as Weinberg reminds us [98], the Gs and the 3 Qs (and their colors) have been trapped, confined inside, and have never got out [34].
Trapped by whom, confined by what? By the very intense strength exerted by the CF on the Qs, but in primis by the range and very short lifetime of the G which, inexorably, has never allowed the Qs to move away from each other by a distance greater than that radius! This also coincides perfectly with the fact (and could also explain it) that a color charge carrier (such as a Q or G) can never appear alone, but always and only combined, so that by combining 3 Qs of different colors a color-neutral hadron is obtained.
That is why, in normal conditions, it has never been possible to observe an isolated Q or the trace of an isolated G. No! Gs and Qs (and their Colors) are closely and mutually bound and confined inside hadrons. The Qs are not the only ones to have been confined inside hadrons since the beginning of the Universe: this also applies to the Gs, of course, since they are born from Qs. In this way, therefore, one can explain the Quark Confinement, the Color Confinement and the Gluon Confinement all at once! Furthermore, we tried to quantify the possible value of the Q Confinement. As shown in Eq. (109), this value (dbq) may be ≈8.4414(±1.4414)⋅10−16 [cm], that is, the possible radius of action of the intra-nucleonic SI, and therefore corresponding to the maximum distance that can ever separate two Qs: the Qs' spatial Confinement.
In fact, we believe that the most likely and appropriate solution is to place the range of the Yang-Mills b quantum (dbq), or gluon range, between those calculated for the HB (Eq. 107) and the tQ (Eq. 108). It should be kept in mind that the value of the energy-mass density of the Gs can never be constant, stable: never. In fact, it is easy to deduce that the energy (and so its equivalent mass) carried by the Gs can vary widely. This should depend on the moment at which the G is detected, since at that moment, randomly, it may have already gathered, or not yet collected, the full amount of the loaned energy (through the Qs or from the field in which it is immersed).
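As a rough consistency check (our sketch, not the paper's derivation of Eq. (109)), a light-speed carrier with the lifetime of Eq. (117) can travel at most c·τ, and that distance indeed falls inside the error band quoted for dbq:

```python
C = 2.99792458e10   # speed of light [cm/s]

tau_g = 2.7323e-26  # gluon lifetime, central value of Eq. (117) [s]
d_bq = 8.4414e-16   # quoted gluon range / Q confinement radius, Eq. (109) [cm]

travel = C * tau_g  # maximum distance a light-speed G can cover in its lifetime
print(f"c*tau ≈ {travel:.4e} cm  vs  quoted d_bq = {d_bq:.4e} cm")
# ≈8.19e-16 cm, within the ±1.4414e-16 cm band quoted for d_bq
```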
We read from Wilczek: "Protons and neutrons are very complicated dynamic balances of Qs, anti-Qs (Q̅s) and Gs. The biggest part of their mass -and, as a consequence, also of the mass of matter, including our brain and our body -comes from the pure energy of these objects in constant motion, basically massless, according to the equation m=E/c2. At least at this level, we are ethereal creatures" [174]. Thus, most of the nucleonic mass is supplied to them by the Gs inside them: so much for the massless G! Similarly to Wilczek, Randall says: "In a nucleon, the mass (therefore, according to Einstein, the energy E = mc2) is given not only by the Qs, but also by the bonds that hold them together" [21]. And these bonds, as we all know, are represented precisely by the Gs. Randall reiterates: "Protons have a composite structure. In fact, they contain the Qs, held together by the Gs. The point is that the mass of a proton does not come entirely from the mass of the Qs that constitute it. Its mass is mainly due to the energy involved in holding the proton together" [21], which, as is known, is held together precisely by the SI, through the energy of the Gs (continuously exchanged among the Qs).
On the other hand, it does not seem appropriate to neglect what Quigg maintains: "The Higgs Field (HF) does not explain the origin of the whole mass. It has been a while since we actually understood the source of most of the proton mass (for example). Most of the mass -you and I included -comes from the SI" [144], which is to say from its bosons: the Gs! There is a vast Literature considering that the real mass of nucleons is supplied almost exclusively by the Gs, or rather by the energetic mass carried by the G. We read, for example: "The proton and the neutron are each composed of 3 constituent Qs. The problem is that the Gs are particles of zero mass and that the masses of the uQ and dQ are, respectively, about 400 and 200 times smaller than the mass of protons and neutrons. To understand the origin of the mass of protons and neutrons, we go back to the well-known Einstein equation E = mc2, which expresses the energy of a body of mass m when it is at rest (thus lacking kinetic energy). This equation acquires a perhaps deeper meaning if rewritten in the form m=E/c2. Through this equation, the theory of Relativity explains how the mass of a physical system is nothing but its rest energy. Mass is, therefore, a measure of the energy of the system. In the case of a nucleon at rest, we find that its mass-energy originates only in small part from the sum of the Q masses. The remaining part is provided instead by the energy of the cloud of Qs and Gs: the Qs interact with each other by exchanging Gs, and these in turn can generate virtual Q-Q̅ pairs which, after a very short time, annihilate each other producing other Gs. The inside of a nucleon, therefore, appears as a dynamic tangle of strongly interacting particles, which appear and disappear continuously.
The energy of this cloud constitutes most of the mass of the nucleons and, therefore, the mass of all the matter that surrounds us" [175], including all living beings: mass and matter with which the Higgs Boson (HB) and the BEH-M have nothing to do, since the G does not interact with the HB.
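The mass ratios quoted in the passage above can be verified in a few lines. The current-quark masses used below (m_u ≈ 2.2 MeV, m_d ≈ 4.7 MeV) are assumed PDG-style ballpark values, not figures taken from the text:

```python
M_PROTON_MEV = 938.3  # proton rest energy [MeV]
M_U_MEV = 2.2         # assumed up-quark current mass [MeV]
M_D_MEV = 4.7         # assumed down-quark current mass [MeV]

print(f"m_p / m_u ≈ {M_PROTON_MEV / M_U_MEV:.0f}")  # ≈ 427, the quoted "about 400"
print(f"m_p / m_d ≈ {M_PROTON_MEV / M_D_MEV:.0f}")  # ≈ 200

# Fraction of the proton rest mass carried by its 3 valence quarks (uud):
frac = (2 * M_U_MEV + M_D_MEV) / M_PROTON_MEV
print(f"valence-quark mass fraction ≈ {frac:.3%}")  # ≈ 1%; the rest is field energy
```

The ≈1% valence fraction is exactly the point of the quotation: almost all of the nucleon mass is binding/field energy, not quark rest mass.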
In short, though the Gs are considered massless, they carry a great quantity of energy; thus, according to the MEEP, they must have an equivalent mass. The G seems to have zero mass because it is "constantly in motion", as underlined by Wilczek, so it can show us only its undulatory aspect. In fact, a well-known principle of Quantum Mechanics (QM), the Bohr Complementarity Principle, states that each quantum object can show both its corpuscular and its wave-like behavior but, conditio sine qua non, only one at a time: never simultaneously! [176]. Therefore, as long as the G is in motion, it can show only its wave side. On the contrary, in the very short time in which the b quantum interacts, we may indirectly detect some aspects of its corpuscular behavior. Only the corpuscular aspect of a particle can give us its probable mass. Since the G is always in motion, its mass is concealed under its undulatory aspect, we may say under its energetic aspect.
Actually, we believe that it is physically very complicated, if not impossible, to detect a G, i.e. the Yang-Mills b quantum. And why? Because of the G's very short lifetime which, from our calculations, is equal to 2.7323(±0.5643)⋅10−26 sec, as shown in Eq. (117). By contrast, the typical decay times managed by the SI are: ≃10−23 sec (142), that is, of the order of the time it takes light to cross the resonance, which has a linear dimension of order R [177], corresponding to the radius of the atomic nucleus (the SI being a nuclear force).
Well, in our opinion, a reflection is obligatory here: comparing Eqs. (117) and (142), a difference of nearly 3 orders of magnitude immediately stands out. That is, the Yang-Mills b quantum lifetime is roughly 3 orders of magnitude shorter than times that are already infinitely short in themselves. In other words, the creation and disappearance of a G are resolved in a few thousandths of the time taken by light to cross a proton, a light nucleus! Or: the whole lifetime (so to speak) of the Yang-Mills b quantum resolves in a few thousandths of the time necessary for an operation managed by the SI. These are really times beyond reach; as if to say: inaccessible.
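The comparison between Eqs. (117) and (142) can be made concrete with a short calculation; the proton radius used for the light-crossing time (≈0.84 fm) is an assumed standard value, not one given in the text:

```python
import math

# Central values quoted in the text:
tau_g = 2.7323e-26    # b-quantum (gluon) lifetime, Eq. (117) [s]
tau_si = 1e-23        # typical SI decay time, Eq. (142) [s]

ratio = tau_si / tau_g
print(f"tau_si / tau_g ≈ {ratio:.0f}  (~{math.log10(ratio):.2f} orders of magnitude)")

# Light-crossing time of a proton, taking an assumed radius of 0.84 fm:
C = 2.99792458e10                 # speed of light [cm/s]
t_cross = 2 * 0.84e-13 / C        # diameter / c [s]
print(f"light-crossing time of a proton ≈ {t_cross:.2e} s")
```

With these inputs the ratio comes out around 370, i.e. between 2 and 3 orders of magnitude, which is the separation the text is pointing at.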
At this point, one wonders: are we able to examine and study physical phenomena that occur in such short times? We really don't think so! In other words, they are virtual phenomena. The same could be said for the b quantum, i.e. the G: a massive G -which can never be massless -is to be considered a virtual particle. Obviously, this does not mean that the G does not exist, but that it exists for such a short time that we cannot access it in time. For this reason, we believe that we will never be able to study the b quantum directly, nor will we ever have enough time to detect it.
We may say that the b quantum, the G, is Temporally Confined by its very short lifetime! Therefore, a new parameter may be added to the Qs' and G's spatial Confinement: the b quantum, or G, Temporal Confinement. This automatically results in the immediate Removal of the Infinities and Divergences from the equations of the Perturbative Calculations, of the Gauge Theories, and of QED and QFT. This also involves the removal of the zeroes from the equations concerning the electron self-energy, i.e. the interaction of the electron with Ps. This is completed, as discussed in paragraph 2.13.2, if the obvious contradiction of the point electron is eliminated (another inappropriate and incongruous cause of the aforementioned infinities and divergences)! These, we reiterate, are the 2 fundamental stages in trying to solve the Yang-Mills Mass Gap Problem. In fact, once the Removal of the Infinities and Divergences from the equations of QFT and of the Perturbative Calculations has been accomplished, the Symmetry Breaking caused by massive particles fails!

GAUGE THEORIES IN CONFLICT WITH MATHEMATICS AND REALITY
With our proposal to try to solve the Yang-Mills Mass Gap Problem, we no longer have to submit to the dogma, imposed by the gauge theories, according to which all particles, by themselves, must absolutely be massless (in order not to run into divergences). This is a rule notoriously accepted unwillingly by a number of authoritative scientists, as declared for example by Glashow [139], Weinberg [140], or by Yang and Mills themselves [51] [59]. As is known, in fact, "in an invariant gauge theory, all the particles should have 0 mass like the photon (P)" [1]. However, according to Maiani, "this is in conflict with the observation that in Nature there are no other zero-mass particles but the P" [1].
Thus, a manifest contradiction emerges between the gauge theories and Reality. Moreover, "an important observation must be made: the gauge symmetry is not compatible with a mass of the gauge field ≠ 0. In fact, a mass term is not invariant under the gauge transformations. Therefore this symmetry can be valid for the description of a zero-mass P, but not, for example, for the mediating bosons of the Weak Interaction (WI), which have masses of the order of 100 GeV" [178].
Also in this case a contradiction emerges from the gauge theories, since our calculations show that the P is not at all massless; hence it should be deduced that the 'symmetry valid for the massless P' [178] either is not valid at all, or is also valid for a massive P and, consequently, for the WI bosons (with no need of the BEH-M). Maiani adds: "The mass is an intrinsic property of a particle" [179]. We can also read: "The gauge theory presented by Weyl [12] has a serious lack: it describes a zero-mass particle, violating in this way every experimental observation" [180].
This point, in our opinion, is of fundamental importance, but unfortunately it has always been more or less overlooked, since it never received the attention it deserved. So much so that it continues to be maintained that, without the benefit of the magic BEH-M, every particle is massless. That is, particles as heavy as the top Q (tQ), or the gluon (G), or the bosons of the WI, or even the nucleons and pions, would not have an intrinsic mass of their own! Indeed, "great and ingenious ideas" [22].
As is known, the Lagrangian (L) of the Standard Model (SM) should be invariant under the non-abelian symmetry group described as follows: SU(3)C ⊗ SU(2)L ⊗ U(1)Y (143), where SU(3)C indicates the Strong Interaction (SI) color symmetry among Qs, governed by Quantum Chromo-Dynamics (QCD) and mediated by eight bosons (the so-called Eightfold Way) having zero mass: the Gs, as shown in Eq. (132); SU(2)L ⊗ U(1)Y, instead, indicates the weak isospin symmetry group unifying the electromagnetic (EM) and WIs, giving body to the so-called Electro-Weak Interaction (EWI) and Electro-Weak Theory. As discussed in paragraph 2.6, this theory was proposed by Schwinger to Glashow in 1961, as a topic for his thesis, based on the Yang-Mills theory.
However, it is important to highlight that "the EWI cannot be considered a true unification: the two coupling constants do not derive from a common source. Indeed, the gauge group SU(2)L ⊗ U(1)Y is a product of two disconnected groups of gauge transformations, and the relationship between the coupling constants is not predicted by the theory" [180]. Instead, "it is possible to speak of unification when a symmetry group (G) can be found such that: G ⊃ SU(2)L ⊗ U(1)Y (144). In this case, it is possible to theoretically predict the relationship between the two coupling constants" [180].
It is interesting to note that "the Standard Model (SM) itself is not a unification theory. The SM, in fact, is the product of three disconnected groups of gauge transformations. Some theories, called GUTs (Grand Unified Theories), try to unify these three groups, looking for a group G" [180] such that: G ⊃ SU(3)C ⊗ SU(2)L ⊗ U(1)Y (145). As Penrose reminds us, many physicists argue that the structure of the QFT is "here to stay" [22] and that the blame for any inconsistency (usually infinities coming from divergent integrals, or divergent series, or both) is to be given to the particular scheme to which the QFT is applied, and not to the structure of the QFT itself [22]. As we all know, "such schemes are normally specified by a Lagrangian, subject to certain conditions of symmetry. The Lagrangian L must be thought of as a space-time density, which means, in strict terms, that the invariant entity is the natural 4-form L Ɛ, where Ɛ is the quantity commonly expressed as Ɛ = dx0 ∧ dx1 ∧ dx2 ∧ dx3 √(−det gij). The integral of the action (S) is then: S = ∫D L Ɛ, where D indicates the (complete) four-dimensional volume of space-time (in its turn delimited by the border ∂D). This Lagrangian L describes the particle kinetic energy, minus the potential energy due to the force field. For each history, there will be an action S, which is the integral of the Lagrangian along the way. The SM is a mathematical model in remarkable agreement with the experimental facts over a wide range of phenomena. However, it seems that the mathematical structure of this model is somewhat complicated and arbitrary.
The predictive capacity of a theory, and this holds for the QFT too, depends crucially on the mathematical coherence of its theoretical bases. How do we find a quantum theory for particle physics that is consistent with the requirements of Einstein's Special Relativity? Dirac introduced antiparticles into a relativistic quantum theory, which forced us into a QFT. In fact, the SM is a particular example of a QFT of interacting fields. The SM was largely driven by some strong demands for coherence, difficult to satisfy in these theories [22].
And yet, given that the mathematical formalism undertaken with the gauge theories and with the QFT led to inconsistent mathematical results, it was not necessary to continue along that path. It would have been more logical and appropriate to abandon the mathematical formalism undertaken by Weyl (however elegant it was) and resume the path traced by Schrödinger and Dirac, through their very valuable equations concerning the electron wave function; equations in which, as is known, the electron is a massive particle. Instead, in order to accommodate the divergences emerging from the gauge-theory equations, the equations were modified ad hoc, distorting physical reality itself. In this way, to forcefully apply the purely mathematical concepts of the gauge theories to reality, the physical reality of things, such as particles, has been modified, forcing them to be massless. Otherwise, the symmetry is broken, and the numbers do not add up.
In all honesty, it looks like a really forced, unreal, not at all scientific procedure, at least in the Galilean sense. Thus, by imposing a massless G, one goes against Einstein's MEEP, against the Yukawa Principle, and against multiple pieces of experimental evidence! To this purpose, as Barone reminds us, Hermann Weyl confessed to Dyson: "In my research, I have always endeavored to combine truth with beauty but, when I had to choose, I usually chose beauty" [181]. Barone continues: "Weyl was basically a mathematician and could, therefore, afford to opt for beauty, to the detriment of truth. Those who study Nature, obviously, cannot avoid the comparison with the real world" [181].
In full agreement with Penrose [22], Barone points out: "However, many physicists have shared, and share, albeit in a more moderate form, Weyl's aestheticism. The forms that beauty takes in Physics are the logical simplicity of theories, their harmonious architecture, their ability to unify phenomena and concepts. Truth and beauty are 2 forces that have driven the research of great physicists, sometimes aiming in the same direction, other times in different directions, in some cases in opposite directions. General Relativity of 1916, for example, immediately combined elegance with empirical efficacy (Einstein's theory was the simplest theory derivable from a general principle of symmetry). After the formulation of General Relativity, Einstein and other physicists set out in search of a unified theory that combined gravity and electromagnetism. In 1918 Weyl had the idea of building this theory starting from a new principle of symmetry (called gauge): he postulated that the physical laws were invariant with respect to a combined transformation of space-time and the electromagnetic field (EMF). Weyl's theory, descending directly from this principle, was simple, compact, and mathematically elegant. It had only one fault: it was in clear disagreement with the facts (as Einstein pointed out -though praising its formal aspects).
The forces of truth and beauty aimed decisively in opposite directions" [181]. In this respect, Penrose goes into detail: "In order to appreciate the strength of the pressing demands for mathematical coherence (which continue to guide the most recent speculative theories, such as String Theory), we will need to examine the structure of the QFT. Despite the important lessons of Minkowski and Einstein on the interdependence of Space-Time (ST), SM adopts a representation of reality in which T is treated differently from S: there is only one external temporal coordinate, while there are many spatial coordinates since each particle has its own. This asymmetry is usually considered a "temporary" feature of the non-relativistic Quantum Theory, which is only an approximation to some more complete relativistic theory. Strictly speaking, at least in most of the relevant non-trivial examples of this theory, the QFT is mathematically inconsistent, and various expedients or ingenious schemes or mechanisms are needed to arrive at meaningful calculations.
It is very important to know if these devices are only temporary retreats to allow us to progress slowly within a mathematical framework that can perhaps be fundamentally flawed at a deep level, or if these expedients reflect deep truths that actually have genuine meaning for Nature in itself" [22].
Penrose adds: "There are strong reasons to believe that the laws of the current QM need fundamental change. These reasons are the result of accepted physical Principles, and of observed facts, concerning the Universe. I find it surprising, however, that a small number of quantum physicists think so. Newton's theory has not had a Measurement Paradox. As well as in Relativity, the changes necessary to Newton's theory (such as the slight deviations of Mercury's motion) were far from those used in QFT (including the infinities, corrected with Renormalization). Yet even renormalizable theories are not without infinities" [22]. As Barone writes, another famous science esthete was Paul Dirac, author of the synthesis between Special Relativity and QM, and antimatter theorist. Dirac maintained that "physical laws must be endowed with mathematical beauty" and used the "beauty principle" both as a heuristic guide and as a criterion for evaluating the theories [182].
In accordance with Weyl, Dirac said: "It is more important that the equations are beautiful, rather than in agreement with experiments" [182]. However, Barone adds: "Actually, Physics lives on a dialectic between true and beautiful: empirical and aesthetic criteria coexist, intertwining. A theory in disagreement with experience is unacceptable. At the same time, a theory that correctly describes the facts, but with a certain amount of arbitrariness, is not really explanatory and cannot be considered satisfactory" [181]. [...] 'explored, of the knowledge of the physical universe', as Wheeler said. Scientists could then count on such a simple sextant (the equation E=mc2) as to leave no margin of error. Such a theoretical tool, such a measure of the scientific project, was able to tell humankind, at any time, how far it had moved towards the total annihilation of matter into energy. Uranium fission, though powerful, allowed humanity to advance only a thousandth part along the line towards the aim of a total conversion into energy, since the fission of the uranium nucleus released only a thousandth of the energy contained in its mass. A total transformation of matter into energy would give, on the contrary, the ultimate limit of the (controlled) production of energy with which to build a new industrial world. Or to produce a weapon of insuperable power. Wheeler dreamt about a process able to convert a piece of matter totally into energy; he thought: 'The discovery of how to free the energy available on a reasonable scale may transform completely our Economy and the basis of our Military Safety. Thus we have to give special attention to the branches of Ultra-nucleonics (Physics beyond the level, at that time quite well understood, of nucleons, i.e. protons and neutrons)'. Wheeler, in fact, knew perfectly well that among the tasks of ultra-nucleonics there was a wider use of the energy coming from E=mc2 than the thousandth part released by nuclear fission.
The fission freeing only part of the energy contained in the mass had made Wheeler -and a number of other physicists -wonder whether the sextant equation showed the right way towards the liberation of much bigger energies. Each nuclear fusion collision released an energy a thousand times bigger than fission. Nevertheless, despite its great destructive power, even the hydrogen bomb (fusion bomb) still left a big part of the original mass unconverted into energy" [89]. Thus even with the H bomb, "a big part of the mass remains unconverted into energy" [89].
Where, then, is the mass we are talking about? With fission, we have broken a nucleus with a big mass, such as uranium. That is, we have split it and freed its components -protons and neutrons (nucleons) -but we haven't managed to split the nucleons! Fusion, in its turn, puts together nuclei with a small mass, such as hydrogen, but also in this case we haven't opened and emptied the nucleons. So it is likely that, as Wheeler complained, the un-freed mass corresponds to the mass contained in the nucleons. We know that the latter are made of 3 Qs, moreover among the lightest, which will never be able to account for the mass sought by Wheeler, that is to say, the great quantity of mass still untransformed into energy, not even with fusion or fission bombs. This makes us think that the missing link is the mass contained inside nucleons. In this case, apart from the Qs, we only know of the existence of Gs inside the nucleon. In fact, the intra-nucleonic space is crawling with Gs. We can only think that the great quantity of mass, never yet freed in any circumstance, is still contained inside nucleons and, probably, is associated with the Gs. How? It is represented by the mass equivalent of the energy of the G, an energy which is very high.
It could be objected that the mass we are looking for may be inside nucleons but not related to the Gs. It could be; however, inside the nucleons we know of nothing but Qs and Gs. Besides, it is very well accepted that the G is extremely charged with energy; thus, according to the Mass-Energy Equivalence Principle (MEEP), we tend to believe that the mass we are looking for is related to, and/or concealed by, the energy of the G. We think it is concealed precisely because of a fundamental rule of Quantum Mechanics: the Bohr Complementarity Principle [176]. According to the latter, mass and energy are two complementary parameters, so one excludes, conceals, the other. In other words, if we know the energy of a particle at a specific moment, it will be very difficult to know its mass too, and vice versa. Besides, we need to consider that when a particle is in motion -and the G is constantly in motion -it shows us more easily its undulatory aspect, concealing in the meantime its corpuscular aspect. We can get the probable mass of a particle when it shows us its corpuscular character. This happens when a particle interacts or is almost still, but never completely motionless [176]. Heisenberg's Uncertainty Principle forbids a particle to be completely motionless. Also in this case, we will never be able to know simultaneously, and with accuracy, the values of two complementary parameters of a particle, such as the position and the momentum. In other words, if only as a mere hypothesis we were able to observe an isolated G (which is impossible, since it carries colour charge), we would find it in full motion (showing us its undulatory aspect), never still. Hence, since we cannot access the corpuscular aspect of the G, we cannot have information about the probable mass it carries.
Barone adds: "Given its enormous conceptual power, it is strange that Einstein's formula has never had a great weight in Philosophy" [190]. Bencivenga states: "Barone intends to refer to 'professional' philosophers, academics, those who within a generation will be forgotten as they deserve" [189]. Yet, despite the clarity and simplicity of this formula, known as the MEEP, most mathematicians and physicists believe that any particle or quantum object, although energetic, carries a zero mass (unless one recovers it through the BEH-M, which is valid, however, only for the few particles sensitive to the WI!). For his part, in one of his masterly lectures, Peruzzi states: "The history of the Planck constant h teaches us that the answers to questions about the physical world are not always to be sought in the ultimate constituents of matter and fields: they seem rather distributed, not always obviously, across the various levels of reality.
In short, physics is a work that Bacon would call bees' work. Experimental men are similar to ants, they accumulate and consume. The reasoners resemble spiders, who make cobwebs out of their substance. Instead, bees take an intermediate route; they gather material from the flowers of gardens and fields, and transform it and digest it by virtue of their own capacity. Not unlike this is the work of true Philosophy, which derives the raw material from natural history and mechanical experiments, and does not preserve it intact in memory but transforms it and works it with the intellect. Therefore much can be hoped from a closer and purer relationship between these two faculties, the experimental and the rational" [191].
Moreover, we believe that precisely the very small value of the P mass-energy density shows that the Yang-Mills mass gap hypothesis is real, namely that the mass of the b quantum is not null! In fact, the P can also represent that minimum energy level (therefore a mass) corresponding to the excitation of the vacuum. The vacuum energy, in fact, is considered by most to correspond to the Dark Energy (DE), whose particles (DEP), as discussed above, we identify with photons (Ps) of different energies (depending on the context and the operative site).
In short, and this should represent the keystone of this paper, the very small equivalent-mass, or dynamic mass, of a single P -corresponding to a truly infinitesimal value, however ≠ 0 -may be enough to correct and eliminate d'emblée all the infinities and divergences emerging from the calculations of the gauge and perturbative theories, and of QED and QFT. Therefore, once freed (by the non-massless P) of the oppressive dogma imposed by the gauge theories, according to which all particles must absolutely be massless, we can try to insert the right value of the possible mass of the b quantum into Eq. (25), the Yang-Mills Equation. For all the above reasons, we believe that this value is likely to fall within the range defined by the mass values found for the HB and the tQ, whose intermediate value corresponds to ≈151(±26) GeV/c2; the Yang-Mills Equation should be rewritten accordingly. Since we identify the DEP with the P, to calculate the density value of the DEP mass-energy we must analyze its momentum (p): p = h/λ (as illustrated in Eq. 77).
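One way to see where the intermediate value ≈151(±26) GeV/c2 could come from is the midpoint of the HB and tQ masses. The input masses below (125 and 177 GeV/c2) are our assumption, chosen because they reproduce the quoted figure exactly; with the commonly cited 125.1 and 172.8 GeV/c2 the midpoint would instead be ≈149:

```python
def midpoint_band(m_lo, m_hi):
    """Midpoint and half-width of the mass interval [m_lo, m_hi] (GeV/c^2)."""
    return (m_lo + m_hi) / 2.0, (m_hi - m_lo) / 2.0

# Hypothetical inputs that reproduce the quoted 151(±26) GeV/c^2 exactly:
mid, half = midpoint_band(125.0, 177.0)
print(f"b-quantum mass estimate ≈ {mid:.0f} (±{half:.0f}) GeV/c^2")
```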
Thus, in those circumstances in which the DEP have a λ superimposable on that of the CMB, the p of the DEP, indicated with DEPp, will be: DEPp = h/λCMB (150). As can easily be seen, these values are very similar to those calculated by Perlmutter concerning the intergalactic DE (~10−29 g/cm3); which is to say, in our opinion, that this value also represents the energy-mass density of the intergalactic DEP.
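The momentum p = h/λ of Eq. (150) is easy to evaluate. The wavelength used below (λ ≈ 0.106 cm, the Wien peak of a 2.725 K blackbody) is our assumption for "a λ superimposable on that of the CMB", since the text does not state which λ it used:

```python
H = 6.62607015e-27    # Planck constant [erg*s]
C = 2.99792458e10     # speed of light [cm/s]

def photon_momentum(wavelength_cm):
    """DEP (photon) push-momentum p = h/lambda, in g*cm/s."""
    return H / wavelength_cm

# Assumed CMB wavelength: Wien peak of the 2.725 K blackbody, ≈0.106 cm.
p_cmb = photon_momentum(0.106)
m_equiv = p_cmb / C    # equivalent (dynamic) mass m = p/c [g]
print(f"DEPp ≈ {p_cmb:.3e} g*cm/s,  equivalent mass ≈ {m_equiv:.3e} g")
```

With this assumed λ, the single-photon momentum comes out near 6.3·10−26 g·cm/s; how the text converts such momenta into the quoted ~10−29 g/cm3 density is not shown in this section, so we do not attempt that step here.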
This finding could represent an indirect counter-proof that the DEP may coincide with the Ps, whose λ varies according to the context in which they operate [195]. In fact, we have proposed that the DE may also be present within the nuclear space (where it represents the N-N Force, or Levi's Interaction [150], which is to say Pacini's saturation barrier [152]) and within the intra-nucleonic space, where the DE may create an incompressible physical barrier that prevents the Qs from getting any closer, probably representing the physical substrate underlying the peculiar asymptotic freedom phenomenon of the Qs.
In these last circumstances, the energy carried by the DEP would be much more intense (compared to the intergalactic DE), so that the respective λ would be much shorter than the CMB wavelength. That is, the λ of the involved DEP (or P, according to our hypothesis) should vary with the context considered. We find it very important to emphasize that this inconstancy in the energy-mass density value of the DE (and therefore of the DEP), consequent on the variability of its λ, is in perfect harmony with Weinberg's concepts which, to make ends meet with the Anthropic Principle, presuppose that the vacuum energy (or DE) took different values in different domains of the Universe [196].