Humanity Thrive - Michigan State University (micheal/physics/roadmap.doc)

Gravitation and Elementary Particles
salvatore gerard micheal

Dedication: to Yui and Arthur.

Preface: This book evolved as a series of papers. First, they were addenda to my book on systems and society: Humanity Thrive! Then they became too numerous and began to stand on their own – in terms of importance and legitimacy. (And no matter how you contort systems, physics belongs to physics – not systems.) After I wrote chapter five, I realized that I would have to publish a third book on physics (the first one was a bit primitive in its conceptualizations – and not freely available – so it’s ‘no great loss’ ;). The second book is entitled Space, Elastic and Impeding – and is also a bit primitive – but at least entertaining ;). The following text should satisfy the rigorous mind while, at the same time, satisfying the soul. We hunger for a Universe that makes sense. This book is an attempt to satisfy that hunger. May God bless the curious mind. May God protect the open mind. May we all cherish the innocent.

I’m a bit critical of conventional physics – because I’ve been dismissed at every turn – this will come across as you read below. Some statements are a bit loose and should not be read as if written in iridium – such as: “The automatic conception of dilation seems hasty especially if there is a following expansion or turbulence” (p7). This was written before a linear analog of frame-dragging had congealed in my mind, and I was attempting to show respect to a dear colleague, Mayeul Arminjon (he developed a super-fluid model of space). Other statements appear vague or are deliberately vague – as is the nature of fundamental developments not ascribed to a computer (it takes a while for us humans to “get it” ;). I try my best to keep inconsistencies and typos to a minimum. May God bless your patience – if you have enough of it to read through this text.

Contents:
p04, 1. Introduction
p09, 2. Frame-Dragging
p17, 3. Temporal Curvature
p22, 4. Uncertainty – Part One
p27, 5. Uncertainty – Part Two
p32, 6. The Source of Uncertainty
p40, 7. A Test – Stability Vs Randomness
p44, 8. The Four Perspectives of the Systems-Reliability Approach
p49, 9. Energy Distribution

1. Introduction

I “wasted” several years on a project which falls into the category called “unification physics” – culminating in several books and papers which are basically ignored, unknown, and considered fringe by convention. A dear associate of mine declared that I have marginalized myself but I contend “it takes two to tango”.

I disagree with the “giants” of physics who have declared that we cannot comprehend quantum behavior. We can retrain our intuitions to reflect reality.

Unfortunately, physics has “gone down a path that leads nowhere”. For five basic reasons, physics has been led astray. One is that a branch of modern proponents dogmatically pushed their “inherent indeterminism” philosophy onto the rest (including Einstein). These early modernists essentially used peer pressure to make the rest feel like “idiots” if they did not subscribe to their philosophy. Later, some experiments were misinterpreted in ways that seemed to confirm the perspective. These supposedly disproved “local realism” (the model of reality that asserts even quantum behavior can be understood and modeled deterministically). Then, Richard Feynman (considered the most brilliant physicist since Einstein) “closed the door” on determinism by developing his path-integral formulation of quantum mechanics. This gave him the authority to promote indeterminism in the form of virtual particles and space-time foam. The Higgs boson (yet to be detected) is the “manifestation” of the “crutch” modern physics uses to “keep it all together”. And finally, the Casimir effect seems to confirm it all by measuring the “vacuum force”. (Please forgive the following extremely opinionated digression. To me, there's good science – like the design and intention of Gravity Probe-B. And there's bad science – like the data-analysis (better called data-manipulation) of the same project. Another good example of bad science is the “freeloaders” pretending to “do” science around the Casimir effect. The first thing that should catch your attention is that the effect is only visible with metallic plates (conductors). And repulsion is observed with insulators. That should be a red flag in your eyes that something else is going on. Those that swallow indeterminism “hook, line, and sinker” accept Casimir as “proof” of indeterminism. It seems to me the politics of science and funding issues are really what's “going on here”. They're grasping at straws. They sense they're on a sinking ship and are desperate to find “progress”. And it's not just about money – there's a lot of ego tied into these issues. Imagine the embarrassment of professors and researchers who have endorsed concepts for years – some basing their entire careers on them – only to find out they were all wrong! That's why I think they'll never embrace determinism. Again, these are only my opinions. Let's return to the discussion.)

The real culprit is not Richard Feynman – or even those early modernists. It is our infamous “friend” reduction that is to blame. The philosophy of reduction permeates science and engineering to such an extent – that we automatically think the universe operates on this principle – from large to small. For each “force” of nature, we have devised a “force carrying particle”: strong/gluon, weak/vector boson, gravity/graviton, and electromagnetic/photon.

Unfortunately for “science” (how can you call it science when it does not operate scientifically?), these particles are undetectable directly (with the exception of the photon). And the reason I put “force” in quotes above is that three of the four are not even legitimate unique forces!

Let's start with gravity. It is not a force per se because it is curved space-time. The weak “force” is not a force because it is simply unstable nuclei – our naïve attempt to blame a particular culprit is just that. And the strong “force” is merely the close-range version of gravity. The fact that we have “unified” electromagnetism with the weak “force” into “electro-weak” is a testament to our ingenuity – not a reflection of reality.

..Of course, the problem in physics is more than just a sequence of misunderstandings and a philosophical bent toward reduction. It is the revulsion toward determinism and all its “consequences” which impels physicists to embrace inherent indeterminism / the probabilistic interpretation of quantum mechanics. Physicists are human and have a natural desire for freewill. This desire is deep in the psyche. So when a physicist hears the word “determinism”, it evokes all kinds of negative connotations: the “clockwork universe”, no choice, slavery, meaninglessness, “living death”,.. I exaggerate, but the point is made: physicists cannot even listen to a deterministic idea – it's revolting to them – a backwards step in the “evolution” of physics .. But it's silly – determinism can exhibit its own “brands” of “randomness”: chaotic systems / strange attractors, complexity, and unknowable internal phase. There's no need to “build in” randomness at the quantum level to have a universe with freewill. The push toward many-worlds and multiple dimensions is not fundamentally required – either in terms of human needs – or to explain quantum behavior..

Perhaps indeed – electrons are “multi-state” entities under certain conditions. Or perhaps we're observing multiple states over a combinatorial range of possibilities. Electrons in orbitals certainly behave differently than electrons in conductors. This baffling behavior coupled with the sequence of historical events listed above has placed modern physics in the position
of a blind man who's trying to navigate down an unfamiliar alley by sense of touch. It's extremely unfortunate that so many brilliant minds have been deluded into wasting lifetimes – pondering and researching ways to support a theory that's essentially incorrect.

In a very roundabout way, modern physics has approached a fair model of elementary particles. Indeed, they are much like vibrating closed-loop strings. But we don't need eleven dimensions to model their attributes and behavior. If we allow two features of space: impedance and elasticity, we can explain most (if not all) of particle behavior. Of course, the fundamental questions: why does space have these two qualities? And why is the mass ratio of proton/electron the value we measure? – remain still unanswered. The last hundred years has been exciting for physics – it seems – one “fundamental” discovery after another. I believe the next hundred will be just as exciting.

..In my book on physics, I predicted no detection of frame-dragging – but at least two independent studies indicate the effect is real. If there is frame-dragging for a massive spinning object, then there is an analogous relativistic-wake for massive objects approaching the speed of light. The wake could be comprised of a leading compression and following expansion of space. It would seem this linear drag imposes the fundamental limit on speed. So the idea of space as a “frictionless track”, as many envision, becomes less plausible. What about time? The automatic conception of dilation seems hasty especially if there is a following expansion or turbulence (an expansion would be associated with time speeding up or vice versa unless space and time are disparate – the opposite we've been “preaching” for a century). If we've detected frame-dragging, the many years of “fighting/rejecting the aether” may
have been in vain. Frame-dragging implies there's something in space impeding masses – or – linking masses to space..

The reason for this chapter is an attempt to inspire others to search for answers to those questions – answers that jibe with reality – within testable theories using observable models. (One reason I don't give physicists any slack is because they are more like a “priesthood of mysteries” than scientists. They dogmatically adhere to one perspective, such as their interpretation of the double-slit experiment, rather than performing a comprehensive scientific investigation – such as using different materials for the slits or controlling the phase of the targeted electrons.)

2. Frame-Dragging

Frame-Dragging, the Key to Unification?
Salvatore G. Micheal, Faraday Group, [email protected], 11/11/2007

Space as an elastic medium is investigated. Frame-dragging is reevaluated in that context. Two experiments are proposed: one to verify frame-dragging and the other to investigate the strain of space. Five theoretical research areas are proposed: the elasticity (and strain) of space, the origin of natural modes or preferences among elementary particles, the relationship between the strong force and gravity in that context, the explicit relationship between electromagnetism and this context, and the potential salvaging of electro-weak.

Due to expansion of the Universe, space is under tension. When a particle mutually annihilates with its anti-counterpart, it's as if an ideal stretched string has been plucked – two photons / e-m waves are emitted in opposite directions. Of course, space has more qualities than just being under tension. It has permeability and permittivity.

c² = τ0/λ0 (1) [p3]
(wave propagation rate squared is tension divided by mass per unit length)

c² = 1/μ0ε0 (2) [p250]
(the speed of light squared is the inverse of permeability times permittivity)

=> λ0 = τ0μ0ε0 (3)

So, a mass is an element of space (per unit length) under tension (or internal pressure) subject to permeability and permittivity. Perceptive readers should notice (3) is a clever
rewrite of E = mc². But it's more than that – it shows that masses are a product of the three and only three qualities of space – elasticity, permeability, and permittivity:

τ0 = Y(Δl/l) (4) [p72]
(tension is linearly related to extension through Young's modulus, below the elastic limit)

=> λ0 = Y0μ0ε0(Δl/l) (5)

(Page references are from Physics of Waves, Elmore and Heald, 1969, Dover.) Research needs to be performed to determine why there are three modes associated with protons, electrons, and neutrinos. Perhaps the three modes are associated with elasticity alone, with combinations of the qualities, or with each quality individually. First, Y0 needs to be defined/determined such that it is fixed for all elementary particles. Next, a table with the columns Extension, Magnetic Moment, and Charge needs to be “filled out” for each particle. Then, the table needs to be analyzed for any patterns. Clearly, experimental research needs to be performed in order to determine the first column.
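As a quick numerical sanity check of (2) – my addition, not part of the original derivation – the following Python snippet recomputes c from the CODATA values of μ0 and ε0, and illustrates the dimensional claim of (3); the 1 N tension is a hypothetical figure for illustration only:

```python
import math

# CODATA SI values (my inputs; the text gives only the symbolic relations)
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, N/A^2
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m

# Equation (2): c^2 = 1/(mu0 * eps0)
c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.6e} m/s")     # ~2.997925e8 m/s

# Equation (3): lambda0 = tau0 * mu0 * eps0 = tau0 / c^2.
# Dimensional check: newtons divided by (m^2/s^2) gives kg/m,
# i.e. a mass per unit length, as the text asserts.
tau0 = 1.0                    # hypothetical tension of 1 N, illustration only
lambda0 = tau0 * mu0 * eps0
print(f"lambda0 = {lambda0:.3e} kg/m per newton of tension")
```

The check recovers the defined speed of light to within rounding of the input constants.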

Critical readers will doubt/dismiss the connection between space and an ideal string (and the connection between space and elongation), but recent developments indicate the analogies have robust features. Previously, in the interest of conceptual minimalism, I rejected the plausibility of black-holes, frame-dragging, and gravity waves. But they actually reinforce the analogies above. Singularities could correspond to exceeding the elastic limit of space. Gravity waves could correspond to “elastic waves in an extended homogeneous isotropic medium”, p225. And frame-dragging could be evidence for both elasticity and impedance of space.

If there's frame-dragging for massive spinning objects, then there's an analogous relativistic-wake for massive objects approaching the speed of light. The wake could be comprised
of a following expansion of space. It would seem this linear drag imposes the fundamental limit on speed. So the idea of space as a “frictionless track”, as many envision, becomes less plausible. What about time? Dilation implies lengthening periods or time-compression, so relativistic effects are consistent with those near strong gravity sources. Frame-dragging implies relativistic effects are real not virtual – as many have claimed. Frame-dragging implies there's a quality of space impeding masses – or – linking masses to space.

Several years ago, I proposed that inertia is a manifestation of the extension/expansion, but I lacked the crucial insight provided by frame-dragging. In the original proposal, a moving mass produced a smeared expansion that lagged behind the mass. But frame-dragging implies the opposite is true: that space is stretched behind the leading edge of a moving mass. The former proposal was mass dependent: a larger mass produced a larger expansion, and length-contraction was spatially uniform (in disagreement with reality). But the current proposal predicts length-contraction is unidirectional – co-linear with the line-of-flight – creating a “pancake” of any object. It's strictly speed dependent. The effect appears virtual because the object restores to normal once it returns to its rest-frame. But agreeing with that would be like saying time dilation and frame-dragging are virtual effects as well – which is erroneous – those effects have lasting consequences: a permanent and irreversible time-displacement – and – a permanent twist in space (as long as rotation is maintained).

An interesting thought experiment would be to create a rotating mass which maximizes spatial twist (for simplicity, suppose we use the “north pole” of a spinning object as a reference point). Compare the four following objects, all of the same mass and composition, all with the axis of spin along the axis of
symmetry: a long thin rod, a cylinder, a sphere, and a disk. (The cylinder and sphere have the same circumference.) The challenge is to visualize the coupling between mass and space – and – the appropriate projection/vantage to solve the problem. If we think about the rod first, we realize that it produces the least twist at its north pole because the mass-coupling near it is so small. The next smallest frame-dragger is the sphere – because its mass distribution, with respect to its north pole, is less than the cylinder of same diameter. Finally, we realize the “opposite” of the long thin rod, the large flat disk, is the “winner” in terms of frame-dragging. It's suggested the reader try the experiment with insulators and a small disk shaped test mass (also an insulator) placed at the north pole of each candidate mass. The reason for using insulators is to avoid any potential magnetic effects of conductors. Make sure you ground everything before spin-up; insulators can hold a static surface charge .. Two variations of the experiment above would be to repeat the scenario with conductors of similar density (similar to the insulators). Then mix them. The reason for choosing similar density material is to keep mass-coupling the same between same shaped objects. (So, run the experiment four times: one with insulators alone, one with conductors alone, one with insulator candidate masses and conductive test mass, and one with conductive candidate masses and insulator test mass. Make sure the conductors are not magnetized.)
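To make the ranking concrete, here is a small sketch comparing the four equal-mass shapes by moment of inertia about the spin axis. This is a Newtonian proxy of my own – on-axis frame-dragging grows with angular momentum J = Iω – not a general-relativistic calculation, and the masses and radii are hypothetical:

```python
# Moments of inertia about the symmetry (spin) axis for four equal-mass
# shapes. Used here as a rough Newtonian proxy for frame-dragging strength
# at the pole (on-axis Lense-Thirring precession grows with J = I*omega);
# this proxy and all dimensions are my assumptions, not the text's.
m = 1.0                                      # kg, same mass for all shapes
r_rod, r_round, r_disk = 0.01, 0.10, 0.50    # hypothetical radii, m

shapes = {
    "long thin rod": 0.5 * m * r_rod**2,     # solid rod about its long axis
    "sphere":        0.4 * m * r_round**2,   # solid sphere, (2/5) m R^2
    "cylinder":      0.5 * m * r_round**2,   # same circumference as sphere
    "wide disk":     0.5 * m * r_disk**2,    # disk formula with larger R
}

for name, inertia in sorted(shapes.items(), key=lambda kv: kv[1]):
    print(f"{name:14s} I = {inertia:.2e} kg m^2")
# order: rod < sphere < cylinder < disk, matching the text's ranking
```

With the sphere and cylinder sharing a radius, the cylinder's ½mR² exceeds the sphere's (2/5)mR², and the wide disk dominates – the same ordering the thought experiment arrives at.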

Back to Y0. The units of Y0 need to be newtons or newton-meters (from (5); the LHS is in kg/m). Since newton-meters = joules, Y0 needs to be in newtons or joules (force or energy). Previously, I defined Y0 to be hc, which is in units of joule-meters (not exactly what we're looking for). The only place to “borrow” units is from the extension. We could redefine extension to be extension/meter, but that would be artificial and somewhat equivalent to defining extension to be change-in-length/area. Whatever we do in defining Y0 has to make sense physically (with respect to Y0 and the extension). The basic requirement on Y0 is that it needs to be fixed for all elementary particles. The basic requirement on the extension is that it needs to be measurable (if we think in terms of volume, it does not have to be unidirectional). If we go with ∆l/area, that could be like changing the radius of a circle – but that choice would have to be justified intuitively and physically. Any choice of Y0 and extension would have to be similarly justified.

Let's discuss measuring the extension. (One way to think of the extension is the expansion of space due to the presence of mass, but this is missing the point of (5)! Mass IS the extension constrained by the three qualities of space.) Since we cannot make a spinning disk of protons, electrons, and neutrinos (individually) and measure the torque on a test-disk (exerted from frame-dragging), the best we can do presently is measure beam deflection of two nearly crossing beams of particles. (In making that first statement, I realized that in the heart of a cyclotron, one might be able to create “pseudo-disks” of electrons and protons – but measuring torque on a test-mass would become the issue – placing a sensor in the heart of a cyclotron is not an easy task!) In modeling the beam-crossing point and resulting deflections, we need to “subtract out” the electromagnetic interactions between sets of particles. I suggest varying the nearness of the crossing beams for each set of beams while keeping the angle between them fixed. I suggest: {electron, electron}, {electron, proton}, {electron, neutrino}, {proton, proton}, {proton, neutrino}, and {neutrino, neutrino} as sets. It would be nice to add sets with antiparticles, but dealing with neutrino beams is a formidable task in itself (to my knowledge, no one has created a neutrino beam yet). In any case, extracting the extension from measurements will require cleverness in perspective and experimental setup.
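For completeness, the six suggested sets are exactly the unordered pairs (with repetition) drawn from the three particle species – a quick enumeration confirms the count:

```python
from itertools import combinations_with_replacement

# The six unordered beam pairings suggested above: multichoose(3, 2) = 6.
particles = ["electron", "proton", "neutrino"]
pairs = list(combinations_with_replacement(particles, 2))
for a, b in pairs:
    print(f"{{{a}, {b}}}")
print(len(pairs), "sets")
```

This reproduces the list {electron, electron} through {neutrino, neutrino} in the order given in the text.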

Again, back to Y0. After some contemplation, the choice of hc seems inappropriate and any efforts to “make it work” are “barking up the wrong tree”. In the RHS of (5), we've already “encoded” c in terms of its components: permeability and permittivity. So, it's already there in the equation (implicitly). Let's consider h, Planck's constant, alone. The units are fairly encouraging: joule-seconds. What about the intuitive meaning/relevancy? ħ, h/2π, is the fundamental unit of angular momentum. It's the magnitude of spin of photons and double that of protons/electrons. It's not a stretch of the imagination to describe ħ as the “energy of twist” of photons, protons, and electrons. But what is it twisting? Perhaps this is where frame-dragging reenters the scene. Perhaps ħ is a measure of the twist-energy in elementary particles – the “twist of space” that seems fundamental to elementary particles. If we can tentatively accept that masses are units of space under tension/pressure, then it is not a leap to say those elements possess twist – especially when we consider frame-dragging and the ubiquity of ħ.

Many will dismiss this paper as “mere speculation”, but the approach above requires less “leap of faith” than required for the 11 dimensions of string theorists. We are so “wrapped up” in our models of the Universe that we cannot “see the forest for the trees”. A good example of this is the following. I'd bet it's fair to say that most physicists don't know about hydrogen in electrostatic equilibrium. They know about the equations and formula which describe behavior, but they don't have an intuitive sense of “what's going on”. The reason I say this is because they don't know the origin of the fine-structure constant, alpha. Alpha is simply the orbital speed (αc) of an electron in electrostatic equilibrium with the nucleus. Simulations are typically dismissed as they “offer useful approximations, but little direct understanding” (cover of
Turbulence, Coherent Structures, Dynamical Systems, and Symmetry, Holmes, Lumley, and Berkooz, Cambridge, 1996). But they can provide valuable insights such as above. The finite-element method is a numerical method typically employed to model stresses and strains of materials of specific geometries. Perhaps it can be used to model attributes of elementary particles and their interactions (from the perspective above). Insights garnered can be used to refine/“fix” the model above so that it completely reflects reality. For instance, a linear relationship between stress and strain is used because that's a convenient theoretical starting point – as it is for many physical systems.

Around the same time as my look into inertia, I developed a unified function describing both gravity and the strong force (because they are both strictly attractive and can be thought of as originating from the extension alone). But theoretical work needs to be performed to derive G from quantum constants. Until that's done, any such function (while it may be interesting) will be arbitrary and artificial. Next, the fact that the electromagnetic constants, permeability and permittivity, are “built into” (5) is conceptually nice, but Maxwell's equations need to be derived from (5) (or a variant of (5)) explicitly. Then, the theoretical unification of the electro-weak “forces” needs to be reevaluated to determine if it fits the framework presented above. Admittedly, this “call to arms” is broad and demanding, but I believe the community is “up to” the task. It should be a collaborative effort. Even if I had the roadmap “divinely inspired”, planted in my mind, I believe it would be wrong of me just to hand it over to professionals. This blind alley (our obsession with the probabilistic-reduction approach) we find ourselves in needs to be self-corrected.

The approach above was inspired by an engineering perspective. It should not be discarded or dismissed but thoroughly investigated – even if it's only another blind alley. Physics is at a self-made impasse presently – made from our dogmatic adherence to assumptions associated with probability-reduction. If we realize that double-slit phenomena can be explained with a model of elementary particles as extended 3D waves constrained by qualities of space, this opens the door to a reasonable and fully deterministic Standard Model – unified and integrated.

3. Temporal Curvature

A New View of Gravity – A Distributed Compression of Time
Salvatore G. Micheal, Faraday Group, [email protected], 11/17/2007

Y0, the elasticity of space, is defined and calculated. Linear strain is calculated for electrons and protons. In the process, after a few assumptions, a new relation between temporal curvature and spatial curvature is established. Needed work is reviewed.

From the previous paper on frame-dragging, we invented a new relation between mass and the linear strain of space:

λ0 = Y0μ0ε0(Δl/l) (1)
(mass per unit length (implicit) is linearly related to extension through the three parameters of space: elasticity, permeability, and permittivity)

We had some trouble defining an appropriate Y0, the elasticity of space. Recall that the basic constraint on Y0 is that it must be consistent between elementary particles (and of course its units must agree with the equation above). Let's make a few standard assumptions which should not cause too much of a ruckus. Of course, those must be verified (or at least – not disproved) – as the consequences of those assumptions must also be verified. Until now, we have not made the 'per unit length' explicit. Let's do that and assign the Planck-length:

λ0/lP = Y0μ0ε0(Δl/l) (2)

This is a place to start and we'll follow a similar convention when the need arises. Let's replace lambda with the standard notation and move lP to the other side:

m0 = (Y0lP)μ0ε0(Δl/l) (3)

Multiply by unity (where tP is the Planck-time):
m0 = (Y0lPtP)μ0ε0(Δl/ltP) (4)

Now, the first factor on the RHS is 'where we want it' (units are in joule-seconds). And the fact that we had to 'contort' the extension by dividing it by the Planck-time should not prove insurmountable to deal with later. Finally, let's assume the first factor is equal to the magnitude of spin of electrons and protons, ħ/2:

m0 = (ħ/2)μ0ε0(Δl/ltP) (5)

By our last assumption, Y0 = ħ/2lPtP ≈ 6.0526×10⁴³ N. To simplify and isolate the extension:

m0 = (ħ/2c²)(Δl/l)(1/tP) (6)

=> (Δl/l) = (2c²tP/ħ)m0 = 2(tP/ħ)E0 (7)

So, the linear strain of space due to internal stress is directly related to rest-energy through a Planck-measure. Later, if space allows (pun intended), we will show that (7) reduces to an even simpler form involving only two factors. If our assumptions hold, the numerical values for (7), for electrons and protons respectively, are approximately: 8.3700×10⁻²³ and 1.5368×10⁻¹⁹. The values are dimensionless – per the definition of linear strain. The meaning is: 'locally', space is expanded (linearly) by the fractions above (assumed in each dimension). What exactly locally means – will have to be addressed later. The numerical value of Y0 is extremely high, as expected. All this says is: space is extremely inelastic. The numerical values for ∆l/l will have to be investigated – perhaps as suggested in the previous paper.
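The quoted numbers can be reproduced directly. The sketch below uses current CODATA constants (my inputs; the text's 6.0526×10⁴³ figure was presumably computed with slightly older values, so agreement is to within a fraction of a percent):

```python
# Reproducing Y0 and the strains of equation (7) with CODATA constants.
hbar = 1.054571817e-34     # reduced Planck constant, J s
l_P  = 1.616255e-35        # Planck length, m
t_P  = 5.391247e-44        # Planck time, s
c    = 2.99792458e8        # speed of light, m/s
m_e  = 9.1093837015e-31    # electron rest mass, kg
m_p  = 1.67262192369e-27   # proton rest mass, kg

# Assumption (5) gives Y0 = hbar / (2 * lP * tP)
Y0 = hbar / (2 * l_P * t_P)
print(f"Y0 = {Y0:.4e} N")              # ~6.05e43 N

def strain(m0):
    """Equation (7): (dl/l) = 2 (tP/hbar) E0, with E0 = m0 c^2."""
    return 2 * (t_P / hbar) * m0 * c**2

print(f"electron: {strain(m_e):.4e}")  # ~8.37e-23
print(f"proton:   {strain(m_p):.4e}")  # ~1.54e-19
```

Both strain values match the text's figures, and Y0 comes out near 6.05×10⁴³ N.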

Let's deal with our assumptions first. The notions of Planck-time and Planck-length are associated with 'minimum measures' conventionally. Anything less is considered physically meaningless. If there is a fundamental limit on our precision in measuring things, we consider those to be lower bounds. If we could make a 'meter stick' with a length of the
Planck-length or a clock that 'ticked' per Planck-time, that would be the limit of our technology – physically imposed by the nature of our Universe. So, to use them above is not a huge stretch of our 'belief system'. Our first assumption, to employ 'mass per Planck-length', is not implying we assume electron masses are actually divided into small parts of m0/lP. It simply means that's the limit of our measuring ability – and that we associate a linear change in space (for now) with that minimum measure.

Conventionally, we think of m0, E0, ħ, c, and tP as fixed. If any of them varied, that would throw physics into chaos, right? But that is exactly what quantum mechanics has tried to cope with since inception: the seemingly statistical variation of m0/E0 about some modal value. Fortunately for science, ħ and c do not seem to vary statistically.

The fact that we had to introduce tP above in order to simplify the expression for the extension is only the completion of another expression of uncertainty. That's the conventional view. Another perspective is to view that change in space per unit time. There are two further ways to view that: as the propagation of the gravity wave of a newly minted particle – or – as the locally changing extension over time. If we tentatively adopt the latter view, this provides a natural/integrated explanation of uncertainty. The only 'problem' is that the linear increase in extension cannot go on forever. It must necessarily oscillate. The simplest way to model that is with a saw-tooth wave (with slope ±∆l/l). We could get a little 'fancier' and model with a sinusoid. The critical factors are: amplitude and wavelength. Amplitude is associated with the variation in rest-mass/energy. Wavelength is associated with the choice of period: Planck-time, de Broglie 'period', Compton-period, or relativistic-period? The first appears too small (and arbitrary),

19

Page 20: Humanity Thrive - Michigan State Universitymicheal/physics/roadmap.doc  · Web viewBut they actually reinforce the analogies above. Singularities could correspond to exceeding the

the second is not properly defined for particles at rest, the third does not account for relativistic effects, so we are left with the fourth. The fourth is based on the third but takes into account time-dilation.

For consistency with relativistic-mass, relativistic-energy is defined as:

E = ħω = E0/γ (8)

where omega is the relativistic-angular-frequency and gamma = √(1-(v/c)²). For consistency with time-dilation, relativistic-period must be lengthened:

T = T0/γ (9)

where T0 is the Compton-period of a particle at rest. Let's repeat equation seven here for convenience:

(∆l/l) = (2c²tP/ħ)m0 = 2(tP/ħ)E0 (7)

If we notice that heavier particles have larger extensions (comparing protons and electrons), we can replace every variable above with its relativistic counterpart (let's also give the extension a new name, X):

X = (2c²tP/ħ)m = 2(tP/ħ)E (10)

But because of (8), (10) can be rewritten:

X = 2tPω = 4πtP/Tγ² (11)

relativistic-extension is two times the Planck-time times the relativistic-angular-frequency – which is also equal to the ratio of Planck-time to relativistic-period through a solid angle! (Gamma-squared is a scaling factor from the relation ν ≡ 1/Tγ².)

For particles at rest, (11) reduces to:

X0 = 4πtP/T0 (12)

extension is the ratio of Planck-time to period through a solid angle

You can't get much more intuitive and simpler than that!
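For concreteness, here is a small numeric sketch of (12) for the electron and proton. (This is my addition; the constants are assumed CODATA-style values, and the Compton-period is taken as T0 = h/E0.)

```python
import math

# Assumed CODATA-style constants (for illustration only)
h = 6.62607015e-34               # Planck constant, J*s
t_P = 5.391247e-44               # Planck time, s
E0_electron = 8.1871057769e-14   # electron rest energy, J
E0_proton = 1.50327761598e-10    # proton rest energy, J

def extension_at_rest(E0):
    """X0 = 4*pi*t_P/T0, with Compton period T0 = h/E0 (eq. 12)."""
    T0 = h / E0
    return 4 * math.pi * t_P / T0

X0_e = extension_at_rest(E0_electron)
X0_p = extension_at_rest(E0_proton)
print(f"electron: X0 = {X0_e:.3e}")   # a dimensionless strain, ~8.4e-23
print(f"proton:   X0 = {X0_p:.3e}")
print(f"ratio = {X0_p / X0_e:.1f}")   # the proton/electron mass ratio
```

The proton's at-rest extension comes out larger than the electron's by the mass ratio (about 1836), consistent with the observation above that heavier particles have larger extensions.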

One way to think of gravity is as curved space. Another way to think of gravity is as curved time (only). An object in a circular orbit (around Earth) is following a 'straight line' path (of least action) through curved space – or – is following a path of same temporal curvature. An object in free-fall is following a straight-line path to the maximum of spatial curvature – or – is following a path to the maximum of temporal curvature. Gravity can be analyzed exclusively as a distributed compression of time. (All trajectories can be treated as a linear combination of those two orthogonal trajectories. They are fundamentally different in terms of temporal curvature. All extended objects experience a gradient on different parts of their extension – it’s not just the ‘steepness of the hill’ which pulls them down. In the same way, time is infinitesimally slower on the ‘low side’ of an object in orbit. Objects move to maximize time-dilation.)

The analysis above has shown that, with a few assumptions, there’s an equivalence between spatial and temporal curvatures. So, another way of looking at particles is as charged twists of space and localized compressions of time. What 'local' means still needs to be defined precisely (not in a tautological way). A preference needs to be established – in viewing curvature – such that characteristics of space-time (such as Maxwell's relations) are more easily exhibited. Those characteristics need to be derived from (1). The other theoretical tasks (set out in the previous paper) need to be performed. The two experiments from the previous paper need to be performed. If there is indeed a deterministic oscillation in mass/energy/extension, that needs to be experimentally verified. A small joke that should have been placed in the previous paper: “Don't cross the beams .. Never cross the beams!” ;)

4. Uncertainty – Part One

A New Uncertainty Relation for Conventional Physics
Salvatore G. Micheal, Faraday Group, [email protected], 11/19/2007

A new uncertainty relation is derived with the following parameters: extension of space (linear strain), time, and Planck-time. An argument on its fundamental nature and meaning is presented. Two related aether theories are discussed.

For those unable to divorce themselves from probability (or those unable to tolerate even a trial separation), the following train of thought was doggedly pursued to its 'brilliant conclusion' .. Near the end of the previous paper on temporal curvature, a relation between the extension of space (a crude measure of spatial curvature due to the presence of mass) and a measure of temporal curvature was developed:

X = 4π(tP/T) (1)

where subscripts are omitted for clarity; extension is the ratio of Planck-time over period through a solid angle. One expression of conventional uncertainty is:

∆ω∆t ≥ ½ (2)

uncertainty in angular-frequency times uncertainty in time is greater than or equal to one-half. With a little algebraic manipulation, this can be rewritten:

4π(∆t/∆T) ≥ 1 (3)

Notice the form of (3) is almost the same as (1)! Now, let's examine things from a conventional perspective. Since extension is directly related to energy, there's some uncertainty associated with it:

∆X = ∆[4π(tP/T)] (4)
= 4π(tP/∆T) (5)

=> ∆X∆t/tP = 4π(∆t/∆T) (6)
=> ∆X∆t/tP ≥ 1 (7)
=> ∆X∆t ≥ tP (8)

uncertainty in spatial extension due to the presence of mass, times uncertainty in time, is greater than or equal to Planck-time
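Relation (8) is easy to exercise numerically. A minimal sketch (assuming the CODATA value of Planck-time) of the lower bound it places on strain-uncertainty for a given time-uncertainty:

```python
# Numeric sketch of relation (8): dX * dt >= t_P.
t_P = 5.391247e-44  # Planck time, s (assumed CODATA value)

def min_strain_uncertainty(dt):
    """Smallest ∆X (dimensionless strain) permitted by ∆X∆t ≥ t_P."""
    return t_P / dt

# the better we pin down time, the larger the forced strain-uncertainty
for dt in (1e-21, 1e-15, 1e-9):
    print(f"dt = {dt:.0e} s  ->  min dX = {min_strain_uncertainty(dt):.3e}")
```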

Planck-time is the lower-bound for uncertainties in space-strain and time. The purpose of this paper is not to 'bend to convention' – but to present things in a way that is acceptable to convention, so that the previous papers (and any subsequent ones) are not rejected out of hand.

The author prefers deterministic and non-reductionist (holistic) views of quantum behavior. I say this not out of ego but out of a sentiment similar to Einstein's and de Broglie's: our lack of full understanding forces us to employ statistical/probability analysis. Then we further justify that by unequivocally stating that measurable entities have some inherent uncertainties associated with them. Of course there are errors associated with every measurement; of course there are always limits on our precision. The author does not argue against fundamental limits on time and space. It is the source of those limits that I question; it is the source of those 'inherent uncertainties' that I need to understand.

I have a natural tendency to view things in terms of electric and magnetic flux because those can easily be visualized. Even if a time-varying 3D vector field is required, again, that can easily be visualized. In physical systems – energy form, location, and flow – are critical to understanding them. I have a natural tendency to attempt to visualize that also. But when there are gaps in our understanding, there are gaps in the visualizations which automatically beg to be filled.

Gravity can be visualized in the approach above. Even the exchange of virtual particles and space-time foam can be visualized. But that does not validate them. It should be clear why quantum electrodynamics / quantum field theory is distasteful to me. You cannot question the math, but you can question the assumptions and techniques. In the first place, it's not a holistic approach. It wasn't invented to explain gravity or unify forces. The over-dependency on virtual particles is the second major issue. Take that away and what are you left with? A lattice of arcane math with questionable applicability.

What is the source of uncertainty in (8)? Is it space-time foam or some inherent uncertainty? Is that uncertainty based on some probability density function (which is truly random – the conventional approach) or on some internal oscillation? Let's examine relation (1) again:

X = 4π(tP/T) (1)

Let's rewrite it in terms of Planck-time:

XT/4π = tP (9)
∆X∆T/4π = tP from (5)
∆X∆t ≥ tP (8)

Convention would reject the second line as meaningless without a ≥ symbol. They might accept uncertainty in extension being inversely proportional to uncertainty in period, but they would see the statement as incomplete without the conventional relation (we are 'born, bred, and raised' to acknowledge a lower bound on uncertainty). Convention might find the first line interesting but not ascribe any deep meaning to it. I doubt they would see the relationship between temporal and spatial curvatures – even if a conventionalist had derived and presented the equation. They would focus on the assumption of internal oscillation and reject any conclusions based on that. After all, we did not precisely define uncertainty in energy: 'Amplitude is associated with the variation in rest-mass/energy.' Even if we did precisely define it (we might make an attempt later), there is the issue of validation. In any case, the physics 'atmosphere' is extremely hostile toward determinism and any aether-like associated proposals (a few will be discussed below). The third line is important to convention – if they want to unify gravity with electromagnetism (with or without quantum field theory and virtual particles). I'm certain that it can be derived within the conventional framework. I'm certain that it holds fundamental importance.

A dear associate of mine, Mayeul Arminjon, has developed a model of space as a 'super-fluid ether'. It's intriguing, but space behaves more like a highly elastic solid with 'strain bubbles' as 'matter waves' (G S Sandhu). But even Sandhu misses the mark in a way: he defines elasticity to be 1/ε0 (with corresponding inertial constant μ0). This allows him to derive Maxwell's equations by correspondence of form (correspondence to stress equations). That seems a bit contrived to me. If he had started with a mechanical definition of elasticity (such as in the previous paper) and derived Maxwell from that, I'd find him more believable. He also 'disproves' the primary postulates of special and general relativity, thereby rejecting both theories – only later to state 'at higher velocities and corresponding high energy interactions, adequate study and analysis of the associated phenomenon can only be made by using the techniques of special theory of relativity and Wave Mechanics' (p25, Elastic Continuum Theory of Electromagnetic Field & Strain Bubbles) – so he's a little inconsistent and tautological. Perhaps some of his ideas can be salvaged and incorporated into an integrated model of space-time and elementary particles – without tautology and inconsistency.

Relation (8) will be dismissed because it was derived with unconventional assumptions. But the associated insights are profound and far reaching. If there's an equivalence between spatial and temporal curvatures, gravity can be analyzed exclusively as a distributed temporal distortion, energy can be stored there, and this opens the door to a fully unified and integrated model of space-time and elementary particles.

5. Uncertainty – Part Two

The Nature of Uncertainty
Salvatore G. Micheal, Faraday Group, [email protected], 11/22/2007

Position-momentum uncertainty is analyzed to be dependent only on two uncertainties: position and energy. That relation is found to be additive and a fourth uncertainty relation is discovered. The nature of uncertainty is discussed.

As of this moment, there are three fundamental uncertainty relations: energy-time, extension-time, and momentum-position:

∆E∆t ≥ ħ/2 (1)
∆X∆t ≥ tP (2)
∆p∆x ≥ ħ/2 (3)

Let’s examine the last in detail. First, we need some basic relations:

p ≡ mv, p = mv, and ∆(ab) = 2(b∆a + a∆b) (4)

where the first is the standard definition of momentum (a vector identity), the second is the scalar version of that, and the third can be verified by the reader (make an assumption about symmetry). So,

∆p = ∆(mv) = 2(v∆m + m∆v) (5)

(v∆m + m∆v)∆x ≥ ħ/4 (6)

For simplicity, let initial time and position equal zero:

(v∆m + m(2((1/t)∆x + x/∆t)))∆x ≥ ħ/4 (7)

Since mass is directly related to energy and ∆E/(ħ/2) ≥ 1/∆t by (1),

(v∆E/c² + 2m((1/t)∆x + x∆E/(ħ/2)))∆x ≥ ħ/4 (8)

which is of the form:

(b∆E + c∆x)∆x ≥ ħ/4 (9)

where b and c are functions of v, m, t, and x (c here is not the speed of light).

b∆E∆x + c(∆x)² ≥ ħ/4 (10)

Now, adding something positive on the left does not change the direction of the relation (but we do lose some information – with proper choice of a, we’re ‘completing the square’):

a(∆E)² + b∆E∆x + c(∆x)² > ħ/4 (11)

(a1∆E + a2∆x)² > ħ/4 (12)
|a1∆E + a2∆x| > √ħ/2 (13)
a1∆E + a2∆x > √ħ/2 (14)

since a1 and a2 are positive functions of v, m, t, and x (with proper choice of coordinates). (a2 = √c, a1 = b/a2, a = a1² = b²/c, b = (v/c² + 4mx/ħ), and c = 2m/t.) (15)

This implies that the ‘momentum-position’ uncertainty relation is actually an energy-position uncertainty relation that is linear-additive – not multiplicative!

And since energy is directly related to extension,

a1∆X(ħ/2tP) + a2∆x > √ħ/2 (16)

This gives us four fundamental uncertainty relations: two that are multiplicative and two that are additive; one that is bounded below by ‘Planck-energy’, another that is bounded below by Planck-time, and two that are bounded below by linear functions of position-uncertainty:

∆E∆t ≥ ħ/2 (1)
∆X∆t ≥ tP (2)
∆E > √ħ/(2a1) – (a2/a1)∆x (17)
∆X > tP/(√ħ a1) – (2tPa2/(ħa1))∆x (18)

If we rewrite (1) and (2) to isolate energy and extension and think in terms of distortions in space-time, uncertainty in energy/extension is bounded below by linear functions of uncertainty in 1/t. This highlights two things: uncertainty in time directly ‘forces’ a lower bound on energy/extension – and – the reciprocal nature between space and time. ‘Random distortions in space’ provide a lower bound on uncertainty; concurrently, ‘random distortions in time’ provide a lower bound on uncertainty. No wonder the ‘space-time foamers’ feel justified in their approach. (Actually, energy-time uncertainty is not bounded by linear functions – UEt is bounded below by hyperbolic functions in time. We could have applied the same approach above to UEt (completing the square), but those linear functions would not be unique (because we are free to choose an infinite variety of a-s and c-s, the squared-term coefficients). So the nature of energy-position uncertainty is fundamentally different from energy-time uncertainty. One is bounded by linear functions; the other is bounded by hyperbolic functions. That ‘right there’ is evidence against space-time foam (because uncertainty is not symmetric between space and time). (Or, that is evidence of a weak link between space and time.) The other ‘juicy’ piece of information we get from the above is that energy-position uncertainty suggests negative energy states are bounded above by symmetric linear functions of position uncertainty. (Mirror the linear functions, shaped like a delta, through the position axis.) Negative energy is suggested by a2 = -√c being a valid solution in (15) above.)

It appears uncertainty has one of three sources: internal oscillation, space-time foam, or inherent randomness. We have not made explicit exactly how internal oscillation could exhibit itself in terms of the fundamental relations above. That is our next task.

If the source of UEt is exclusively internal oscillation, the simplest natural model is sinusoidal:

∆E = ħ/[2t(sin²(ωt–ω0t0)+1)] (19)

where ω0t0 represents unknowable internal phase at t0 – our ‘measurement time’ – and ω is relativistic angular frequency (E/ħ). The reader can verify the upper bound for this function is ħ/2t. There’s no reason to use ≥ in the relation above since we’re defining uncertainty here to be solely based on internal oscillation. Any measurement uncertainty is separate. In a sense, we’re defining ∆t = t(sin²(ωt–ω0t0)+1) with the stipulation above. But that’s distracting at this point so we’ll focus on energy:

Et = E ± ∆E = ħω ± ħ/[2t(sin²(ωt–ω0t0)+1)] (20)

energy of a particle, at a certain measurement time t0, is relativistic energy with uncertainty defined above

In practice, we can replace t by ∆t, our uncertainty in time, but we’d still have to deal with internal phase, so let’s focus on the form of (20). We can analyze in terms of frequency:

Et = ħ(ω ± 1/[2t(sin²(ωt–ω0t0)+1)]) (21)

So what we’re really saying is:

∆ω = 1/[2t(sin²(ωt–ω0t0)+1)] (22)

energy-time uncertainty is dependent on angular-frequency uncertainty, which is a decaying periodic function of time-uncertainty and initial phase
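As a sanity check on the bound just stated – a numeric sketch of my own, with an assumed value for ħ and an arbitrary t – the oscillatory model (19) stays between ħ/4t and ħ/2t over a full cycle of internal phase:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s (assumed value)

def dE(t, phase):
    """Oscillatory energy-uncertainty model, eq. (19):
    dE = hbar / [2t(sin^2(phase) + 1)], phase = wt - w0t0."""
    return hbar / (2 * t * (math.sin(phase) ** 2 + 1))

t = 1e-15  # an arbitrary illustrative time, s
samples = [dE(t, k * 0.001) for k in range(6284)]  # phase swept over ~2*pi
upper = hbar / (2 * t)
print(max(samples) <= upper)      # → True: never exceeds hbar/2t
print(min(samples) >= upper / 2)  # → True: never dips below hbar/4t
```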

Here, time-uncertainty is not assumed to be caused by ‘random distortions in space-time’ but rather simply – caused by measurement uncertainty. So we’ve arrived at a completely deterministic model of uncertainty caused essentially by unknown internal phase.

If we could create an electron beam, all with the same internal phase, we could verify the above as distinct from inherent randomness. The point of the discussion above is not simply to discuss the possible causes of uncertainty – but to present internal oscillation as a viable alternative. It is hoped the reader now has a deeper understanding and appreciation of uncertainty and its forms.

6. The Source of Uncertainty

Convention says the source of uncertainty is inherent randomness. Last chapter, we proposed the source of energy-time uncertainty is uncertainty in angular-frequency which has its root in unknown internal phase. Admittedly, the function describing ∆ω was somewhat arbitrarily assigned/constructed to conform to the lower bound of energy uncertainty. But the fact it could be constructed at all indicates the possibility of veracity. Now we need to show some theoretical evidence that the function has potential basis in physical reality (in other words – justify it) – or – investigate other candidate sources of uncertainty.

First, let’s look at (3) from chapter 4. A conventionalist would never express energy-time uncertainty that way because it implies internal oscillation. But let’s rewrite and ponder it:

4π∆t ≥ ∆T (1)

uncertainty in time is bounded below by uncertainty in period

The more I consider that relation, the more I think it has no great importance. All it really says is: uncertainty in measured time is bounded below by the uncertainty in some quality of the particle under examination. It’s a fundamental statement about measurement-error – not a fundamental relation about the nature of elementary particles. When we write it like this:

∆E∆t ≥ ħ/2 (2)

it is fundamental because we know:

E = hν = ħω (3)

which provides some insights into elementary particles and the nature of uncertainty.

When we watch a pendulum swinging, it’s beautiful because of its elegance. It’s also beautiful because it illustrates gravity and the conversion/conservation of energy. Energy oscillates in form – between potential and kinetic. Energy is never lost.

Energy, as expressed above, has two parts: angular-momentum and frequency. But this ignores energy in electric flux and energy in extension. Some years ago, I investigated an oscillatory electric field – to simulate the hydrogen atom under that assumption. It’s a good idea for explaining tunneling, but the size of the resulting atom/orbit doesn’t agree with Bohr. So we are left with only three possible sources of uncertainty based on oscillation: ħ, ω, or X.

As stated, the function describing ∆ω is arbitrary and doesn’t satisfy a required physical connection (yet). If ω oscillates – like an FM radio signal – then there’d be some physical basis for ∆ω. But before we pursue that angle (pun intended), let’s consider the other candidates.

We haven’t seriously considered an oscillatory ħ – but it’s possible – and would explain its presence and dependence in energy-time uncertainty nicely. If the simple pendulum is an analog of ħ, then perhaps energy oscillates in form between twist and extension: perhaps the twist in space oscillates out-of-phase with the extension such that extension energy maximum corresponds to twist energy minimum. Some clock pendulums are made to twist this way – a spring stores the angular momentum and vice versa.

Previously, I proposed energy oscillates between the e-m field and extension because of the Poynting vector – which indicates power flow. But there was the issue of Bohr disagreement (which was ignored at the time). So at this point, it’s down to the three aforementioned candidates. I believe the reason most would dismiss X oscillating is that they assume particles would radiate gravitational energy in that scenario. And there’s a serious problem with ħ oscillating (through zero energy): overall energy would appear to disappear periodically. So if ħ oscillates, there must be restrictions on that. Those restrictions must be ‘built in’ to the structure of space-time and theoretically explainable (that goes for any of the three candidates).

So perhaps the best candidate at present is ω. If ω oscillates, then perhaps a useful analogy is the ‘radio on a rotating satellite’ or ‘horn on the end of a spinning string’. In the first (if the satellite’s moving fast enough), you get a Doppler shift on your receiver. In the second, you get a Doppler shift in your ears (you hear the sound oscillate – up and down). This is not a justification of the idea – just a couple of illustrations. Perhaps the oscillation is caused by a disturbance. When a sensitive dynamical system is disturbed (disturbed from some equilibrium), it typically oscillates around some ‘attractor’ (stable region). So from a systems point of view, it’s definitely possible for a disturbed electron/proton to oscillate around some stable frequency (assuming those particles are something like sensitive dynamical systems).
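The ‘disturbed system ringing around an attractor’ picture can be sketched with a toy damped oscillator (purely illustrative – none of the numbers are physical):

```python
def ring_down(x0, v0, omega=2.0, damping=0.1, dt=0.001, steps=20000):
    """Semi-implicit Euler integration of a damped oscillator:
    x'' = -omega^2 * x - 2*damping*x'. Returns the final displacement."""
    x, v = x0, v0
    for _ in range(steps):
        a = -omega**2 * x - 2 * damping * v  # restoring + damping force
        v += a * dt
        x += v * dt
    return x

# After a disturbance (x0 = 1), the state oscillates while decaying
# back toward the attractor (equilibrium at x = 0).
print(abs(ring_down(1.0, 0.0)) < 0.2)  # → True
```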

Imagine particles as water droplets in zero gravity. Initially, they are spherical due to cohesion and surface tension. If you disturb one by trying to move it, it flattens where you touch it – then moves away – oscillating in shape (the shape oscillates through various forms of an ellipsoid). If there was a characteristic frequency associated with the original spherical drop, I’m sure that frequency would be disturbed, because frequency is tied to wavelength and wavelength is associated with size/shape. Every simple object has a characteristic frequency associated with it (basically – de Broglie’s hypothesis). This is the ‘ring of the bell’ when you strike one with a hammer (impulse input). If you imagine particles as little ringing bells that can change shape (under input), it’s easy to imagine their characteristic frequencies changing under input. It’s basic systems theory that a transfer function can be determined by impulse input. The transfer function of a system represents system structure. So, structure can be discovered with impulse input. The only ‘problem’ with that is – an impulse can only be approximated in practice (nothing can impart infinite force/power instantaneously). That would destroy a system anyway. But it turns out physics and systems are not totally disjoint ;).
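The transfer-function remark can be sketched with a toy discrete linear time-invariant system: feeding a unit impulse through the system returns the impulse response itself, which fully characterizes the system's structure. (The particular response h below is hypothetical.)

```python
def lti_output(x, h):
    """Discrete convolution y = x * h for a linear time-invariant system."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# hypothetical system structure: a decaying 'ring of the bell'
h = [1.0, 0.5, 0.25, 0.125]

# a unit impulse input recovers h itself, exposing the structure
impulse = [1.0, 0.0, 0.0, 0.0]
print(lti_output(impulse, h)[:4])  # → [1.0, 0.5, 0.25, 0.125]
```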

A dynamical system is one where past inputs affect present output or state. If we imagine elementary particles as tiny simple dynamical systems that are inherently stable (due to qualities of space-time), physical inputs (such as photon absorption or flux interaction) will clearly disturb those systems. If there are some characteristics that are fixed (spin, charge, and rest energy), then there are some that are flexible (relativistic energy, extension, and omega). Those flexible characteristics could oscillate (dependent on constraints listed above) or there could be just one that oscillates. Clearly, from a systems vantage, elementary particles are ‘disturbable’ with at least one oscillatory characteristic. It’s not a stretch to tentatively assign that to ω.

So right now, there are two competing primary assumptions about elementary particles: inherent stability vs inherent randomness. Let’s use Occam’s razor to cut away the fat:

Associated Assumptions

Inherent Stability:
e.p.s are stable simple dynamical systems
random behavior is due to unknown internal phase
state variables are explicitly deterministic
uncertainty is due to physical bounds and measurement uncertainty
e.p.s are ‘distinguishable’ by internal phase and any consequences of past disturbances

Inherent Randomness:
e.p.s are probability waves
random behavior is due to implicit probability density functions
state variables are interdependent random variables
uncertainty is due to bounds on frequency analysis
e.p.s are ‘distinguishable’ only in their flexible characteristics

According to Occam’s razor, the primary assumption with the larger number of associated assumptions (given all else is equal) should be thrown out. There are five high-level assumptions associated with each primary. There are low-level assumptions for each high-level assumption:

e.p.s are stable simple dynamical systems
  ‘stable simple’ is defined by constraints on space-time
  past inputs affect present state

e.p.s are probability waves
  waves are constrained by setting
  past inputs don’t affect present state

random behavior is due to unknown internal phase
  internal phase is based on relativistic frequency
  frequency is based on internal oscillation

random behavior is due to implicit probability density fcns
  density functions are based on setting

state variables are explicitly deterministic
  relationships are defined by qualities of space-time

state variables are interdependent random variables
  relationships are bounded by frequency analysis

uncertainty is due to physical bounds and measurement unc.
  physical bounds exist at some extremely low resolution

uncertainty is due to bounds on frequency analysis
  frequency analysis applies to elementary particles

e.p.s are ‘distinguishable’ by internal phase and any consequences of past disturbances
  internal phase is unobservable or currently misinterpreted
  e.p.s are dynamical systems

e.p.s are ‘distinguishable’ only in their flexible characteristics
  e.p.s are not dynamical systems
  e.p.s are indistinguishable in inflexible characteristics

Let’s regroup and delete the repetitions:

Inherent Stability:

e.p.s are stable simple dynamical systems
  ‘stable simple’ is defined by constraints on space-time

random behavior is based on internal oscillation
  internal oscillation is directly unobservable or currently misinterpreted

uncertainty is due to physical bounds and measurement unc.
  physical bounds exist at some extremely low resolution

Inherent Randomness:

e.p.s are probability waves
  waves are constrained by setting
  past inputs don’t affect present state

state variables are interdependent random variables
  relationships are bounded by frequency analysis

I’ve tried my best to regroup both sets of assumptions, deleting repetitions and implicit assumptions. At the same time, I’ve deleted intermediate assumptions. The tally ‘at the end of it all’ is six vs five. It’s clear why convention prefers the latter set, though it ‘wins’ by only one assumption. Examining the historical evolution of physics, it was more than Occam’s razor that decided the preferred set. It was the rejection of the aether and determinism, and the bent toward reduction, which impelled physics toward probability. I’ve devised a test which should give some evidence one way or the other. It’s possible that conventionalists could ‘pervert’ the test by defining everything in terms of angles and probabilities, but that’s up to them. They have three choices: dismiss the test as meaningless, explain the test in terms of probability (which is likely ;), or accept the results as confirmation of inherent stability. (Let’s do the experiment and see what happens!)

7. A Test – Stability Vs Randomness

The following test is based on the assumptions associated with each perspective: elementary particles are dynamical systems – or – they are not. A past disturbance affects present uncertainty – or – it does not.

If two particles are identical in identity (two electrons, for example), velocity, and position, they are identical. (This is the conventional perspective – ignoring polarization.) They are indistinguishable. It doesn’t matter how they got there; they behave the same from there on. Regardless of how they arrived, if you later measure some attribute, that value should be the same with the same level of error/uncertainty. Unless..

Unless particles are dynamical systems with a kind of ‘memory’ for past disturbances. Imagine two electrons arriving at the same place with the exact same momentum (at different times of course) but just after a huge difference in disturbance. If one arrived just after a small disturbance and the other arrived just after a much larger disturbance, there should be a larger uncertainty associated with the latter – if elementary particles have ‘memory’. If elementary particles are dynamical systems, they should exhibit larger uncertainties after larger past disturbances. This is the essence of the test.

The setting is somewhat like the inside of a TV tube: it’s evacuated, with an electron gun at one end and a target at the other. The EG is adjustable in intensity (number of electrons emitted per unit time). The target, T, is a thin gold foil leaf which bends easily under electron impact. The following is the baseline setup:

EG----------------------T

The EG is run at various intensities to measure deflection of T. Perhaps a laser bounced off T could give better resolution. In any case, we’re attempting to measure uncertainty in electron momentum – which is the variation in deflection of T. Theoretically,

∆p = ∆(mv) = 2(m∆v + v∆m) ≈ 2m∆v (1)

since ∆m should be negligible. Once calculated, this can be compared to the measured uncertainty.

The next setup is called “small disturbance” and introduces three magnetic deflectors which disturb the beam by pure reflection: a small magnetic force from MD1 (magnetic deflector 1) deflects the beam off-target, MD2 over-corrects, and MD3 re-places the beam axially:

               MD2
EG-----MD1           MD3-----T

The final setup is called “large disturbance” and introduces a larger deflection by using stronger magnets (or more powerful electro-magnets):

              MD2
              /\
             /  \
EG-----MD1        MD3-----T

The entire path length – from EG to T – is the same in setups two and three. This is to minimize the ‘number of changed variables’ between the two. That means the relative sizes of the diagrams above are deceptive: the physical separation between MD1 and MD3 is actually larger in setup two.

Applying Newton’s second law and the relationship between velocity and acceleration (velocity is the integral of acceleration), we find uncertainty in momentum is directly related to uncertainty in force:

∆p ≈ 2∆Ft (2)

where F is the force imparted by MD3, t is the ‘interaction time’ of an electron with MD3, and uncertainty in time is negligible. Note that the force here induces an angular acceleration (a turn) – not a linear acceleration axial with the beam. The only confounding factor is t, the interaction time with MD3: in the “small disturbance” setup, that time should be smaller than in the “large disturbance” setup because there is less magnetic flux over the same volume (the path of the electron crosses less magnetic flux). So that factor will have to be accounted for in (2).
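Relations (1) and (2) can be sketched numerically. All quantities below are illustrative placeholders (the text specifies no measured values); they only show how the expected spreads would be computed and compared:

```python
# Sketch of relations (1) and (2) for the proposed test.
# All numeric inputs are illustrative placeholders, not measured values.
m_e = 9.109e-31          # electron mass, kg

# Baseline, relation (1): dp ≈ 2·m·dv, taking dm as negligible.
dv = 1.0e3               # assumed spread in electron speed, m/s (placeholder)
dp_baseline = 2 * m_e * dv

# Disturbed setups, relation (2): dp ≈ 2·dF·t, where dF is the uncertainty in
# the force imparted by MD3 and t is the interaction time with MD3.
dF_small, t_small = 1.0e-18, 1.0e-9   # "small disturbance" placeholders (N, s)
dF_large, t_large = 5.0e-18, 0.5e-9   # "large disturbance" placeholders (N, s)

dp_small = 2 * dF_small * t_small
dp_large = 2 * dF_large * t_large

# The test then compares the *measured* spreads in deflection of T against
# these expectations: if particles carry 'memory' of past disturbances, the
# large-disturbance spread should exceed what (2) alone predicts.
print(dp_baseline, dp_small, dp_large)
```

This is only the bookkeeping side of the experiment; the interesting quantity is the residual between the measured spread and the value relation (2) accounts for.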

We are trying to calculate an expected uncertainty in deflection of T as compared to the baseline. Those following convention are free to employ the path-integral formulation devised by Feynman and compare with the above. Whatever you do, examine your assumptions: if the path-integral approach requires you to account for uncertainty in forces and interaction times for all three magnets, then Feynman is assuming elementary particles are dynamical systems with random state variables. If that’s true, then convention and determinism differ by only one fundamental assumption: random state variables vs. internal oscillation.

There are benefits that ‘go with’ determinism which convention conveniently ignores: the qualities of space-time constrain elementary particles – these constraints are natural and ‘flow’ from the properties of space-time – as compared to convention’s attempt with 11 dimensions and string theory (their dogged adherence to reduction and probability becomes ludicrous and laughable). The other benefit of determinism is that it makes sense. Why appeal to probability when we have the systems approach? Why automatically assign the label “random wave” to elementary particles – based on appearance, ego, and historical revulsion toward determinism? It boggles my mind – the intransigence of convention. I’ve realized “a marriage” is not the proper analogy for convention and probability-reduction. The proper analogy is a baby clinging to its mother’s breast – desperate for milk. The conventional adherence to probability-reduction is infantile.


(“Wake up” – “grow up” I say to physicists (and Americans). The following chapter is taken from Humanity Thrive! and is an appropriate epilogue for this book.)

8. The Four Perspectives of the Systems-Reliability Approach

Boundary

What's inside the system, what's outside it, and what are the major components of the system? In answering these questions, we address the system notion of boundary. Let’s examine the human system. What's inside the human system? Human beings, social organization (formal and informal), and our infrastructure are major sub-systems. What are the inputs? Energy, resources, ecologies that impact our lives, and natural (non-living) systems that impact our lives. What things “flow” between major sub-systems? Resources, energy, information, feelings (which can be thought of as commodities that are exchanged), “control signals”, and disinformation. What are the outputs of the human system? Wastes, heat, information, culture (both constructive and destructive aspects), and things that affect non-living systems and ecologies.

Aside: what's war in systems terms? War is the allocation of resources, energy, information, feelings (such as aggression), control signals, and disinformation – all directed at one goal: domination. The “rational” idea behind war (as hoped by governments waging war) is that long-term gains should outweigh any short-term malady. Please refer to the chapter below entitled: The Ends Cannot Justify the Means.


So, the system notion of boundary is the view that identifies the system concerned: what is inside and out, what are major components, what flows between, and what flows in and out.

Scope

There are three major aspects of the system notion of scope: feasibility, customer requirements, and design responsibilities. Tied together in question form: can you design a workable system that satisfies customer and design requirements within budget? As applied to the human system: can we re-design a workable human system (as defined above) that satisfies humanity and our design constraints within our allocated budget (assume for the moment we have a design budget and the authority to re-allocate system resources to satisfy design requirements)? This is an extremely difficult question when dealing with complex systems. Frequently, the entire process of “system design” – identify boundary, scope, maintenance concerns, and reliability – must be repeated several times: “filling out” details of sub-systems and flows, inputs and outputs, re-answering the question associated with scope (with every major change in system design, there is an associated change in the question of scope), and revisiting the concerns below.

Maintenance

Expect to pay at least the same amount for maintenance as for “the original system”. In this case, the “end users” are human beings themselves. If we can design and implement a human system that satisfies (I would substitute the word fulfills here) the vast majority of human beings, if we can maximize quality-of-life while minimizing suffering, and at the same time not create a welfare state, we will have accomplished something truly fundamental. Maintenance is the “upkeep” of the designed system – to satisfy end-user requirements. Frequently, the designed system does not take many of those into account (it’s too expensive and difficult to satisfy every end-user need) – and it's difficult, sometimes impossible, to anticipate changes in end-user requirements. So it’s a trade-off: the more we spend on creating a “maintenance-free” product, the less we are likely to spend on maintenance – provided we have the foresight to anticipate the true needs of end-users. There's risk involved – which brings up the next topic.

Reliability

What is the risk/probability of failure of a major sub-system? What is the cost of that particular failure? Multiply the two and you get a simplistic projection of the relative cost. Let’s consider a “simple” example: a telecommunication switch (the device used to route local calls). The risk of total failure (where the switch “goes down” – it cannot route any new calls and all calls-in-progress are dropped) – is quite low: perhaps once in ten years. The cost of that failure can be quite high – depending on the local customer base and duration. Even considering averages, the cost can rise into the millions. So, let’s say the switch is down for three hours and costs the local telephone company two million in lost revenue and bad publicity. Just three hours in ten years. If you divide down-time by up-time (over ten years) then multiply by two million, you get around $70 which equates to about three hours of technician-time. So, we're justified if we allocate three technician-hours for switch maintenance (over ten years) to specifically avoid this kind of problem. Actually, telephone companies allocate much more than this to avoid total switch failure.
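The switch arithmetic above can be checked in a few lines. The technician rate of $23/hour is an assumption inferred from the text’s own figures ($70 ≈ three technician-hours), not a stated value:

```python
# Worked check of the switch example: expected relative cost of total failure,
# expressed as the technician-time justified for prevention.
down_hours = 3.0
up_hours = 10 * 365 * 24          # ten years of service, in hours
outage_cost = 2_000_000           # dollars lost in the three-hour outage

expected_cost = (down_hours / up_hours) * outage_cost   # risk-weighted cost
tech_rate = 23.0                  # assumed technician rate, $/hour (placeholder)
justified_hours = expected_cost / tech_rate

print(round(expected_cost), round(justified_hours, 1))  # → 68 3.0
```

The risk-weighted cost comes to roughly $68 – consistent with the “around $70 ≈ three technician-hours” figure in the text.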


Let’s move the discussion toward the human system. Catastrophic failure would be where every single human being would die. Admittedly, the probability of that is extremely low. Extremely low but non-zero. Some would say the cost of that event would be “infinity”. A number (no matter how low) times infinity is still infinity. So, the relative cost is still “just too high”. So, anything we spend on preventing that event – is money well spent.

A dynamical system is one in which past inputs affect present outputs or system state. Reliability usually refers to the domain of systems concerns which reflect upon system stability. Stability refers to the behavior of system state over time. Is it restricted? Or does it vary madly – threatening to destroy the system itself? (Reliability also refers to dependability or consistency of good system performance. If a car does not start, has repeated mechanical breakdowns, or exhibits uncontrollable vibrations while driving, we say it is unreliable.)

..In systems theory, much emphasis is put on controllability and observability – which are pretty much – exactly what they “say”: a system is controllable if there are finite inputs which “drive” (or push) system state to desired specifications – and – a system is observable if there is a set of measurable outputs which represent the state of the system. State variables are those which represent system structure. When we are designing a system “from scratch”, these are all known and explicit. When we are trying to understand a natural system “from the outside”, we have to make reasonable guesses about state, inputs, outputs, and attempt to determine if the system is observable and controllable..
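The notions of state, stability, and “past inputs affecting present outputs” can be illustrated with the simplest possible discrete system – a sketch added here for illustration, not part of the original text:

```python
# Minimal illustration of a dynamical system with memory: a one-state linear
# system x[k+1] = a·x[k] + b·u[k], y[k] = c·x[k].
# |a| < 1 gives a stable system (state decays); |a| > 1 an unstable one.
def simulate(a, b, c, inputs, x0=0.0):
    x, outputs = x0, []
    for u in inputs:
        x = a * x + b * u        # the state x carries the memory of past inputs
        outputs.append(c * x)
    return outputs

stable   = simulate(0.5, 1.0, 1.0, [1, 0, 0, 0])   # impulse dies away
unstable = simulate(2.0, 1.0, 1.0, [1, 0, 0, 0])   # impulse grows without bound
print(stable)     # → [1.0, 0.5, 0.25, 0.125]
print(unstable)   # → [1.0, 2.0, 4.0, 8.0]
```

Here the system is trivially observable (y exposes the whole state) and controllable (any target state is reachable through u); the wind-shear bridge below is the unstable case with a persistent input.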


In systems analysis, there are stable systems and there are unstable systems. A famous film of wind shear causing increasing oscillations – in that case, twists – of the Tacoma Narrows Bridge is recalled by many of the public. The flexible bridge there is “the system” and the constant wind shear, the input. The system under the force of gravity (only) is stable. The system under gravity and wind shear – unstable.

There are many analogous stresses/inputs on the human system. Hunger can be thought of – as a kind of stress. Overpopulation causes hunger which is a stress on the human system. Disease vectors cause stress on the human system. Changing weather patterns cause stress on the human system. Disruption of food supply chains causes stress on the human system. Lowering the quality of education causes stress on the human system.

The point of this chapter is to introduce systems concepts, apply them cursorily to the human system, and provide a launching point for other ideas below.


9. Energy Distribution

The previous chapter would have made a good finish for this short book, but as things go, good ideas tend to take on a life of their own. I’ve always been concerned with total energy and energy distribution within elementary particles – that was the basis for my first attempt at this theory, but that first attempt was too ambitious, ill-conceived, and lacked appropriate insights. I proposed an inner structure for e.p.s – depending on luck (a lucky guess about inner structure) and the few insights I possessed at the time.. I won’t say that I was incorrect – just too ambitious in attempting to explain too many things.. So this chapter is not written in iridium – I won’t stake my meager reputation on its veracity, but it seems to make sense in the bigger picture of inherent stability; it’s consistent with the idea of internal oscillation.

We’ve previously proposed that total energy is distributed in three components:

ET = EX + Es + Ee                                  (1)
   = (ħ/2tP)X + (ħ/2)/T0 + e²Z0/T0                (2)
   = E0/γ + E0/4π + E0/2π                         (3)
   = (1/γ + 3/4π)E0                               (4)
   = ((4π/γ)/(4π/γ + 3) + 3/(4π/γ + 3))ET         (5)

These relations assume a couple of things: that these energies are distinct (there’s no oscillation or sharing between them), that they hold for all e.p.s, and that under annihilation the first term dominates (the others vanish). Line (1) describes energy in extension/temporal-curvature, spin, and electric-flux. Line (2) is more explicit and is based on earlier derivations. Line (3) is a simplification based on known relations. Lines (4) and (5) are algebraic simplifications. Line (5) is interesting in that it illustrates the relationship between relativistic and non-relativistic forms of energy in e.p.s. It does not highlight the split of energy between spin and electric-flux because both of those are static quantities. But it illustrates the limiting nature of the static fraction – energy can never wholly reside in temporal-curvature because there will always be a small fraction in spin-flux – regardless of kinetic energy.. This is an alternative view compared to the ‘limiting nature of c’.
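Line (5) can be tabulated directly – a sketch using the formula exactly as written, with the two bracketed terms read as the extension/temporal-curvature share and the static (spin plus electric-flux) share of ET:

```python
import math

# Shares of total energy per line (5):
# E_T = [ (4π/γ)/(4π/γ + 3) + 3/(4π/γ + 3) ]·E_T
def fractions(gamma):
    x = 4 * math.pi / gamma
    dynamic = x / (x + 3)        # extension/temporal-curvature share
    static  = 3 / (x + 3)        # spin + electric-flux share
    return dynamic, static

for gamma in (1.0, 2.0, 10.0, 1000.0):
    d, s = fractions(gamma)
    print(gamma, round(d, 4), round(s, 4))
```

The two shares always sum to one, and the static share is non-zero for every finite γ – the ‘limiting nature of the static fraction’ the text describes.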

The nagging question in my mind has been the ‘confinement mechanism’: how do e.p.s ‘stay together’ – why don’t they simply dissipate? Perhaps the answer is in the extreme inelasticity of space. The numerical value of Y0 is extremely high, which indicates space has a natural tendency to ‘crush the living daylights’ out of e.p.s. Considering this, it’s amazing they exist at all. So these ‘strain bubbles’ must exert an internal pressure balancing the crushing force of space. (The question then becomes: where is the balancing point, and why? This is equivalent to asking for e.p. radius or volume. It’s also equivalent to asking for energy density or the ‘shape’ of e.p.s in terms of energy. It should be obvious I think the answer is in the qualities of space-time. The answer to this question must be ‘phrased’ in a non-tautological way. The answer is ‘there’ waiting to be discovered. I’m waiting for inspiration.)

If disturbances affect ω and therefore E, there must be a mechanism to restore equilibrium. Since X0 = 4πC0 ≡ 4πtP/T0 and X = 4πC, where C is relativistic temporal-curvature, the dissipation must be in the form of minute gravity waves (C = (tP/h)E, so C is a relativistic quantity like X, and we’ve established gravity can be treated exclusively as distributed temporal-curvature). These must be released such that uncertainty in ω conforms to (1/2t)(sin²(ωt − ω0t0) + 1) or a similar function. (This proposed phenomenon makes sense, but so does retention of disturbance energy – if there’s no mechanism for release. The experiment proposed above could be extended to include various distances between MD3 and T so that the idea can be tested. If disturbance energy is dissipated over time, and that time is significantly larger than the Planck-time, then we should be able to measure restoration to equilibrium.)
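The proposed envelope can be sampled to see its qualitative behavior – an oscillation bounded between 1/2t and 1/t, decaying toward equilibrium. The parameter values below are dimensionless placeholders, not derived quantities:

```python
import math

# Sketch of the proposed post-disturbance uncertainty envelope,
# Δω(t) ∝ (1/2t)(sin²(ωt − ω0·t0) + 1): bounded between 1/(2t) and 1/t,
# so its amplitude decays as 1/t — restoration toward equilibrium.
def delta_omega(t, omega, omega0, t0):
    return (1.0 / (2.0 * t)) * (math.sin(omega * t - omega0 * t0) ** 2 + 1.0)

# Illustrative (dimensionless) parameters — placeholders only.
omega, omega0, t0 = 5.0, 5.0, 0.0
times = (1.0, 2.0, 4.0, 8.0)
samples = [delta_omega(t, omega, omega0, t0) for t in times]
print(samples)   # the envelope shrinks as t grows
```

Measuring deflection spread at several MD3–T distances, as suggested above, amounts to sampling this curve at several values of t.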

The nature of ET above indicates photons cannot possess electric-flux, have zero intrinsic spin (modern e.p. physics asserts this), and are ‘pure’ waves of temporal-curvature. Perhaps these ‘travelling strain bubbles’ oscillate out-of-phase with e-m field vectors. ET does not explain neutrinos unless they are travelling strain bubbles with no oscillation. ET does not explain why there are two or three E0 – a thorough analysis still needs to be performed on the ‘three properties’ table (of C, μ, and Q):

                      C       μ     Q
electron   6.6606×10⁻²⁴      μe    -e
proton     1.2229×10⁻²⁰      μp     e
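The C column can be checked against the relation C = (tP/h)E stated elsewhere in this chapter. This sketch uses standard rounded values of the physical constants:

```python
# Check of the C column using the relation C = (tP/h)·E0, where E0 = m·c²
# is rest energy. Constants are standard rounded values.
h  = 6.62607e-34      # Planck constant, J·s
c  = 2.99792458e8     # speed of light, m/s
tP = 5.3912e-44       # Planck time, s
m_e, m_p = 9.10938e-31, 1.67262e-27   # electron and proton rest masses, kg

C_e = (tP / h) * m_e * c**2    # ≈ 6.66e-24, matching the electron row
C_p = (tP / h) * m_p * c**2    # ≈ 1.22e-20, matching the proton row
print(C_e, C_p)
```

Both values reproduce the table entries to the quoted precision, and their ratio is just the proton-to-electron mass ratio.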

Before string theory, multiple dimensions, and exotic geometries, I proposed that e.p.s have structure based on the structure of space-time. Looking at C alone in the table above hints at this. The coefficients are almost 6/9 and 11/9. Is there some deep meaning in these numbers? Perhaps. Perhaps not. 9 is 3², where 3 is the number of spatial dimensions we perceive. But because there are only two stable e.p.s, we don’t have enough information to say any more.. My original idea was that space provides a rectangular box where standing waves can reside – in one direction or the other. But the ‘box idea’ is equivalent to extra dimensions – is it not? If the ‘box’ resides in ‘time’, then time needs extra dimensions to accommodate it (which is an extremely ‘cool idea’ – but I must resist temptation). Let’s try to explain the table above within the four dimensions of space-time – before we appeal to multiple dimensions.

Before we discuss the conventional approach to that, let’s talk a bit about ω and internal oscillation. ω could represent the angular-frequency of a spherical standing wave within e.p.s. A standing wave of what? The ‘only thing that makes sense’ is a wave of temporal-curvature. A standing wave of spin or electric-flux makes no sense. So perhaps e.p.s are: spherical standing waves of temporal-curvature, bounded by the extreme inelasticity of space, possessing discrete twist and electric-flux.

This year, a very important (conventional) paper was published and arXived. It’s entitled: Statistical Understanding of Quark and Lepton Masses in Gaussian Landscapes. The authors are Hall, Salem, and Watari. Partial funding for the research was supplied by the National Science Foundation and the US Department of Energy. Any project that can acquire both NSF and DOE funding is obviously important (to convention). After skimming the small book, I tend to agree with them – within the framework of convention. If the Standard Model is correct, if the approach of string theorists is correct, if reduction is a basic premise of the multiverse, if multiple dimensions are compacted in our and other universes, if the multiverse exists,.. Maybe you get my point. That’s a lot of “ifs”. And they’re not just any “ifs” – they’re big, fundamental “ifs” about the nature of our universe and all others. As I was reading the paper, I got the distinct impression that “this is a paper on high-energy physics and cosmology”. When you try to explain all particles, no matter how short-lived, there is little choice but to employ a framework such as the one convention has. The paper is beautiful in its consistency and scope. But it’s a monster in implementation. If you can absorb the concepts without getting bogged down by the math, it’s actually not that complicated. Try to read/skim it. The arXiv number is: 0707.3446v2.

Nuclei behave as extended objects (objects with size), but protons and electrons behave as point-masses. The fact that nuclei exhibit size is not a huge mystery to me: protons cannot exist near each other because of electrostatic repulsion. The ‘spacers’ in nuclei are neutrons. They also act as ‘glue’. (So, of course, do protons.) The problem with convention is its automatic assignment of a particle to that: the gluon. Anyway, nuclei are extended basically because of proton repulsion. They have geometry, excitation modes, energy-release modes, and of course the fascinating quality of stability/instability. If we examine the alpha-particle (helium nucleus), this highlights the differences between convention and determinism. Convention says that particle has a finite (non-zero) probability of changing identity or decaying. Determinism says: that particle will never decay unless it is unstable or disturbed. In my opinion, they’re stable and – no matter how long you wait – an alpha will remain an alpha will remain an alpha. Some nuclei are unstable because of geometry or vibrational/spin modes, some nuclei are unstable because of a (relative) lack of ‘glue’, and some nuclei are unstable because they’re simply too big. Nuclei are fascinating systems, but they’re not elementary particles – just as short-lived particles, no matter how fundamental they may seem, are not e.p.s. A good example is the neutron. A free neutron is unstable: it decays with a mean lifetime of about fifteen minutes. A bound neutron is stable – if the particular nucleus binding it is stable. A free neutron always decays into the products: proton, electron, and antineutrino. Neutrons are obviously composite particles; they’re obviously not elementary .. An interesting challenge is to model the interior of a neutron deterministically, but more important presently is the issue of elementary particle size.


The Compton-wavelength, identified by h = m0cλ0, has been dismissed by convention as meaningless because, if e.p.s are point-masses, λ0 ‘obviously’ means nothing in terms of radius or anything geometric. In the process of looking for e.p. size, I’ve found interesting features; I’ve found that the assumption ‘e.p.s behave as point-masses, therefore they are point-masses’ is basically incorrect. E.p.s appear to be point-masses because they’re so small.

The question of size arises from the consideration of balancing forces: the crushing inelastic force of space balanced against the internal pressure of e.p.s. If e.p.s are spherical standing waves of temporal-curvature (with twist and charge), they must have a boundary. The natural reference to use is the Compton-wavelength. Y0 is already in units of force – we don’t have to modify it in any way. (We’re examining the equation Fext/A = Fint/A and trying to determine the nature of A.) If e.p.s have size, the best first guess is based on Compton-wavelength:

Y0 = E0/aλ0 (6)

where a is a dimensionless scaling constant (which we assume the equation must possess in order to ‘work’). Now, E = hν = hc/λ, which implies:

Y0 = hc/aλ0² (7)

where a can be solved for the electron/proton and works out to be about 10⁻⁴⁵/10⁻³⁹. So, if e.p.s are Compton-spheres, they are only fractions thereof because of the extremely small scaling factors.

What about torii? The surface area of a torus can be controlled by adjusting the relative radii, so we may be able to use that model for e.p. shape. The surface area of a very thin torus can be approximated by 4π²rprm, where rp is the primary (larger) radius and rm is the smaller/minor radius. Since λ0 >> lP, we can assign the following: 2π²λ0lP (here we’re assuming λ0 is the primary diameter – just for simplification). Now, in order to get that form in (7), we must divide E0 by lP (assuming rest energy is somehow packed into a Planck-length – giving e.p.s a ‘fighting chance’ to balance the crushing force of space), but there’s still a scaling factor we must assume is there to ‘make things work’ (size-wise):

Y0 = hc/a(2π²λ0lP) (8)

Note that λ0 appears below because of E0, and lP appears below because of our assumption above. When we solve for a, we get a = 2lP/πλ0 = X0/2π², which implies:

Y0 = hc/(2π²λ0(X0/2π²)lP) (9)

which implies – if e.p.s are torii, they are ultra-thin torii, with minor radii much smaller than the Planck-length. The ‘interesting’ feature of this scenario is that when we plug the first value of a into (8), we get:

Y0 = hc/(4πlP²) (10)

where the denominator is the surface area of a Planck-sphere! So even if we assume e.p.s are shaped like ultra-thin torii, the shape we’re forced to accept is the sphere! It seems we cannot escape it. Of course, when we start with that assumption, (10) is easy to derive.
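Under the assumption Y0 = hc/(4πlP²) from (10), the scaling constant in (7) reduces to a = hc/(Y0·λ0²) = 4π(lP/λ0)², so the quoted orders of magnitude can be checked directly. Constants below are standard rounded values:

```python
import math

# Check of the size derivation: with Y0 = hc/(4π·lP²) as in (10), the scaling
# constant from (7) is a = hc/(Y0·λ0²) = 4π·(lP/λ0)², which should come out
# near 10⁻⁴⁵ for the electron and 10⁻³⁹ for the proton.
h  = 6.62607e-34                      # Planck constant, J·s
c  = 2.99792458e8                     # speed of light, m/s
lP = 1.61626e-35                      # Planck length, m
m_e, m_p = 9.10938e-31, 1.67262e-27   # rest masses, kg

def a_for(mass):
    lam0 = h / (mass * c)             # Compton wavelength, from h = m0·c·λ0
    return 4 * math.pi * (lP / lam0) ** 2

print(a_for(m_e), a_for(m_p))   # ≈ 5.6e-46 and 1.9e-39
```

The results sit at the orders of magnitude quoted after (7), and the ratio of the two scaling factors is the squared mass ratio.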

The Planck-sphere seems inescapable.. If indeed e.p.s are energy ‘packed into’ Planck-spheres, that’s why they appear to be point-masses – and why λ0 seems to have nothing to do with particle radius. Is λ0 anywhere in (10)? No. (10) proposes that all non-composite particles with spin and charge have Planck-radius. What makes them different? We’re left with very few choices: the wave-number inside the boundary and perhaps the orientation of μ with respect to ħ. (10) explains why protons and electrons have the same charge magnitude (treated as a surface charge over the same area). And again, why λ0 is merely a relational factor with no geometric meaning (λ0 has meaning when we equate it with c/ν0). Of course, explaining the relative magnitudes of magnetic moments is a chore determinism cannot deny – and defining a suitable κ, or wave-number, that fits the framework above is required.

(Many would point out that we defined Y0 in terms of Planck-measures, and that it’s very easy to derive (10) from the definition. But quite honestly, I had forgotten the relation lP = ctP until very recently. The pattern of development above is actually how I derived the Planck-sphere. I was avoiding it as best I could precisely because many would balk at the conception. I had started the derivation based on balancing forces, but noticed that area on the RHS seemed to fall quite easily into the denominator. I realize that the equation is incomplete in that area is not expressed on both sides as intended. But the form on the RHS is what’s interesting, and in order to ‘make the units work’, we must choose some length-measure (to reduce energy to force). I preferred the Compton-wavelength because I had previously used it to define relativistic-measures.)

I think one main reason most theoreticians have discarded determinism is because they ‘get stuck’ on some particular point – like the geometric meaninglessness of λ0 – and basically give up on all associated concepts. But the point of this book and those previous is to ‘run’ with the idea as long as possible – until it is proven inconsistent or invalid. I’ve had some limited successes in explaining decay patterns in nuclear physics. And the concepts presented in this book (albeit presented in a non-sophisticated way) are remarkably consistent, intuitive, and seem to have physical justification. I’ve personally seen a very strict and conservative nuclear engineer teach E = hν (about particle energy) but ignore the oscillatory implications (I proposed some internal oscillation but he balked and moved on). So evidently, conventioners treat E = hν somewhat like I treat λ0 – useful but not meaningful.

Many would say I’ve created a ‘house of cards’ – a lattice of assumptions which is easily destroyed by a single removal. But I’ve tried to be very careful, restrictive, and explicit in employing any assumption. The fact we can explain physical things deterministically at all alludes to the possibility of veracity (as we stated in chapter six). I was fairly confident that I could not derive a function for uncertainty in omega that makes sense. I’m personally very skeptical of both determinism and probability (for different reasons). “If it could be done, it would have already been done” – is how part of me, and many others, feel about determinism. But.. But perhaps most missed some crucial insight required to ‘put it all together’ (like gravity = distributed temporal-curvature). If an average nimrod like me can stumble around and discover something fundamental, just imagine what a brilliant guy/gal could do – if they carried the ideas long/far enough. ;)

κ should be dimensionless and larger for heavier particles. The inverse of C or X does not work because that’s larger for lighter particles. If we try νtP, that actually equals C (getting lost in the numbers here ;). So, if we use C, we need a scaling factor that also ‘integerizes’ it. In order to accommodate our significant digits in e.p. masses, let’s use a 10ⁿ integer for κe and derive κp based on that. If we choose 1000000 for κe and our scaling factor is πα, where α = 58.6875025189, then κp = 1836081243 (give or take a few waves ;). Until we derive a more intuitive and physically-related κ, this will have to do for now. It illustrates the flexibility/arbitrariness of this factor. All κ ‘needs to do’ is be an integer (for both electron and proton) and display the ratio of masses exactly (to known precision) of mp/me. (This is not equivalent to multiple dimensions compacted to undetectability. Sure, we’re saying we have a wave smaller than anything we can ever measure. But it’s qualitatively different than proposing some extra dimensions compacted to ‘nothingness’. A scaling factor is like a renormalization factor – and we’ve tried to avoid that. But in the process, we’ve arrived at a Planck-sized object with internal structure. In theory, we can never verify that. But theory’s been known to be wrong. ;)

A note about hc. Is there some deep meaning in the product? It’s basically spin-energy times the speed of light. If we look at it in (10), we see that it’s bounded by the Planck-sphere. So spin-energy times the limit is bounded by the Planck-sphere. There’s nothing remarkable about it – it simply raises the question: what’s the purpose of c in the equation? Does it mean spin is revolving on a second axis at the speed of light? Perhaps; perhaps not. The fact we were able to derive a size for e.p.s is ample justification for the form of (10). If we have time and space (pun intended), we’ll consider any ‘deep meaning’ of hc again – later.

So let’s summarize our findings. If our assumptions hold, elementary particles are: spherical standing waves of temporal-curvature, bounded by a sphere of Planck-radius (defined by Y0), containing an integral number of standing waves, possessing discrete twist analogous to spin, possessing discrete electric-flux, and possibly possessing an alignment or anti-alignment of spin and magnetic moment.

(The final statement is introduced to account for the notion of positive and negative charge.) It may seem like a ‘monster’ to some (especially probability-reductioners), but it’s preferable to multiple dimensions and random character. The ‘only’ problem that comes to mind is the double-slit phenomenon. I need to think about it. ;)

(Enough? ;) Well.. even with an infinitesimal ‘core’, the flux is extended. It’s possible the electron ‘detects’ both slits simultaneously via its electric field. This could explain double-slit phenomena – as long as the physical extension of electric-flux is large enough to accommodate all double-slit experiments. So it’s possible..

..The only two ways to successfully attack probability are: create a sophisticated and accessible formalism such as the arXived paper mentioned above, but bent toward determinism – or – attack uncertainty and provide a viable alternative. I don’t have the formal training to provide the former; the best I can do is attempt the latter. I’ve tried to do that within a consistent framework. I’ve tried to refresh the ‘tired old ideas’ of determinism and loosely – the aether.

I’ve asked Mayeul Arminjon to mentor me because I felt I needed his formal training to give some conventional credibility to those ideas (assuming I could acquire some of it from him). But he’s too busy with his own pursuits. And no matter what area of science you focus on – you have a necessity to ‘pay your dues’ in order to pursue your own interests (typically, you must follow a research path that is not really to your tastes or interests – only later allowing you to focus on those). Being on the ‘outside’ has advantages and disadvantages. I’m free to focus on something until I ‘drop dead’. But I lack mentoring, guidance, and funding..

I’ve read many-a-crackpot and feel unfairly lumped with those brave souls. I’ve gleaned some precious nuggets from my meager middle-class public education (such as systems and error analysis). I’ve tried my best to pursue this track from a scientific/test/disprove/invalidate perspective. Admittedly, I’ve proposed a couple of untestable ideas, but most seem to ‘jump and dance’ of their own accord (acquire a life of their own).. I have this insatiable curiosity; I’m naturally a researcher, but.. I didn’t have the discipline or ‘smarts’ to get
all ‘As’ in university (I think it was the latter). Once I realized that, I sort of ‘gave up’ (for a time). By the time I came to the point of applying to graduate schools, I couldn’t get accepted into any program that inspired me. Systems wouldn’t have me, physics was clearly out,.. What were my options? Work as a technician and pursue physics in my ‘free time’? That’s what I did for several years – only to be dismissed and ignored by convention – and – dismissed and ignored by those who were not. It’s funny – but not .. When I was young, it was my dream to leave a positive, lasting, significant contribution to humanity. In my book on systems – I feel I’ve done that. But it’s been my secret desire to help physics ‘see the light’ as well.. My brother insists I’m too ambitious in these regards. Perhaps so.

My new (and only) baby boy has just come into the world. New life is always amazing.. I don’t know if I can be a good father – all I can do is try my best .. Sometimes I feel like such a complete and utter failure in life – such a loser ;) ..I had ‘friends’ who criticized and inspired many points in this book. I’m not looking for sympathy or pity .. I would like to be understood. I would like to be appreciated (a little bit). I would like these ideas to be treated without ego or arrogance. They deserve it; I don’t own them.

As I wrote Humanity Thrive! for the innocent of the world – I write this book for the open-minded .. I feel we’re on the verge of a deterministic renaissance. For nearly a century, we’ve doggedly pursued probability-reduction. We’ve tried to justify it with every result and observation. But isn’t it high time we gave a chance (pun intended) to determinism? Research the indicators – they’re there.

Bless your patience if you’ve made it this far. We’ve got a long way to go, baby – a long way to go..
