A sceptic revisits the golden age of theoretical physics (1855-1955) to question and explore alternative interpretations of the evidence, in an effort to inspire or provoke fresh ideas and more open thinking. Written in a light-hearted style at physics graduate level, with just a little mathematics and a fair dose of originality. Do you have doubts about modern cosmology? Then why not take a look. Maybe it will inspire you to new ideas. Keywords: light, duality, Young's slit experiment, relativity, Mach's Principle, Ehrenfest Paradox, principle of equivalence, gravity, galaxy rotation curve problem, dark matter, cosmology
Hello dear curious person
Since I posted all these essays on Tumblr in 2018 I have revised and improved them and republished them on a nice little standalone, ad-free website I created. It is called hereticalphysics.com.au
The main advantage is that you can more easily read the whole adventure in the proper order, i.e. from front to back.
I really enjoyed this adventure in physics and I hope you do too. Next winter I plan to devote some more time to it, concentrating on ways in which my heretical ideas can be tested experimentally. They already (to my mind anyway) suggest more satisfying explanations for many aspects of physics that have always bothered me (e.g. what causes inertia, why do we persist with the inherently contradictory wave-particle description of light, and why can’t dark matter be found). But I appreciate that any new ideas not only have to be consistent with all established experiments, they also have to predict some new results which can be experimentally verified.
The adventure continues!
32 A SCEPTICAL ADVENTURE in PHYSICS - Summary, 3Oct18
Introduction Over the last few years I have revisited the big questions in theoretical physics that so interested me in my youth and which I once studied up to Master’s level in one of the world’s best universities. I read about the great advances that had taken place in observational astronomy over the last fifty years and was very impressed. But I was not impressed by developments in the area of theoretical physics. The big questions I had been interested in were still unanswered. Worse still, the theorists now supposed that all the types of matter and energy that had ever been discovered only account for a few percent of the Universe, and that the rest must be comprised of mysterious dark energy and dark matter. In short, they had managed to lose about 97% of the Universe!
You might think that any model that came up short by 97% would not be considered a perfect success. But the model in question had already become established as the orthodox paradigm. Thousands of scientists have made their careers out of developing it and they are heavily invested in its validity. It is what they worked so hard to understand, what they have written papers on and received grants and awards for. They teach it at university and popularise it on TV. Changing direction is unthinkable. Uncomfortable facts are brushed aside. Anybody who suggests that the ‘emperor has no clothes’ is ignored. Cognitive dissonance and ‘group think’ rule.
The search for exotic cold dark matter has been going on for nearly fifty years without success. More recently it has been joined by a search for dark energy. Maybe these searches will be successful. But if they are not, I wonder how many decades will have to pass before the suspicion that the whole roadshow is in the wrong ballpark becomes undeniable.
The other big change was in the culture and practice of science. The internet has led to an explosion of written work and online video clips. Some of this is excellent, e.g. the ready availability of scientific paper preprints, and some really good educational materials available freely to everyone. The downside is that most of the volume is low quality. In self-defence the cognoscenti have retreated behind their ivory tower walls and are only speaking to people they already know and who already agree with them.
I decided to go back over the history of physics since about 1855 to see how the current set of beliefs evolved. Since I was not wedded to any particular school of thought, dogma or orthodoxy, and was not beholden to any grants body or group of professors, I was free to be an open-minded sceptic.
It was a most fascinating journey. I learnt a lot, had a lot of fun and developed a large number of new ideas. Now new ideas are a funny thing. Inevitably most of them are wrong. But every good idea starts out as a new idea, or is born out of a mix of new and old ideas. If you take the fun out of physics it will stagnate. So I decided to write up my adventure in an easily accessible format and make it available to anyone interested. I’ve kept the mathematics and references down to a minimum and have happily mixed my metaphors. I hope a few people find it interesting, amusing or informative. I hope that others will find it intriguing, challenging or even annoying. I would like to rattle the mental cages of the complacent. I would love to inspire courage and fresh thinking in a few talented, open-minded scientists.
I’ve posted my first draft in the form of 32 essays on this open-access platform called Tumblr. This platform is dominated by young people sharing quirky images and adolescent ramblings. It is a world away from the formal scientific literature but at least it is quick, easily accessible and a matter of public record. In any case, the discipline of organising my thoughts on paper is useful and fun. I intend to produce a second draft on a more searchable platform sometime next year.
A next step in this journey is to explore the experimental implications of my various heretical ideas and suggestions. If any of them turn out to be promising the third step would be to inspire someone somewhere to check the verifiable predictions against the empirical evidence. Even if none of the ideas turn out to have merit the whole exercise has been (for me anyway) interesting, enjoyable and satisfying. I encourage enquiring minds and seekers after truth everywhere to understand where the current orthodox paradigms in their field have come from. Do not ignore the clues that do not fit, challenge the explanations that do not ring true and ask the questions that others dare not ask. Science will ultimately thank you.
Recap/Overview
1. Introduction. Twenty-five examples of open issues, not just with current paradigms but within fundamental physics itself: Mach’s Principle, the nature of light, galactic rotation curves etc.
2. Some Preliminary Comments. Descriptions and models as compared to really understanding things.
3. Preliminaries to Special Relativity. Reference frames, nature of time etc.
4. The Speed of Light. Attempts to detect the aether.
5. Special Relativity. Swimmer in the stream analogy. Hendrik Lorentz. Foundations of Special Relativity. The 3 postulates underlying Special Relativity. Time dilation. Length contraction.
6. The Very Fast Train Thought Experiment. Relativistic effects are real, not just illusory.
7. Special Relativity Discussion. The aether is not dead, just sleeping. Length contractions inferred, not demonstrated. A symmetrical twin paradox – what is the answer? Sagnac’s interferometer and what does it mean?
8. The Ehrenfest Paradox. Paradoxes and thought experiments can be very useful. Paul Ehrenfest’s rotating disc paradox worried Einstein. The clock postulate.
9. Light – Some Important Background. Understanding light has been a challenge for hundreds of years. It is still not fully understood. Wave-particle duality. Discussion of key experiments – Young’s Double Slit, Michelson-Morley etc.
10. A New Model for Light. Dissatisfied with incomplete and contradictory explanations using old fashioned analogies, the author invents his own heuristic model for the nature of light. If nothing else, just to provoke and inspire others to improve the current inadequate explanations and analogies.
11. Explanation for Young’s Double Slit Experiment. A demonstration of how easily the new model of light explains this troublesome experiment.
12. Michelson-Morley Revisited. This is perhaps the most famous null experiment ever reported. My suggested new model for light is not only able to describe what goes on in the experiment, but also helps to interpret it in an interesting way.
13. Special Relativity Revisited. Special Relativity has been very successful. However, it struggles in certain areas. I was surprised to find that not every aspect of Special Relativity has been fully tested, e.g. the invariance of the one-way speed of light and the second postulate (classical relativity) at relativistic speeds. So I developed a version of Special Relativity using only assumptions that have been fully tested. I called it Relativity Lorentzian style. This turned out to be useful in resolving some paradoxes in Special Relativity where the existing explanations seem weak. It gives different predictions from Special Relativity in certain circumstances and these may be open to experimental verification.
14. Relativity and Global Positioning Satellites. Looks at relativistic effects as they affect GPS systems. Suggests an experiment that may give surprising results and is worth checking. Might make a good topic for an existing expert with access to the data, or a PhD topic for a bright student. It would be interesting if ‘Relativity Lorentzian style’ gives a better result than Special Relativity.
15. Gravity and Inertia. Before discussing General Relativity I have some fun with some fundamental physics and favourite pet topics, e.g. What underlies Newton’s laws of motion? What are the so-called fixed stars? How does a rotating system know it is rotating? I then discuss a major modern problem for existing physics – the fact that stars in spiral galaxies have orbital speeds that are much too fast.
16. A Machian Solution for the Physics of Spiral Galaxies. This is a slightly more formal essay. It considers the origins of inertia and suggests that stars in the rims of spiral galaxies are actually travelling at exactly their correct Keplerian velocities. There is no need to invent exotic cold dark matter at all. We simply have to understand that the correct reference frame for understanding the motions is partially related to the local galaxy and partially related to distant galaxies. The galaxies are telling us a fundamental truth that we have been too blind to see. Best of all, this new idea is open to experimental verification.
17. Q Theory Creation Myth. I became slightly annoyed by the smug complacency of some Big Bang Model adherents. Yes, the model can explain a lot of things. But it isn’t perfect. Science requires open minds, not closed ranks. So, just to be provocative, I decided to write my own creation myth. I started with an energy field I call Q and created the whole Universe from it.
18. Q Theory Part 2 – Mass, Gravity and Inertia. Uses Q theory to explain the three basic phenomena of mass, gravity and inertia. Discusses why Newton’s Laws of motion are true. Explores the differences between linear motion and curved motion, and between linear and angular momentum. Develops a heuristic explanation for relativistic mass increase.
19. Q Theory Part 3 – Evolution of the Early Universe. Having too much fun to stop, I continue using Q theory and model the evolution of the whole Universe. Amongst other things, I consider why positive and negative entities are one way around and not the other and why there is almost no anti-matter or negative gravity. I posit a potential reason for expansion in the Universe and a potential candidate/substitute for dark energy.
20. Q Theory Part 4 – Evolution of the Modern Universe. Discusses the emergence of stars and galaxies. Top down and bottom up cosmic processes. The shapes of galaxies. Expansion and acceleration.
21. General Relativity Basics. A less fanciful essay that looks at the origins of General Relativity.
22. Einstein’s Equivalence Principle. Discusses this foundation of General Relativity and concludes that it is just a very useful mathematical assumption.
23. Gravitational Redshift. Considers the empirical evidence for this important phenomenon and how it fits in with General Relativity. Argues that it provides further proof that the nature of time is not what it seems. Argues that the Pound-Rebka redshift experiment does not imply that the only way to understand gravity is to invoke curvature in all the dimensions of space-time.
24. Gravitational Light Bending. Looks at this famous “proof” of General Relativity. Discovers that classical physics already predicted half the effect. Suggests that a combination of Special Relativity and the fact that gravity slows down both time and the speed of light can account for the other half. Einstein’s curved geometry approach provides a clever and accurate description but is not necessarily the only possible way in which to model and interpret what is going on.
25. Gravity, Time and Light. Discusses the inter-related phenomena of time, gravity and light. This is a profound and beautiful topic in modern science.
26. Mercury’s Perihelion Precession. Einstein offered an explanation for the anomalous precession in the orbit of the planet Mercury as proof that his new model was mathematically useful. And so it is. But why is it so hard to explain what is going on in plain language? And is it the only possible way to understand what is going on?
27. Gravitational Waves. Gravitational waves are well described in Einstein’s General Relativity and have recently been detected. But they were also predicted by classical physics hundreds of years ago. They are not proof that the only way to look at the Universe is through models that use fully curved spacetime.
28. Fresh Perspectives Needed. Looks at the many variants of General Relativity and agrees that the original version is the best. Argues that this does not mean General Relativity is perfect. Argues that modern cosmology is in trouble. Criticises the post-Einstein view that curved spacetime is somehow “real”. It is just a useful model.
29. Bell’s Inequality. Explains, proves and discusses this simple mathematical result as a prelude to the fundamental debate in physics between Einstein and Bohr.
30. Einstein, Bohr and Bell. Discusses the debate between Einstein and Bohr over the significance of the Copenhagen Interpretation in quantum physics. The EPR Paradox and Schrödinger’s cat. The theorem and experiment idea posed by John Stewart Bell. Argues against the current view that Bell Experiments have supported Bohr’s view rather than Einstein’s.
31. Triple Polarisers Experiment. Looks at this simple experiment which is also held out as supporting Bohr over Einstein. Argues that it is not a valid Bell’s Experiment at all, but that it does show that our current explanatory models of light are really very poor and unsatisfactory.
Heretical Ideas in Fundamental Physics within these Essays:
A new model for light.
A modified version of Special Relativity.
Suggested explanations for a variety of difficult experiments.
Some new paradoxes to provoke people who think they have all the answers.
A suggested explanation for the Galactic Rotation Curve problem that does not require imaginary Cold Dark Matter.
Some alternative ideas to the Big Bang Theory.
A suggested explanation for why Newton’s Laws of Motion are true.
A suggested explanation for the origin of mass, gravity and inertia.
A suggested explanation for charge/parity violation in the early Universe.
A suggestion for what drives expansion in the Universe.
Challenges assumptions about the nature of time and the speed of light in the Big Bang Model.
Suggests that Einstein’s Equivalence Principle is just a useful abstract assumption for the purpose of creating a clever type of mathematical model.
Argues that you can use fully curved spacetime geometry if you want to, but you do not have to.
Argues that all the experimental proofs of General Relativity can be explained using classical physics augmented by Special Relativity and the fact that gravity slows down time and the speed of light.
Argues against the modern notion that fully curved spacetime is ‘real’ and the only valid way to understand gravity.
Argues that modern Bell’s experiments using entangled photons are not in fact valid Bell’s experiments and that the results have possible interpretations that do not require us to believe in instantaneous action across impossibly large distances.
Argues that the two pillars of modern physics, General Relativity and Quantum Mechanics, are beautiful mathematical models but are nonetheless just mathematical models.
Argues that interpreting the models as literally true has led to the current quagmire in theoretical physics and cosmology.
Pleads for a more open-minded attitude towards heretical alternatives and complementary viewpoints.
If I can help encourage new physicists to think afresh about how modern theoretical physics got to where it is today, to the extent that they break out of current orthodox paradigms and come up with experimentally verifiable new insights and understandings, then I will be pleased beyond measure.
#science#physics#relativity#quantum mechanics#light#Lorentz#cosmology#dark matter#scepticaladventure#hereticalphysics
31 Triple Polarizers Experiment 2 Oct18
Background The triple polarizers experiment is a simple experiment that you can easily do at home. And yet it offers fundamental clues to the nature of light, reality and the Universe!
The experiment is often mentioned in connection with Bell’s Theorem and Bell’s Experiment (see adjoining essays) but that is a little bit confusing because it is not a proper Bell’s Experiment. It does not involve so-called ‘entangled’ pairs of sub-atomic particles. In some ways you could think of it as half a Bell’s Experiment. It does however shed some light (pun intended) on Einstein’s belief that the Copenhagen Interpretation of quantum mechanics is somewhat lacking. Conversely, some authors claim that the experiment shows that Einstein’s views about determinism and reality are deficient.
I think it simply shows that the conventional wave-particle model of light is intrinsically self contradictory and fundamentally deficient. Such a heavy debate from such a simple experiment!
The Triple Polarizers Experiment All you need for this experiment is a torch and three sheets of polarizing film. If you already have a source of polarized light, such as is emitted from many types of computer screen, all you need is two polarizing lenses from a pair of Polaroid sunglasses.
If you pass a normal beam of light through a reasonably good polarizing filter, about half of it gets through. If you then try to pass the emerging beam through an extra polarizer turned through 90 degrees, no light gets through. But if you insert a third polarizer between the other two, aligned at 45 degrees to both of them, then about a quarter of the light that emerged from the first polarizer (an eighth of the original beam) comes out of the final polarizer.
The essential point is that adding an extra filter results in more light coming out at the end, not less.
You can vary the experiment. The angles aren’t critical as long as the orientation of the middle lens is between the planes of polarization of the other two. You can insert extra lenses as well and get even more light out at the end.
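For anyone who wants to check the arithmetic, the conventional textbook account is Malus’s law: an ideal polarizer passes a fraction cos²θ of polarized light arriving at a relative angle θ (and half of unpolarized light). Here is a minimal Python sketch of that account; the function name and the particular angles are my own choices, purely for illustration.

```python
import math

def transmitted(angles_deg):
    """Ideal-polarizer transmission for initially unpolarized light:
    one half at the first filter, then Malus's law cos^2(relative angle)
    at each subsequent filter."""
    frac = 0.5
    for a, b in zip(angles_deg, angles_deg[1:]):
        frac *= math.cos(math.radians(b - a)) ** 2
    return frac

print(transmitted([0, 90]))       # ~0.0   crossed filters block everything
print(transmitted([0, 45, 90]))   # 0.125  the middle filter lets an eighth through
steps = [90 * k / 8 for k in range(9)]
print(transmitted(steps))         # ~0.27  many gentle steps pass even more light
```

In the limit of very many closely spaced filters the transmission creeps back towards the full 50%, which is the ‘twisting’ effect in its most extreme form.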
It is almost as if each inserted lens sort of ‘twists’ some of the light and thus helps it pass through the next filter. In fact, I think that is getting close to the truth. So why is it a big deal?
Evidence against Einstein? Einstein was inclined to believe that photons and sub-atomic particles acquire definite properties when they are created and that these inherent properties pre-determine what happens when those particles interact with subsequent detection equipment. If not precisely, then at least in a probabilistic way.
The Copenhagen interpretation of quantum mechanics is that particles have a variety of potential properties all sort of stacked together, and when an observation is made one of the states for one of the properties triumphs and all the other possibilities for that particular property disappear.
Some physicists/authors inclined to the Copenhagen Interpretation (CI) say that the three polarizers experiment is evidence against the Einstein viewpoint. They say that the experiment consists of photons passing through or not passing through a sequence of filters. Each photon is either absorbed in a filter or it passes through that filter and on to the next one. If Einstein’s Hidden Variables (HV) view is correct, then the likelihood that a particular photon will pass through the first filter is already pre-coded in that photon. It will also have pre-coded instructions for what to do if it meets the middle filter and pre-coded instructions for what to do if it meets the last filter. The instructions for whether to pass through the last filter or not are not affected by the first two filters, because the photon passed straight through those two filters. So they say that the HV view cannot explain why the presence or otherwise of the middle filter makes a difference to the outcomes at the third filter.
Other physicists argue that the middle filter must be ‘disrupting’ the experiment somehow. It was because of this argument that Bell developed his clever idea for testing things in a different way.
Bell’s Clever Idea In 1964 the Irish physicist John Stewart Bell suggested that a potential way to resolve the argument would be to split the experiment into two halves and send some photons through filters on one side and the others through filters on the other side. Put as large a distance as possible between the A side and the B side. Then any ‘disruption’ would be quarantined to one side or the other. If the effects of disruption still occurred, then it would be evidence that what happens on one side is able to influence what happens on the other side, instantaneously or almost instantaneously.
But there is one more crucial step. The photons going left and right have to be exactly the same as each other, or at least exact mirror images. They have to be born simultaneously in exactly identical circumstances, preferably from the same event. In other words they have to be identical twins, also known as a mirrored pair, also known as an entangled pair.
This is the origin of the famous but controversial Bell Experiment. I have discussed it in another essay. Let us return to the three polarizers experiment.
Comments on Interpreting the Triple Polarizers Experiment I have so many objections and complaints about the conventional explanations for the triple polarizers experiment that it is hard to know where to begin. Try reading multiple textbooks or online lectures in an effort to find a simple explanation of this simple experiment. You will soon discover a mixture of different explanations that contradict, complicate and confuse each other.
I am convinced that the experiment is a simple demonstration about the true nature of light and we need to recognize what it is telling us.
Let us start with the conventional view that light is both a wave and a particle. That it has a dual nature and its behaviour depends upon the type of experiment that is being performed. I do not regard this as a clever model but rather as an admission of failure. It is self-contradictory, inconsistent, incomplete and unsatisfactory. Simply to accept the model because it works pretty well is to deny the opportunities for further insights and a fuller understanding.
In an earlier essay I used a platypus as an analogy. When a platypus was first shown to western scientists they were not sure whether it was a duck (because it has a bill, webbed feet and lays eggs) or a beaver (because it has fur and burrows). But it is not a duck, and it is not a beaver, nor is it a duck or a beaver depending on how it is viewed, or a duck and a beaver until it is revealed to be one or the other by the act of viewing it. It is what it is. Likewise a photon.
Some writers describe the polarization of light by comparing it to a rope passing through a picket fence and being shaken up and down in the form of a wave. I think this is a terrible analogy. For a start, a standing wave can pass through a picket fence in any orientation if it has a null point (node) at the point of intersection.
Other explainers talk about light consisting of electromagnetic vector field waves that can be thought of as having a component parallel to the plane of polarization and a component perpendicular to the plane of polarization. They then say that one component gets absorbed and the other passes through. I think this is also wrong and misleading. If one component gets absorbed then the emerging light would have lower energy (i.e. be red-shifted) and this is not what is observed in practice.
Other authors switch to the model of light as little photonic bullets. They talk about probabilities of being absorbed or passing through the filters. But then they have to explain what happens at the polarizers. Some skewer the bullets with a hypothetical little sideways stick and say that the photons pass through if the stick lines up with the polarizing molecules in the filter, but otherwise not. How then to explain the fact that 50% of the light gets through a polarizing filter when the incident polarization is randomly oriented? Let alone the full experiment.
Some authors have a bet each way and start with the wriggly little worm electromagnetic wave model for light but then chop the wave into short pieces to try to represent its quantization.
Attempts to describe circular polarization add further confusing complexity. The representation of light as a double electromagnetic wave seems useful, but does this mean that all light is a double electromagnetic wave? Wouldn’t this make it easy to split into two? And how does it reconcile with the photon idea? Are we to believe that light consists of a double photon? Or perhaps the photon is spinning as it travels. If so, how does this accord with quantized spin?
Furthermore, no author I have yet read recognizes the fact that the photons are travelling at the speed of light and hence must obey the lessons of Special Relativity taken to its logical limit. Why teach Special Relativity if you don’t believe it?
I have written another essay about what I suspect all the evidence is telling us about the true nature of light and have come up with a suggestion for a new model that seems to work. In order to break away from older inappropriate analogies I invented a new word – I say that light consists of phots.
However, for the purposes of this essay I want to focus on trying to understand the triple polarizers experiment. Why is it perplexing? Is Nature playing some sort of trick on us? If so – where is the subterfuge? Where are we misunderstanding what is going on? Are some of our assumptions wrong? Are we misinterpreting what we think we are seeing?
It’s a simple experiment. I think it should have a simple description that makes sense and is consistent with the rest of physics.
Suggested Explanation for the Three Polarizers Experiment A simple explanation is that we are deceived when we think that photons pass straight through a polarized film. They look like they do but it’s a trick. What might happen (and I think it does) is that the incident photons are absorbed into the filter, and then they either stay absorbed or they cause a new and very similar ‘child’ photon to be produced which resumes travelling on the other side of the filter and in the same direction as the incident photon. The process is effectively instantaneous.
It is now quite easy to describe the three polarizers experiment. Let us start with light that is initially unpolarized. In other words it consists of photons that are polarized in random orientations. The first polarizer absorbs all the photons and recreates about half as many child photons. The child photons have an orientation more closely aligned to the first polarizer’s plane of polarization. The photons that arrived attuned to the first filter’s plane of polarization are the ones that tend to give rise to child photons with a similar persuasion. The others are the ones that tend to be absorbed.
The middle polarizer absorbs all the photons it receives and emits (about half, for a 45-degree relative orientation) ‘grandchild’ photons. These grandchild photons have an average orientation more closely aligned to the plane of polarization of the middle polarizer. This gives the effect of twisting the beam of photons. Note that you cannot watch a particular photon or chain of photons in motion. You can only observe the consequences of a photon being destroyed. (The most common one being that a new photon is created which then enters your eye or a photomultiplier or camera or similar device.)
Likewise at the third polarizer. The ‘grandchild’ photons emitted by the middle polarizer are oriented in a way that leads to about half as many ‘great-grandchild’ photons being created and emitted by the final polarizer.
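To convince myself that this child-photon picture reproduces the observed intensities, I ran a little Monte Carlo sketch of it. Be warned that the quantitative assumptions are mine and are made purely for illustration: I assume the re-emission probability follows Malus’s law, and that each child photon is re-emitted exactly along the axis of the filter that created it.

```python
import math, random

def emerging_fraction(filter_angles_deg, n=200_000):
    """Monte Carlo of the child-photon picture: every photon is absorbed
    by each filter, which re-emits a child aligned with its own axis
    with probability cos^2(relative angle) - otherwise the chain ends."""
    survivors = 0
    for _ in range(n):
        theta = random.uniform(0.0, 180.0)  # unpolarized source: random orientation
        alive = True
        for phi in filter_angles_deg:
            if random.random() < math.cos(math.radians(theta - phi)) ** 2:
                theta = phi  # child re-emitted along the filter axis (my assumption)
            else:
                alive = False
                break
        if alive:
            survivors += 1
    return survivors / n

print(emerging_fraction([0, 90]))          # ~0.0  : crossed filters, nothing emerges
print(emerging_fraction([0, 45, 90]))      # ~0.125: the middle filter 'twists' light through
print(emerging_fraction([0, 30, 60, 90]))  # ~0.21 : extra filters, even more light
```

The tallies match the standard Malus-law figures, which is reassuring: the heresy lies in the mechanism, not in the observable numbers.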
The hidden variables idea is not being contradicted at all, for the simple reason that none of the initial photons ever reaches the second and third polarizers. Their offspring reach the additional filters, but not them. And like all offspring (!) the new generation of photons have their own ideas about how they should interact with any polarizers they happen to meet.
Summary The three polarizers experiment is simple but very interesting. I think it is not proof that Einstein’s concerns about the Copenhagen Interpretation are somehow wrong. I think the experiment can be interpreted in a simple way that does not lead to such a conclusion.
#polarized light#polarised light#Bell's experiment#einstein#nature of light#heretical physics#phots#three polarizers experiment
30 Einstein, Bohr and Bell 1Oct18
Background While Einstein was developing his theories of Relativity and was using them to model the then known Universe (i.e. the Milky Way), other physicists were developing ways of describing events on a fundamentally small scale – quantum mechanics. Relativity and quantum mechanics are considered to be the two great pillars of physics developed in the 20th century. A pity then that they do not fit well together. For example, there is no satisfying model of gravity based on quantum mechanics. Nor was Einstein entirely comfortable with the main way in which quantum mechanics was expressed and interpreted. He thought that the main mathematical model of quantum mechanics lacked something and he thought the way in which the model was generally interpreted was a bit odd. So did many other scientists. And many people still do.
This essay will look at the Einstein-Bohr difference of opinion and also at suggestions by John Bell about how to test which view is better. It is all quite fundamental to the way we perceive reality and the Universe. But first, some background.
The Copenhagen Interpretation In the early part of the century Werner Heisenberg, Niels Bohr and others realized that it is impossible to determine/observe/measure all the properties of a subatomic particle or photon at the same time. The very act of measuring one property always affects the other properties. (See the Heisenberg Uncertainty Principle.) Sometimes, as with photons, the act of observation involves the prior destruction of the object itself.
In response to this fundamental restriction, physicists developed their mathematical models using probabilities for the various physical properties. The physical properties of sub-atomic objects are described as multi-dimensional vectors modelled by probability density functions – sort of ‘fuzzy’ multi-dimensional wave functions (see the Schrödinger wave equation etc.)
When a photon or subatomic particle hits a detector, the wave function collapses and one aspect or another (e.g. its location) becomes well determined.
But this is where the interpretations begin to differ. Niels Bohr and others began to believe that their mathematical approach was a literally accurate representation of reality – e.g. that photons do not in fact have a well-defined location in spacetime until they are detected. This became the Copenhagen Interpretation (CI).
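For readers who like to see the formalism in miniature, here is a toy sketch (in Python, with all names and simplifications my own) of a linear polarization state as a two-component vector, with measurement probabilities given by the Born rule and ‘collapse’ modelled as re-preparing the state after sampling an outcome.

```python
import math, random

def measure(state, axis_deg):
    """Born-rule measurement of a real two-component polarization state.
    Returns the outcome and the collapsed state."""
    t = math.radians(axis_deg)
    amp = state[0] * math.cos(t) + state[1] * math.sin(t)  # projection onto the axis
    if random.random() < amp ** 2:                         # Born rule: probability = amplitude^2
        return True, (math.cos(t), math.sin(t))            # collapsed along the axis
    return False, (-math.sin(t), math.cos(t))              # collapsed orthogonal to it

psi = (1.0, 0.0)                 # photon polarized at 0 degrees
outcome, psi = measure(psi, 45)  # 50/50 outcome; either way, the old state is gone
print(outcome, psi)
```

Until the measurement happens, the model offers nothing but the two amplitudes; the CI position is that this pair of numbers is all there is.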
(Historical note: in 1921 Einstein received the Nobel Prize in Physics for his work on explaining the photoelectric effect. In 1922 Bohr was awarded the Nobel Prize in Physics for his work on the structure of atoms and the radiation emanating from them.)
Schrödinger’s Cat Einstein, Erwin Schrödinger and others worried that the probabilistic model used by quantum mechanics lacked something. The approach might be clever and useful, but maybe the interpretation of objects as ‘a superposition of possible states which only become determined upon observation’ was not to be taken too literally. Maybe the model using probabilities was just a model. Einstein is reported to have quipped “God does not play dice”.
In 1935 Schrödinger highlighted the issue by using a thought experiment involving a large everyday object such as a cat. A cat is placed into a sealed box that also contains a flask of poison and a trigger mechanism linked to the radioactive decay of an unstable atom. If the atom decays, the flask is shattered, releasing the poison, which kills the cat. The Copenhagen interpretation of quantum mechanics implies that after a while, the situation is best modelled with a probability function in which the cat is simultaneously alive and dead. Yet, when a person (hypothetically) opens the box and looks inside, they see the cat either alive or dead, not both. This poses the question of when exactly quantum superposition ends and reality collapses into one possibility or the other.
The EPR Thought Experiment Also in 1935, Einstein, Boris Podolsky and Nathan Rosen (EPR) put forward a thought experiment involving ‘entangled’ particles (see later) and used this to argue that the CI approach was incomplete. The way in which the thought experiment was presented was a bit confusing. Einstein put his name to the paper but later said he wished it had been expressed better.
One of the ways in which the thought experiment is presented leads to the conclusion that the Copenhagen viewpoint requires action across large distances with a speed greater than that of light, possibly even instantaneous. Since this is regarded as impossible under the ‘laws’ of Special Relativity, the thought experiment is often called the EPR Paradox.
Entangled Pairs It was Erwin Schrödinger who first coined the term “entanglement”. Entanglement occurs when a pair of particles is created together in such a way that the laws of conservation apply. Accordingly the two members of the pair must take equal and opposite values for properties such as their linear and angular momentum. A good example is where a subatomic particle decays into an electron and a positron.
A more complicated example is when a photon enters a certain type of crystal and causes the generation of two photons with lesser energy.
The word entanglement now means different things to different people and this can add to the general confusion. For some people entanglement simply means that if a certain property of one member of the pair is observed, then in principle it is known what that property will reveal itself to be for the other member of the pair, if and when it is measured, for it has to be equal and opposite.
Other people go further and say that entanglement is when the act of observing one member of a pair causes the other member of the pair to have an equal but opposite value for that property.
Since this is the very issue I want to take a look at, I will try to avoid the term ‘entanglement’. I’m going to call pairs of entities created simultaneously in the way described above ‘mirrored pairs’.
EPR argued that it is unreasonable to suppose that the act of measuring a property for one member of a mirrored pair actually and simultaneously causes the other member of the mirrored pair to have the opposite property, even though it is located far away at that instant. Einstein called this “spooky action at a distance” and he did not like it. EPR argued that effects can only be caused by something in proximity to the particle affected (a principle they called ‘locality’).
EPR also argued that a measurement on one member of a mirrored pair reveals what that property must be for the other member. They said that if we know what the property of a particle is going to be when it is measured, it is reasonable to expect that the particle actually has that property before it is measured. They called this the principle of ‘reality’.
Mind you, there are so many different versions of ‘reality principles’ that I do not know which one to describe to you. Any fundamental statement on the topic of reality is always going to be difficult, especially if the philosophers get involved. (Where is a logical positivist when you need one?)
EPR concluded that the formalism being used by the Copenhagen School was probably (pun intended) missing something. EPR preferred to think that the properties of all subatomic particles are determined at their birth. The particles carry with them intrinsic aspects that later determine how they interact. These came to be called Hidden Variables.
“Not so,” said the CI school. “Any property measured for one member of a mirrored pair is only determined at the instant of measurement and this then determines the matching property for its twin. Admittedly, how this happens is inexplicable. However it is well described by the mathematics of the situation.”
Several years after the EPR paper was published, the physicist David Bohm modified the EPR example so that things were a bit clearer. In the Bohm formulation, an unstable spin 0 particle decays into two different particles, Particle A and Particle B, heading in opposite directions. Because the initial particle had spin 0, the sum of the two new particle spins must equal zero. If Particle A has spin +1/2, then Particle B must have spin -1/2 (and vice versa).
According to the Copenhagen interpretation of quantum mechanics, until a measurement is made, neither particle has a definite state. They are both in a superposition of possible states, with an equal probability (in this case) of having positive or negative spin. However, as soon as spin is observed for A, then it is clear what the spin of B must be, and observers can be 100% confident what they will find when they or other observers manage to measure it.
The whole issue bubbled along for several decades.
Bell’s Theorem In 1964 the Irish physicist John Stewart Bell claimed that any model using hidden variables could not have the same explanatory power as the CI version of quantum mechanics (Bell’s Theorem). He then claimed there was a way to test whether this was true.
Start by assuming that if there is a Hidden Variable in each member of a mirrored pair then this will exactly pre-determine how both entities will behave in future circumstances. Same circumstances, same results for both of them. Then test the outcomes using the statistical outcomes of a certain type of experiment (Bell’s Experiment).
Bell suggested sending mirrored entities through various experimental measurements and building up a statistical picture that indicated the probabilities of the outcomes. If indeed the outcomes were predetermined in the way just described, the probabilities should satisfy a simple mathematical relationship that is now called Bell’s Inequality (see previous essay). Bell claimed that if this inequality is violated then the HV idea must be faulty, and if the HV idea is faulty then the CI idea must be true.
A few years later other physicists, notably John Clauser and colleagues, suggested ways in which Bell’s experiment might be able to be performed in practice.
Various teams of experimenters carried out versions of this idea, but using mirrored photons instead of electron/positron pairs. (I think this is a pity because the results would be clearer if they used electron/positron pairs.) Generally speaking the experimenters claimed to have found that Bell’s Inequality can be violated. Not always, and not by very much, but enough to get them excited.
Other physicists pointed to weaknesses and “loopholes” in the experiments that meant that the experiments did not strictly comply with conditions set out by Bell for a valid test. Other teams then did experiments that they claimed did not have loopholes.
Gradually more and more physicists came to the view that Bell’s logic was sound and that the Bell experiments were revealing a problem with the HV view of things. This led them to think that the CI viewpoint was receiving experimental support.
It is now common to read statements like this: “The evidence shows that measurement on one part of an entangled pair determines not only the outcome for that entity but also for its partner, even if that partner is an enormous distance away. It is uncomfortable and counter-intuitive, we agree, and we cannot explain it in plain language. But it works mathematically and experimentally, so just get used to it.”
Bell’s Experiment Generate photons originating as a mirrored pair and pass one of each pair through a filter on one side of the experiment and the other through a filter on the other side of the experiment. Consider the detection of photons after they have passed through a filter to be an event. These events form the set that Bell’s Inequality can be applied to (allegedly).
The events are separated spatially rather than temporally. Make the spatial separation as big as possible. Use two laboratories even. For instance, make one set of measurements in a lab called A and make the others in a lab called B located across town somewhere.
Pass the photons through a polarized filter on one side and a polarized filter on the other side. Use three types of filters, called X, Y and Z. Filter Y is rotated so that its plane of polarization is at an angle between zero and 45 degrees with respect to filter X. Filter Z is rotated by a bigger angle than this, usually double.
Look for examples where a photon passes through a filter on one side, but not the other. The aim is to find out if events (detections) on one side have any bearing on what happens on the other side.
If a photon passes through filter X on one side but its pair does not pass through filter Y on the other side, count this as a statistic of type <X,Y>. Likewise for other pairs of filters Y,Z and X,Z. Keep interchanging the filters and collect as many statistics as you can.
Finally look at the accumulated statistics. How does <X,Y> + <Y,Z> compare in size to <X,Z>? If the events of passing through (or not passing through) each individual filter are reliably predetermined and independent of each other then the results are supposed to satisfy the mathematical truism that <X,Y> + <Y,Z> ≥ <X,Z> (Bell’s Inequality). (See earlier essay for an explanation and proof of Bell’s Inequality.)
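The ‘truism’ part is easy to verify for yourself. If every member of a population carries fixed yes/no answers to the three questions X, Y and Z, then the inequality holds member by member, and hence for the totals. Here is a brute-force check of my own over all eight possible answer combinations:

```python
from itertools import product

# Each object has fixed yes/no answers (x, y, z). Check that every one of
# the 8 possible answer combinations contributes at least as much to
# N(X, not Y) + N(Y, not Z) as it does to N(X, not Z).
for x, y, z in product([False, True], repeat=3):
    lhs = int(x and not y) + int(y and not z)
    rhs = int(x and not z)
    assert lhs >= rhs, (x, y, z)
print("N(X, not Y) + N(Y, not Z) >= N(X, not Z) holds for every member")
```

Note how much the derivation leans on the answers being fixed properties of one and the same set of objects. Keep that in mind for later.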
If the inequality is violated then there is something wrong. Either there are no hidden variables, or events on one side of the experiment are somehow connected and causally entwined with events on the other side of the experiment, no matter what the separation distance is.
Teams that have done this experiment claim Bell’s Inequality is able to be violated. They are able to create situations in which there are not enough events of type <X,Y> and <Y,Z> compared to events of type <X,Z>. It seems that if one member of a pair passes through X on one side then the other member of the pair is extra likely to pass through Y on the other side, thus failing to contribute to the statistic <X,Y>. Similarly if one member of the pair passes through Y on one side then the other member of the pair is extra likely to pass through Z on the other side, thus failing to contribute to the statistic <Y,Z>.
Events X and Z do not seem to affect each other across the divide, and this is put down to the larger angle of rotation between filters X and Z.
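To put rough numbers on what the experimenters report: for the usual choice of angles (X at 0 degrees, Y at 22.5, Z at 45), textbook quantum mechanics predicts that, for ideal polarization-entangled pairs, one photon passes a filter at angle a while its twin fails a filter at angle b with probability ½ sin²(a − b). A quick calculation shows why this violates the inequality:

```python
import math

def mismatch(a_deg, b_deg):
    """Standard quantum prediction for entangled photon pairs: probability
    that one photon passes a polarizer at angle a while its twin fails
    a polarizer at angle b."""
    return 0.5 * math.sin(math.radians(a_deg - b_deg)) ** 2

xy = mismatch(0.0, 22.5)   # ~0.073
yz = mismatch(22.5, 45.0)  # ~0.073
xz = mismatch(0.0, 45.0)   # 0.25
print(xy + yz, "<", xz, "-> Bell's Inequality is violated")
```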
Many physicists interpret the results in the way suggested by Bell. They say that the inequality can be violated, so either there are no hidden variables, or events on one side of the experiment are somehow connected and causally entwined with events on the other side of the experiment.
The CI model treats both members of the mirrored/entangled pair within the same probability wave function, and what happens in one place to one member of the pair instantaneously affects what happens to the other member in another place. So they say the CI model is literally true.
And that is the current state of play of the argument devised by John Stewart Bell.
My Initial Comments on the Above I think that the conclusion that the Bell experiments using photons have proved that instantaneous cause and effect can take place right across the Universe must be wrong. I know that many professional physicists now believe the conclusion to be true. I do not care. Nor do I care that professional physicists seem to assume that anyone holding a contrary view must be naïve or uneducated.
I think that the idea of instantaneous action is not credible. Nonsense even. And I am happy to side with Einstein on this.
I think the way the Bell Theorem, experiment and interpretation have been carried out to date is an elaborate trick played by the greatest magician of all - “Mother Nature”. I say this with no disrespect to the many great minds that have worked on the issue, or to the quality of most of the experiments.
The challenge therefore is to work out how the trick is done. Where is the ’sleight of hand’ in the photonic Bell experiments? Where is the fault in the design or interpretation of the experiments? What false assumptions have crept into the logic? How should the results be interpreted?
Discussion on the Bell Argument
Let me unpack the Bell argument a little more. It contains several steps.
Step 1: Following Bohm’s suggestion, Bell proposed sending mirrored entities through spatially separated observation points and building up a statistical picture of the probabilities of a small set of joint outcomes. Bohm suggested electron-positron pairs but most experimenters use pairs of photons.
Step 2: Bell claimed that if the tests are properly designed then they satisfy the prerequisites for Bell’s Inequality to apply, i.e. if the outcomes for each test/observation are well determined in advance by hidden variables attached to the pair members from birth, then the outcomes should comply with Bell’s Inequality.
Step 3: If Bell’s Inequality is violated by the appropriate combinations of the statistical outcomes, there must be something wrong with the HV concept.
Step 4: If the HV concept is faulty then the CI concept must be true.
I have concerns with Steps 1, 2, 3 and 4.
Step 1: The creation of mirrored pairs of subatomic particles is fairly straightforward, but the creation of mirrored pairs of photons is a lot less so. I cannot think of a decay event that sends off a mirrored pair of photons. I accept that the process of using certain crystals to “downconvert” laser light of a particular frequency into two beams at half the frequency might look like it is producing mirrored pairs of photons, but is it? The output might be totally coherent beams that act like waves that are in step with each other, but can this be thought of as properly mirrored particles?
Switching to the photon view of light, I guess I am saying that the fact that two photons have the same energy level, are produced in identical circumstances and start off at exactly the same time might not be sufficient grounds for them to be properly mirrored entities. They might just be circumstantially mirrored entities. They might look like mirrored entities in key respects, but the hidden variables within them might be different. If this is so then the Bell logic does not apply and it is not a valid test.
One particular type of experiment raises questions in my mind: the ones that use incoming starlight as their source of photons. I think starlight is much more complicated than generally assumed. No photon created within a star makes it to the surface unscathed. It takes years of knock-on effects even to reach the surface of the star and then head off into space. Then there are powerful electromagnetic fields and solar winds to contend with and a lot of hydrogen and dust along the way. Not to mention countless neutrinos. On approaching the inner solar system there is potential interference from our Sun. Approaching Earth there is the Earth’s magnetic field and the Van Allen radiation belts to traverse. Next comes the Earth’s atmosphere and yet more ions and dust. And finally come the effects of refracting or reflecting the light into the detectors in the telescopes. Somewhere in all of this there might be opportunities for photons from the same star travelling in exactly the same path over enormous distances for millions of years to ‘fall into step’ with a suitable partner.
In any case, the members of a properly mirrored pair must have opposite properties for key variables, including their linear momentum. Hence they must have opposite linear momentum at the time the pair is born, and the members of an entangled photon pair must head off in opposite directions. How then are they both supposed to arrive at Earth at the same time? In all of this I am prepared to accept that some photons have become locked in step with other photons, thus developing a high degree of coherence. But I think this is not what EPR and Bell were talking about.
Step 2: Bell’s Inequality requires the members of the set in question to have definite, unambiguous and unchanging answers to binary yes/no type questions in relation to three properties, with at least one of the properties being a yes. In my opinion, experiments using photons will always struggle to meet these requirements. Bizarre things happen at the quantum level, and any question about a property is typically answered in terms of probabilities. If there are hidden probabilities such that, when a mirrored twin passes X on one side, its twin is more likely to pass filter Y on the other side, then I think Bell’s Inequality does not apply.
Note that there is a presumption that identical particles give rise to identical outcomes when presented with identical polarizing filters. This might seem reasonable but I do not think it is necessarily always valid. Typical polarizing filters consist of molecules that are organized so that they are conductive in one particular plane. But they are not static. The molecules are vibrating thermally and the electrons within them are subject to all kinds of quantum fluctuations. Hence every incident photon might have a partially randomized experience. For example, it might tunnel straight through in a situation where it ought to be absorbed. This might change the observed outcomes relative to the theoretical ones.
Here is another concern. Many of the experiments do not use single polarizers on each side of the experiment. They prefer to work with polarized light so they send each mirrored pair through an identical polarizer X and then they use Y and Z type filters on each side. Next they exchange the X filters for Y and then insert X and Z filters on each side. Whenever I see pairs of filters I start to doubt that the conditions for a Bell’s inequality to apply are being met.
I happen to think that photons do not pass through filters at all. I think that when a photon encounters a filter it is absorbed by that filter and then it either stays absorbed in the filter or it gives rise to a child-photon that is emitted on the other side. I have many reasons for leaning towards this view and I discuss them in other essays. The point here is that the photons detected in the experiments are not the original mirrored twins but rather their descendants. And while the descendants may have inherited most of the features of their parents, such as their energy levels, they may also have acquired some differences through the process of being reborn, such as a somewhat different plane of orientation.
I think it is entirely credible that the process of filtering the photon pairs with a filter of type X makes the child photons more likely to be able to pass through a filter of type Y, especially if the relative angle is less than 45 degrees. And likewise that the process of filtering the photon pairs with a filter of type Y makes the child photons more likely to be able to pass through a filter of type Z, especially if the relative angle is less than 45 degrees. This alone would be sufficient to depress the statistics of type <X,Y> and <Y,Z> and thus give results that look like violations of Bell’s Inequality.
Bell’s Inequality only applies to a well-defined set of objects which have unambiguous, static and reliable answers to binary yes/no type questions about three properties. In so-called Bell’s experiments with multiple layers of polarizers I fear the questions are being asked of the wrong people – not the original mirrored twins, but rather their descendants.
I know that many experiments use beam splitters and mirrors instead of polarized filters, but the same concern arises there too.
Step 3: If Bell’s Inequality appears to be violated, then maybe the correct interpretation is that the way in which the Hidden Variables are thought of as working is wrong, or that the conditions required for Bell’s Inequality to apply are not present. I have given a few possibilities above and I will give additional suggestions below. The earlier experiments all suffered from “loopholes” and I am suggesting that the later ones might also be subject to uncertainty and “trickery” stemming from the fact that we do not understand everything there is to know about light. I think there is ample opportunity for Steps 2 and 3 to be invalid.
Step 4: I am far from convinced that the HV concept and the CI concept are a mutually exclusive and complete pair of outcomes. I have not seen any evidence or proof of this. It just seems to be an assumption. Maybe proving the HV viewpoint wrong does not imply that the CI viewpoint is right. Maybe both viewpoints have strong and weak aspects. Maybe neither viewpoint is completely right or wrong. Maybe they are complementary viewpoints, not mutually exclusive.
Refer back to my analogy in an earlier essay about using orthogonal viewpoints when looking at a solid object like a cone. From above and below the cone looks like a disc. From the side it looks like a triangle. Neither view is right or wrong. The validity or invalidity of one view has little bearing on the other. Both views are incomplete. A much better understanding and appreciation of the cone awaits someone who can synthesize or step outside the two views.
Photonic Bell’s Experiments A possible explanation for what happens in the Bell’s experiments using photons is this. All the photons have a predetermined disposition in relation to what will happen if they meet filters of type X, type Y and type Z. However, no photon actually passes through any filter. What happens is that when a photon encounters a filter it is absorbed by the filter and then either stays absorbed or gives rise to a child-photon that is emitted on the other side.
Furthermore, this pre-disposition is not the same for all pairs. Suppose some pairs are more ‘lucky/prolific’ than average and others are less so. The overall average remains the same.
By a lucky photon I mean a photon with the following property: if it is likely to produce a child photon at X it is also likely to produce a child photon at Y, and/or if it is likely to produce a child photon at Y it is also likely to produce a child photon at Z. In fact this holds between any two filters where the relative rotation is 45 degrees or less. Lucky photons that are likely to produce a child photon at X may also be inclined to produce a child photon at Z, but the stretch is too great, so there is no effect.
Statistics of the simultaneous left and right outcomes are taken as usual. The statistic of type <X,Y> is depressed due to the effect of having some lucky photons in the mix. Likewise the statistic of type <Y,Z> is depressed due to the effect of having some lucky photons in the mix. The statistic of type <X,Z> is unaffected.
Bell’s Inequality does not apply. If you wrongly assume that it does apply then you may well discover an apparent violation.
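To show that this is not an empty complaint, here is a deliberately crude Monte Carlo of my own. Every outcome is local and fixed at birth by a shared hidden orientation. The only twist (all numbers invented purely for illustration) is that one side’s analyzer sees child photons that have already been realigned by an upstream filter, so they respond with a slightly wider acceptance angle (50 degrees) than the sharp 45-degree cutoff on the other side. The tallies then violate the naive inequality even though nothing non-local is going on, because the two sides are no longer answering questions about one and the same set of objects.

```python
import math, random

def acute_angle(theta, phi):
    d = abs(theta - phi) % 180.0
    return min(d, 180.0 - d)

def mismatch_rate(a_deg, b_deg, n=500_000):
    """Fraction of pairs that pass the filter at a_deg on side A but fail
    the filter at b_deg on side B. Both outcomes are fixed locally by the
    shared hidden orientation theta; side B's child photons are assumed
    (an invented number) to have a wider, 50-degree acceptance."""
    count = 0
    for _ in range(n):
        theta = random.uniform(0.0, 180.0)         # shared hidden variable
        pass_a = acute_angle(theta, a_deg) < 45.0  # parent photon, sharp cutoff
        pass_b = acute_angle(theta, b_deg) < 50.0  # child photon, realigned/permissive
        if pass_a and not pass_b:
            count += 1
    return count / n

xy = mismatch_rate(0.0, 22.5)   # ~0.097
yz = mismatch_rate(22.5, 45.0)  # ~0.097
xz = mismatch_rate(0.0, 45.0)   # ~0.222
print(xy + yz, "<", xz, "-> an apparent violation from a purely local model")
```

I am not claiming these particular numbers describe real filters; the point is only that once the detected events are descendants rather than the original twins, an apparent violation proves nothing about locality.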
Objections to my Objections I can imagine people who are wedded to conventional opinion retorting – “but you have simply created an ad-hoc property that you call ‘lucky’. What is this lucky property and why should we give it any credence?”
I could reply that it is less objectionable than instantaneous spooky action at a distance, but in fairness I should try to answer the question. Here are two suggestions. I am not saying that they are right but rather that they offer lines of enquiry and are the sort of thing we should be looking for.
I think of photons as having a spatial presence in a plane centred on and orthogonal to their direction of travel. I also think of photons as having an orientation which manifests itself through polarization effects.
Unpolarized light consists of photons with randomly distributed orientations. A polarizing filter will absorb nearly all photons which have an orientation greater than +/- 45 degrees to its plane of polarization. I think that the filter also absorbs all photons which have an orientation less than +/- 45 degrees to its plane of polarization, but these absorptions will immediately give rise to new ‘child-photons’ on the other side.
The cutoff angle occurs at 45 degrees. But, as with all things on a quantum scale, the cutoff is not exact and precise. Some photons with a smaller relative orientation will be absorbed. Some with a larger relative angle will not be absorbed but will go on to produce child photons. These are the photons that I call lucky. What is more, I think their child photons will have an orientation that is smaller than their parents. Hence they are less likely to be absorbed in a subsequent filter with a small relative orientation to the one they were born in.
And, just to complete the picture, if one member of a mirrored pair is lucky, then so is its twin. Lucky on one side, lucky on the other, on and around the cutoff angle.
If boundary effects don’t explain ‘lucky’ then maybe circular polarization does. It seems that the plane of polarization of some photons takes different values according to the distance travelled, giving the impression that the photon must be rotating as it travels. Maybe these are the ‘lucky’ photons. If one of them encounters a filter and is not absorbed in Lab A, then perhaps its twin has a greater probability of not being absorbed by a filter in Lab B that has a relative rotation of 45 degrees or less.
Conclusion Bell’s Experiment is an attempt to discredit the idea that subatomic entities have intrinsic properties in favour of the idea that properties only become well defined when they are observed. Many experiments using allegedly ‘entangled’ pairs of photons claim to satisfy the conditions for Bell’s ideas to apply and go on to claim to have discovered indirect evidence that either subatomic entities do not have properties which are predetermined at birth, or that it is possible to have instantaneous communication and influence over any distance at all.
I do not think it is safe to accept this conclusion. I think that the experiments to date probably do not comply with the requirements for a valid Bell’s experiment. Hence the usual interpretation of the results is invalid.
I think the experiments do tell us something valuable about light but generally speaking we have not yet understood what the experiments are trying to tell us. I think it would help if we accepted that our current understanding of the nature of light is a horrible mishmash of inappropriate, old-fashioned analogies and that we should become more open to new ideas about the nature of light. That would be the real beauty of the Bell experiments to date.
#Einstein#Bohr#John Bell#Schrodinger#Bell's Theorem#Bell's Experiment#entanglement#hidden variables#Copenhagen Interpretation#spooky action at a distance#light#photons#polarised light#heretical physics
29 Bell’s Inequality 30Sep18
Bell’s Inequality is a simple mathematical result that can be applied to any set of objects where there are three potential properties for those objects and the objects must have one, two or three of those properties. For example, consider a set of 100 men who are wearing hats and/or scarves and/or gloves.
It is called Bell’s Inequality not because it was discovered by the Irish physicist John Stewart Bell but because it was used by him in an influential argument about the nature of quantum mechanics and reality. His argument is generally known as Bell’s Theorem. Experiments that follow his suggestion as to how to test his ideas are called Bell’s Experiments.
I will be discussing all that in the next few essays but first I’d like to get Bell’s Inequality out of the way.
Bell’s Inequality Consider a set of objects, fixed in number, and three properties X, Y and Z. For each object, each of the three simple questions “does it have property X?”, “does it have property Y?” and “does it have property Z?” must be answerable by a simple Yes or No, and that answer must remain fixed. Hence a particular object can have one, two or three of the properties.
Denote:
the number of objects that have property X and not property Y by <X, Y>
the number of objects that have property Y and not property Z by <Y, Z>
the number of objects that have property X and not property Z by <X, Z>
Bell’s Inequality is simply that <X,Y> + <Y,Z> ≥ <X,Z> i.e. the number of objects with property X and not property Y plus the number of objects with property Y and not property Z must equal or exceed the number of objects with property X and not property Z.
Note the pattern in the equation. Y is repeated on the left hand side (the first time in the negative) but does not appear on the right hand side. It ‘drops out’ on the right hand side leaving just the extremities of the left hand side to create the solitary term on the right.
Example: A fixed group of men and the three properties …. wearing hats, wearing scarves and wearing gloves. Bell’s Inequality says: The number of men wearing hats but not scarves, plus the number of men wearing scarves but not gloves, must equal or exceed the number of men wearing hats but not gloves.
In fact there are six inequalities like this. The first symbol can be any one of the three properties and the second symbol can be any one of the remaining two symbols. For example <Z,Y> + <Y,X> ≥ <Z,X> must also be true. There is nothing mysterious about this – it is just simple mathematics.
Proof of the Inequality The simplest way to demonstrate the proof of the inequality is to use a Venn Diagram.
Figure: Venn diagram of a set of objects with properties X and/or Y and/or Z. The lower case letters represent numbers. The number of objects with only property X is a, the number with all three properties is e and so on. Adding up all the numbers gives the total number of objects. If you want to get fancy you can make the size of each separate area proportional to the number of objects it represents.
Proof: <X,Y> + <Y,Z> ≥ <X,Z> becomes (a + d) + (b + c) ≥ (a + b) which must be true because the left hand side contains the extra numbers d and c.
If you want an intuitive explanation try this. The left hand side is bigger than the right because it is boosted by some objects in X that do in fact have property Z and also by objects which have only property Y.
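If you would like to convince yourself that the inequality really is just simple counting, here is a brute-force check in Python. It assigns random non-negative counts to each of the eight regions of the Venn diagram and confirms that the inequality can never fail when the three properties are fixed:

import itertools, random

# Brute-force check: for any non-negative counts of the eight property
# combinations, <X,Y> + <Y,Z> >= <X,Z> always holds.
for trial in range(10_000):
    # count[(x, y, z)] = number of objects with that yes/no combination
    count = {combo: random.randint(0, 100)
             for combo in itertools.product([0, 1], repeat=3)}
    n = lambda first, second: sum(v for (x, y, z), v in count.items()
                                  if (x, y, z)[first] and not (x, y, z)[second])
    XY, YZ, XZ = n(0, 1), n(1, 2), n(0, 2)
    assert XY + YZ >= XZ, "Bell's Inequality failed (it never should)"
print("Bell's Inequality held in all 10,000 random trials.")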
Example:
Figure: Venn diagram of a set of men wearing hats and/or scarves and/or gloves.
There are 42 men in total. Let’s check the Bell Inequality that says that the number of men wearing hats but not scarves, plus the number of men wearing scarves but not gloves, must equal or exceed the number of men wearing hats but not gloves. The number wearing hats but not gloves is 12+5 = 17. The number wearing hats but not scarves is 14 and the number wearing scarves but not gloves is 15, giving a combined total of 29 and this is greater than 17 so the inequality holds (as it must).
A strange case of Hats, Scarves and Gloves Suppose that we replace all the men by twins and put half of each twin pair in a group on the left and the other half in a group on the right. When we do our survey of how many men are wearing hats but not scarves, how many are wearing scarves but not gloves, and how many are wearing hats but not gloves, we get the first part of our answer by surveying the group on the left, and the answer to the second part of each question by surveying the group on the right.
This is analogous to what happens in the Bell experiment. But note that it is more complicated than in the simple Bell Inequality. Some additional assumptions have crept into the story. We have to assume that each member of each twin pair is always dressed exactly the same as the other. We also have to assume that splitting our sampling method is not distorting the results.
What happens if the sample set’s behaviours are not simple and static? I think problems arise.
Suppose the men wearing hats have a fickle habit in relation to wearing scarves: if they are not wearing a scarf and feel a bit cold, they pull a scarf out of their pocket and wear it for short periods of time. Each twin does exactly the same as their sibling. The men not wearing hats do not do this.
Likewise suppose the men wearing scarves have a fickle habit in relation to wearing gloves. If they are not wearing gloves and are feeling a bit cold they tend to pull a pair of gloves out of their pocket and put them on for short periods of time. Each twin does exactly the same as the other. The men with no scarves do not do this.
If we count hats on one side and no scarves on the other side we will get a low statistic due to the extra scarves that have been put on. If we count scarves on one side and no gloves on the other side we will get a low statistic due to the extra gloves that have been put on. If we count hats on one side and no gloves on the other side we will get the same answer as always because wearing hats does not influence glove wearing behaviour.
In these circumstances Bell’s Inequality does not apply. The statistics could easily give a relationship that does not comply with the relationship set out in Bell’s Inequality.
The Simple Pub Crawl I’ll give some more examples and increase the complexity a bit in a way that will be relevant to some of the physics I want to discuss in subsequent essays.
Let us suppose that a group of people are in a village with three pubs/hotels/bars and they have all decided to go on a pub crawl. They can each visit one, two or three pubs (and each pub no more than once). Let us call the pubs X, Y and Z.
Let us take an example where 10 people visit X only, 6 people visit Y only and 4 people visit Z only. 1 person visits X and Y (but not Z), 2 people visit Y and Z (but not X) and 3 people visit X and Z (but not (Y). Two hardy souls visit all three pubs.
How many people are there? Just add all the numbers together. Answer is 28. Which pub is the most popular? Answer is X with 16 visits. How many people visit at least 2 pubs? Answer is 8. OK – got the picture? Now let’s try Bell’s Inequality.
For each person we can ask – did you exit pub X, yes or no? Did you exit pub Y, yes or no? Did you exit pub Z, yes or no? We have the conditions for Bell’s Inequality to apply.
We can now ask every person ‘Did you leave X but not Y? Did you leave Y but not Z? Did you leave X but not Z?’ Add together the number of people that visited X but not Y, and the number that visited Y but not Z. Answer is 13 + 7 = 20. Compare this to the number visiting X but not Z. Answer is 11. Bell’s Inequality is satisfied because 20 is greater than 11. Note that the number of people who visited all three pubs doesn’t come into it.
A month later the pub crawl is held again. Word has got around and 100 people turn up for the event. The organisers add a new rule. The participants can visit 1, 2 or 3 pubs as they prefer, but they have to visit X first and any of the others in alphabetical order. The organisers monitor the event by placing motion detectors outside each pub.
They note that 100 people emerged from pub X, 85 people emerged from pub Y and 50 people emerged from pub Z. <X,Y> is the number that emerged from X but not Y and so is 100 – 85 = 15. <Y,Z> is the number that emerged from Y but not Z and so is 85 – 50 = 35. <X,Z> is the number that emerged from X but not Z and so is 100 – 50 = 50. Arranged in the form of Bell’s inequality this gives 15 + 35 ≥ 50, which is true (with equality, because the Y terms cancel out).
The Triple Pub Crawl Let me use a new version of the pub crawl story. The participants have to visit two or three pubs. The pubs they do visit have to be visited in alphabetical order (i.e. XY, YZ and XZ). Furthermore if they visit a pub they do not have to leave it again. If they do exit a pub they will pass through a person-counter. The pubs act a bit like filters. Some participants get stuck in X, some get stuck in Y and some get stuck in Z.
Disaster strikes! On the occasion of the new format pub crawl it turns out that pub Y has been booked for a wedding and is not available. However the organisers decide to proceed anyway as 64 people have turned up for the event. When they collect the counters the next morning they observe that 32 participants exited pub X but none exited Pub Z. Curious, especially as the organisers are sure that quite a few people that left pub X did go into pub Z.
The next time they hold the same event, pub Y is not closed and so all three pubs are available. 64 people again turn up for the event. The organisers decide that everyone has to visit all three pubs, starting with X. When the organisers collect the counters the next morning they observe that 32 people exited pub X, 16 exited pub Y and 8 exited pub Z.
So again we have a set of objects (in this case people). And for each person we can ask – did you exit pub X, yes or no? Likewise pub Y and pub Z. And then we can ask each person: did you leave X but not Y? Did you leave Y but not Z? Did you leave X but not Z?
So we have the conditions for Bell’s Inequality to apply. Let’s check it. <X,Y> + <Y,Z> = 16 + 8 = 24 whereas <X,Z> = 24. So Bell’s Inequality holds (just).
The more interesting thing to note here is that adding back the middle pub Y has led to more people managing to exit the third pub Z than when pub Y was missing. It seems that pub Y has somehow conditioned participants to survive pub Z a lot better.
That would be a bit surprising in a real pub crawl experiment. But perhaps possible, especially if the participants are Australian.
What is surprising is that it always happens in what I call the three polarizers experiment. The people are replaced by photons and the pubs are replaced by linearly polarizing filters. A beam of photons is aimed to pass through all three polarizers. Polarizer Z is turned at an angle of somewhere between zero and 90 degrees to the first polarizer and the middle polarizer is oriented with half that angle of turn. When the middle polarizer is taken away the number of photons leaving the last filter is low. Putting the middle polarizer back again increases the total number of photons exiting the last filter. The brightness of the emergent light increases even though an extra filter has been added. It is a simple neat experiment that you can do at home.
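For anyone who wants to see the arithmetic behind the home experiment, here is a minimal sketch using the standard Malus’s law result (the probability of a photon surviving a polarizer at relative angle θ is cos²θ). The numbers are chosen to echo the triple pub crawl above:

import math

def survivors(n_photons, angles_deg):
    """Expected photon counts after each polarizer, via Malus's law:
    survival probability at relative angle theta is cos^2(theta)."""
    n = n_photons / 2      # an ideal polarizer passes half of unpolarized light
    counts = [n]
    for prev, curr in zip(angles_deg, angles_deg[1:]):
        n *= math.cos(math.radians(curr - prev)) ** 2
        counts.append(n)
    return [round(c, 1) for c in counts]

print(survivors(64, [0, 45, 90]))   # [32.0, 16.0, 8.0] - middle filter present
print(survivors(64, [0, 90]))       # [32.0, 0.0]       - middle filter removed

With the middle filter at 45 degrees the counts fall 64 → 32 → 16 → 8, exactly the pub crawl numbers; remove it and nothing at all gets through the 90 degree filter.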
I hope you enjoyed this brief excursion into some mathematics.
28 Fresh Perspectives Needed 13Sep18
Introduction I have three motivations in writing this series of essays. The first is to improve my own understanding through research, contemplation and organizing my ideas by writing them. The second is to be able to share this journey with others in a plain-English format, just in case anyone is interested or amused by this (which is not the case for my wife, friends and relatives!). My third reason is the hope of provoking fresh thinking in areas that seem to me to be calling out for new insights and answers.
I think modern cosmology is in trouble. 95-98% of the Universe has gone missing and cannot be found. Cosmologists claim to understand the history of the Universe, even before the so-called Big Bang, but cannot explain the motion of stars in spiral galaxies or why the Universe seems to be expanding at an accelerating rate, or why its geometry seems to be so flat. We can’t even deeply explain the motion of a simple Foucault pendulum here on Earth. And our model for light is an unsatisfactory pastiche of conflicting ideas. Paradoxes persist. How long will it take before we agree – we seem to have lost the path, it should not be as hard as this, we have missed something, we have failed to fully understand something fundamental.
So I have gone back through the foundations of modern physics with an open but skeptical mind. It has been a fascinating journey.
Here are my conclusions:
1. Our model for light is clumsy, old fashioned, contradictory and severely limiting. It needs a fresh look. Wave-particle duality is just a label for something we don’t properly understand.
2. The aether theory isn’t dead, it is just sleeping. It ran into problems and was bypassed, shelved and ignored. But maybe Lorentz, Sagnac and others were onto something that needs a fresh and further look.
3. Special Relativity is a work of genius and uncovered some fundamental aspects of Nature. But it rests on three postulates that may not be always, everywhere and entirely valid.
4. The issue of where inertia comes from in the first place needs some deep consideration.
5. General Relativity is brilliantly successful because it recognizes that both time and the speed of light are affected by gravity. It is also a clever and powerful model that brings a whole lot of new mathematics into play. But for all that it is just a model.
6. If you add the insights of Special Relativity and the fact that gravity slows the speed of light/time back into classical physics you can successfully model, quantify and predict all of the so-called proofs of General Relativity. This shows that General Relativity is an excellent way of looking at certain aspects of Nature, but it is not the only way of understanding Nature.
7. Restricting our viewpoints restricts our understanding and progress. We need some fresh perspectives.
The Cone Here is a parable. Consider a solid cone made out of some hard shiny material. Viewed from one end it will look like a disc. Viewed from the other end it will still look like a disc, but with some indications that it has a symmetrical three dimensional nature and maybe a pointy tip. Viewed side on it will look like a triangle, again with some suggestions of curvature but this time from side to side. Which view is correct? The answer is that all three views are correct, but none is completely correct. Insisting that one view is correct and ignoring the other views is to limit understanding of the true nature of the cone.
I think it is the same with General Relativity. Insisting that it is the only correct way of interpreting the Universe is to limit our chances of developing a deeper understanding.
The Rise of the Metric Approach to Gravity There are various ways to try to describe physics involving gravity. It is clear that four dimensional spacetime is needed and that the lessons of Special Relativity need to be included. Furthermore, the fact that gravity slows down the speed of light and the rate of time points to the need to allow for flexibility in the time dimension. So a flat Minkowski spacetime is not entirely adequate. But this is where the great divide comes in. You can choose to follow Einstein and make use of a fully curved spacetime model, or not.
If you do choose Einstein’s geometric approach then you can regard this as just a model – as Einstein himself did – or you can go further and choose to regard curved spacetime as some sort of fundamental reality. This last step gained popularity after Einstein died, promulgated by luminaries such as Misner, Thorne and Wheeler in the United States and Stephen Hawking in the United Kingdom.
Necessity for Full Spacetime Curvature? N.B. In this essay I will again be making references to a heavyweight textbook on gravity by Charles Misner, Kip Thorne and John Wheeler (MTW): “Gravitation”, C W Misner, K S Thorne, J A Wheeler, W H Freeman, 1973, ISBN 0-7167-0344
MTW have played a major role in promoting the idea that gravity is nothing more than spacetime curvature. Einstein’s own approach was regarded as a curiosity by many scientists for the first forty years of its life but from the middle of the 20th century it gradually assumed the ascendency. Whereas Einstein regarded his full curvature approach to be a useful tool, MTW and others reinterpreted his approach and helped to create the modern view that gravity is just an illusion created by full spacetime curvature.
MTW do not have an open mind on the subject. On p1066 they say “Among all bodies of physical law none has ever been found that is simpler or more beautiful than Einstein’s geometric theory of gravity; nor has any theory of gravity been discovered that is more compelling”. On p1067 they say “For any adequate description of gravity, look to a metric theory”. On p 421 they say “Mass-energy curves space is the central principle of gravity”. On p429 they praise General Relativity because “It describes gravity entirely in terms of geometry; most of its competitors do not”. And John Wheeler is often quoted as saying “Matter tells spacetime how to curve and curved spacetime tell matter how to move.”
Clifford M Will was a student of Kip Thorne at Caltech and later became a leading professor of physics, specializing in General Relativity. I will use quotes from his online article “The Confrontation between General Relativity and Experiment” June 2014, SpringerLink, and his overview “Was Einstein Right? A Centenary Assessment” 33 pages, 8 figures, published in General Relativity and Gravitation: A Centennial Perspective, eds. A. Ashtekar, B. Berger, J. Isenberg and M. A. H. MacCallum (Cambridge University Press), 2015. Abridged version at arXiv:1403.7377.
Will presents the Einstein Equivalence Principle and Strong Equivalence Principle in confusing ways, with each principle containing other principles. Will then says that the equivalence principles are the heart and soul of gravitational theory, “for it is possible to argue convincingly that if they are valid, then gravitation must be a ‘curved spacetime’ phenomenon.” In other words, the effects of gravity must be equivalent to the effects of living in a curved spacetime.
I think that whether “the effects of gravity must be equivalent to the effects of living in a curved spacetime” is true or not depends on what is meant by ‘equivalent’. If the author means “can be satisfactorily modeled by” I would happily agree. If the author means we live in a curved spacetime reality and gravity does not exist, then I think that he has gone too far. Further than Einstein ever went and further than is reasonable.
The Parametrized Post Newtonian Formalism The Parameterized Post-Newtonian Framework (PPN) was developed over 50 years by illustrious physicists such as Eddington, Robertson, Schiff, Nordtvedt and Will. The PPN formalism adds ten generic parameters to a basic version of General Relativity. The parameters can be adjusted so that the PPN can represent a whole variety of competing geometric models of gravity, including subtle variations of General Relativity itself. The idea is then to use experimental results to put limits on the parameters as a way of weeding out the range of theories and preventing new weeds from taking hold.
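To give a feel for how the PPN parameters connect to measurement, here is a standard textbook example: the PPN prediction for light deflection at the solar limb is δθ = ((1+γ)/2)·4GM/(c²R), where γ is the PPN parameter measuring how much space curvature a unit mass produces (γ = 1 in General Relativity). A quick evaluation:

import math

G = 6.674e-11        # gravitational constant (SI)
c = 2.998e8          # speed of light (m/s)
M_sun = 1.989e30     # solar mass (kg)
R_sun = 6.957e8      # solar radius (m): grazing-incidence impact parameter

def deflection_arcsec(gamma):
    """PPN light deflection at the solar limb: ((1+gamma)/2) * 4GM/(c^2 R).
    gamma = 1 reproduces General Relativity's 1.75 arcseconds."""
    delta_rad = ((1 + gamma) / 2) * 4 * G * M_sun / (c**2 * R_sun)
    return math.degrees(delta_rad) * 3600

print(f"gamma = 1.0 (GR):        {deflection_arcsec(1.0):.2f} arcsec")  # ~1.75
print(f"gamma = 0.0 (Newtonian): {deflection_arcsec(0.0):.2f} arcsec")  # ~0.88

Measuring the actual deflection then pins down γ, and similar exercises with the other nine parameters weed out rival theories.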
Theories subject to this treatment include Einstein’s General Relativity (1915), Whitehead (1922), one of Bergman’s (1968) scalar-tensor theories, one of Nordstrom’s theories, theories by Birkhoff (1943), Dicke-Brans-Jordan (1961, 1959), Ni (1970, 1972) and many others.
Suffice it to say that plain vanilla General Relativity complies neatly with these tests and the other models struggle.
The work by Will also covers a lot of metric theories, classified into General Relativity, scalar-tensor theories (of which the Jordan–Fierz–Brans–Dicke theory is a good example), vector-tensor theories and scalar-vector-tensor theories. Let me quote from the second of the Will references mentioned above:
• A number of theories fall into the class of “prior-geometric” theories, with absolute elements such as a flat background metric in addition to the physical metric. Most of these theories predict “preferred-frame” effects that have been tightly constrained by observations. An example is Rosen’s bi-metric theory.
• A large number of alternative theories of gravity predict gravitational wave emission substantially different from that of general relativity, in strong disagreement with observations of the binary pulsar.
• Scalar-tensor modifications of general relativity have become very popular in unification schemes such as string theory, and in cosmological model building. Because the scalar fields could be massive, the potentials in the post-Newtonian limit could be modified by Yukawa-like terms.
• Theories that also incorporate vector fields have attracted recent attention, in the spirit of the Extension of the Standard Model (of sub-atomic particles), as models for violations of Lorentz invariance in the gravitational sector, and as potential candidates to account for phenomena such as galaxy rotation curves without resorting to dark matter.
Again Will uses a range of experimental results and concludes that plain vanilla General Relativity complies neatly with these tests and the other models do not.
My problem with all this work is not its intention, but the way it is carried out. MTW, Will and Nordtvedt make up the rules, act as prosecutor, select the evidence, interpret the evidence and act as jury and judge. If they encounter a problem in a theory they allow no attempt to fix it. They simply bayonet the wounded theory, declare it dead and buried, and move on to the next.
Einstein struggled for ten years to develop his General Relativity model, with many twists and turns. He produced an initial calculation for the light bending effect that was half the final result. He argued for, then against, and finally for the existence of gravitational waves. He introduced a cosmological constant and then called it a mistake. So it may be a bit harsh and possibly premature to kill off other models at the first sign of a problem.
Models with Backgrounds The antagonism by MTW towards alternate theories of gravity and alternate interpretations of Einstein’s approach shows up in the discussion by MTW of attempts to view gravity as a standard type of field situated in a flat spacetime background.
This approach has been developed and explored by notable theorists such as Gupta, Kraichnan, Thirring, Feynman, Weinberg and Deser (see MTW p436). It offers one of several routes to the field equations of Einstein’s General Relativity.
One version (Fierz and Pauli 1939) borrowed from quantum theory and envisages gravity as occurring via the exchange of gravitons – hypothetical zero rest mass particles with a spin number of 2. MTW claim that by the time this approach is fully developed the original flat spacetime has become unobservable. MTW dismiss the theory (p437) because it is silent about the emergence of the Universe from an initial singularity - the Big Bang Theory. Hence MTW dismiss a serious attempt to bring together the two pillars of modern physics because it is silent about something else that they like.
MTW start their more general discussion of models with backgrounds by stating that any flat background must be unobservable (p424). This is the same point of view put to Lorentz by Einstein in relation to Special Relativity.
Einstein did not deny the possibility of a background that may or may not correspond to a luminiferous aether. He just argued that if it cannot show up in Michelson-Morley type experiments it must be unobservable and hence not useful.
MTW praise General Relativity for being free of any ‘prior geometry’, and criticize any competitors which admit this as a possibility. I think they have a point because I think that all geometry is a man-made overlay and hence has no prior reality. In fact no reality at all except in our own minds. But agreeing that there is no prior geometry is not quite the same as agreeing that there is no background. It might just mean that the background has no prior geometry.
I think this subtle difference is vitally and fundamentally important.
In my view, Newton’s rotating bucket, the Foucault pendulum and the Sagnac interferometer all readily distinguish reference frames which are rotating or accelerating from ones which are not. The Cosmic Microwave Background does the same. Conversely, Einstein’s attempt to explain why some objects demonstrate rotational phenomena and others do not by imposing boundary conditions on his cosmological models was a failure. Einstein thought so anyway, even though it suits many modern cosmologists to disagree.
So why do MTW show such antagonism to the idea of a cosmological background? I think it might be because they conflate it with ‘prior geometry’. However, to their credit they attempt to clarify their language. On p 429 they say “By ‘prior geometry’ one means any aspect of the geometry of spacetime that is fixed immutably, i.e. that cannot be changed by changing the distribution of gravitating sources”.
So what about a background geometry that is affected by the distribution of gravitating sources in the Universe? This echoes the argument put forward by Ernst Mach towards the end of the 19th century. Einstein admired the idea so much that he dubbed it “Mach’s Principle.” (Einstein endeavored to incorporate the idea into his theories all his life, but eventually concluded that he had not been successful.) If the background is given a Machian interpretation then I think it has to be taken very seriously.
There seems to be a confused belief that the idea of a background conflicts with something that Einstein called the Principle of General Covariance. This principle states that the outcome of physical experiments does not depend on the choice of reference frame in which to view them. In other words, physics is agnostic to reference frames invented by observers for their own convenience.
This principle is entirely reasonable, but has little to do with the fact that some reference frames are more equal than others. For example, frames that are not accelerating or rotating do not contain spurious forces deflecting unattached test particles all over the place.
It is robustly true that the physical outcomes are the same whatever reference frame you choose to use – it is just that the ease of describing what is going on in them varies vastly depending on the frame you happen to choose. Choose your frame right and you do not have to invent ‘fictitious’ forces to balance the books.
An example of the type of effort that I admire is a modeling approach developed by W T Ni in 1970 and 1972 (see MTW p1070). This has a background geometry and treats gravity as a scalar field. MTW agree that the theory satisfies the equivalence principle (which version is not clear), and that the model is self-consistent and complete. But then they say “If the solar system were at rest in the ‘rest frame of the Universe’, the theory would agree with all experiments to date – except possibly for the expansion of the Universe. But the motion of the solar system through the Universe leads to serious disagreement with experiment (Will and Nordtvedt 1972)”.
The alleged fatal flaw comes from work by Will in 1971 suggesting that the force between two massive objects will depend on the way in which they are travelling through the background metric. This is calculated to create a twice-a-day fluctuation in the tides on Earth, and also tidal fluctuations within the Earth, which conflict with the experimental evidence. This leads Will to claim that the theories of Whitehead (1922) and of Ni (1972) cannot be correct. But is Will correct?
When Galileo presented his model of a spinning Earth orbiting a stationary Sun, the wise men of the day calculated that the tangential speed of the Earth’s surface could be anything up to 1600km/hour and said that Galileo’s model could not possibly be true because no bird could fly that fast in order to keep up. It was not an unreasonable point because this was in the days prior to Torricelli and hence there was not yet a concept of the atmosphere being a thin layer coating the earth with no viscous drag where it meets the vacuum of space. Likewise Will’s objection could be perfectly reasonable and clever, but nevertheless wrong.
Since General Relativity and modern physics generally cannot explain the enormous anomaly in the motion of stars in the discs of all spiral galaxies (without inventing hypothetical dark matter) there is clearly something occurring that is not well explained by the conventional paradigm. So it may be unwise to be too definite about what is right and what is wrong at this stage. What if the type of thinking put forward by Ni resolves the galaxy rotation curve crisis?
The Fabric of Spacetime? It is common to hear expressions such as “the fabric of spacetime”, and “spacetime tells matter how to move”. It is also common to see diagrams with spacetime depicted as a curved or distorted rubber sheet with dimples around massive objects forcing the path of freely moving bodies into curves and orbits.
This is a bit unfortunate. It tends to create the impression that spacetime is a real thing, an actual entity. That moving objects are deflected because they hit a bump in the road.
Spacetime is just a way of defining places and moments, lengths and times in a satisfactory way so we can describe what is going on. We do this sort of thing all the time, but we need to be careful not to get confused between our imagined constructs and actual reality.
For example, we have an agreed way of assigning lines of latitude and longitude to the surface of the Earth. This creates a two dimensional grid. But it is not real. You cannot see it, touch it, taste it, smell it or hear it. You cannot detect it with any instruments. It does not interact with matter in any shape or form. The motion of everything on Earth is oblivious to the imaginary grid that we have imagined and agreed upon for our own convenience of reference.
It is the same with spacetime. Curved or not. It is just a reference frame that we overlay onto physical reality to make it easier to talk about what is going on. If it proves convenient to use a warped geometry then use that. If some other representation is more convenient then use that one instead. It makes no difference to actual reality.
Imaginary reference frames are useful for describing physical systems. That is all. Spacetime does not exist as a thing in its own right, any more than the lines of latitude and longitude exist on the surface of the Earth. Spacetime curvature does not tell matter how to move any more than the lines of latitude and longitude tell ships and planes how to move, or ducks how to migrate. It is important not to become confused between descriptions of reality that we happen to find useful and reality itself.
When someone says “matter tells spacetime how to curve and spacetime curvature tells matter how to move” we should not take the words too literally. It would be better to remind ourselves that we have imposed an imaginary and somewhat arbitrary reference frame across the physical system we are trying to describe and that for some purposes we find it convenient to model the effects of gravity by using a warped four dimensional framework.
I know that General Relativity can be recast in terms so generic that the convenience of coordinates can be dispensed with all together. However I do not think this alters my point.
I am also familiar with the argument that goes as follows. By applying geometry to the surface of the Earth we can discover that two dimensional geometry which is locally Euclidean no longer works on a larger scale, thus revealing that the surface of the Earth is a curved manifold in three dimensional space. Similarly, applying four dimensional geometry which is locally Lorentzian on a larger scale reveals that spacetime curvature is necessary to account for physical dynamics in the presence of gravity. I would agree with this if I thought that Einstein’s Equivalence Principle was literally true. But I don’t. I think that gravity can be mimicked by a linear acceleration to a certain degree, and that it is possible to build clever mathematical models based on this fact. But is gravity, the most dominant force in the Universe, just an illusion created by a quirk of geometry? I don’t think so.
Is General Relativity Perfect? Yes, you can get rid of gravity by imagining spacetime is curved. Yes this is brilliant stuff. Yes this produces a small number of remarkable (very small) predictions that turn out to be true. And yes it is possible to build innumerable wonderful cosmological models using curved spacetime geometries. But that is not conclusive proof that the modern version of General Relativity is the only way to look at the Universe, the best way to look at the Universe, or even the most convenient way to look at the Universe.
General Relativity is very hard to use and I think the results of its over complicated mathematics throw up more questions than they answer. And while I am being heretical, I may as well produce a list of criticisms. In my naive opinion, the modern version of General Relativity:
1. is based on a Principle of Equivalence which is just a mathematical assumption
2. elevates spacetime to a status it does not deserve
3. does not explain why matter, stress and energy distort spacetime
4. does not explain the origins of linear or rotational inertia
5. does not explain why matter has mass
6. is so complicated that it enables mathematicians to come up with a whole range of solutions which have no correspondence in Nature
7. creates red herrings that waste everyone’s time
8. has not helped with the dilemmas of missing Cold Dark Matter and Dark Energy
9. has predictions which can be accounted for in other ways
10. obscures, bypasses or overshadows a lot of fundamental issues that deserve more attention.
In many ways, the proof of the pudding is in the eating. A century after General Relativity was produced, most astronomers do not use it except in special circumstances such as gravitational lensing and black holes. In day-to-day discussions they just use a post-Newtonian approximation. And although Einstein and modern ‘metricists’ seem to be averse to any ‘a priori’ geometry in the Universe, astronomers nevertheless find it convenient to have agreed reference frames for the Solar System, the Milky Way and the wider Universe. Are they instinctively using something that has physical significance?
Spatial Curvature on a Cosmological Scale Einstein’s General Relativity model requires mass/energy to warp spacetime. Its equations involve the mass/stress/energy tensor warping the 4x4 spacetime metric tensor in all of its sixteen components (six of which are duplicates due to symmetry).
Suppose there are three spacecraft at rest with respect to each other and the cosmic microwave background. Connect them by laser beams and measure the angles between the three beams. The beams form a triangle. If the interior angles always add up to 180 degrees, then space is flat. If the sum of the angles is more or less than that, then space has positive or negative curvature.
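As a toy illustration of how the triangle test works, consider its two dimensional analogue on the surface of a sphere, where the interior angles of a triangle exceed 180 degrees by the ‘spherical excess’ E = A/R² (A the triangle’s area, R the sphere’s radius). The numbers below are purely illustrative:

import math

def excess_degrees(area, radius):
    """Spherical excess: how far the angle sum of a triangle on a sphere
    of the given radius exceeds the Euclidean 180 degrees."""
    return math.degrees(area / radius**2)

R = 6371e3                  # a sphere the size of the Earth (m)
A = 1000e3 * 1000e3 / 2     # a triangle roughly 1000 km on a side (m^2)
print(f"angle sum = {180 + excess_degrees(A, R):.2f} degrees")  # > 180: curved

As R grows the excess vanishes and the Euclidean 180 degrees returns – which is exactly why a sufficiently large triangle can reveal (or rule out) curvature.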
As far as we can tell, on a very large spatial scale our Universe is flat. Very flat. Parallel lines in intergalactic space will never meet. But this requires a very particular value for the universal stress energy tensor. Theorists have been struggling for nearly a century to explain why this might be so. It would be a remarkable coincidence if the average mass/stress/energy in our Universe happens to be exactly the right amount for universal spacetime to have neither positive nor negative curvature on a macro scale.
So Why is our Universe so Flat? The Einstein-Friedmann-Robertson-Walker models of the Universe involve solutions for Einstein’s equations based on the assumption that the Universe is more or less uniformly filled with mass/stress/energy, a bit like a perfect fluid. The equations have solutions which have positive curvature (like a sphere in 3D), negative curvature (like a hyperbolic surface in 3D) or zero curvature (i.e. flat) depending on the energy density measured by the omnipresent mass/stress/energy tensor.
But observations show the Universe is flat. So how did the Universe come up with exactly the right amount of mass/stress/energy to arrive at this special case? Theorists have come up with all sorts of suggestions, but it is still a mystery.
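For scale, the ‘exactly right amount’ is the so-called critical density, ρ_c = 3H²/8πG, the density at which the flat Friedmann solution applies. Here is a quick evaluation for an assumed Hubble constant of 70 km/s/Mpc:

import math

G = 6.674e-11                          # m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22                   # 70 km/s/Mpc converted to 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3
print(f"critical density ~ {rho_c:.1e} kg/m^3")               # ~9e-27
print(f"... about {rho_c / 1.67e-27:.1f} proton masses per cubic metre")

About five and a half proton masses per cubic metre: miss that tiny figure in either direction and, on the standard account, curvature should run away over cosmic time.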
I have a much simpler answer. The Universe is flat because it has always been flat and that is the only thing it can be. The non-flat Friedmann solutions are just artifacts of the model and the assumptions used in obtaining its generic solutions. Not all aspects of the solutions have to correspond to reality.
When I read the debate between cosmologists about possible values for the cosmological constant I cannot help but be reminded of the debate between medieval theologians about how many angels can dance on the head of a pin. All very clever, but maybe not very useful.
Consider the Cartesian map analogy again. It is possible to map the surface of the earth onto a two dimensional surface by allowing the lines of longitude to move further and further apart as the distance from the equator increases. This is very useful, especially when reproducing maps on paper. But it produces singularities at the north and south poles. It is no good worrying about the meaning of these singularities and what bizarre things might be happening at the poles because the singularities do not exist in reality – they are just an artifact of the mathematical approach used to build the two dimensional model.
Likewise, you can waste time worrying why the Universe is flat, or you can just accept that the non-flat mathematical solutions are an artifact of a particular set of solutions to a peculiar model of the Universe based on a peculiar approach to describing physics.
I say peculiar model of the Universe because it seems to me that the Einstein-Friedmann-Robertson-Walker models of the Universe rest on some doubtful assumptions. Friedmann was not making assumptions summarizing actual experimental observations – he was just making gross simplifications in order to be able to get a handle on the mathematics. I don’t think the Universe is anything like a homogeneous perfect fluid. The more we look the more we find macro patterns in its structure: huge super-clusters of galaxies, filaments, voids and walls. The analogues of the particles in a fluid are the galaxies. However, unlike the molecules in a fluid, the space between the galaxies contains a lot of intergalactic dust, neutrinos and photons. Furthermore galaxies collide with each other in ways that are totally different to the ways that molecules collide in a fluid. And the list of differences goes on.
I think the large scale geometry of our Universe is so flat because it was never curved. Cosmological curvature is our idea – not Nature’s.
Conclusion Inspired by Einstein’s great work there have been literally dozens of other models of gravitation over the last hundred years or so.
At first gravitation theory was a theorist’s paradise but an experimenter’s purgatory. Since the 1960’s however, developments in space technology and astronomy have created the ability to test many aspects of this work. The clear winner has been Einstein’s original theory. So much so that many scientists regard spacetime curvature not as a model of what is going on in nature, but as a fundamental new reality.
I think this is a mistake. I think that General Relativity is a very clever and successful model, but a model nonetheless. There is plenty that we do not yet understand properly about the Universe, how it works and how it evolved. Refusing to consider alternative approaches has the strong likelihood of unnecessarily limiting our understanding and delaying the next generation of breakthroughs.
General Relativity leaves key questions unanswered, e.g. in relation to the origins of inertia and the dynamics of spiral galaxies. The fact that 98% of the Universe required by theory cannot actually be found may be trying to tell us something - we have missed something fundamental. We have got off track. Something is wrong somewhere.
The current orthodoxy is a cage to our thinking and it deserves to be rattled and shaken. And finally a message to young scientists – please do not stop asking questions and do not stop questioning what they tell you, especially if it seems fudged.
A quote from a lecture Einstein gave in 1921: “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain they do not refer to reality.”
27 Gravitational Waves 12Sep18
Introduction This essay continues my series of essays discussing tests of Einstein’s Theory of General Relativity. More detailed descriptions of the tests themselves can be found online and in the literature. See for example the literature review in May 2017 by Estelle Asmodelle from the University of Central Lancashire Ref: arXiv:1705.04397 [gr-qc] or arXiv:1705.04397v1 [gr-qc].
I have questioned whether the experimental tests exclude any other explanations for the same phenomenon. So far I have examined gravitational redshifts and gravitational light bending, the Shapiro round-trip light delay and the ‘anomalous’ precession of Mercury. The evidence so far is that while General Relativity provides a satisfying explanation for all of these experimental observations, other ways of describing the outcomes are also viable. Hence there may be more than one way to include all the evidence within a different but still complete and consistent model or theory.
In this essay I will look at the latest of the five so-called tests – gravitational waves.
Gravitational Waves Gravitational waves are generated in certain gravitational interactions and propagate as waves outward from their source at the speed of light. Their possibility was discussed in 1893 by the polymath Oliver Heaviside, using the analogy between the inverse-square laws in both gravitation and electricity.
In 1905, Henri Poincaré suggested that a model of physics using the Lorentz transformations (then being incorporated into Special Relativity) required the possibility of gravitational waves (‘ondes gravifiques’) emanating from a body and propagating at the speed of light.
Some authors claim that gravitational waves disprove Newton’s mechanics since Newton assumed that gravity acted instantaneously at a distance. I think this is unfair to Newton. Whether or not Newton explicitly claimed that gravity acted instantaneously at a distance I do not know, but it would have been a reasonable and pragmatic working assumption to make at the time. Furthermore whether he assumed instantaneous effects or delays at the speed of light makes no practical difference to the validity of Newton’s work for the type of celestial mechanics he was interested in.
In 1916, Einstein suggested that gravitational waves were a firm prediction of General Relativity. He said that large accelerations of mass/energy would cause disturbances in the spacetime metric around them and that such disturbances would travel outwards at the speed of light. A spherically symmetric pulsation of a star would not suffice because the gravity effects would still be felt as coming from the centre of mass. The cause would have to be a large asymmetric mass that was rotating rapidly. Or better still, two very large masses rotating around each other.
In general terms, gravitational waves are radiated by objects whose motion involves acceleration and changes in that acceleration, provided that the motion is not spherically symmetric (like an expanding or contracting sphere) or rotationally symmetric (like a spinning disk or sphere).
A simple example is a spinning dumbbell. If the dumbbell spins around its axis of its connecting bar, it will not radiate gravitational waves. If it tumbles end over end, like in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, or the faster it tumbles, the greater the gravitational radiation. In an extreme case, such as when two massive stars like neutron stars or black holes are orbiting each other very quickly, then significant amounts of gravitational radiation will be given off.
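To put some numbers on ‘the heavier or faster, the greater the radiation’: the standard quadrupole-formula result for two masses m1 and m2 in a circular orbit of separation a is P = (32/5)·G⁴(m1·m2)²(m1+m2)/(c⁵a⁵). A quick sketch (circular-orbit approximation, standard textbook constants):

G, c = 6.674e-11, 2.998e8   # SI units

def gw_power(m1, m2, a):
    """Gravitational wave power (watts) radiated by a circular binary,
    via the standard quadrupole formula."""
    return (32 / 5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * a**5)

# The Sun-Earth system radiates a mere ~200 W in gravitational waves --
# hence the incredible weakness of the effect for anything but the most
# extreme compact binaries.
print(f"Sun-Earth: {gw_power(1.989e30, 5.972e24, 1.496e11):.0f} W")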
Over the next twenty years the idea developed slowly. Even Einstein had his doubts about whether gravitational waves should exist or not. He said as much to Karl Schwarzschild and later started a collaboration with Nathan Rosen to debunk the whole idea. But instead of debunking the idea Einstein and Rosen further developed it and by 1937 they had published a reasonably complete version of gravitational waves in General Relativity. Note that this is 22 years after the General Theory was first published.
In 1956, the year after Einstein’s death, Felix Pirani reduced some of the confusion by representing gravitational waves in terms of the manifestly observable Riemann curvature tensor.
In 1957 Richard Feynman argued that gravitational waves should be able to carry energy and so might be able to be detected. Note that gravitational waves are also expected to be able to carry away angular or linear momentum. Feynman’s insight inspired Joseph Weber to try to build the first gravity wave detectors. However his efforts were not successful. The incredible weakness of the effects being sought cannot be over emphasized.
More support came from indirect sources. Theorists predicted that gravity waves would sap energy out of an intensely strong gravitational system. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar (a discovery that would earn them the 1993 Nobel Prize in Physics). In 1979, results were published detailing measurement of the gradual decay of the orbital period of the Hulse-Taylor pulsar, and these measurements fitted precisely with the loss of energy and angular momentum through gravitational radiation as predicted by calculations using General Relativity.
Four types of gravitational waves (GWs) have been predicted. Firstly, there are ‘continuous GWs,’ which have almost constant frequency and relatively small amplitude, and are expected to come from binary systems in rotation, or from a single extended asymmetric mass object rotating about its axis.
Secondly, there are ‘Inspiral GWs,’ which are produced by massive binary systems that are spiralling in towards one another. As their orbital distance lessens, their rotational velocity increases rapidly.
Then there are ‘Burst GWs,’ which are produced by an extreme event such as asymmetric gamma ray bursters or supernovae.
Lastly, there are ‘Stochastic GWs,’ which are predicted to have been created in the very early universe by sonic waves within the primordial soup. These are sometimes called primordial GWs and they are predicted to produce a GW background. Personally I doubt that this last type of GW exists.
On February 11, 2016, the LIGO and Virgo Scientific Collaboration announced they had made the first observation of gravitational waves. The observation itself was made on 14 September 2015, using the Advanced LIGO detectors. The gravitational waves originated from a pair of black holes that merged over a billion years ago, briefly radiating more power than the light output of a billion trillion stars. For the first time in human history, mankind could ‘feel and hear’ something happening in deep space and not just ‘see’ it. The black holes were estimated to be 36 and 29 solar masses respectively and circling each other at 250 times per second when the signal was first detected.
By August 2017 half a dozen other detections of gravitational waves had been announced. I think all of them have been inspiral GWs. These produce a characteristic ‘chirp’ in which the signal becomes quicker and stronger and then stops. This is very useful for finding the signal amongst all the background noise. The flickering light pattern signal in the interferometer detector can be turned directly into a sound wave and actually does sound like a chirp.
In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. (The same Kip Thorne who co-authored the heavyweight textbook on gravity that I have referred to so often in these essays that I gave it its own acronym - MTW).
As I first drafted this essay in 2017 there was considerable excitement in the world of astronomy because the Laser Interferometer Gravitational-Wave Observatories (LIGO) suggested that a pair of neutron stars were in the process of merging. Space based telescopes were then able to look in that direction and they observed an intense burst of gamma rays. This is the first example of the two types of observational instruments working together and the dual result confirms that LIGO had been observing what they thought they were observing. Furthermore it provides evidence that gravitational waves travel at the speed of light.
Detection LIGO is a large-scale long-term physics project that includes the design, construction and operation of observatories designed to detect cosmic gravitational waves and applied theoretical work to develop gravitational-wave observations as an astronomical tool. It has been a struggle lasting many decades. It took many attempts to achieve funding for the observatories and nearly a decade to make the first successful observations. A triumph of persistence, optimism and the begrudging willingness of the USA National Science Foundation to fund a speculative fundamental science project to the tune of US$1.1 billion over the course of 40 years.
To my mind the experimental set up is reminiscent of the Michelson-Morley experiments 140 years ago. But it is on a much larger scale and is incredibly more sensitive, with all sorts of very clever tricks to increase the sensitivity and to get unwanted noise out of the system. Two large observatories have been built in the United States (in the states of Washington and Louisiana) with the aim of detecting gravitational waves by enhanced laser interferometry. The observatories have mirrors 4 km apart, and each arm forms a resonant optical cavity between its mirrors.
When a gravitational wave passes through the interferometer, the spacetime in the local area is altered. Depending on the source of the wave and its polarization, this results in an effective change in length of one or both of the beams. The effective length change between the beams will cause the light currently in the cavity to become very slightly out of phase (anti-phase) with the incoming light. The cavity will therefore periodically get very slightly out of coherence and the beams, which are tuned to destructively interfere at the detector, will have a very slight periodically varying detuning. This results in a measurable signal.
Or, to put it another way: After approximately 280 trips up and down the 4 km long evacuated tube arms to the far mirrors and back again, the two beams leave the arms and recombine at the beam splitter. The beams returning from the two arms are kept out of phase so that when the arms are both in coherence and interference (as when there is no gravitational wave or extraneous disturbance passing through), their light waves subtract, and no light should arrive at the final photodiode. When a gravitational wave passes through the interferometer, the distances along the arms of the interferometer are repeatedly shortened and lengthened, creating a resonance and causing the beams to become slightly less out of phase, allowing some of the laser light to arrive at the final photodiode and thus creating a signal.
Light that does not contain a signal is returned to the interferometer using a power recycling mirror, thus increasing the power of the light in the arms. In actual operation, noise sources can cause movement in the optics that produces similar effects to real gravitational wave signals. A great deal of the art and skill in the design of the observatories, and in the complexity of their construction, is associated with the reduction of spurious motions of the mirrors. Observers also compare signals from both sites to reduce the effects of noise.
The observatories are so sensitive that they can detect a change in the length of their arms equivalent to ten-thousandth the charge diameter of a proton. This is equivalent to measuring the distance to Proxima Centauri with an error smaller than the width of a human hair.
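Taking those claims at face value, here is a back-of-envelope check (illustrative numbers only; the proton’s charge diameter is roughly 1.7×10⁻¹⁵ m):

proton_diameter = 1.7e-15          # approximate charge diameter of a proton (m)
arm_length = 4e3                   # LIGO arm length (m)
delta_L = proton_diameter / 1e4    # the quoted smallest detectable length change
strain = delta_L / arm_length
print(f"implied strain sensitivity h ~ {strain:.1e}")         # ~4e-23

proxima = 4.25 * 9.461e15          # distance to Proxima Centauri (m)
print(f"equivalent error over that distance: {strain * proxima * 1e6:.1f} microns")
# ... a couple of microns, comfortably thinner than a human hair (~70 microns).

So both popular comparisons are mutually consistent: a fractional length sensitivity of a few parts in 10²³.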
Although the official description of LIGO talks about gravitational waves shortening and lengthening the arms of the interferometers by almost infinitesimal amounts, I think it might also be reasonable to describe what is going on as very slight changes in the speed of the photons being reflected back and forth 280 times in the 4 km long arms, as compared to the reference photons in the resonant cavities.
Some Comments on the Interpretation Commentators continually refer to gravitational waves as being “ripples in the fabric of spacetime”. There seems to be some deep-seated human desire to regard spacetime as being real and tangible, more or less like some sort of four dimensional fluid in which the Universe is immersed. Computer based animations invariably depict empty space as some sort of rubberized sheet being dimpled by massive ball bearings, and this promotes the same sort of mental images, attitudes and beliefs. Which is a pity.
It may be a lost cause but I point out once again that spacetime is a human construct for measuring, modeling and discussing what is going on in the Universe. It has no more reality than the coordinate grid of latitude and longitude lines here on Earth.
It was not Einstein who promoted the idea that curved spacetime is an actual physical reality. This only happened after his death and was promoted by authors such as Misner, Thorne and Wheeler (MTW, of the classic textbook ‘Gravitation’) and Stephen Hawking. For example, John Wheeler often made the comment that “mass/energy tells spacetime how to curve, and spacetime curvature tells matter how to move”. The cover of MTW’s textbook shows a little ant wandering around on the surface of an apple and dutifully following its curvature.
I would say to John Wheeler that he started to confuse mathematical models with reality and that the analogy with the ant is a false one. The ant can feel the curvature of the apple with its little feet. The surface and its curvature are real and tangible. But spacetime is a man-made abstraction created for our own convenience. A better analogy is the lines of latitude and longitude we have invented for talking about movement on the surface of our home planet. These lines do not actually exist. They cannot be observed. They are not tangible. I would say to John Wheeler that spacetime does not tell matter how to move any more than the latitude and longitude grid on Earth tells ducks how to migrate.
Which is not to say that I think that spacetime does not correspond to something that is observable. In fact I do. But this is a heretical idea that I will explore in other essays.
I also agree that applying a spacetime metric to this “something” is a good idea. But spacetime is not that something, and that something is not spacetime. In other words, do not let a reference system invented by mankind for the convenience of describing physics become mentally confused with reality itself.
Another crime in my book is committed by commentators who compare gravitational waves with electromagnetic waves. Unless such commentators can explain how two stars orbiting each other can produce quantized packets of energy, and then how these packets can be reflected, polarized, refracted and so on, I suggest that they refrain from such analogies. If they must use analogies I suggest that they try acoustic comparisons instead.
Note that Doppler effects are a familiar phenomenon in sound waves and they should also occur for other moving disturbances such as gravitational waves. But the Doppler effect is not called red-shifting when it applies to acoustic waves, and I think it should not be called red-shifting for gravitational waves either. It is just a plain old Doppler effect.
Discussion I do not find it surprising that a pair of massive stars rotating about each other might have tiny push-pull effects a long way away. I think this is what you would expect to find even with a basic inverse-square law based on classical physics. For example, if a large asteroid suddenly knocked the Moon out of its orbit, I think it reasonable to expect that observers on Earth would notice changes in gravity very soon afterwards.
Nor am I surprised that gravitational disturbances travel at the speed of light. In fact I am surprised that this was not measured experimentally years ago. For example, the passage of the Moon overhead produces a noticeable gravitational tidal effect on the surface of the Earth. Since the centre of the pattern of this disturbance coincides exactly with where the Moon appears to be, that is evidence that the gravitational effect arrives hand in hand with the visible light from the Moon.
I would be surprised if gravitational waves are ever found to consist of discrete quantized packets, analogous to photons. In my currently preferred conceptual model of the Universe, photons are disturbances in something that can be modelled by spacetime constructs, and gravitational waves are disturbances of that something itself.
This is more than a semantic difference. Consider a laser beam that is pointed at the sky and turned on and off again. This sends bunches of well-collimated photons off on a journey into deep space which, in principle, can continue travelling indefinitely. Barring absorption by dust or blocking by some solid barrier, the beam of photons stands a chance of being detected in some distant galaxy at some time in the future. Not so a gravitational wave. The energy from a gravitational wave spreads outwards in all directions and becomes increasingly weak with distance from its source. I think there is almost no chance of detecting gravitational waves coming from binary star events and suchlike outside of their local galaxy. Colliding galaxies might be a different story.
Conclusion After initial doubts, Einstein eventually decided that gravitational waves were a necessary feature of his Theory of General Relativity. The recent detection of gravitational waves, apart from being a remarkable achievement, is further confirmation that General Relativity works well as a model. However I think it is not proof that General Relativity is the only viable and useful way of looking at physics in our Universe.
1 note
Text
26 Mercury’s Perihelion Precession 10Sep18
Introduction This essay continues a discussion about tests of Einstein’s Theory of General Relativity and the foundations of modern physics more generally. It looks at an early success claimed for General Relativity: its mathematics accurately modelled a perplexing feature of solar system physics – a tiny unexplained extra precession in the orbit of Mercury around the Sun. But is General Relativity the only way to achieve this success? And why is it so hard to explain what is going on in plain English?
Prologue Previous essays have looked at the classic tests of General Relativity that involve photons: gravitational redshift, gravitational light bending and the Shapiro round-trip light delay. The evidence suggests that General Relativity provides an accurate and consistent explanation for any minor deviations from classical predictions.
However, the evidence also suggests that General Relativity is not the only way of interpreting the results. I’m open to the idea that General Relativity provides a satisfying explanation for the results, but this does not mean that no other theory, approach or model is possible. In fact I strongly suspect that other approaches may offer extra understanding and insights.
More detailed descriptions of the tests themselves can be found online and in the literature. See for example the May 2017 literature review by Estelle Asmodelle of the University of Central Lancashire, arXiv:1705.04397 [gr-qc].
Included in the tests is the very first evidence that Einstein offered for the usefulness of his new approach – the fact that it finally gave an accurate answer for a tiny oddity in the orbit of the planet Mercury.
MTW (p433) quote Einstein as saying in 1915: “In the present work I find an important confirmation of this most radical theory of relativity; it turns out that it explains qualitatively and quantitatively the secular precession of the orbit of Mercury in the direction of the orbital motion, as discovered by Le Verrier, which amounts to about 45” per century, without calling on any special hypothesis whatsoever”.
Einstein’s explanation of the anomalous precession in the perihelion of Mercury’s orbit around the Sun provided the first experimental support for General Relativity. I say experimental support not because it was an experiment designed to test General Relativity but because General Relativity gave a quantified explanation of the effect where other theories had failed to do so in a consistent and satisfying way.
The success of the Mercury perihelion precession calculation was the first confirmation that General Relativity works for large, heavy, slow moving objects like a planet.
However, a note of warning. The fine detail of Mercury’s orbit is affected by many complications and several unknowns. Furthermore, Einstein’s result comes from the depths of some pretty complicated mathematics and is never, that I can find anyway, accompanied by a simple plain English explanation of what is going on. The current orthodox explanation is that spacetime curvature tells Mercury to travel around in its orbit and a small second order effect in the curvature is responsible for the anomalous extra precession. But try asking some questions about this, such as: how and why does the small second order effect arise? What, if anything, does the eccentricity of Mercury’s orbit have to do with it? Would the effect still occur if Mercury orbited in the other direction? What is the full gamut of relativistic effects in Mercury’s orbit? The populist presenters of General Relativity quickly run out of puff.
Most explanations of the perihelion effect are purely mathematical. This may satisfy those who think that physics is just mathematics, but I think it is less than fully helpful to students who have not devoted several years of their life to reaching that particular part of mathematics, or to the general public. I think a good physicist should be able to give a reasonable explanation of their mathematical results in plain language, or at least try to do so.
Precession of Mercury In Newtonian physics, the path of a small object orbiting a large spherical mass traces out an ellipse that has the center of the spherical mass at its focus. The point of closest approach is called the periapsis. If the central body is the Sun it is called the perihelion.
A number of effects in the Solar System cause the perihelia of planets to slowly advance (precess) in the same direction as their orbit. The main cause comes from the gravitational effects of the other planets. A much smaller effect is due to the fact that the Sun is not spherical and is itself spinning (i.e. the Sun is oblate and has a quadrupole moment).
You can think of the precession in the following way. First imagine the orbit is a perfect ellipse repeating itself over and over again. Then imagine this whole pattern slowly spins around the Sun. The following diagram may help.
The diagram is by Estelle Asmodelle, as cited above. For clarity, the size of the effect is greatly exaggerated. The Sun is the green disc and Mercury the yellow disc – also grossly exaggerated in scale. Theta is the change in angle from one aphelion (or perihelion) to the next.
In Newton’s ‘System of the World’ (Book 3), Newton states that the precession of Mercury can be accounted for by perturbations caused by other planets. Newton’s work does explain the precession to a very large degree, but it does not fully account for the phenomenon. There is a small residual anomalous precession that was recognized and identified as a problem in celestial mechanics by Urbain Le Verrier in 1859.
Le Verrier had already achieved fame by predicting the existence of the planet Neptune. He seemed to enjoy the fine detail of celestial mechanics and was brilliant at it. Le Verrier used perturbation theory and planetary transit data spanning 50 years of observations to calculate the interaction of Mercury with the other planets. He calculated the residual ‘anomalous’ precession to be 38” per century. This was further refined by Simon Newcomb in 1882 and re-calculated as 43” per century. The current best estimate is 43.08 plus or minus about 0.1 arcseconds per century. As a fraction of a full revolution this is about 0.33 parts per million per year.
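For anyone who wants to check that last conversion, the arithmetic is a one-liner (a full circle is 1,296,000 arcseconds):

# Express the anomalous precession as a fraction of a full revolution per year.
anomaly = 43.08 / 100          # arcseconds per year
full_circle = 360 * 3600       # arcseconds in a full revolution
print(f"{anomaly / full_circle * 1e6:.2f} parts per million per year")  # ~0.33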
For clarity let me explain the larger precession effects in Mercury’s orbit. There is a total effect of about 5600 seconds of arc per century, but of this about 5030 seconds are due to classical mechanics using an Earth-based system of reference, which in effect means that Mercury’s orbit is being viewed from a continuously changing angle.
The second biggest effect comes from the gravity of the other planets, mainly Jupiter. All the planets perturb (wobble) each other’s orbits a little bit. The combined effect of the planets drags the midline of Mercury’s orbit (i.e. the line from the perihelion to the aphelion) around by about 530 seconds per century. This effect was already calculable using the classical mechanics of Newton and his successors.
After Le Verrier’s work many theories emerged to try to resolve the small residual ‘anomalous’ precession using Newtonian equations, but they were ad hoc and failed to make useful wider predictions. For example, some people speculated that the excess precession might be due to an unseen planet, which they named Vulcan. After all, that was how Neptune (and later Pluto) was discovered. Other people suggested bands of asteroids, solar wind effects, strange goings-on inside the Sun, the Solar corona, electric and magnetic influences, departures from the inverse square law for gravity, a dependence of the gravitational force upon the velocity of the orbiter and so on.
Some attempts at physical explanations suggest that Mercury feels a gravitational potential that is not exactly the potential of Newtonian celestial mechanics. Some sources say this is embodied in the Schwarzschild solution for just the Sun itself, and some say it arises because Mercury’s own mass distorts the spacetime through which it moves. Other sources suggest that a change in the gravitational potential (or the change in the spacetime curvature if that is the way you want to look at it) is caused by the other planets.
Unconventional explanations also continue to come forward. The complexity of the physical situation and the fact that the outcome is already known lend themselves to ideas that ‘back-solve’ the result. For example, it is possible to make some assumptions about Venus so that it perturbs the orbit of Mercury to the desired degree. Some of these unconventional suggestions contain physics that seems plausible, or at least interesting, but the effects seem too small, or are not evident anywhere else. Some of the efforts get the right answer but I struggle to understand the suggested physics or the calculations.
Overall I find the array of explanations too complicated, confusing and possibly contradictory.
MTW (p1113) quote G. Clemence from 1947: “The observations cannot be made from a Newtonian frame of reference. They are referred to the moving equinox, that is, they are affected by the precession of equinoxes, and the determination of the precessional motion is one of the most difficult problems of observational astronomy, if not the most difficult. In the light of all these hazards, it is not surprising that a difference of opinion could exist regarding the closeness of agreement between the observed and theoretical motions.”
Einstein used a variety of mathematical approaches (e.g. Gerber’s equation) over a span of eight years (1907-1915) to try to explain the anomalous precession. It is fair to say that he developed his General Theory partly with the Mercury precession issue in mind (although a 1938 book signed off by Einstein asserted that he developed his theory first and only then applied it to the problem) and that he had several attempts at being able to model it. In 1915 he used the field equations of his new model and was able to explain the anomalous precession precisely, which delighted him greatly. His result for the tiny anomalous precession was 45” ± 5”. This early success was a ‘feather in the cap’ for his new theory and helped it gain favorable attention.
In 1915 Karl Schwarzschild (then serving with an artillery brigade on the Russian front) produced a solution of Einstein’s equations for the case of a large, non-rotating spherically symmetric gravitational mass. This gives a neater route to the mathematical calculation of Mercury’s orbit in General Relativity and the precession shows up as a small extra term dependent on the inverse cube of Mercury’s orbital radius. This term is absent in classical mechanics.
The second derivation may be neater than the first but it still does not, to me anyway, give a clear and simple plain English explanation of what is going on. My overall impression is that it took Einstein eight years of hard work, with several dead ends along the way, to finally come up with an approach and equations that worked. He used a simplified model of Mercury’s situation and managed to conjure up a tiny second order term which gave a prediction for the anomalous precession very close to the best estimates available at the time. The model works, but exactly why it works is less clear.
Perihelion Shifts Elsewhere The cause of the anomalous perihelion shift for Mercury also applies to the orbits of the other planets, but the effects are smaller. For example, about 8.63 arcseconds per century for Venus and 3.84 arcseconds per century for Earth. Other examples in the galaxy are more interesting.
In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, a discovery that would earn them the 1993 Nobel Prize in Physics. Scientists were very excited by this and subsequent discoveries as they provided marvelous examples of physics at work in much stronger gravitational fields than those found in the Solar System.
Applying General Relativity to these systems gives rise to predictions of ‘anomalous’ periastron shifts, just as it does for Mercury, only very much larger. The observational evidence is consistent with the predictions obtained using General Relativity.
The General Relativity model also predicts that energy is carried away from such systems in the form of gravity waves. (The correct term might be gravitational waves – I’m not sure, so I’m going with the shorter word.) Binary pulsars show orbital decays consistent with gravity waves carrying away some of their energy, as predicted. Another success for General Relativity. Gravity waves are the subject of my next essay.
Plain English Explanation? General Relativity’s account of the anomalous perihelion precession of Mercury can be expressed mathematically in several ways. Einstein originally used approximation methods for the equation of motion, and his working contained a higher order term that matches the size of the anomaly.
Later on the solution was expressed in terms of the Schwarzschild metric for a large non-rotating spherical mass. Solving the field equations for this situation gives a framework for modeling Mercury’s orbit, and the result has an extra term involving the inverse cube of Mercury’s orbital radius. This term accurately accounts for the little extra bit of precession. But why?
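For the record, the extra term leads to a compact standard formula for the advance per orbit, Δφ = 6πGM/(c²a(1−e²)), which anyone can evaluate without touching a tensor. A quick check with rounded standard values for the Sun and Mercury:

# The standard General Relativity result for perihelion advance per orbit:
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
# evaluated for Mercury and converted to arcseconds per century.
import math

GM = 1.327e20         # m^3/s^2, Sun's gravitational parameter
c = 2.998e8           # m/s
a = 5.791e10          # m, Mercury's semi-major axis
e = 0.2056            # orbital eccentricity
period = 87.969       # days, Mercury's orbital period

dphi = 6 * math.pi * GM / (c**2 * a * (1 - e**2))   # radians per orbit
arcsec_century = dphi * (36525 / period) * 206265   # 206265 arcsec per radian
print(f"{arcsec_century:.1f} arcseconds per century")   # ~43

Evaluating the formula is easy; explaining in plain English why that term is there is the hard part.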
I always like to find a plain English explanation in terms that non-mathematicians can appreciate. I think it helps improve everyone’s understanding of the underlying physics. You would think that such an explanation would be easy to find, but it is not. Furthermore, the partial explanations and suggestions of possible explanations that I can find on the Internet contradict each other.
My main interest is to try to find out why the General Relativity approach is successful. What new physics is brought to the table? After all, Einstein was not an experimenter. Between 1900 and 1915 he did not discover some new physical object or some new and previously unrecognized force of nature. What he did do so brilliantly was devise a new way of looking at things. With the assistance of Minkowski, Grossman and many others he was able to deploy a whole new set of mathematical tools that have proved to be powerful and useful. But where was the new physics? What, if anything, had earlier great scientists not taken into account? I think the answer to that is as follows.
The new physics in Einstein’s work falls into two groups. The first group builds on the work of Lorentz and Poincaré and was embedded in Special Relativity. It starts by insisting that physics has to be very careful about its reference frames and the meaning of measurements and the effect of movement on the fundamentals of mass, length, time and the speed of light. The new physics is that time dilates, lengths contract and mass increases as systems approach the speed of light, plus the consequence that mass and energy are equivalent. The second group of new physics arises purely from thought experiments on accelerated reference frames. The new physics is that gravity slows down the speed of light and dilates time.
I think the explanation for Mercury’s perihelion precession must come from these new aspects of physics. The fact that curving the geometry of spacetime is a useful mathematical modeling technique is not of itself the source of the explanation.
Conjecture There was an enormous amount of new physics being discovered in the early part of the 20th century, particularly in atomic and nuclear physics. However, I think that the discoveries (perhaps realizations is a better word) relevant to the Mercury orbit issue can be found in Special Relativity and the fact that gravity slows down time.
Let us apply this to Mercury’s orbit. There are three relativistic effects that Newton, Laplace, Cavendish, Le Verrier, Poincaré and other great classicists would not have been aware of and which I think might be relevant to the extra little bit of precession in Mercury’s orbit.
Gravitational time dilation. The Sun creates a huge gravitational well and Mercury, being the planet closest to the Sun, feels the force of the Sun’s gravity most strongly. We now know, and can demonstrate with atomic clocks, that time is slowed down by gravity. Hence a clock at any point on the path of Mercury’s orbit runs more slowly than a reference clock a long way away from the Sun. This effect is the same whether the clock is in orbit or whether it is held in place somehow. So time as experienced by Mercury, i.e. local time, also known as proper time, will be slowed by the Sun’s gravity. Let us suppose that a distant observer is using their own clock to measure the time Mercury takes to get back to the same place it was in the previous orbit. They will assess this as being the period of Mercury’s orbit. But for observers on Mercury, using their own proper time, the value shown on their clocks when this reoccurrence happens will be lower. In other words Mercury will not consider its orbital period to be completed until a little while longer, but by that time it will be a little further around the track.
Relativistic time dilation. What is going on here is a version of the twin paradox. One twin waits at a particular point on Mercury’s orbit, stopwatch in hand, while an identical twin with an identical stopwatch travels around the orbital path. When the twins meet again they find that the travelling twin’s stopwatch gives a lower reading than the stopwatch of the twin who stayed in place. Even though the travelling twin was in free fall and felt no acceleration, and the static twin did feel the pull of the Sun, it is the travelling twin whose clock has the lower reading. The travelling twin is entitled to think that the orbital period has not yet arrived. In fact the whole planet Mercury is entitled to consider its orbit to be not yet completed, requiring just a little more time to do so. But by then it will have travelled just a little further along.
Relativistic mass increase. Special Relativity teaches us that a moving object gains extra mass. And Mercury does travel quite fast, especially near its perihelion. Not nearly as fast as light of course, but still quite fast – an average speed of 170,505 km/hour. So it does gain a bit of extra mass. Now the mass of a body in orbit does not affect the size or period of its orbit – a heavy object falls as fast as a light one. And the angular momentum of a body in orbit (circular or elliptical) remains constant. You can think of an orbit as being stable because the Sun’s pull on the object’s gravitational mass is balanced by the desire of the object’s inertial mass to keep travelling in a straight line. The equivalence of gravitational mass and inertial mass has been demonstrated in quite a few exquisitely sensitive experiments. But where does the relativistic mass increase fit in? Assume that it leads to a stronger pull on the planet from the Sun’s gravity. Then this has to be balanced by an increase in the planet’s speed. There are two ways this can happen – an increase in the rate of travel along the same path, or a small amount of rotation of the entire orbit, in the same direction and plane as the unaffected orbit. The latter is the same as an orbital precession.
Consider the orbit of Mercury to lie in a normal 2-dimensional Euclidean reference plane which is non-rotating (as determined with Sagnac interferometers) and where the centre of the Sun is at the origin. Now add in time. Not a Newtonian time dimension, but rather a set of concentric time zones reflecting the fact that time runs slower closer to the surface of the Sun. At the radius of Mercury’s orbit time is running slow. Speed is distance divided by time, so Mercury’s speed using the correct type of time is actually faster than the Newtonian calculation suggests. However, the orbit does not expand in radius. But it does overshoot a little bit. The effect is to give the whole orbit a bit of a rotation (precession). Meanwhile another phenomenon is also going on. Mercury gains a bit of relativistic mass. Again this does not affect the orbit as such, but it does lead to a bit of extra angular momentum, equivalent to the whole orbit rotating at the rate of precession.
Okay. Maybe this is right or maybe it isn’t. It’s just a thought. I mention it to give an example of the sort of explanation I am looking for. We know the three relativistic effects exist, and we know that they were not included in classical physics. Since they exist they must have an effect. What is that effect and how big is it? Does it account for some or all of the extra precession?
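As a rough sanity check that these effects are at least the right size: both the gravitational and the velocity time dilation at Mercury turn out to be of order 10⁻⁸, while the anomalous precession amounts to about 5×10⁻⁷ radians per orbit. Remembering that an orbit is 2π radians, per-radian effects of order 10⁻⁸ accumulate to a few times 10⁻⁷ per orbit, so we are at least in the right ballpark. The numbers (rounded values, circular orbit approximation):

# Order-of-magnitude check: sizes of the relativistic effects at Mercury.
GM = 1.327e20          # m^3/s^2, Sun
c2 = (2.998e8) ** 2
r = 5.79e10            # m, Mercury's mean orbital radius
v = 4.74e4             # m/s, Mercury's mean orbital speed

print(f"gravitational time dilation Phi/c^2 : {GM / (r * c2):.1e}")     # ~2.6e-8
print(f"velocity time dilation     v^2/2c^2 : {v * v / (2 * c2):.1e}")  # ~1.2e-8

per_orbit = 43.0 / (36525 / 87.969) / 206265    # 43"/century in rad per orbit
print(f"anomalous precession per orbit      : {per_orbit:.1e} rad")     # ~5.0e-7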
So I decided to try to find out whether others had managed to produce a satisfactory explanation of Mercury’s perihelion precession using nothing more than classical physics augmented by Special Relativity and the fact that gravity slows down time. I was also interested to find out whether anyone had switched from complicated mathematical approximation techniques to heavy number-crunching computer-based simulations – another tool not available to classical physicists.
The answer to both questions is yes. Before looking at that I would like to mention some ideas of my own that I have put to one side.
Some Discarded Ideas Cutting floor idea #1: I think we can rule out the relativistic frame dragging effect predicted by Lense and Thirring in 1918. They considered inertial reference frames in the exterior vicinity of a large massive rotating sphere. They discovered an effect a bit similar to the behaviour of a viscous liquid in close proximity to a rotating sphere. There is a dragging effect near the poles and a counter-rotating rolling effect near the equator, a bit like small cogwheels on the rim of a big gearwheel. I’m not sure how this might explain the precession of Mercury and the effects seem far too small in any case.
Cutting floor idea #2: Mercury’s orbit can be decomposed into a circular orbit plus a rhythmic back and forth motion that makes it elliptical. Maybe the back and forth motion is demonstrating the same effect as a Foucault pendulum, which also demonstrates a slow precession. This would require the whole solar system to be rotating slowly with respect to the so-called “fixed stars” (whatever that means). I quite like the idea but it would be too much of a coincidence for the result to be so close to the one that comes from the Schwarzschild solution.
Cutting floor idea #3: In an earlier essay I suggested (seriously and not just as a provocation) that Mach’s Principle holds the solution to the galaxy rotation curve problem, obviating the need to invent imaginary cold dark matter. I questioned the meaning of the idea of “fixed stars” in a post Hubble reality where every galaxy is rotating in a Universe of countless other galaxies. I suggested that localized Lorentzian inertial reference frames in the outer reaches of spiral galaxies will orient partly to the stars in the home galaxy and partly to a background frame provided by all the other galaxies in the Universe. The end result is that stars in spiral galaxies move around the central bulge of their galaxy in their exact Keplerian orbits, in spite of what appears to outside observers to be excessively fast motions. This led me to wonder if a test object near to the Sun might also experience a bit of inertial frame dragging. But I’ve put this idea to one side for Mercury’s precession because I think that the Sun by itself would not produce anything like the size of the effect I was talking about.
An Alternative Description of the Precession? In an incomplete search of online accessible material I came across several papers that claimed to account for Mercury’s perihelion precession using nothing more than Special Relativity. I took heart that there are more people who suspect, as I do, that the full conventional version of General Relativity need not be the only way of explaining and understanding what is going on.
However, as is usual these days, unconventional suggestions struggle to be included in the mainstream literature. There are so many people putting stuff online that busy professors and other science professionals resort to ignoring everything from people they do not know or know of, or which is published in unusual ways or places. Even seasoned theoreticians struggle to gain traction unless they are solving a known problem, or if they can provide assurances that their work complies with all known experimental results and lends itself to some new definitive testing. The usual reaction to some claimed new approach or set of insights is – precisely nothing. Deafening silence. Which I guess is better than condemning the authors as heretics.
Of the papers I did read, the one I liked the best was the following: “Simulation model for anomalous precession of the perihelion of Mercury’s orbit” by Abhijit Biswas and Krishnan RS Mani, Department of Physics, Godopy Center for Scientific Research, Calcutta 700 008, India. Published in the Central European Journal of Physics, 3(1) 2005, pp. 69–76.
Abstract: The ‘anomalous perihelion precession’ of Mercury, announced by Le Verrier in 1859, was a highly controversial topic for more than half a century and invoked many alternative theories until 1916, when Einstein presented his theory of general relativity as an alternative theory of gravitation and showed perihelion precession to be one of its potential manifestations. As perihelion precession was a directly derived result of the full General Theory and not just the Equivalence Principle, Einstein viewed it as the most critical test of his theory. This paper presents the computed value of the anomalous perihelion precession of Mercury’s orbit using a new relativistic simulation model that employs a simple transformation factor for mass and time, proposed in an earlier paper. This computed value compares well with the prediction of general relativity and is also in complete agreement with the observed value within its range of uncertainty. No general relativistic equations have been used for computing the results presented in this paper.
I further quote from the paper “In a bid to evolve a simpler approach to analyze experiments so far conducted to prove General Relativity Theory, our previous paper described a mathematical model and the results of numerical simulation of time delay and light deflection experiments, without using general relativistic equations. These results were in good agreement with the respective values predicted by General Relativity Theory as well as with the relevant recent accurate experimental results. In continuation, the result of numerical simulation of the so-called ‘anomalous perihelion precession’ of Mercury’s orbit is being presented in this paper. It may be mentioned here that no work using a similar approach, for the calculation of the anomalous precession of Mercury, could be found in relevant literature.”
As mentioned, this paper follows one published the previous year in the same journal: “Simulation Model for Shapiro Time Delay and Light Deflection Experiments” by A. Biswas and K.R.S. Mani, Central European Journal of Physics, Vol. 2 (2004), pp. 687–697.
In that paper the authors describe how they modelled the behaviour of light using nothing more than Special Relativity equations. They used numerical simulation to model the behaviour of photons in situations involved in the classic tests of General Relativity and came up with the same results.
The Biswas-Mani approach uses the relativistic mass of Mercury and gravitationally dilated time. It sets up the dynamical system in a conventional way and rearranges the equations to focus on the angular momentum of the system. It then lets computers run the system for a century. Three relativistic effects are revealed and together they add up precisely to the observed result.
Other people can comment more expertly on this work, but to me it seems both reasonable and intriguing.
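For readers who like to tinker, here is a minimal sketch of the general simulation strategy – integrate the orbit numerically with a small correction to the Newtonian acceleration and watch the long axis of the orbit rotate. To be clear, this sketch is entirely my own and is not the Biswas-Mani model (theirs avoids correction terms taken from General Relativity); I simply bolt the standard first-order Schwarzschild factor (1 + 3h²/c²r², with h the specific angular momentum) onto a classical integrator:

# Sketch: integrate Mercury's orbit with the leading relativistic correction
# and measure the rotation of the orbit's long axis.
import math

GM = 1.32712440018e20           # m^3/s^2, Sun
c2 = 2.99792458e8 ** 2
a, e = 5.7909e10, 0.20563       # Mercury: semi-major axis (m), eccentricity

r0 = a * (1 - e)                          # start at perihelion
v0 = math.sqrt(GM * (1 + e) / r0)         # vis-viva speed at perihelion
h = r0 * v0                               # specific angular momentum (conserved)
T = 2 * math.pi * math.sqrt(a ** 3 / GM)  # Kepler period, ~88 days

def accel(x, y):
    # Newtonian acceleration times the first-order Schwarzschild correction.
    r2 = x * x + y * y
    f = -GM * (1 + 3 * h * h / (c2 * r2)) / (r2 * math.sqrt(r2))
    return f * x, f * y

x, y, vx, vy = r0, 0.0, 0.0, v0
dt, n_orbits = 20.0, 2                    # time step (s); orbits to integrate
ax, ay = accel(x, y)
for _ in range(round(n_orbits * T / dt)): # velocity-Verlet integration
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

# The Laplace-Runge-Lenz vector points from the Sun to the perihelion;
# its rotation after n whole orbits is the accumulated precession.
r = math.sqrt(x * x + y * y)
L = x * vy - y * vx
ex, ey = vy * L / GM - x / r, -vx * L / GM - y / r
rate = math.atan2(ey, ex) / n_orbits * (36525 * 86400 / T) * 206265
print(f"precession ~ {rate:.0f} arcseconds per century")   # expect ~43

Run over a couple of orbits this should land within a few percent of the 43 arcseconds per century. The step size matters: the integrator’s numerical drift has to stay well below the 0.1 arcseconds per orbit being measured.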
I also found a more recent effort in the same vein. Two Israeli scientists, Yaakov Friedman and Joseph Steiner, explain that they have been able to model the observed anomalous perihelion shift in Mercury and its origin without curving spacetime. They use what they call their “Relativistic Newtonian Dynamics” model. The general reader can find an introduction on Wikipedia if they Google “Relativistic Newtonian Dynamics”. Those that can access the archives of science paper preprints at Cornell University can find it as arXiv:1603.02560 [physics.gen-ph] (or arXiv:1603.02560v1 [physics.gen-ph]).
Let me quote the first paragraph from the Wikipedia article. “Relativistic Newtonian Dynamics (RND) is an extension of Newtonian dynamics that overcomes its shortcomings by considering the influence of potential energy on space and time using some principles of Einstein's theories of special and general relativity. In its current form, it models the motion of objects with non-zero mass as well as massless particles under the attraction of a time independent conservative force in some inertial frame. Unlike general relativity, RND is not restricted solely to a gravitational potential and does not require the exigency of curving spacetime. Created in 2015, the centennial year of Einstein's general relativity, by Israeli scientists Yaakov Friedman in collaboration with Joseph Steiner, RND predicts accurately both classical and modern tests of general relativity, such as the perihelion precession of Mercury which agrees with the known observed perihelion precession, the periastron advance of a binary star which is identical to the post-Keplerian equation of the relativistic advance of the periastron in a binary, gravitational lensing which is identical to Einstein's formula for weak gravitational lensing using GR, and light travel (Shapiro delay) time delay which agree with the known formula for the Shapiro time delay, confirmed experimentally by several experiments.”
Let me give two quotes from the reference given above: “In this letter we present a new simple relativistic model for planetary motion predicting Mercury’s precession without General Relativity. The energy conservation equation for planetary motion in Newtonian Gravity is rewritten in terms of dimensionless energy and then into a norm equation for the 4-velocity in absolute spacetime. This norm equation is then transformed into the corresponding equation in a spacetime influenced by the gravitational potential - the real spacetime. Introducing the concept of influenced direction, the resulting equation yields immediately the known equation for planetary motion predicted by General Relativity. This model predicts the observed value and also provides an interpretation of Mercury’s anomalous precession from the point of view of relativistic Newtonian dynamics. Finally, we show how to recover the Schwarzschild metric from this equation...”
“The motion of a planet can be decomposed into two periodic motions: the radial motion and the angular rotation. In Newtonian Gravity the periods of these motions are equal, resulting in a non-precessing orbit. Since in reality there is a precession, these two periods are not equal. The reason for this lies in the inaccurate description of the respective velocities (radial and transverse) of these motions by the Newtonian Gravity model ….. In Special Relativity, both the radial and transverse components of the velocity are altered, resulting in unequal periods with relatively small difference between them and hence a small precession. In our model, only the radial component of the velocity is influenced, while the transverse (angular) component is not. This, in turn, accentuates the difference between these periods, resulting in the observed precession.”
As well as the above works, I have found several others that also put forward an explanation of Mercury’s perihelion precession by keeping Newtonian gravity, augmented by relativistic effects, and without resorting to full spacetime curvature.
Conclusion Einstein’s ability to explain the anomalous precession of Mercury’s orbit identified by Le Verrier was an early success for his new theory of General Relativity. It also provides a test involving a large massive object and so is different in nature from the tests involving photons and gravitational waves.
Some people are happy to believe that the result shows that we must live in a reality with curved spacetime geometry. Mercury is simply following a natural path in curved spacetime and something, possibly the eccentricity of its orbit, flicks that path into a bit of a precession. It may surprise them to be told that Einstein never held this view himself. He held that his model was a useful mathematical description of what was going on, not that it was literally true.
To my mind, it was not the new mathematical toolkit of curved spacetime geometry that led to the success but rather the fact that General Relativity incorporated some new physics from Special Relativity, plus the fact that gravity slows down time and the speed of light.
I was pleased to find several papers in the literature that take a similar view.
I think that the advantage of being able to look at a phenomenon such as this from different perspectives is that it assists the understanding of what is going on and may help generate additional insights.
0 notes
Text
25 Gravity, Time and Light 6Sep18
Introduction In essence, Special Relativity is a systematic attempt to describe the physics of things that move fast, based upon postulates about light, and General Relativity is an attempt to include gravity. Hence a good question at the core of it all is “How does gravity affect the speed of light?” You might think this is a simple question, but have a look at the Internet. Confusion reigns!
Sources of Confusion I think a lot of the confusion comes from lack of precision in the question. If Relativity has taught us anything, it has taught us to be wary about simple questions concerning time, distance and speed. We have learnt we must first specify “Who is the observer and what is their situation?” We have learnt that the answers to simple questions about lengths, durations and speeds depend on how and in what circumstances they are measured – everything is relative.
Hence the speed of light in a strong gravitational field as measured by a local observer who is also embedded in that field might be different from a result obtained by a distant observer well away from the massive object creating the field.
It turns out that the speed of light is affected by a gravitational field, but so too (in fact hence) is local timekeeping, and so the local speed of light as measured by that local observer is the same as usual. The slowdown is undetectable. But from a more distant perspective the slowdown in the speed of light does become detectable.
Another source of confusion comes from the interrelated complexity between time, distance and speed. In a world where time can run fast or slow, distances can contract and Euclidean geometry may not hold true, the meaning of measurements and hence the quantification of physical properties becomes treacherous.
A third source of confusion comes from confused people teaching confused or confusing messages to innocent students, often with absolute conviction – e.g. that the speed of light is a universal invariant that is always and everywhere the same.
What is Time? I was watching a science program the other day and the reporter asked a group of astrophysicists attending a theoretical physics conference in Ontario – what is time? Well they um’d and ah’d and said it was a difficult question and so on. They could not give a ready answer in plain English.
So let me have a go. Firstly the word itself covers two sorts of concepts – a way of tagging a river of events which may or may not be linked causally, and durations between events. Or to take a simple example – If you ask a time keeper at a sports event “What is the time?” she may reply “Do you mean the time of day, or the result of some competitor’s performance in an event?”
The underlying concept is causality. If an event A causes an event B then we say that A occurs before or simultaneously with Event B. No-one has ever witnessed causality running backwards, so we assume that it is a strictly one way affair.
When we come across physical phenomena with a repetitive regularity about them, such as the vibrations of a quartz crystal, we can use it to create a useful clock and hence create a measure of local time durations.
If we standardize such clocks to each other we can start to talk about the time more generally, and we can give an elementary answer to the question “what is the agreed exact time of day?”. But this is a man made convention. We have to be careful not to assume that our concepts can be applied well beyond the scope in which they were created. For example, we cannot assume that the whole Universe is embedded in some sort of all embracing river of time with a Universal standard clock somewhere.
Once we start to consider events on a cosmological scale, or in fast moving situations, we have come to understand that our normal day-to-day concepts of time do not suffice. Different observers can measure different time durations for the interval between the same two events. Time can be observed to run slow, not because clocks are distorted but because the finite speed of light means that the very concept of ‘what happens when’ needs to be reconsidered. Over and above that it turns out that time also runs slow in a gravitational field.
So time is nothing more than mankind’s attempt to quantify intervals between events. It is a manmade construct overlaid on reality, nothing more. It has no independent reality. In fact it can be considered to be a widely shared illusion. And a treacherous one at that.
Definitions and Standards of Measurement Let’s look at the simple equation c = D/T where c is the speed of light, D is a measure of distance travelled and T is a measure of time duration. For this to have any meaning we need an agreed way of measuring D and T and we need agreed units of measurement for both D and T. But where to start?
In the modern world our standards of measurement start with time.
Since 1967 there has been an international agreement that a standard second is defined by a set number of oscillations (9,192,631,770 of them) of the electromagnetic radiation (i.e. light) emitted by hyperfine transitions within Caesium 133 atoms held in certain conditions. A standard meter is then defined (since 1983) as the distance travelled by this light in 1/299,792,458 standard seconds.
It then follows axiomatically that the standard speed of light is 299,792,458 meters per second. If nothing else this helps to pin down some terminology. But it does not mean that the actual speed of light will always be the same as the standard speed of light. A trivial example is if the light is travelling through glass. It’s slower. A more complicated example is if the light is travelling across a spiral galaxy.
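The chain of definitions, and the difference between the defined standard speed and an actual speed, fits in a few lines (the refractive index of glass is just a typical round figure):

# The modern chain of definitions: frequency defines the second,
# the second plus a fixed speed of light defines the meter.
cs = 9_192_631_770      # caesium oscillations per second (defines the second)
c = 299_792_458         # m/s, fixed by the definition of the meter

# Light crosses one meter in 1/c seconds, i.e. about 30.7 caesium oscillations.
print(f"{cs / c:.2f} oscillations while light crosses one meter")

# The standard speed is axiomatic, but an actual speed need not equal it:
n_glass = 1.5           # typical refractive index of glass (round figure)
print(f"light in glass: ~{c / n_glass:.3e} m/s")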
Note that this whole approach to the definition of standards could have started in a different way. For example, a standard meter could have been defined as the distance between two scratches on a bar of metal held at a precise temperature in a specified location (e.g. Paris), and a standard second could then have been defined by the time taken for light to travel a set number of meters. Or a standard second could have been defined using an atomic clock and the speed of light could have been left out of it altogether, which is what used to happen before the current system was adopted.
There are three spatial dimensions and only one time dimension, so a democratic approach suggests we should start with defining distance and then move on to define time. Seriously though, length is a lot more observable and tangible than time. We can see and touch and run a ruler over the length of a thing. Time is invisible and intangible.
Timekeeping is always (as far as I can tell) based on motion of some sort, whether this be vibrations in a quartz crystal, the swing of a pendulum or the rotation of a planet. And since motion involves both distance and time, defining time durations based on the motions of things seems a little bit tricky. If time did not exist, how would we know anything was moving? The answer is that we could see things happening – things doing things to other things. Causality at work. But this would offer no guarantees about the nature of time. For example, if everything in the Universe speeded up by 10%, how could we tell?
Furthermore, we know from experiments that time is affected by motion (Lorentzian time dilation) and by gravity (gravitational time dilation). So time is a rubbery phenomenon and in some situations it is a deceptive illusion.
Length is also affected by motion (Lorentzian contraction). This is a tiny effect except in extreme circumstances, but it is nevertheless quite real. It was inferred from experiments on the speed of light and came to be a key feature of the Theory of Special Relativity. But nobody, as far as I know, has been able to demonstrate length contraction in a simple experiment or demonstration. And I have never seen a photo-montage showing a Lorentz contracted object.
It is very difficult to hold up a ruler against a physical object travelling at relativistic speeds in a straight line and to record both ends at exactly the same time. The closest experiments I know of come from studies of high speed collisions between atomic nuclei at the Brookhaven Relativistic Heavy Ion Collider. The heavy nuclei have a non-zero radius and the dynamics of the collisions give the results expected if the nuclei are Lorentz contracted into disks. However, the Brookhaven accelerator is not a linear accelerator and this brings into play the theoretical complications of rotating and accelerated systems (see for example the Ehrenfest rotating disc paradox). Determining the effective radii of the ions is also problematical.
The three spatial dimensions of an object in spacetime are tangible. You can see and touch and measure lengths, widths and heights. You can put a ruler next to them. Time durations on the other hand are anything but simple, especially if the object is moving. You need to specify the situation of the observer very carefully. You need to carry the same clock from one event to the other or else to use a carefully synchronised set of clocks.
Time is a consequential parameter. It is the consequence of causality. At heart maybe the only thing you can be sure of is that if Event A causes Event B, then Event A occurs before Event B. This also creates the Arrow of Time. In other words time is a one way phenomenon. You can never re-measure the exact same time interval, nor can you ever measure a time duration in back to front order.
The usual way to bridge from the world of tangible spatial dimensions to a world that involves time, motion, momentum and energy is to involve the speed of light.
What is an Inertial Reference Frame? After studying the results of experiments by Bradley, Eotvos, Roemer and Fizeau (and presumably Michelson and Morley, which he failed to acknowledge) Einstein simply postulated that the speed of light in vacuum in an inertial reference frame is always the same (299,792.458 km/sec).
By inertial reference frame he meant one which is not accelerating, rotating or in a gravitational field. A frame in which test particles weigh nothing and stay still or travel in straight lines unless compelled by a force to do otherwise.
I think that an inertial reference frame is an idealized concept which is impossible to find in practice. Everything in the Universe is either spinning, accelerating or affected by gravity. It was and still is common to say that an inertial reference frame is aligned to the “fixed stars”. However, no-one ever clarifies whether such stars are in our galaxy or beyond it, or what such stars can possibly have to do with local physics anyway.
In all my reading I cannot find clarity about whether a satellite in orbit constitutes an inertial reference frame or not. The satellite is undoubtedly within a gravitational field or else it could not be orbiting. But the apparent effects of gravity are undone by the fact that the satellite is in free fall. Or you could consider the force of gravity to have been annulled by the effects of centrifugal acceleration. Either way you look at it, test particles inside the satellite will be weightless. So are atomic clocks in this situation subject to gravitational time dilation or not?
I think this is a good question. If the answer is that the gravitational potential at which the satellite orbits does slow down the onboard observers’ clocks, then they can determine whether they are free falling in a gravity field by measuring the frequency of signals received from deep space (a pulsar, say) against their local clock. If the signals are coming in too quickly then their clock is running slow. So they can tell that they are in fact free falling in a gravity field. This violates the Einstein Equivalence Principle, even though some authors will try to wriggle out of it by saying that the experiment is not a local one.
If the answer is no then it suggests that gravitational time dilation only occurs when matter has weight. It also suggests that a centrifugal acceleration can undo gravitational time dilation. Both aspects would be worth deep consideration. There would be interesting implications for the Clock Postulate (see an earlier essay).
As far as I can tell the answer is yes, clocks aboard an orbiting satellite are still subject to a degree of gravitational time dilation, quite apart from Special Relativity effects.
Apart from that an orbiting space station is still a potential candidate to be a localized inertial reference frame. But we have to worry about possible rotational effects.
Sagnac interferometers could be used to detect any spinning of the satellite. If the satellite is managed so that there is no spinning detected in any direction then I guess that the satellite is pretty close to being an inertial reference frame. Now let us look out of the windows of the satellite. It is generally accepted that if telescopes were positioned so that they point at very distant galaxies then those telescopes would remain pointed at those distant galaxies.
But then observers on board the satellite would perceive the Earth going round and round the satellite every orbit. And the Sun and nearby stars would all be going around and around too. So is the satellite spinning or not?
You can see that inertial reference frames are not easy to define in practice!
Einstein and the Speed of Light Between 1905 and 1915 Einstein concentrated on generalizing his description of physics and developed an approach/model that has become known as the Theory of General Relativity. By 1911 he had concluded that in the presence of gravity the speed of light is not a fixed invariant. His model of Special Relativity had to be qualified and elaborated upon. The measured speed of light in a gravitational field becomes a variable depending upon the reference frame of the observer.
His logic is contained in his paper ‘On the Influence of Gravitation on the Propagation of Light’, Annalen der Physik, 35, 1911. This predates the full description of his General Theory of Relativity by four years. The result he came up with was expressed mathematically as c′ = (1 + Φ/c²)·c, where Φ is the gravitational potential relative to the point where the speed of light is measured.
In other words, light appears to travel slower in stronger gravitational fields. There is a more complete description in Section 3 of ‘The Meaning of Relativity’, A. Einstein, Princeton University Press (1955).
In 1915 Einstein revised this calculation to c′ = (1 + 2Φ/c²)·c. In other words he decided the effect was twice as great as he first thought.
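To get a feel for the size of the effect, put the gravitational potential at the surface of the Sun, Φ = −GM/R, into both versions of the formula (standard solar values, rounded):

# Fractional slowing of light at the Sun's surface, as judged from afar,
# for Einstein's 1911 and 1915 coefficients in c' = (1 + k*Phi/c^2)*c.
GM = 1.327e20     # m^3/s^2, Sun
R = 6.96e8        # m, solar radius
c = 2.998e8       # m/s

phi_over_c2 = -GM / (R * c * c)                    # ~ -2.1e-6, dimensionless
print(f"1911 (coefficient 1): {phi_over_c2:.2e}")
print(f"1915 (coefficient 2): {2 * phi_over_c2:.2e}")

So light skimming the Sun runs slow, as judged from afar, by only a few parts per million – yet that is enough to produce the measurable Shapiro delay discussed below.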
Unlike in the inertial reference frames of Special Relativity, the measured speed of light in gravitational fields depends upon the reference frame of the observer. What one observer sees as true, another observer sees as not true, or at least slightly different.
If you wanted to be mischievous you could say that Einstein’s Theory of Special Relativity is based upon his proposition that the speed of light is invariant, and his Theory of General Relativity is based upon his proposition that the speed of light is not invariant.
Time in a Gravity Well We know from the impressive achievements made in recent decades in developing GPS systems that atomic clocks at rest on the surface of the Earth run slower than identical clocks on orbiting satellites.
For GPS to work, atomic clocks on Earth have to be very well synchronised with identical clocks aboard specially designed satellites. There are a variety of relativistic effects in play but the main one is due to the fact that the earthbound clocks are in stronger gravity than the orbiting satellites. The effect of gravitation is slightly reduced by centrifugal accelerations caused by the spin of the Earth. The overall gravitational effect is about 45 microseconds per day.
The gravitational time dilation effect is then adjusted for smaller relativistic effects, the main one being a Special Relativistic time dilation because the satellites are moving fast relative to the earthbound clocks. This offsets the gravitational effect by about 7 microseconds per day, giving a net relative adjustment of 38 microseconds per day.
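Those numbers are easy to reproduce approximately from first principles. A rough check of my own, ignoring the Earth’s spin and orbital eccentricities, and taking the GPS orbital radius to be about 26,570 km:

# Rough check of the GPS clock-rate adjustments (per day).
GM = 3.986e14        # m^3/s^2, Earth
c2 = (2.998e8) ** 2
r_ground = 6.371e6   # m, clock on the Earth's surface
r_sat = 2.657e7      # m, GPS orbital radius (round figure)
day = 86400          # seconds

# Ground clocks sit deeper in the potential well, so they run slow.
grav = (GM / r_ground - GM / r_sat) / c2 * day
# The satellite moves fast (circular orbit: v^2 = GM/r), so its clock runs slow.
sr = (GM / r_sat) / (2 * c2) * day

print(f"gravitational effect: {grav * 1e6:.1f} microseconds/day")        # ~45.7
print(f"velocity effect     : {sr * 1e6:.1f} microseconds/day")          # ~7.2
print(f"net                 : {(grav - sr) * 1e6:.1f} microseconds/day") # ~38.5

The small differences from the official figures come mainly from the Earth-spin correction mentioned above, which this rough check ignores.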
When the satellites were first deployed the scientists in charge were not totally confident how much fine tuning would be required to get perfect synchronisation, so they allowed for a large degree of post launch adjustment. Now they make most of the adjustments before launch.
Of course gravity can have a direct physical effect on clocks. For example, a pendulum clock could not work without it. But that is not what we are talking about here. We are talking about an impact on time itself.
The way I prefer to think about all this is to start with the experimental fact that gravity has an effect on the speed of light. Then I remind myself that the measure of time can be thought of as physical lengths divided by the speed of light. Hence time durations are affected by gravity. And then every physical quantity involving time, notably every form of energy, is also affected.
Shapiro Time Delay The Shapiro time delay effect, or gravitational time delay effect, is now regarded as one of the classic tests of General Relativity. Radar signals passing near a massive object take slightly longer to travel to a target and longer to return than they would if the mass of the object were not present. The time delay is caused by the slowing passage of light as it moves over a finite distance through a change in gravitational potential.
In “Fourth Test of General Relativity”, Physical Review Letters, 13, 789-791, 1964, Irwin Shapiro wrote, “Because, according to the general theory, the speed of a light wave depends on the strength of the gravitational potential along its path, these time delays should thereby be increased by almost 2×10⁻⁴ sec when the radar pulses pass near the Sun. Such a change, equivalent to 60 km in distance, could now be measured over the required path length to within about 5 to 10% with presently obtainable equipment.”
This test was first confirmed by experiments that ‘bounced’ radar signals off the planet Venus when it was just visible on the far side of the Sun as seen from Earth. It has since been measured using Mercury as well, and also using satellites such as the Cassini probe.
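As a sanity check on the size of the effect, here is a short Python sketch using the standard round-trip formula, with assumed round-number orbital radii and the solar radius as the grazing impact parameter:

```python
# Sketch: round-trip Shapiro excess delay for radar grazing the Sun,
# using dt = (4GM/c^3) * ln(4 * r_E * r_V / b^2).
import math

GM_SUN = 1.327e20      # m^3/s^2
C = 299_792_458        # m/s
R_E = 1.496e11         # assumed Earth-Sun distance, m
R_V = 1.082e11         # assumed Venus-Sun distance, m
B = 6.957e8            # impact parameter ~ one solar radius, m

dt = (4 * GM_SUN / C**3) * math.log(4 * R_E * R_V / B**2)
print(f"excess round-trip delay: about {dt * 1e6:.0f} microseconds")  # ~230
```

Comfortably in line with the “almost 2×10⁻⁴ sec” that Shapiro predicted.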
Note that seen from afar the path taken by the photons is a curve in both directions. You might think that this is what makes them take longer, but that is not the best way to think of it. The photons are taking the quickest route possible, but they are still delayed by the presence of the gravity field of the Sun. They do actually slow down in the stronger gravity closer to the Sun.
Light in a Gravity Well If we throw a ball upwards in a gravity field the ball decelerates, comes to a temporary stop at the top of its trajectory, and falls again. If we throw it faster than the Earth’s escape velocity the ball can overcome the overall gravitational attraction of the Earth and fly off into space with a certain amount of residual velocity.
What happens to a photon ejected from the surface of the Sun? Several essays ago we discussed and decided that the photon arrives at its destination detector in a weakened state. By comparison to other photons we can deduce that it has less energy and momentum than when it started and the frequency of its effects upon being absorbed are slower. In other words it reveals that it has become red shifted.
Does this mean that photons must travel slower as they climb higher – just like the ball? No – not at all! In fact the opposite is true (to a tiny extent). In the above section we discussed that the speed of light is faster in a weak field than it is in a strong field, and this is an experimental fact. Therefore the speed of photons (as measured by a distant observer) actually increases as the photons move into a weaker and weaker gravitational field.
This seems paradoxical. The arriving photon is travelling faster when it arrives than when it started, as measured from afar, but it arrives with less energy than when it started.
To understand this I think it is useful to note that the speed of a photon (as observed from afar) has no bearing on its energy level. See my earlier essay about energy remaining the same when photons travel in media with different refractive indices. I think the energy of a photon is embodied in the packet of physical properties it takes with it rather than in the speed of that packet as deduced by an external observer.
So how then does the photon become weaker? And where did the energy that is no longer contained in the photon end up? In the example of the thrown ball, what is going on is that as the ball gains in potential energy it loses kinetic energy until eventually it stops moving for a moment and then starts to fall again. There is a tradeoff between potential energy and kinetic energy. The potential energy can be thought of as being stored in the gravitational interaction between the Earth and the ball.
Much the same thing seems to happen to a photon. As it gains potential energy it loses electro-magnetic energy so that when it arrives it is weaker (i.e. redshifted).
I think this is a partially adequate description of what happens. However, if you want to adopt the Einstein Equivalence Principle as literally true and in some ways a better description of reality, and if you want to replace the greatest force in the Universe with the mathematical trickery of curved spacetime, then you can also explain the result using the language of Doppler shifts related to accelerations in curved spacetime. It also gives the right answer, so it becomes a matter of choice which point of view you want to adopt.
If you do use Einstein’s General Relativity model then note that it is only the perturbation of the time term that is needed in order to come up with the observed results for gravitational redshifts. The full field equations are not needed and there is no need to call upon any warping in the spatial aspects of the spacetime geometry.
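To put a number on the gravitational redshift itself, here is a minimal Python sketch for light climbing from the Sun’s surface to a distant observer, neglecting the Earth’s much shallower well:

```python
# Sketch: gravitational redshift of light leaving the Sun,
# df/f ~ -GM/(R*c^2).
GM_SUN = 1.327e20      # m^3/s^2
R_SUN = 6.957e8        # m
C = 299_792_458        # m/s

z = GM_SUN / (R_SUN * C**2)    # fractional loss of frequency/energy
print(f"z = {z:.2e}")                                # about 2.1e-6
print(f"equivalent Doppler speed: {z * C:.0f} m/s")  # about 636 m/s
```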
Textbook Conventions Textbook explanations of Special Relativity invariably adopt Einstein’s postulate that the speed of light is an invariant constant. Many go further and tidy up all their equations by putting c = 1 and measuring all distances in light-seconds. They then drop c out of all the equations. They also carry over this convention into General Relativity.
However, most of the interesting predictions and effects of General Relativity depend upon the speed of light not being an invariant constant. So (in my opinion) writing c=1 and then omitting it from the equations obscures and confuses the physics of interest. Likewise, defining the speed of light to be exactly 299,792,458 meters per second is confusing unless we call this the standard speed of notional light and allow for the fact that the actual speed of light is slightly different from this in nearly all situations of interest and experience.
Conclusions Light slows down in the presence of gravity and so it is not invariant. But what you measure as its speed depends on how you measure it. A local measurement will not detect any difference. The speed of light is fundamental to the concept, meaning and measurement of time. So unless you can get this sorted out in your own mind, your physics is destined to end up in a muddle. And you would not be alone!
3 notes
Text
24 Gravitational Light Bending 3Sep18
Introduction This essay continues the focus on the collection of experimental results involving light, gravity and General Relativity. It takes a closer look at the fact that the path of light is bent when it passes close to a massive object such as our Sun, and similar phenomena arising when light from distant sources passes close to a massive celestial body on its way to our telescopes.
History of the Bending of Light by Gravity In the latter part of the 17th century, just after the English Civil War, Sir Isaac Newton came up with an expression for the gravitational force between two massive objects. It was hard work - Newton almost had to invent integral calculus to arrive at his result, but it gives beautifully simple results that depend only on the masses of the two objects and the distance between their centers of mass. It then provides a good way to calculate the trajectory of planets around the Sun and these calculations accord almost perfectly with observations.
But what about the path of photons arriving at Earth from a distant star when a massive gravitating object like the Sun gets so close that the photons are forced to graze its edge?
Some people think that Newton’s Law of Gravitation implies that since the photon has no mass, there can be no gravitational force to deflect its path and hence it should not deflect at all.
But this makes several unjustified assumptions. Who says a photon has no mass? It may have no rest mass, but it does have something called relativistic mass. [The term relativistic mass is one which Einstein eventually came to regret, but the concept is still meaningful. Note that the famous Einstein equation is actually E² = p²c² + m²c⁴ and only becomes E = mc² for an object which has rest mass m but no momentum p. A photon (or neutrino) does not have rest mass but does have momentum, so the equation becomes E² = p²c² and hence E = pc. It does not make much difference, but it does reveal that many commentators have only a poor grasp of what they are talking about.]
Experiments on photons also show that E = hf where h is Planck’s constant and f is the frequency of the disturbance created by the photon when it is absorbed/observed, so it is easy to infer the momentum of a photon from its frequency effects when detected. It is also possible to feel the impact of photons when they are detected – (look up “solar sails” for example).
Newton did not know about relativistic mass and hence was silent about it. Nevertheless he thought of light as having a well defined path and robust nature, and hence as being “corpuscular” rather than some sort of diffuse wave. So he fully expected it to be deflected by gravity (see below). In fact he even calculated the extent to which a massive object would deflect a passing “corpuscule”. Laplace went further and calculated the strength of gravity that would prevent light from escaping upwards – anticipating future considerations of black hole physics by about three hundred years.
Even if a photon has no mass of any sort then it is still not safe to assume that it cannot be deflected, for it would have absolutely no resistance to being deflected either. There is a popular conundrum that asks “what happens if an irresistible force meets an immoveable object?” Well the situation with light is a bit like the opposite conundrum – “what happens if a zero force applies to an entity with zero resistance to being deflected?”
Galileo (who died in the year Newton was born) had already realized that the rate of attractive acceleration towards a massive object is independent of the mass or composition of the attracted object. This is now called the Weak Equivalence Principle. Hence, in the absence of air resistance, a feather falls as quickly as a cannonball. Continuing the argument, an atom falls as quickly as a feather and a neutron falls as quickly as an atom. Why then should a neutrino or a photon not fall just as quickly as anything else?
Here is a synopsis of how this issue has been handled over the centuries:
• In 1704, Newton suggests the bending of light as an aside in his treatise, Opticks.
• In 1784, Henry Cavendish calculates the bending of light due to Newtonian gravity but does not publish the result. The evidence of his calculation only surfaced in the 1900s.
• In 1801, Johann von Soldner calculates the bending of light as it passes by a massive object (taking 25 pages to do it!). The calculation uses Newton's theory of light as a stream of corpuscles with an unspecified mass. However, the mass of the corpuscle (photon) drops out of the calculation, and the angle only depends on the mass of the object and the closest approach to it (e.g. our Sun). The angle of deflection turns out to be: angle ≈ 2m/r, where m = GM/c², M is the mass of the object and r is the closest approach distance of the photon to the object. This solution is an approximation, because it is the first term in a series; all of the other terms in the series are much smaller. Von Soldner's calculation is very close to Cavendish's, and to a first-order approximation they are the same.
• In 1911, Albert Einstein published a paper called "On the Influence of Gravitation on the Propagation of Light", which calculated the bending effect of gravity on light using his Equivalence Principle. This calculation was not based on the equations of General Relativity, since these had not yet been developed. It did rely on Einstein’s recent conclusion that gravity must have an effect on the speed of light. Einstein’s calculation in this paper was identical to von Soldner's approximation.
• In 1915, Einstein finished his theory of General Relativity, and developed a full set of ten partial differential equations for the curvature of spacetime in a gravitational field. When Einstein used his full theory and recalculated the deflection of starlight due to the Sun he obtained exactly twice the prediction he published in 1911. The additional bending was due to the curvature of space itself. [In mathematical terms it arises from the perturbation of the spatial components of the 4x4 metric tensor that describes the spacetime curvature.] Note that equal contributions are made by both the space and time perturbations of the metric.
• In May 1919, Sir Arthur Eddington led an expedition (organised with Sir Frank Dyson) to the equatorial African island of Principe, while their colleague Andrew Crommelin led an expedition to Sobral in Brazil, to observe what happened to the apparent position of stars in the constellation of Taurus when the Sun got in the way. They could see these stars because they situated their telescopes in the moving shadow path of the Moon during a total eclipse of the Sun. They reported, and the Royal Astronomical Society announced, that the degree of starlight bending was exactly as predicted by General Relativity. This announcement became headlines in a world hungry for interesting good news and Einstein became a media superstar. Which may have helped him retain his job in Berlin in spite of the rise of anti-Semitism. However, it is interesting to note that there is some question as to whether or not the equipment and results of the 1919 eclipse expeditions really had the ability to conclusively determine the deflections as claimed. It is not a simple experiment.
For example, the Sun’s corona has strong magnetic fields and emits a lot of plasma that can complicate the interpretation of the results, and the observed effect is tiny. It may be that the researchers injected some of their expectations into the reported results. However, subsequent and more robust observations have confirmed the deflection as predicted.
So there you have it. Modern physics interprets the bending of light by a massive object such as the Sun as being due to a combination of factors, neatly modeled by the spacetime solution to Einstein’s field equations for the spacetime region around the Sun.
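For reference, here is a Python sketch of the two headline numbers for light grazing the Sun, using the formulae quoted in the synopsis above:

```python
# Sketch: classical (Newton/Soldner) vs full GR deflection for light
# grazing the Sun: angle = 2m/r and 4m/r, with m = GM/c^2.
GM_SUN = 1.327e20      # m^3/s^2
C = 299_792_458        # m/s
R_SUN = 6.957e8        # closest approach ~ one solar radius, m
ARCSEC = 206_264.8     # arcseconds per radian

m = GM_SUN / C**2      # about 1.48 km
print(f"classical: {2 * m / R_SUN * ARCSEC:.2f} arcsec")  # about 0.88
print(f"full GR:   {4 * m / R_SUN * ARCSEC:.2f} arcsec")  # about 1.75
```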
The textbooks invariably show the stretched rubber sheet picture to suggest what spacetime is like in Einstein’s model. There is a big depression caused by the ponderable mass of the Sun. Light comes in and is deflected slightly because of the curved geometry. A bit like when you just miss a putt in golf. To be frank, it is not an analogy that I particularly admire.
A Heretical Alternative? Since the theme of these essays is to re-examine the foundations of modern physics and try to provoke some fresh thinking about them, I am going to suggest an alternative interpretation.
We already have a good explanation for half of the effect, due to classical luminaries like Sir Isaac Newton, Henry Cavendish and Pierre-Simon Laplace. You can think of it as a kind of scattering effect due to the Sun’s gravity trying to pull the passing photons a bit closer. If you like you can think of the Sun’s gravity working on the mass equivalent of the photon’s energy, resisted by the inertia of the photon’s momentum.
It is the other half of the observed effect that presents the issue. I suggest that the earlier classical calculations do not get the full answer because they do not take into account the fact that gravity also slows down the speed of light. Classical physicists did not have the means to know this. But I suspect that if Isaac Newton had known that light travels more slowly where gravity is more intense, he would have started to think about the phenomenon of refraction.
[Most transparent media have a refractive index higher than that of a vacuum, which we have assigned index value 1. This signifies that light travels more slowly in such media than it does in a vacuum. When light passes from one medium to another at an inclined angle the path it travels bends at the interface (see Snell’s Law, Fermat’s Principle etc).]
Gravity also slows down light so it is not unreasonable to conceive of “gravitational refraction”. We could say that the gravitational index of empty space in the absence of any discernible gravity fields is 1 and that near a black hole it is very high. Elsewhere it takes an intermediate value.
Note that ordinary refraction affects the speeds of photons, but not their energy level. We can deduce this by using a beam consisting of a large number of identical photons and sampling some of them at each stage of their collective journey. When the surviving photons emerge from the refractive medium they have the same energy level as those photons sampled as they attempt to enter the medium.
I suggest the same thing happens with gravitational refraction. Photons enter a region of high gravitational field and are slowed down and deflected (in accord with Fermat’s Principle, overlaid with a Newtonian gravity deflection) but when they emerge they speed up again. The photons emerge with the same energy level as when they went in.
Furthermore I suggest that if the gravitational refraction is added to the normal Newtonian scattering deflection the answer will be as observed experimentally. Curvature of the spatial coordinate system is not required.
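This suggestion can at least be checked numerically. Here is a toy Python sketch of my own, assuming the “gravitational index” follows the 1911 time-dilation-only relation n(r) = 1 + GM/(c²r), and using the standard optics result that the deflection is the integral of the transverse gradient of n along the (nearly straight) path:

```python
# Toy check of the gravitational refraction suggestion. With n = 1 + m/r
# (m = GM/c^2) and u = x/b, the transverse-gradient integral reduces to
#   alpha_refraction = (m/b) * integral of (1 + u^2)^(-3/2) du  (= 2m/b).
import numpy as np
from scipy.integrate import quad

GM_SUN = 1.327e20      # m^3/s^2
C = 299_792_458        # m/s
B = 6.957e8            # impact parameter ~ one solar radius, m
ARCSEC = 206_264.8     # arcseconds per radian

m = GM_SUN / C**2
integral, _ = quad(lambda u: (1 + u * u) ** -1.5, -np.inf, np.inf)  # = 2
refraction = (m / B) * integral     # the refraction half
newton = 2 * m / B                  # the classical corpuscle half

print(f"refraction half:     {refraction * ARCSEC:.2f} arcsec")             # ~0.88
print(f"refraction + Newton: {(refraction + newton) * ARCSEC:.2f} arcsec")  # ~1.75
```

The two halves do indeed add up to the observed 1.75 arcseconds. Whether this is the right way to carve up the physics is, of course, exactly the heresy under discussion.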
Finally I would like to suggest that gravitational light bending and gravitational redshifts are closely related. In the case of gravitational light bending the photons cross the field at high angles to the direction of the gravitational field, but in typical gravitational redshift situations the photons travel more or less parallel to it. If the photons are travelling at an intermediate angle through a gravitational field, I suggest it is reasonable that the result would be a combination of the gravitational redshift effect and the gravitational light bending effect.
A spectrum? I have asked myself whether the gravitational bending of the light should create a spectrum effect. I think this is a reasonable question.
A spectrum effect might be expected if what is going on is similar to an ordinary refraction effect. Consider what happens if a beam of light passes through a medium with a refractive index greater than one, such as a glass prism. I will shun the usual explanation based on what I think is an old fashioned wave model of light and will talk about phots instead.
Consider a beam of light consisting of many phots, all travelling at the same speed in a vacuum. When they encounter a medium with a higher refractive index than 1, such as glass or water, they slow down, sometimes quite considerably. Some textbooks say this is because multiple scattering events make their path longer, but I doubt this is true because well collimated beams of a single frequency, such as from a laser, remain well collimated and are not scattered. If the initial and final surfaces of the prism are perfectly parallel, the incident and emergent phot paths are also perfectly parallel.
If the phots emerge from a transparent medium they continue with the same energy they had before they entered it. This is a further reason to doubt the scattering explanation, for then it is reasonable to suppose that the medium itself would absorb some of the energy from individual phots. What I think happens is that the phots actually do slow down when they are in the denser medium. High energy phots slow down more than low energy phots. It would be interesting to explore the electromagnetic field reasons for this but let us not get distracted.
If the beam of light encounters the new medium at an angle, the path of the phots develops a kink at the interface. If the transition is more gradual the change in direction is a curved bend. Either way the light beam path is refracted. High energy phots are refracted more than low energy phots. The beam is spread out according to the energy of the phots. As far as visible light is concerned, a rainbow spectrum is formed, with red at the top and violet at the bottom.
If the light is passing through a prism say, then when it emerges at the final interface the opposite refraction occurs. Depending on the shape of the prism, the rainbow effect can be undone or not.
The whole process follows something called Fermat’s Principle. Fermat's Principle states that “light travels between two points along the path that requires the least time, as compared to other nearby paths.” From Fermat's principle, one can derive (a) the law of reflection [the angle of incidence is equal to the angle of reflection] and (b) the law of refraction [Snell's law].
In the case of refraction, think about cross country runners encountering a strip of boggy ground that slows down their running speed. If they want to minimize their overall effort and maximize their overall rate of progress, it makes sense to shorten their path across the boggy ground, even if this makes their overall path a little longer.
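Fermat’s Principle is easy to demonstrate numerically. Here is a Python sketch that recovers Snell’s law by simply minimising the travel time of a kinked path across an interface (the geometry and the indices are made-up illustrative values):

```python
# Sketch: Snell's law from Fermat's Principle. Light runs from A = (0, 1)
# in medium 1 to B = (1, -1) in medium 2, crossing the interface y = 0 at
# (x, 0); we minimise the travel time over the crossing point x.
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.5      # illustrative indices, e.g. vacuum into glass

def travel_time(x):
    # travel time is proportional to the optical path length n * distance
    return n1 * (x**2 + 1) ** 0.5 + n2 * ((1 - x)**2 + 1) ** 0.5

x = minimize_scalar(travel_time, bounds=(0.0, 1.0), method="bounded").x
sin1 = x / (x**2 + 1) ** 0.5               # sine of the incidence angle
sin2 = (1 - x) / ((1 - x)**2 + 1) ** 0.5   # sine of the refraction angle
print(f"n1*sin1 = {n1 * sin1:.4f}, n2*sin2 = {n2 * sin2:.4f}")  # equal
```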
Let us return to the case of a photon (to use the conventional term) traversing a gravity field around a spherical celestial object. The more energy the photon has, the more momentum it has, and this might make it more difficult to bend its path. However, no spectrum effect is detected.
The Weak Equivalence Principle gives a naïve explanation. The more energy a photon has, the more momentum it has, and this offsets the bigger deflecting force. However, a modern physicist would probably prefer to say that all the photons, of whatever energy level, are following the same geodesic pathway as determined by the non-Euclidean four dimensional spacetime.
As a photon passes through a strong gravity field at a shallow angle, gravity slows down the speed of light (as viewed from a distance) – more so on the side closer to the Sun than on the other. Hence it is not unreasonable to think that the light will follow Fermat’s Principle and take the quickest overall path through the gravity field, even if this means bending away from its initial direction.
However, unlike ordinary refraction through glass or water, there is no rainbow effect. Gravitational slowdown applies equally to photons of all frequency/energy and the speed of all photons remains the same relative to each other. There is no dispersion and they all bend by the same amount.
Conclusion The bending of light by gravity is generally regarded as one of the key experimental results supporting Einstein’s theory of General Relativity, and its model of a spacetime with curvature in both its time and space components. However, half of the effect was already predicted and explained in terms of classical physics.
In the next essay I will discuss the fact that gravity slows down both time and the speed of light.
We know that slowing the speed of light by passing it through a transparent medium that is not a vacuum causes the path of the light to bend in accordance with Fermat’s Principle. Why then is it not reasonable to consider that slowing down the speed of light by passing it through a gravity field might also cause a degree of bending in accordance with Fermat’s Principle?
Adding the classical gravitational effect to a gravitational time-dilation refraction effect might give a satisfactory explanation for light bending in accord with experimental observation, without calling upon the full spacetime curvature model adopted by Einstein.
0 notes
Text
23 Gravitational Redshift 2Sep18
Introduction Earlier essays in this series of blogs provided a plain English synopsis of the foundations of General Relativity, presented some heretical comments about Einstein’s Equivalence Principle and introduced the Clock Postulate. In this essay I would like to take a closer look at gravitational redshifts. [Note: In this essay I will again be using references from the heavyweight textbook on gravity by Charles Misner, Kip Thorne and John Wheeler (MTW): “Gravitation”, C W Misner, K S Thorne, J A Wheeler, W H Freeman, 1973, ISBN 0-7167-0344-0]
Effects of Gravity on the Propagation of Light The effects of gravity on light are not straightforward. Einstein’s ideas evolved over more than a decade and involved discussions with and by other great scientists such as Max Planck, Hermann Minkowski, Max Born, Willem de Sitter, Max von Laue, Hermann Weyl and John L Synge. Einstein wrote on the subject repeatedly, e.g. in 1907, 1908, 1911, 1915 and 1916. His 1915 paper contained significant corrections to his 1911 formulae. His 1915 work opened with the remark “I return to this theme because my previous presentation does not satisfy me.”
Einstein came to the conclusion that one of the basic tenets of the Special Theory of Relativity - the constancy of the velocity of light – had to be abandoned when gravity is taken into account. Einstein embraced this consequence and made it the basis of a further prediction – the bending of light passing near a massive body. His 1911 prediction for the light bending was the same as that of classical physicists.
In 1912 Einstein’s approach to modeling the effects of gravity on light using only his principle of equivalence ran into problem after problem. This motivated him to begin studying in earnest the differential calculus of Ricci and Levi-Civita as applied to curved geometrical manifolds (involving tensors, co-variant derivatives, Christoffel symbols and the like.)
Einstein revised his 1911 calculation in his 1915 paper and came up with a prediction twice as large as the original estimate. Sir Arthur Eddington famously claimed to have verified this prediction several years later in 1919. [In spite of World War I, Eddington had received a copy of Einstein’s work on General Relativity and he quickly became an early supporter of its ideas. Eddington came up with the idea of measuring the bending of light during a total eclipse and he obtained support from the Royal Astronomical Society to do so. I think this is a nice example of scientific cooperation transcending national hostilities.]
Einstein’s theory of General Relativity eventually contained three effects of gravity on light: 1. Gravity slows the speed of light, 2. photons climbing out of a gravity well arrive redshifted, and 3. gravity bends the path of light.
The degree to which the second and third effects were predicted by Einstein’s 1915 theory became an important test for the new theory, along with the calculation of the anomalous precession of the perihelion of Mercury (see later essays).
As to the redshift, Einstein wrote in a letter to Arnold Sommerfeld in 1912, ”The clock goes more slowly if set up in the neighborhood of ponderable masses. From this it follows that the spectral lines of light reaching us from the surface of large stars must appear displaced towards the red end of the spectrum”.
This makes it clear that Einstein was attributing some or all of the gravitational redshift to the time dilation caused by gravity, which in turn is intimately connected to the speed of light in gravity (for as we discussed earlier, the very meaning of time is intrinsically intertwined with the concepts of distance and the speed of light).
I like these comments by two American writers in 1980 (John Earman and A Glymour): “Einstein’s early derivations of the red shift show his most characteristic style of work - heuristic, allusive, sometimes baffling, but unfailingly fruitful.” They go on to say “Altogether, there may be no other single topic which so vividly illustrates the intellectual ferment, the styles of work, the profundity and the confusion associated with the general theory of relativity.”
We can argue at length about the exact meanings of the language used by Einstein and others to describe the three effects, as many good physicists, mathematicians and philosophers have done for the last century, especially if they are inclined to the modern view that gravity is an illusion created by spacetime curvature.
The more important thing is that the three effects led to new predictions for the size of gravitational redshift and gravitational light bending which became early tests for the new Theory of General Relativity.
The success of Einstein’s new general theory in predicting the size of gravitational redshift and light bending effects has led textbook writers to assert that classical physics has shortcomings that required the genius of Einstein’s curved spacetime theory to correct.
However, I think that the experimental results are what Sir Isaac Newton, Pierre-Simon Laplace and many other great classical physicists over the previous three hundred years had already anticipated to some degree and would not have been surprised at all to see confirmed.
I think that General Relativity is a powerful new approach that brings in a whole new class of mathematical tools and so lends itself to a better description of some small effects in extreme situations. But I also think that if you add Special Relativity and the fact that gravity slows the speed of light to classical physics you can get the same answers. Certainly for the three effects on light (the first result is axiomatic) and possibly also for the anomalous precession in the perihelion of Mercury.
This is what the next few essays are going to examine. Starting with gravitational redshifts.
Gravitational Redshift - The Long Hollow Rocket Imagine that the very tall elevator shaft in the previous essay has become a long hollow rocket which is in deep space somewhere and that this rocket is accelerating at a high constant rate forwards. Imagine there is a source of photons with a very tightly defined frequency range (a laser for example) situated at the back of the rocket and that this has just fired a burst of photons towards the front of the rocket. When this burst of photons was sent on its way, the back of the rocket was travelling at a certain speed. Due to the rocket’s acceleration, by the time the photons reach the front of the rocket the front will have reached a higher speed. In other words, because of the acceleration that takes place while the photons are in flight, there will be a speed difference between the front of the rocket at reception and the back of the rocket at emission.
Detectors at the front of the rocket will find the arriving photons to have lower energy (hence lower frequency and longer (redder) wavelengths) than they had when the photons started. Furthermore, if the arrangement of laser and detectors is reversed, then the detectors when positioned at the back of the rocket will find photons arriving from the laser at the front of the rocket to have acquired extra energy and thus been blue-shifted. It is a straightforward Doppler effect. (There will be infinitesimal Lorentzian effects as well but these can be safely ignored for our purposes).
Einstein’s Equivalence Principle says that the above situation is the same if the rocket is actually a very tall elevator shaft sitting in a uniform gravitational field. Photons fired upwards in the shaft will arrive redshifted and photons fired downwards in the shaft will arrive with a degree of blue shift.
The redshift effect has been confirmed by experiments such as that of Pound and Rebka at Harvard in 1960.
It is possible to persuade ourselves that light must be red shifted in this way using Einstein’s discovery that energy and mass are equivalent to each other, and applying this to a thought experiment (see MTW p187) as follows.
Imagine that a well defined amount of mass falls through gravity and does some work on the way (turning a treadmill for example). It is then entirely converted into photons that are beamed back up to the starting point. Unless these photons lose some energy they could be turned back into the same initial starting mass and the process could be repeated endlessly, performing work on every loop. But this would violate the principle of Conservation of Energy. Hence Einstein reasoned that the photons must lose energy on their way back up to the starting point.
Does Gravitational Redshift Imply Spacetime Curvature? (MTW p 187) “An argument by Schild (1960, 1962, 1967) yields an important conclusion: the existence of gravitational redshift shows that a consistent theory of gravity cannot be constructed within the framework of special relativity”.
(MTW p189) “Schild’s redshift argument … does say … quite unambiguously, that the flat spacetime of special relativity is inadequate to describe the situation, and it should therefore motivate one to undertake the mathematical analysis of curvature.”
The Schild argument builds on the experimental demonstration of gravitational redshift by Robert Pound and Glen Rebka at Harvard University in 1960.
In 1958 a way had been found to use the Mössbauer resonance effect to emit and absorb gamma rays in a very narrow and precise frequency range using solid samples containing Fe-57 (populated by the decay of radioactive cobalt-57). Pound and Rebka made use of this discovery and placed two such samples vertically in a tower at Harvard with a height difference h (and so at a gravitational potential difference of gh in the language of Newton).
General Relativity predicts photons emitted from one sample will no longer be absorbed by the other. But if the absorber is vibrated so that it obtains a range of vertical motions relative to the source, the resulting Doppler effects can restore some absorption.
It is common to see this experimental result described mathematically as ∆t₂ = (1 − (Φ₂ − Φ₁)/c²) ∆t₁, where (Φ₂ − Φ₁) = gh is the difference in gravitational potential.
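The number involved is tiny. A Python one-liner, taking the usual figure of about 22.5 m for the height of the Harvard tower:

```python
# Sketch: fractional frequency shift in the Pound-Rebka experiment,
# df/f ~ g*h/c^2. By the Equivalence Principle this is also the Doppler
# shift in a 22.5 m rocket accelerating at g.
g = 9.81            # m/s^2
h = 22.5            # approximate height of the Harvard tower, m
C = 299_792_458     # m/s

print(f"df/f = {g * h / C**2:.2e}")   # about 2.5e-15
```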
In essence Schild invites the reader to consider a Lorentz reference frame aligned to the Earth and containing an electromagnetic wave generator at one level and a suitable detector at a higher level, both at rest with respect to each other. Schild’s argument goes like this: The bottom generator emits a wave of exactly N cycles of well defined frequency ν in time interval T and this is received by the top detector. The observer at the top detector is asked to determine how long this signal lasts. In flat spacetime the answer should be T, since the top and bottom observers are at rest with respect to each other. However, the signal undergoes a gravitational wavelength change, lengthening as it climbs up towards the top observer. N cycles of a longer wavelength should last longer than T. The conflict can only be resolved if spacetime is curved.
I agree that gravitational redshift occurs in reality, and I also accept that gravitational time dilation occurs in reality. And if the time dimension is significantly slowed by the presence of gravity, then the usual Lorentz-Minkowski flat four dimensional spacetime framework becomes suboptimal for describing what is going on in the physics.
However, I do not accept the argument that the Pound-Rebka demonstration of gravitational redshift proves that it is necessary to invoke curvature in all four dimensions of spacetime because I think the Schild argument is inherently flawed and in any case it would only introduce a degree of flexibility in the time dimension.
For a start, the Pound-Rebka experiment example used by Schild does not take place in a Lorentz reference frame at all. Very few experiments do. Inertial setups are so rare as to be almost non-existent. Orbiting space stations come close but even then there is still rotation relative to the so-called “fixed stars”. But my concern is mainly about the misuse of wave concepts.
What is the ‘wave’ talked about by Schild (as described by MTW)? It seems Schild is thinking about the emissions being electromagnetic waves with spatial properties related to their wavelength and temporal properties related to their frequency. He talks as if the wavelength gets ‘stretched’ in transit between the bottom and the top of the tower. He talks as if the signal has a well defined frequency in time T and hence acts as a type of clock.
But Einstein himself was instrumental in demonstrating that electromagnetic radiation takes the form of photons. These are emitted with a precise energy level. If they have to climb against a gravitational potential then when they arrive they are detected as having less energy. That is the relatively simple experimental fact.
And as I discussed earlier (and summarise in the box below) I think the whole schizophrenic particle/wave duality concept of light is seriously old fashioned and that it can be resolved simply by following the evidence with a fresh and open mind.
I think of the precise electro-magnetic emissions in the Pound-Rebka experiment as being phots. They have no length, they do not wriggle about as they travel and they are not little clocks. They are just packets of energy with some intrinsic properties that are only revealed upon absorption.
Schild is suggesting that the bottom generator emits a wave that has a well defined frequency in time T and therefore acts as some sort of clock. Now it is perfectly possible to incorporate a beat into an overall emission of lots of phots. Just synchronise their phase at time of emission and change this in a structured manner as time progresses. It is what happens all the time in a radio signal. The signal is encoded in the overall pattern. But not in each individual phot.
Do not get mixed up between what happens to the pattern of phots and what happens to the phots themselves, which is what I think Schild (and others) have done. And forget about anything with a finite length being “stretched” somehow. Phots have no length.
All the Pound-Rebka experiment does is demonstrate that phots emitted in precise circumstances at one point in a gravitational field and absorbed at another point in the gravitational field lose energy consistent with the change in gravitational potential. This shows up upon their absorption/detection/destruction as a decrease in the frequency of their embedded signal. If you insist on using the phots as a way of standardising time in a reference frame covering the whole experiment, you are entitled to conclude that time runs more slowly at the bottom of the tower than at the top.
So does a gravitational redshift as demonstrated by a single Pound and Rebka experiment demonstrate that gravity has to be modeled by a fully curved spacetime approach? I do not think so. In fact I think that a gravitational redshift occurring between two specific points in flat spacetime can be understood perfectly well just using classical physics augmented by Special Relativity and the recognition that gravity slows the speed of light (and hence time).
A single gravitational redshift experiment in one particular location is not proof of full spacetime curvature. However, if you consider how to interpret the results of many different redshift experiments spread around a gravitationally perturbed region of the Universe at more or less the same time, then the argument becomes stronger.
Take Earth for instance. If we consider a whole set of Pound-Rebka experiments occurring at different locations around the Earth, then while each experiment might have something to say about its local spacetime environment and local inertial reference frames, the only way to connect all the frames is to admit curvature into the dimensions of spacetime more generally. This is the conventional route to Einstein’s General Relativity.
So I’m saying that while curving all the dimensions of spacetime is not necessary to understand gravity per se, it is still a very useful approach to modeling some subtle effects in physics in gravity fields surrounding massive celestial objects.
How to Interpret Gravitational Redshifting? Pound-Rebka and many others showed that gravitational redshift does occur, and a variety of thought experiments suggest that this is a perfectly reasonable outcome. But how should we interpret the results? It is a debate that went on for decades from about 1911 onwards and it is a question that is still open, although many minds are not.
I will let the photons speak for themselves. Let us turn them into little cartoon characters. On arrival at the top detector the photons could explain what has happened in one of three possible ways:
(1) “Hi, we are from bottomland. Time runs slow down there, so please excuse us if we are a bit slow. We are all slow down there. Apart from that we are exactly like you.”
(2) “Hi, we are your identical counterparts from bottomland. We have had a tough journey and now we find that we have to give up some of our energy in the form of a tax. So please excuse us for being a bit redder than when we started out.”
(3) “Hi, we are your identical counterparts from bottomland. You guys seem to have been accelerating upwards while we were travelling. We can’t climb aboard your detector, even though it is identical to the one that gave us birth, unless you lower the bar a bit and retune it to a lower frequency, or push it towards us to shake off all that extra speed you’ve acquired.”
The first explanation suggests gravity causes time to slow down and any and all processes that involve time to slow down as well. This by itself is enough to distort the time dimension in a four dimensional spacetime reference frame. But it does not say anything about curvature in the three spatial dimensions and hence is not an argument for Einstein’s curved spacetime ‘geometric’ model per se.
The second explanation is akin to a standard Newtonian approach. Newton thought of light as consisting of “corpuscules” and fully expected them to be able to be influenced by gravity. See the following essay about the bending of light by the Sun.
The third explanation of gravitational redshifting uses the Einstein Equivalence Principle to suggest that what is going on is that both the bottom and the top of the tower are being accelerated in curved spacetime. This is what causes the redshift as per our discussion of the long hollow rocket. You could say that this is an explanation in terms of the Doppler effect.
A student of Special Relativity might not be surprised by gravitational redshifting since, if a photon is energy, and energy is equivalent to mass, and mass loses energy when it climbs out of a gravity well, then why would anyone not expect a photon to lose energy also? So such a student might be inclined towards explanation (2).
One of the things that intrigues me about gravitational redshifting is this. If the photons arrive at the top of the experiment with less energy than they started out with – where did that energy go? If the photon were a little rocket, the energy would end up in the heat and kinetic energy of the exhaust gases. If the photon were a solid projectile, the lost energy would be apparent in the gradual loss of kinetic energy. If the photon were pulled upwards by a string, it would clearly come from whatever was winding up the string. But in the case of a photon it starts off with one amount of energy and arrives with an amount that is lower than that of photons being created in exactly the same way but at the ‘higher altitude’. Where did the difference go?
I think that the question goes to the heart of understanding gravity and hence is quite profound. But most lecturers will just say “It has ended up as a reduction in negative potential energy” and leave it at that. Personally I think that this ‘papers over’ a gap in a better understanding of the situation. The same thing happens if you ask questions like – what gives matter its mass? or what gives mass its inertia? or why does a moving object want to travel in a straight line? Just giving physical phenomena names instead of explanations tends to block our minds to deeper understandings.
The first and third of the explanations require a non-flat spacetime reference frame due to distortion in the time duration dimension.
If one of the answers is correct it does not necessarily mean that the others are incorrect. In principle the correct answer might depend upon what point of view you are using. The best answer is then the one that is consistent with the point of view you are using. And the best point of view is generally the one that is the most convenient for your purposes at hand.
Furthermore, it is also possible in principle that the best answer requires a combination of the various explanations. In fact I think it does, as I will explain later.
So what about explanation (3) that interprets the redshift as a Doppler effect? My take on this is that if you want to adopt the Einstein Equivalence Principle, and if you want to replace the greatest force in the Universe with the mathematical trickery of curved spacetime, then this is the explanation you should use. It also gives the right answer, so it becomes a matter of choice.
But the fact that you could prefer Explanation (2) shows that the curved spacetime approach is not the only way to understand the Pound Rebka experiment. And even if you do want to use Explanation (3) you should note that this does not require you to use Einstein’s full model. If you do use Einstein’s General Relativity model then it is only the perturbation of the time term that is needed in order to come up with the observed results for gravitational redshifts. The full field equations are not needed and there is no need to call upon any warping in the spatial aspects of the spacetime geometry.
Gravitational Redshift in the Solar Spectrum Light reaching Earth from the Sun’s surface has climbed a long distance up the Sun’s considerable gravity well, and has fallen into the Earth’s smaller gravity well. The light itself comes from a large number of sources with known spectral frequencies, but these spectral lines are complicated by the extreme thermal motion of the sources, the Sun’s rotation and the Earth’s own motion. All of this creates a blur of Doppler shifts. Nevertheless it is possible to screen and correct for the blurring effects and the resultant redshifts are consistent with the expected result.
Early attempts to measure the gravitational redshift of light reaching Earth from the Sun were plagued by practical difficulties. The earliest results tended to disprove Einstein’s predictions. When Einstein’s fame and reputation soared towards the end of the second decade of the 20th century, the trend reversed and it became more fashionable to produce results corresponding to Einstein’s predictions. Modern results do confirm Einstein’s predictions.
It is interesting that the earlier attempts to explain the spectral shifts in light from the Sun did not use General Relativity. This shows that while on one level Einstein’s general theory won wide acceptance, on another level there was a reluctance to fully adopt the curved spacetime approach. General Relativity was thought of as being impressive and interesting, but also a bit too weird and not to be taken literally.
The Best Way to Interpret Gravitational Redshifts? General Relativity has now become the orthodox paradigm but even today arguments continue about the best way to tie in the experimental evidence about gravitational redshifts. Some authors/teachers prefer one type of explanation, others prefer another.
I’m pretty sure many students find this confusing. But I also think that there may be more than one way to look at it. I do not think that one view corresponds to “reality” and the others are fallacies. They are all just mental models created for our own convenience of understanding.
Here is an analogy. A cone can be seen as a triangle from one perspective and a circle from another. The real nature of a cone transcends both views.
In this spirit I object to those who insist that spacetime curvature is real and gravity is not real. I say that you can use a curved spacetime model if you like, but it is only a model. Likewise you can hold the view that gravity is a real force of nature, but you still also have to recognize the lessons revealed to us by Einstein.
My preferred way of explaining gravitational redshift is as follows. When a photon reaches a zone of space where everything has a higher gravitational potential than things did where the photon came from, it arrives in a new environment. It may have been created in the outer shell of a certain type of atom, but it now finds itself unable to join similar situations in similar atoms in the new environment. It has to pay a tax to be allowed to join in. Its energy wallet no longer buys what it used to. The photon can no longer afford the sort of home it came from. It has to settle for a lower energy type of accommodation. One with lower energy levels. For example, a photon that came from a green home might have to settle for a new home in the red light district.
So if we go back a couple of sections and eavesdrop on our animated photon’s conversation when they arrive, what they are saying is (2) “Hi, we are your identical counterparts from bottomland. We have had a tough journey and now we find that we have to give up some of our energy in the form of a tax. So please excuse us for being a bit redder than when we started out.” And their newfound friends say “Don’t worry about it. It happens to everyone. And you haven’t lost any value – it is just that some of your energy is now embedded into your relationship with your new environment all around. You can have it back again should you return home.”
Conclusion Gravitational redshifts can be described in Einstein’s General Relativity model but it is not necessary to invoke the full field equations and curvature in all of the dimensions in order to do so. The basic effect was predicted well before Einstein using nothing more than Newtonian gravity and Galileo’s Weak Equivalence Principle.
Once Special Relativity is taken into account the phenomenon can still be understood in terms of differences in gravitational potential.
The mystery is why this phenomenon is considered to be one of the proofs that General Relativity is the only valid way to look at gravity. I think that what happens to photons encountering changes in gravitational potential can be described without reference to Einstein’s General Relativity field equations at all.
Here is a bit of basic logic. “If General Relativity is to be a useful model then it must not contradict the evidence of gravitational redshifts.” Let us agree that this is true. Then we also have to accept the contra-positive argument that goes as follows. “If General Relativity contradicts the evidence of gravitational redshifts, then it is not a useful model.” What we do not have to accept is the argument that “If General Relativity is consistent with the evidence of gravitational redshifts, then it is a useful model”. This statement may or may not be true. And we certainly do not have to accept without question the insistence by many modern physicists that because General Relativity is a useful model its method of approach is the only interpretation of Nature that we should agree to be “reality”.
On this I am pretty sure that Einstein would agree.
0 notes
Text
22 Einstein’s Equivalence Principle 2Sep18
Introduction Einstein’s Equivalence Principle is the foundation of his Theory of General Relativity. To start off this discussion, let me first clarify some terms. I find that a good deal of confusion and misunderstanding comes from the ‘tyranny of language’ – terms that have become vague or ambiguous at heart, let alone in the way they are understood and used by different people.
There are several Equivalence Principles and varying expressions for all of them. The basic principle, usually attributed originally to Galileo, asserts that all material objects in free fall undergo the same acceleration when they are in equivalent positions in the same gravitational field, regardless of their mass or composition. In Newtonian terms this can be expressed by saying that the inertial mass of a body is strictly proportional to its gravitational mass, a fact which Newton himself verified (on the surface of the Earth that is) by means of pendulum experiments on a variety of substances. This proposition is often called the Weak Equivalence Principle.
To illustrate my point that physicists often express essentially the same point in different ways, here is another textbook statement of the Weak Equivalence Principle: If a non-spinning test body is placed at an initial event in spacetime and is given an initial velocity there, and if that body subsequently moves freely, then its world line will be independent of its mass, internal structure and composition.
In 1907 Einstein was searching for a way of generalizing the theory of relativity to include the effects of acceleration and gravity and he became impressed by the fact that a person in free fall does not ‘feel’ their own weight. For instance a person falling from a roof is momentarily weightless. This can be seen as a simple consequence of the Weak Equivalence Principle, but Einstein conjectured that it was something more fundamental.
Galileo had argued 300 years earlier that an observer in the interior of a uniformly moving ship would not notice the motion of the ship unless they looked outside. In similar vein, Einstein argued that someone in a closed room which is in free fall in a gravitational field would not be able to detect the surrounding gravitational field. He was aware of course that there would be small effects. For example two cannonballs at rest on the floor would tend to roll together because their trajectories are both aimed at the centre of the gravitational field, but he decided to ignore this for the purpose of the exercise. The usual way around this and associated “tidal effects” is to insist that the closed room be small compared to the gradient or divergence of the gravitational field. In other words the argument is confined to a small localised region of spacetime.
Taking this further, Einstein suggested that all physical processes taking place within such a closed room (and not just the trajectories of material objects) would behave in exactly the same way as if the gravitational field was absent and the closed room was in uniform linear acceleration.
To distinguish this stronger idea from Galileo’s version, I will refer to it as Einstein’s Equivalence Principle. I will use the following definition of it: “Within a small localized reference frame, a gravitational field is indistinguishable from a uniform acceleration of that frame”.
The enormous consequence of this idea is that it becomes possible to eliminate gravity when describing physics in a small localized environment. Just use a coordinate transformation to a suitable free falling reference frame and describe everything from there.
The way of describing the principle used by Einstein varies from textbook to textbook and it has evolved and changed over the century since Einstein first put it forward. For example, here is another version: “There is no experiment that can distinguish a uniform acceleration from a uniform gravitational field - the two are fully equivalent”. In this expression, the phrase “uniform gravitational field” means one which is constant throughout the frame i.e. does not exhibit tidal effects, and in which the motions of all test particles are parallel. I do not like this expression because there are no such gravitational fields, at least none that I can think of. So this sort of approach is off to a poor start and just leads to confusion.
A Mathematical Abstraction In short I consider Einstein’s Equivalence Principle to be a mathematical abstraction – a conjectural hypothesis. An assumption for the sake of a model. An excursion in mathematical imagination.
Scientists and other thinkers do this sort of thing all the time. For example economic models are traditionally based on the assumption that all the agents have perfect knowledge and make rational decisions in pursuit of maximizing their individual welfare. The trouble is that models based on unrealistic assumptions may not always correspond to the real world. Furthermore there is a danger that the thinkers, having constructed a simplified artificial reality, then migrate to that place and choose to remain there, thus losing touch with reality.
I do not think that Einstein’s Equivalence Principle reveals a fundamental insight into the nature of reality. Mainly because it is patently untrue. An observer can always tell if the reading on the weighing scale under an apple is being caused by gravity or by acceleration. They just need to look out of the window. If there is a massive planet or star outside, then some or all of the reading is undoubtedly being caused by gravity. If there is no planet or star and the Universe is accelerating past the window (e.g. as revealed by redshifts of the cosmic microwave background) then what is going on is a uniform linear acceleration. If they can detect energy being expended in applying force to the closed room (such as a lot of fuel being burnt by attached rocket engines) then they are in linear acceleration. If they can detect no such conversion of energy from one form to another, then it almost certain they are at rest in a gravitational field.
Or the observers can simply analyse any incoming starlight. If this reveals increasing or decreasing redshifts or blue-shifts, then the closed room is likely to be accelerating. If the incoming radiation has constant frequency then the closed room is likely to be stationary within its surrounding gravity field.
If a window is not available then sensitive experiments will often do the job instead. A common type of gravity field is the type found in the vicinity of a massive celestial body. These are more or less spherical fields. So place two cannonballs next to each other on the floor of the room and watch them closely. If they roll together then you are in a spherical gravity field, not a uniformly accelerating room. Or hang the cannonballs by rods from the ceiling and observe the angles of dangle very carefully. After correcting for the gravitational pull between the cannonballs themselves, if there is still a slight convergence then you are in a gravity field. Or look for tiny differences in the scale readings taken at the top of the room and at the floor. If the cannonballs weigh slightly more on the floor of the closed room than near the ceiling (i.e. there is a tidal difference), then you are in a gravitational field.
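How big are these signals? For a room at rest on the Earth’s surface (radius R about 6371 km), the convergence angle between two plumb lines a distance d apart is roughly d/R, and the fractional weight difference over a height h is roughly 2h/R. A back-of-envelope estimate in Python (my own illustrative numbers, just to set the scale):

```python
import math

R = 6.371e6   # radius of the Earth, m
d = 1.0       # horizontal separation of the hanging cannonballs, m
h = 3.0       # height of the closed room, m

convergence = d / R          # angle between the two plumb lines, radians
weight_ratio = 2.0 * h / R   # fractional weight change, floor vs ceiling

print(f"convergence ~ {convergence:.1e} rad "
      f"({math.degrees(convergence) * 3600:.3f} arcsec)")
print(f"fractional weight difference ~ {weight_ratio:.1e}")
```

Both effects come out around one part in a million or smaller, which is tiny but well within reach of modern instruments.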
Or simply observe the physics within the closed room over time. If there is ever a change in the rate of acceleration, or it goes away, or there is a moment of deceleration, then it is probably due to some external forces and not to gravity. If the enclosed room is at rest in a gravitational field it is unusual for that gravity field to fluctuate. Which is to say that the scale reading under an apple can be expected to stay steady if you are at rest in a gravity field, and if the scale reading is not steady it is likely that there is some external acceleration going on.
Lastly, if you are suddenly obliterated by the thermonuclear heat of a star, or crash violently into some massive object, then it is reasonable to suspect you were probably in free fall in the gravity field caused by that object. At that stage of affairs you might be excused for thinking that Einstein’s opinion that gravity is an illusion is a little bit academic.
It would be a poor physicist who could not come up with a few ways to detect whether gravity is at work or some sort of uniform linear acceleration.
But, I hear some people say, Einstein’s Equivalence Principle is true if the enclosed room is small enough, i.e. it is true on a small localized scale. And fair enough, it is “truer” on a small localized scale. But it seems a bit rich to establish a fundamental principle on a small localized scale and then to use it as the basis for a theory which is applied on a cosmological scale, and even to the whole Universe.
I think Einstein’s equivalence principle is not a fundamental principle of nature at all – it is just a mathematical assumption for the purposes of contriving a mathematical model in which the effects of gravitation are replaced by a description of physics based upon peculiar and complicated four dimensional reference frames.
Yes, gravity can mimic the effects of a linear acceleration, and vice versa. And yes, you can model gravity in this way to a remarkable extent. And yes, you can imagine that a fundamental force of nature is nothing more than geometry. And a very interesting exercise it is too, with predictions that turn out to be very successful. But don’t get the mathematical modeling completely confused with actual reality or you will probably fail to achieve the next and deeper level of understanding. In other words you will ‘dig yourself into a hole’, ‘paint yourself into a corner’, ‘trap yourself in a cul-de-sac’ and similar expressions. You will be trapped in a box of your own making until an experimental catastrophe rocks your mental paradigm, and even then you will probably deny the evidence or cling vainly to false hopes of rescuing it. Which is exactly what I suspect has happened in relation to the discovery of the Galaxy Rotation Curve problem and the subsequent supposition that galaxies must therefore be filled with exotic cold dark matter.
In my opinion mathematics is the study of patterns and physics is the study of patterns in nature. It therefore follows that there are more patterns in mathematics than in physics and not every mathematical relationship corresponds to a physical reality.
For example, spacetime intervals in geometric models are nearly always expressed as something squared. Taking the square root gives two answers – a positive value and the same value with a negative sign in front. Usually only the positive answer corresponds to a physical reality.
To be fair to Einstein, it is not entirely clear what he meant when he used the word ‘equivalent’. He may have meant ‘is able to be modeled by’ or he may have meant ‘is identical to’.
In a similar vein, he may not in fact have held the view that gravity is fictitious and spacetime curvature ‘real’. This interpretation may have crept in later via populist writers trying to make a bigger splash than necessary. Einstein himself was very cagey about what was meant by ‘real’. I think he was wise enough to stick to asking whether something ‘is a convenient and useful description’.
The Strong Equivalence ‘Principle’ For the sake of clarity I should mention in passing that there is also something called the strong equivalence principle. Here is how Misner, Thorne and Wheeler express it in their classic textbook on the geometric approach to gravity: “... in any and every local Lorentz frame, anywhere and anytime in the Universe, all the (non-gravitational) laws of physics must take on their familiar special relativistic forms.”
I do not know why statements like these are called principles of nature. In fact I think they are presumptuous, dangerous and arrogant assumptions. Puny mankind does a few experiments in its infinitesimal corner of the Universe, in the blink of an eye within the time scale of the Universe, and then pronounces that its findings must hold everywhere, always and forever.
Who of us can say that physics in intergalactic space, or in the middle of globular clusters, or anywhere very different from our own galaxy, or long ago, or at some incredibly distant time into the future might not be very slightly different from what we think we know now?
Might the so-called constants like G or h or c have very slight differences in these circumstances? Who can say that physics in the very early Universe was not slightly different from what it is now? If the overall mass-density of the Universe was higher then than it is now, might not the speed of light have been slower then than it is now?
This is not an academic argument. The light reaching us now from the furthermost reaches of the observable Universe is incredibly old and is showing us a Universe from a much earlier stage in its evolution. We believe we have observed that the rate of expansion of the Universe is increasing, and we are spending a lot of time, energy and resources trying to find the exotic “Dark Energy” that is supposedly driving such expansion.
Might it not be sensible to devote a small fraction of this effort into questioning our assumptions and investigating whether the evidence is trying to tell us something new about their validity? Maybe the speed of light was slower in the early Universe, or maybe the expansion of the Universe causes light in transit to become a bit less energetic. That sort of thing.
Which is not to say that the Strong Equivalence Principle is untrue. It might be true, it might be untrue. But to assume blindly and without question that it is universally and always exactly true closes our minds to other possibilities, ideas and explanations and limits the scope of scientific enquiry.
The Clock Postulate Since physics is the study of patterns in nature, it relies heavily on measurements and these in turn rely heavily on good definitions and standards to give them consistent useful meaning. As discussed in earlier essays, the most problematic measurement is time. Time as thought of by most people is a tricky illusion. It is perfectly possible for two sane observers to be completely at odds about which event came first, or the duration intervals between two events, and for both of them to be right. (See the earlier essay on the Very Fast Train).
There is a more fundamental concept than time, and that concept is Causality. We can say that if event A is the cause of event B, then A precedes B and all observers must agree on this fact. However, when we try to talk about “when did an event take place” or “how long was the duration between two events” we encounter all sorts of problems about clocks, standards, the synchronization of clocks and the effects of different reference frames of different observers.
One thing we have learnt and have Einstein to thank for is that we have to be very careful about our definitions and approach to measurements. Another is that time always involves motion of some sort. Useful clocks generally involve regular motions.
A third thing we have learnt is that moving clocks run slow, not because their movement interferes with their inner workings, but because the speed of light is an invariant constant for all inertial observers. When the consequences of this fact are worked through logically, we get the moving clocks run slow conclusion. It has been observed countless times in careful experiments.
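For readers who want the logic spelt out, the standard light-clock derivation takes only a couple of lines (this is textbook material, nothing original of mine). A light pulse bounces between two mirrors a distance L apart, mounted perpendicular to the motion; at rest the round trip takes t₀ = 2L/c, while for the moving clock the pulse must traverse the hypotenuse:

```latex
\left(\frac{ct}{2}\right)^{2} = L^{2} + \left(\frac{vt}{2}\right)^{2}
\quad\Longrightarrow\quad
t = \frac{2L}{\sqrt{c^{2}-v^{2}}} = \frac{t_{0}}{\sqrt{1-v^{2}/c^{2}}} = \gamma\, t_{0}.
```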
However, Special Relativity focuses on clocks observed in an inertial reference frame. What happens to clocks that are accelerating or rotating in a uniform fashion, or maybe even in a non-uniform fashion? And what happens to clocks that find themselves to be in a gravitational field?
We could engage in a lot of theoretical discussion to come up with the answers. But not everyone would be convinced. Better still, with the benefit of over a century of experiments, we can just refer to the actual hard-core experimental evidence.
Here are some experimental facts widely regarded (even by me) as beyond dispute.
If a clock is moving in uniform rectilinear motion at relative speed v, then it runs slow by the usual Lorentz time dilation factor γ = 1/√(1 − v²/c²). If the clock goes round and round in a circle at this speed, it still runs slow by the same factor. This has been demonstrated over and over again in giant circular particle accelerators. Charged particles travel at very high speeds and are forced to bend into a circular path by powerful electric and magnetic fields. Their rate of decay, which can be considered to constitute a clock, depends on how fast they go and not on whether they travel straight or in a circle.
This has led scientists to come up with something called The Clock Postulate. The Clock Postulate says that even when a moving clock accelerates, the ratio of its rate to the rate of inertial observers’ clocks is still slow by the same factor gamma. That is, the slowdown ratio depends only on v, and does not depend on any derivatives of v such as acceleration, or on any higher time derivatives of v.
So an accelerating clock will count out its time in such a way that at any one moment, its timing has slowed by a factor gamma (γ) that depends only on the current relative speed of the clock. Its acceleration has no effect at all, except of course for the fact that it will change v over time, and hence will change gamma over time.
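In code, the postulated slowdown is a function of speed alone; nothing about acceleration appears anywhere. A trivial sketch (the storage-ring speed is an illustrative value of mine):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v: float) -> float:
    """Lorentz factor. Per the Clock Postulate the slowdown depends
    only on the instantaneous speed v, never on dv/dt."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Muons circulating in a storage ring at roughly 0.9994c 'age' about
# 29 times more slowly, whether their path is straight or circular.
print(gamma(0.9994 * C))   # ~28.9
```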
The clocks discussed here are idealized and should be thought of as perfect measures of time. There is no doubt that taking a simple clock such as a pendulum cuckoo clock and subjecting it to accelerations will disrupt its mechanisms and hence its ability to be a useful measure of time rates and durations.
However, I’m cautious about the way the Clock Postulate is expressed above because I think there is a fundamental difference between rotational accelerations and linear accelerations. Rotational acceleration can be achieved without the input of any extra energy (e.g. a gliding skater can just reach out and grab hold of a post), but linear acceleration requires the input of extra energy. When whatever causes the motion to be curved stops, the body continues in a straight line, with the same momentum and energy as before, albeit in a different direction. However, the cumulative effects of a linear acceleration on momentum and energy levels remain even when the acceleration stops.
When people say that the Clock Postulate has been demonstrated countless times, I think they are referring to rotational accelerations, such as in circular particle accelerators. I will withhold my opinion about linear accelerations until I read of some definite appropriate experimental results. There have been linear accelerators in action for some time, so the results should already be available.
Measuring time intervals in an accelerating system is difficult because it requires us to integrate time between the initial and the final event, and the result will depend on how the observed system got to its final speed as compared to its initial speed. If it started off slow and the rate of acceleration increased, then a different answer will be obtained from the case where it started off fast and slowed down towards the end.
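The bookkeeping itself is straightforward: the elapsed proper time is τ = ∫√(1 − v(t)²/c²) dt along the worldline, so two different speed histories between the same start and end points give different answers. A quick numerical sketch (the two speed profiles are my own toy inventions, chosen only to make the contrast visible):

```python
import numpy as np

C = 299_792_458.0
T = 100.0                          # total coordinate time of the trip, s
t = np.linspace(0.0, T, 10_001)

# Two speed histories, both running from 0 up to 0.8c in the same time T
v_late_rush = 0.8 * C * (t / T) ** 3         # loiters, then accelerates hard
v_early_rush = 0.8 * C * (t / T) ** (1 / 3)  # accelerates hard, then eases off

def proper_time(v: np.ndarray) -> float:
    """tau = integral of sqrt(1 - v^2/c^2) dt, by the trapezoidal rule."""
    f = np.sqrt(1.0 - (v / C) ** 2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

print(proper_time(v_late_rush))    # ~95 s elapsed on the moving clock
print(proper_time(v_early_rush))   # ~78 s: same endpoints, different ageing
```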
Some situations are relatively simple (pun intended). Consider a clock mounted at the rim of a centrifuge rotating at a very fast but constant speed. Compared to a stationary clock in the middle of the centrifuge, the rim mounted clock will run slow by the gamma factor involving just its tangential speed. Its rotational acceleration will not come into it.
One way or another, the physics of accelerating clocks can be handled by Special Relativity alone.
Now let us consider the case of a clock at rest on the surface of the Earth. It is in the gravitational field of the Earth. How does this affect its time keeping? Again let us skip the theoretical arguments and go straight to the experimental facts. The evidence from Global Positioning Systems is that clocks experiencing gravity at the surface of the Earth run slow compared to identical clocks in free-fall orbits around the Earth.
There is a counter effect due to the fact that the orbiting clocks are moving faster than the Earth bound clocks but this effect is only about 15% of the size of the gravitational effect. Earth bound clocks run slow compared to the orbiting GPS clocks by about 45 microseconds per day, offset by about 7 microseconds per day due to the special relativistic Lorentz time dilation for the moving satellite clocks. (There are also some second order effects – see another essay).
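Those two numbers are easy to reproduce from the standard weak-field formulas: the fractional rate offset is ΔΦ/c² for the gravitational part and v²/2c² for the velocity part. A back-of-envelope check in Python (standard constants, my own rounding):

```python
import math

GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8         # speed of light, m/s
R_earth = 6.371e6   # radius of the Earth, m
r_gps = 2.657e7     # GPS orbital radius (~20,200 km altitude), m
day = 86_400.0      # seconds per day

# Gravitational potential difference between orbit and ground
dphi = (-GM / r_gps) - (-GM / R_earth)
grav = dphi / c**2 * day * 1e6                     # microseconds per day

# Special-relativistic offset from the satellite's orbital speed
v = math.sqrt(GM / r_gps)
vel = (v**2 / (2 * c**2)) * day * 1e6              # microseconds per day

print(f"gravitational: +{grav:.0f} us/day")        # ~ +46
print(f"velocity:      -{vel:.0f} us/day")         # ~ -7
print(f"net:           +{grav - vel:.0f} us/day")  # ~ +38
```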
So there you have it – gravity slows time. Whether some of the clocks are in free fall or not, there is a relativistic time slowdown that depends on the gravitational potential difference between one set of clocks and the other.
But doesn’t this create a conundrum or dilemma? If gravity slows time, and Einstein’s Equivalence Principle says that gravity is the same as linear acceleration, then should not a linear acceleration, by itself, slow down clocks over and above just the velocity effect? But the Clock Postulate says that accelerations do not, by themselves, cause any additional slow down.
A student of physics would be excused for thinking as follows. If a clock's rate isn't affected by its acceleration, and if the Equivalence Principle says that a gravitational field is the same as linear acceleration, doesn't that imply that a clock’s rate should not be affected by a gravitational field? And then the fact that a clock’s rate is affected by gravity must imply that either Einstein’s Equivalence Principle is wrong or the Clock Postulate must not apply to linear accelerations.
The texts I have read so far that (to their credit) touch on this issue tend to bluster and bamboozle their way past it. Some resort to using mathematics based on curved spacetime models, but of course this is circular reasoning because we are examining the foundations that led up to the curved spacetime approach in the first place.
What I’d like to do is explore the issue a bit further using experimental facts, thought experiments and logic.
The Very Tall Elevator Shaft Imagine a very tall elevator shaft, and to reduce distractions imagine that the air has been pumped out. Now imagine two identical balloons, each filled with a litre of water. One is lying on the floor of the lift well. The other balloon has just been dropped from the ceiling. Which balloon is accelerating?
The dropped balloon is falling downwards and is picking up an extra 9.8 meters/second of speed every second, while the bottom balloon is at rest. So clearly (?) it is the top balloon that is accelerating.
But which balloon looks like it is accelerating? The top balloon has an almost spherical shape – the same as if it were immersed and weightless in water, or in orbit around the Earth, or drifting about in deep space far from any planet or star. The bottom balloon has a squished and distorted shape with extra strains evident in the balloon’s latex. The bottom balloon is clearly under the influence of some forces, and as we know from Newton’s second law of motion, force equals mass times acceleration.
So the answer isn’t so obvious after all. In fact, Einstein concluded that the top balloon is travelling along a peaceful natural geodesic in curved spacetime. The bottom balloon is trying to do the same, but the floor of the lift shaft is accelerating upwards and this is causing the bottom balloon to also accelerate upwards, with attendant distortion in its shape.
Now imagine that the lift shaft is positioned on the equator of an earth sized planet which is rotating quite fast. This will create an upwards centrifugal force on the bottom balloon and a larger but otherwise similar force on the upper balloon before it is dropped. If the rotation is fast enough there comes a point when the upper balloon will not fall downwards at all if and when it is released. The centrifugal force will balance the force of gravity. Go a bit faster and the bottom balloon will rise up the lift shaft to join the top balloon.
Or keep the rotation rate constant at one revolution per day, but imagine a taller and taller lift shaft. At some point (when the top of the shaft is about 40,000 km from the centre of the planet) the top balloon will start to hover about with no further tendency to fall down. It will have reached a geostationary orbit. It will be weightless. It will be in a state of continuous free fall around the planet itself. You could open the windows and ceiling of the lift shaft and the balloon would tend to remain exactly where it is without any constraints.
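That altitude follows from setting the gravitational acceleration equal to the centripetal acceleration, GM/r² = (2π/T)²r, which gives r = (GMT²/4π²)^(1/3). A quick check with standard values:

```python
import math

GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
T = 86_164.0    # sidereal day, s (one revolution relative to the stars)

# Solve GM/r^2 = (2*pi/T)^2 * r for the orbital radius r
r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"geostationary radius ~ {r / 1e3:,.0f} km")   # ~42,164 km
```

which is the ‘about 40,000 km from the centre of the planet’ quoted above.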
If we compare the rates of clocks at the top and bottom of the lift shafts in these circumstances we already know what the answers will be. There will be a differential gravitational time dilation effect, plus there will be Lorentzian time dilation effects due to the different speeds of the clocks, e.g. relative to a reference frame at rest in the solar system.
But there is a modification to the gravitational effect, which I find intriguing. My favorite text on the issue (written by a GPS expert from the group at the University of Colorado at Boulder who look after the main GPS system) says that the gravitational effect is weakened to the extent that the acceleration due to gravity is offset by any centrifugal accelerations. This is a small correction to gravitational time dilation for the earth bound clocks. But what is the effect on the gravitational time dilation for the orbiting clocks?
I think the implication is that there should be no gravitational time dilation aboard an orbiting satellite. The centrifugal acceleration exactly matches and counters the gravitational acceleration, and this eliminates the gravitational time dilation effect (?) leaving behind only the Special Relativistic time dilation connected to the satellite’s speed through space. The location of the satellite is still subject to gravity of course, and any matter there still has gravitational potential, but the gravitational time dilation effect disappears. Or does it?
This understanding/supposition seems to accord with the Clock Postulate and time dilation effects in circular accelerators on Earth. Here the centripetal acceleration of particles travelling at vast speeds along curved paths is provided by powerful electromagnetic fields, and the observed time dilation is just the relativistic effect due to their speed alone. Presumably there is also a tiny local gravitational time dilation effect, but this would affect both the ‘clocks’ in the accelerator and any nearby reference clocks to an equal extent.
This discussion suggests to me a subtle point which might be crucial to understanding what is going on. Many textbooks suggest that a massive object like a star is surrounded by a spacetime which has been warped by gravity in all of its time and space coordinates (the Riemannian curvature tensor is non-zero in all or nearly all of its components). A clock passing through this reference frame cannot help but have its timekeeping altered. The textbooks and popular science articles say “time itself is slowed down”. However, the evidence suggests that what matters is the actual physical experience of the actual clock. If it is free to respond to the effects of gravity then its path will become curved in normal three dimensional space, but it will not feel any net acceleration and consequently it will not be subject to any gravitational time dilation.
I wonder what the true answer is?
The Two Fat Trains I got the germ of this idea from somewhere in my readings but I can’t remember where, and I don’t know where to put it so I am putting it here. I have thought up a paradox (to me anyway) and will express it in the form of a thought experiment. In keeping with tradition I will present it using two very fast trains.
Imagine a test particle suspended on a wire so that it is situated exactly above the gap between two parallel pairs of very long straight railway lines, about three meters off the ground. Along railway line A comes a massive train travelling at a very, very fast speed, say 20% of the speed of light. On railway line B comes exactly the same sort of train, travelling at exactly the same speed, but in the opposite direction. The two trains pass the test particle at exactly the same moment t, one on one side and the other on the other side heading in the opposite direction. Remove all the air in the vicinity so as to make the experiment simpler (thought experiments are convenient like this).
At moment t, observers on train B see train A travelling at about 38% of the speed of light (a shade under the naive 40%, thanks to relativistic velocity addition). Lorentzian length contraction applies, so in reference frame B, train A has become short and hence rather fat and dense. Now both trains exert some sort of gravitational effect on the test particle. But train A, having become fat, heavier and denser, exerts more of a gravitational effect than Train B. Hence the test particle should swing towards Train A.
You can see where this is going. Observers on Train A see Train B travelling at the same speed in the other direction. So in reference frame A, train B has become short and hence fat, heavy and dense, and hence exerts more of a gravitational effect than Train A. Hence the test particle should swing towards Train B.
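For the record, here are the numbers behind the symmetry, computed with the standard special-relativistic formulas (a minimal sketch, speeds in units of c):

```python
def add_velocities(u: float, v: float) -> float:
    """Relativistic velocity addition, speeds in units of c."""
    return (u + v) / (1.0 + u * v)

def gamma(beta: float) -> float:
    """Lorentz factor for speed beta (in units of c)."""
    return 1.0 / (1.0 - beta**2) ** 0.5

beta_rel = add_velocities(0.2, 0.2)
print(beta_rel)          # 0.3846...: each train sees the other at ~0.385c
print(gamma(beta_rel))   # ~1.083: each sees the other ~8% shorter and denser
# The situation is perfectly symmetric: A says B is contracted and denser,
# and B says exactly the same of A, which is the puzzle posed above.
```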
But the test particle cannot move in opposite directions at the same time. It is a paradox. Actually, on writing it out it seems more of a paradox for Special Relativity than General Relativity.
I haven’t seen this paradox as such, nor have I had the opportunity to discuss it with anyone knowledgeable, so I do not know what other people make of it. My personal opinion is that it is another pointer to my earlier suggestion that the Classical Principle of Relativity might not hold true at relativistic speeds.
I personally suspect that there is a background reference frame oriented to both nearby and far away galaxies, which may or may not be related to what I will call the Q field in another essay, and that it is speed relative to this background that is the cause of Lorentzian contractions.
Whichever train is travelling the fastest relative to this background is the one that will contract the most (as observed by an independent observer at rest in the background frame) and it will be that train that causes the test particle’s net deflection towards itself.
I guess my point is that if Special Relativity suffers from paradoxes like my symmetrical version of the Twin Paradox, and the Paradox of the Two Fat Trains described above, then General Relativity will have similar issues since it is supposed to reduce to Special Relativity when background gravity is absent and spacetime becomes flat.
And while we are on the subject of paradoxes, I have not yet seen a nice simple explanation of the Ehrenfest rotating disc paradox in General Relativity. I have seen complicated curved-spacetime metrics used to address the paradox. One of the more recent attempts produces the conclusion that I think is logically required and rather obvious – the disc stays flat. Which would solve a problem if it is correct. But I think the paradox only arose in the first place due to a weakness in the ability of Special Relativity to handle ongoing rotations.
Summary Einstein’s equivalence principle was instrumental in his thought processes leading up to his development and expression of General Relativity. However, the principle has various expressions. I think some of these expressions are defective and some of the rest are not applicable to the real world.
In spite of this shaky foundation, the predictions of General Relativity are impressive and so it has to be given a lot of credence. Which is not to say that it is the perfect explanation for gravity, inertia and the physics of accelerated or rotating physical systems, or even that it is the simplest and most elegant way of describing physics at all. It might be, but I suspect there is a better, simpler or more complete model and hope that someone will one day come up with it.
0 notes
Text
21 General Relativity Basics 1Sep18
Introduction In the following essays/blogs I am going to make some heretical comments about General Relativity, but before that I’d like to present a summary of its background. The text is a pastiche from numerous sources.
I’d first like to say that I think General Relativity is a work of genius and that I am well aware of its successes in numerous experiments and astronomical observations. Which doesn’t mean that I think it is necessarily the perfect and final explanation of everything.
Einstein as a Young Man To understand the theory I think it helps to understand the man, the times he lived in and what else was going on in physics and maths. But there was such a lot going on that I cannot do justice to it all in a short blog like this essay. The decades from 1860 to 1930 were a golden period in physics, predominantly in northern and central Europe, Ireland, Britain, and the United States. Major advances were made in thermodynamics, electromagnetism and in atomic and nuclear physics. It was also a time of rapid industrialisation and a lot of political turmoil, notably World War I.
Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. (Sidenote: Sir Isaac Newton was born in the year that Galileo died, and Einstein was born in the year that James Clerk Maxwell died).
Albert’s parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich, where Einstein's father and his uncle Jakob founded a company that made electrical equipment. In 1894 the company failed to gain a bid to supply electric lighting to the city of Munich due to insufficient capital. In search of business, the Einstein family moved to Italy, first to Milan and then to Pavia where Hermann Einstein worked on installing the first electrical power plants in the region. Einstein stayed in Munich to finish his schooling, but he resented the school's strict regime and teaching method. So in December 1894, at the age of 15, he travelled to Italy to join his family in Pavia. During his time in Italy Einstein wrote a short essay with the title "On the Investigation of the State of the Ether in a Magnetic Field".
In 1895, at age 16, Einstein took the entrance examinations for the Swiss Federal Polytechnic in Zürich. He failed to reach the required standard in the general part of the examination, but obtained exceptional grades in physics and mathematics. He went to the cantonal school in Aarau, Switzerland to complete his secondary schooling.
In January 1896, and with his father's approval, Einstein renounced his German citizenship. In September 1896, he passed the Swiss Matura with mostly good grades and the highest grades in physics and mathematical subjects. He then enrolled in the mathematics and physics teaching diploma course at the Federal Polytechnic in Zürich. After graduating in 1900, Einstein spent two years searching for a teaching post. He acquired Swiss citizenship in February 1901. With the help of Marcel Grossmann's father, he secured a job at the patent office in Bern as an assistant examiner. In 1902, along with a few friends from Bern, Einstein started a small discussion group that met regularly to discuss science and philosophy. Their readings included the works of Henri Poincaré, Ernst Mach, and David Hume.
In 1903 his position at the Swiss Patent Office became permanent, although he was passed over for promotion until he "fully mastered machine technology". He also married.
In 1905, at the age of 26, he completed his PhD. He sent articles to the most prestigious scientific journal of the period, the Annalen der Physik, and four of his papers were published which became recognized as outstanding contributions to physics. They were on the photoelectric effect, Brownian motion, Special Relativity, and the equivalence of mass and energy. The paper on the photo-electric effect was cited in Einstein’s award of the Nobel Prize for Physics 16 years later.
Einstein postulated that light consists of localized particles (quanta), but the idea was rejected by most leading physicists, including Max Planck and Niels Bohr. This idea only became accepted in 1919 after Robert Millikan performed detailed experiments on the photoelectric effect, and after measurements were carried out on the scattering of light in the phenomenon which we now call Compton scattering.
Einstein’s Theory of Special Relativity leapt ahead of the work being done by great scientists such as Hendrik Lorentz in Holland and Henri Poincaré in France. In essence it established a fundamentally new way to describe physics. It was based on a set of postulates about light and about the meaning of measurement. It culminated in what became the most famous equation in physics, E = mc².
By 1908 Einstein was recognized as a leading scientific thinker. He received offers of employment from various European universities and was eventually appointed lecturer at the University of Bern. The following year, Einstein was appointed associate professor in theoretical physics, aged 30. He became a full professor at the University in Prague in April 1911, accepting Austrian citizenship in the Austro-Hungarian Empire to do so. During his stay in Prague he wrote 11 scientific works, five of them on radiation and the quantum theory of solids.
In July 1912, he returned to his alma mater in Zürich. From 1912 until 1914 he was professor of theoretical physics at the ETH Zurich, where he taught analytical mechanics and thermodynamics. He also studied continuum mechanics, the molecular theory of heat, and the problem of gravitation, on which he worked with mathematician and friend Marcel Grossmann.
In July 1913 Einstein was elected to membership of the Prussian Academy of Sciences in Berlin. Max Planck and Walther Nernst offered him the post of director at a new Institute for Physics. Einstein accepted the move to Berlin just as World War I was beginning. His decision to move to Berlin was influenced by the prospect of living near his cousin Elsa, with whom he had developed a romantic relationship. Einstein joined the academy and thus Berlin University and became Director of the Kaiser Wilhelm Institute for Physics in 1917. He was elected president of the German Physical Society, serving from 1916 to 1918.
Einstein did not work in isolation. Northern Europe was awash with great scientists and mathematicians. In some ways it was a competition between nations and individuals but there was also a lot of cooperation. Lorentz graciously said that Einstein had taken his own efforts to a new level. Lorentz wanted Einstein to succeed him at Leiden University but Einstein went to Berlin and Paul Ehrenfest succeeded Lorentz at Leiden.
One of Einstein’s professors of mathematics, Hermann Minkowski, was surprised when he read Einstein’s paper on Special Relativity, since he was working on much the same thing. In fact, it was Minkowski who invented the concept of spacetime. Max Planck was a consistent supporter of Einstein. Ehrenfest was a good friend. And so on.
Einstein’s success in Germany was in spite of anti-Semitic prejudice. One German physicist, himself a Nobel Laureate in 1905, scornfully called Einstein’s work “Jewish physics”. In 1933 Einstein realised that he could not continue to live in Germany due to the rise of Nazism and he managed to move to the United States. He was always an avowed pacifist.
Origins of General Relativity In one of his books, Einstein recalled: “I was dissatisfied with the special theory of relativity, since the theory was restricted to frames of reference moving with constant velocity relative to each other and could not be applied to the general motion of a reference frame. I struggled to remove this restriction and wanted to formulate the problem in the general case.”
There were other things on Einstein’s mind as well. Lorentz’s successor at Leiden University, Paul Ehrenfest, had posed a paradox about a rapidly rotating disc. In Special Relativity, the radius of the disc is supposed to stay the same but the circumference is supposed to undergo a Lorentz contraction, thus violating the usual Euclidean ratio. This was one of the things that started Einstein wondering if the generalised geometry of four dimensional spacetime was in fact “flat”.
Curved space geometry had been developed by Bernhard Riemann a couple of decades before Einstein was born, and Einstein would have learnt about it at university. So this is another of the key strands involved in the development of Einstein’s approach and thinking.
But perhaps Einstein’s main concerns had to do with inertia and with gravity. Ernst Mach had suggested that the distant stars were somehow responsible for the phenomenon of inertia, since there was a striking correspondence between non-rotation and the reference frame provided by the “fixed stars”. But this suggested action at a distance, which was itself problematical. Newtonian gravity also implied action at a distance, and the force of gravity was anomalous when compared to other known forces in nature, e.g. it was always one-way.
So after publishing his work on Special Relativity in 1905, Einstein started thinking about how to incorporate gravity into his new framework. In 1907, he came up with a simple thought experiment involving an observer in free fall and another observer at rest in a gravitational field. He later said: “The breakthrough came suddenly one day. I was sitting on a chair in my patent office in Bern. Suddenly the thought struck me: If a man falls freely, he would not feel his own weight. I was taken aback. This simple thought experiment made a deep impression on me. This led me to the theory of gravity. I continued my thought: A falling man is accelerated. Then what he feels and judges is happening in the accelerated frame of reference. I decided to extend the theory of relativity to the reference frame with acceleration. I felt that in doing so I could solve the problem of gravity at the same time. A falling man does not feel his weight because in his reference frame there is a new gravitational field, which cancels the gravitational field due to the Earth. In the accelerated frame of reference, we need a new gravitational field.”
Einstein decided to further explore his idea that the observable physics in a gravitational field and in an accelerated reference frame not only looked the same but that they could be described in the same way.
Einstein embarked on what would be an eight-year quest to develop his extended theory. In 1912 he started to use a new branch of mathematics that had been developed in Germany over the previous fifty years – non-Euclidean geometry. This led him to use a curved spacetime model as a way of describing gravity. (It was later theorists who introduced the idea that curved spacetime actually replaces gravity). After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present, and they form the core of Einstein's General Theory of Relativity.
In essence, Einstein’s approach replaced the Newtonian description of gravity as a force acting instantaneously over the distance between massive objects. The Einstein description is now interpreted as suggesting that gravity does not actually exist as such. Rather, the effects of matter/stress/energy are to curve spacetime in such a way that the natural trajectories of objects gives the appearance that they are being acted upon by an external force. Gravity is not regarded as a physical force transmitted through space and time but is instead regarded as an effect caused by the curvature of spacetime. Spacetime is curved and objects moving through space over time follow the “straightest” path along the curve, which explains why their paths appear to be curved.
For example, the Sun bends spacetime around it and the natural paths of the planets around the Sun belong to the family of the shortest closed paths possible (which also require the least amount of energy), resulting in the same elliptical orbits so accurately described by Kepler in the early 17th century.
One of the earliest successes of General Relativity was in being able to provide an explanation for a slow “anomalous” advance (precession) in the axis of the orbit of Mercury around the Sun.
Another success came via a prediction that was soon proved experimentally. When he published his complete theory of General Relativity in 1915, Einstein modified his 1911 prediction of how much bending would occur in the path of light coming to earth from a distant star and just grazing the edge of the Sun’s disc. In effect he doubled it.
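The 1915 value for light grazing the Sun at impact parameter b is δ = 4GM/(c²b), exactly double the 1911 Newtonian-style value of 2GM/(c²b). With standard solar values it is a one-liner to evaluate:

```python
import math

GM_sun = 1.327e20   # Sun's gravitational parameter, m^3/s^2
c = 2.998e8         # speed of light, m/s
R_sun = 6.96e8      # solar radius, m (light just grazing the disc)

deflection = 4 * GM_sun / (c**2 * R_sun)                 # radians
print(f"{math.degrees(deflection) * 3600:.2f} arcsec")   # ~1.75
# Einstein's 1911 half-value would have given ~0.87 arcsec
```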
In 1919, an expedition led by the British astronomer Arthur Eddington went to the west African island of Principe to observe the deflection of starlight by the Sun. It was possible to see stars (which were in the constellation of Taurus) near the Sun because Principe was in the path of a total solar eclipse at the time. Eddington (later Sir Arthur Eddington) was a strong supporter of Einstein and was delighted to report that the observed deflection was exactly as Einstein had predicted.
Einstein became world famous and a media darling. In 1921 he received the Nobel Prize for physics. In a visit to the United States in the same year he received film star treatment and his name became synonymous with “genius”. (In truth, Eddington’s result was not conclusive and Einstein’s predictions were not conclusively verified until several decades later.)
In spite of the media headlines, Einstein’s approach was regarded by scientists of the day as a bit of a curiosity. Neither Einstein’s Theory of Special Relativity nor his Theory of General Relativity was specifically mentioned in the citation for Einstein’s Nobel Prize.
In 1955, aged 76, Einstein died in Princeton from an abdominal aortic aneurysm.
In my opinion Einstein’s extraordinary rate of contribution to science between 1905 and 1915 was not matched by his subsequent achievements, but that does not detract from his great contribution to science. I regard Einstein as the second greatest physicist of all time, surpassed only by Sir Isaac Newton.
Einstein is widely regarded as having created one of the two main pillars of modern physics (the other pillar being quantum mechanics). General Relativity is regarded as an advance over Special Relativity and Newton's much earlier law of universal gravitation.
The Einstein field equations are a set of ten nonlinear partial differential equations and are very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. The curvature of spacetime is mathematically related to the energy and momentum of whatever matter and radiation are present by the Einstein field equations.
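For completeness, the compact modern form of those ten equations (including the cosmological constant term that enters the story below) is:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu},
```

where the left-hand side encodes the curvature of spacetime and the right-hand side the matter, stress and energy content.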
In 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. (I find this all the more remarkable because Schwarzschild was serving on the Russian front and suffering from an auto-immune disease at the same time).
Examples of alleged differences from the predictions of classical theory include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. (I say ‘alleged’ differences because in later chapters I will explore whether or not classical physics can come up with the same predictions.)
The predictions of General Relativity have been confirmed in all appropriate observations and experiments to date, and to a very high degree of accuracy. Furthermore, although General Relativity is not the only relativistic geometric theory of gravity, it is the simplest geometric based theory that is consistent with this experimental data.
However, unanswered questions remain. The one usually mentioned is how General Relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Personally I think that this is a red herring and that there are many other shortfalls that are more important.
The main one (in my lonely and humble opinion) is that General Relativity fails to adequately explain Mach’s Principle and the origins of inertia. Einstein grappled with this all his life and eventually concluded he had not succeeded.
My other main objection is that General Relativity and quantum mechanics between them do not even come close to explaining the motion of stars in spiral galaxies. This is called the Galactic Rotation Curve problem.
Scientists of the 19th century were faced with a theoretical challenge of similar magnitude. They recognized their challenge as being a major issue and called it the Ultraviolet Catastrophe. Einstein and Planck made major contributions to the resolution of the Ultraviolet Catastrophe issue and in doing so they greatly assisted the development of quantum mechanics.
By way of contrast, modern scientists just leapt to an assumption about the solution to the Galactic Rotation Curve issue. They decided that the solution must be the presence of some hitherto unknown, exotic and undetected type of ‘cold dark matter’. They have been vainly looking for cold dark matter ever since. I wonder how many more decades will go by before they achieve enough humility and open mindedness to realize they might be barking up the wrong tree?
Of course if cold dark matter is ever actually discovered then I will take this comment back, with appropriate humble apologies.
General Relativity and Cosmology In 1917, Einstein applied his theory to the Universe as a whole, initiating the field of relativistic cosmology. In line with what everyone else naturally assumed at the time, Einstein assumed that the Universe was unchanging. So he incorporated a new parameter into his original field equations - the cosmological constant - to make sure that any solution to his set of equations could be adjusted to ensure that this was the case.
By 1929, the work of Hubble and others had shown that our Milky Way galaxy is in fact just one of countless galaxies and also that the distances between all these galaxies are increasing. Einstein later said that introducing the cosmological constant had been his “biggest blunder”. However, recent astronomical evidence suggests that the rate of expansion is accelerating and the cosmological constant is now back in favor.
Alexander Friedmann solved Einstein’s equations for a particular model of the Universe in 1922. Georges Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our Universe is envisioned as evolving from an extremely hot and dense earlier state.
With the developments in astronomy between approximately 1960 and 1975, General Relativity began to dominate the mainstream of theoretical astrophysics. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed General Relativity’s predictive powers. Relativistic cosmology became amenable to direct observational tests, and General Relativity fared so well that it slowly became the orthodox paradigm. Another success came when gravitational waves were observed for the first time (a detection made in late 2015 and announced in 2016).
In the current era General Relativity reigns supreme and almost any alternative idea is politely or impolitely ignored. In 1999 Time Magazine selected Einstein as the most significant person of the 20th century. He is popularly considered to be the greatest scientist of all time (without, as far as I can tell, ever having performed an actual physical experiment). Personally I think that honor still belongs to Sir Isaac Newton.
The elephant in the room is that modern cosmology now requires that about 95% of the Universe has to consist of cold dark matter and dark energy. Both of these are fabrications completely outside of the rest of known physics and there is no direct well-confirmed experimental evidence for either of them. I think that something is very wrong somewhere. I strongly suspect we are missing something fundamental, not in the Universe, but in our understanding of the fundamentals of the Universe.
Next Steps In the next few essays I am going to have some fun.
• I will argue that the Equivalence Principle is just a mathematical abstraction – a conjectural hypothesis. An assumption for the sake of a model. An excursion into mathematical imaginings. And that it is patently untrue.
• Then I will look at some of the classic tests of relativity involving gravity and light.
• Next I will challenge Schild’s interpretation of the Pound-Rebka experiments demonstrating and measuring gravitational redshifts. I do not disagree with the conclusion, just the argument itself.
• After that I may look at Einstein’s insistence that there is no universally preferred reference frame. I will present a paradox that I call “Two Fat Trains”. And I will suggest that physics is a lot simpler if we do admit that some reference frames “are more equal than others.”
• Then I want to take a fresh look at the modern attitude to spacetime. I will argue that we have got confused between our mathematical imaginings and reality.
• Next I plan to explore whether there are any ways to merge Newtonian gravity into four dimensional spacetime without going to the full geometric approach. As part of this I am going to search the literature for the sort of model I have in mind (for surely people better than me have tried this already).
• If this comes to fruition I will try to explain how an alternate model might be able to handle all the classic experimental verifications of General Relativity.
• Then it would be very nice to find or predict some experimental or observational outcomes that do not accord with conventional General Relativity but which are explained in the new approach.
• And finally it would be very nice to be able to make some predictions that experimenters could then look for.
It is a Quixotic quest, but I will enjoy it and, who knows, I might even come up with something. Or inspire someone better than me to come up with a breakthrough. Or at least amuse a reader or two.
1 note
Text
20a A Note from the Author - Sep18
I have had a lot of fun re-examining the foundations of modern cosmology and challenging the current paradigms. Whether there is any merit in my new model for electromagnetic radiation, or my modified version of Special Relativity, or my solution to the galaxy rotation curve issue without resorting to imaginary Cold Dark Matter, or my fanciful Q Theory of nearly everything, only time will tell. At the very least I hope to provoke others into questioning what they are taught and to challenge anything that seems not right.
For anyone who may have read the chapters published thus far, I apologize for this 12 month break in putting more chapters online. I can reveal that I continued the journey and have drafted the rest of the story in about another dozen chapters, mainly looking at General Relativity. I now intend to put the remaining chapters online.
My main objective for this journey was my own interest. It turned out so interesting that I decided I would like to share it with other people. I’m hoping that some open minded physicists might be inspired to develop some new insights. My intention is to provoke and challenge, not to pretend I know the answers.
But to have any effect at all people will need to read what I have written. In this era of information overload we are all beset by so much dross that I will be surprised and pleased if anyone invests in reading such esoteric stuff by an unknown author. Assuming they come across it at all.
I am pitching it at the level of bright young undergraduates in physics. To make it accessible I am going to keep using plain English, keep away from fancy mathematics, and keep it light and bright. It contains some solid historical information and signposts to some interesting associated topics. But I have also included some highly original speculations and ideas of my own, just to be provocative. For example my Q Theory of nearly everything.
Call me a skeptic or a heretic, a genius or a fool - I don’t care. But I do think that modern theoretical physics has dug itself into a hole and that it needs to take stock of how it got there. I think that any mainstream model that has managed to lose about 95% of the Universe ought to do the same. Plus of course there are all those awkward questions and paradoxes that just seem to keep on hanging around. “We can fool all people some of the time and some people all of the time … etc.” But let us hope we never stop questioning.
When I have finished putting the remaining chapters online I think I will edit them and then maybe open up the feedback option. I also want to write the whole thing over again. The trouble with an online blog is that people see the last chapters first, thus making the adventure a bit muddled up. Conscious of what the journey uncovered and discovered and invented, I may recast the content into a small number of books.
So here goes. The next few chapters will be taking a skeptical look at Einstein’s General Theory of Relativity and the experimental tests that it has handled so brilliantly. I will reach conclusions that may discomfit and annoy a few people, but may resonate with a few others and may even inspire a subset of those to think deeply about some of the points I raise.
In any case I hope that you, dear reader, enjoy the rest of this journey.
0 notes
Text
20 Q Theory Part 4 – Evolution of The Modern Universe 1Sep17
Introduction
I’ve written this blog to have some fun and present some ideas different from the conventional and orthodox. This is the fourth and final chapter in a creation myth I’ve called Q Theory. It discusses the evolution of the macro features of the Universe. The central theme is that a primordial energy field called Q leads to a Theory of Nearly Everything (TONE).
In this chapter I suggest that the Q field provides the reference frame for linear and rotational inertia and that it is dragged around within huge gravitationally bound structures like galaxies. The idea was presented more seriously in an earlier blog.
We left off last time at the stage where matter was being created and the Universe had become transparent to light ….
Formation of Proto-Galaxies and Stars
The Universe becomes a soup of energetic material. Photons and neutrinos are everywhere, conveying energy from one place to another. Simple nuclei, atoms and molecules have formed.
The Universe develops clouds of low atomic weight atoms. The clouds become bigger and bigger. Some areas have more matter creation and expansion than others. This creates turbulence and rotational swirls in the Q and matter clouds.
Clouds of matter condense like the brown bits in Miso soup. I’m not sure whether to suggest that this is condensation due to gravity plus cooling, or whether the matter is left behind in rafts by the expansion of the Universe, like crusts on an expanding sphere.
The formation of giant nebulae of hydrogen gases is not smooth. There are whorls and swirls in the Q. Maybe these help create the clouds of matter. Or maybe the clouds of matter give rise to the whorls and swirls.
Clusters form and clusters of clusters form. Big clouds pull little clouds apart. There are great streams and sheets and walls and other great conglomerations of atomic matter and dust. And all the time this is happening, the matter within the conglomerations is being drawn into clouds that give rise to stars.
Formation of Stars and Heavier Elements
Within the clumps of matter gravity takes over. Clouds collapse under gravity and become dense and hot enough for nuclear synthesis to begin. Stars come into being.
Some stars feature nuclear reactions that create the full suite of elemental nuclei. I am not sure how these nuclei get their electron shells, but one way or another the thermonuclear processes in stars and subsequent phenomena create all of the first 92 or so elements in the periodic table. The most common elements are the ones that are easiest to make, or the hardest to break.
Some stars merge into each other. Some become white dwarfs. Some white dwarfs explode into supernovae. Some stars become black holes. Some become red giants. It’s all a bit chaotic.
Top Down and Bottom Up Cosmic Processes
As stars form in the swirling miso soup of gases and dust, they start to interact with each other. There are top down processes and bottom up processes.
The top down processes involve the Universe becoming increasingly filled with matter, and expanding, and breaking into pieces as its does so. Huge clouds of stars, gas and dust form and then break apart from each other to form proto-galaxies. These eventually give rise to galaxies. Stars form within the galaxies, or possibly even earlier at the proto-galaxy stage. Many of the stars are drawn into clusters.
Or maybe the last step here is in reverse order. Maybe the big clouds in proto-galaxies break up into smaller clouds and each of these then gives rise to a star cluster.
The characteristic of top-down processes is that big things form first and then give rise to details.
Bottom up processes, which are also largely driven by gravity, see stars forming early on; the stars then arrange themselves in small groups and the small groups eventually pull themselves together into galaxies.
One way or the other, through top down processes and bottom up processes, stars find themselves in clusters, clusters find themselves in galaxies, galaxies find themselves in clusters of galaxies.
A whole fractal “Russian-doll” set of structures emerges. The incredibly huge and intricate tapestry of the Universe evolves, and the process is still continuing.
Formation of Galaxies
All sorts of galaxies form. The most common form of galaxy is a spiral galaxy. This has a flattened bulge or core in the middle surrounded by a disc of stars, often arranged into what look like trailing arms in pairs, and often with a bar or band of stars across the middle.
There are also elliptical galaxies, lenticular galaxies and a range of irregular galaxies.
The galaxies form out of clouds of gases, dust and stars. They range in size from hundreds to millions of light years across. About 60% are spiral galaxies. Most of the rest are elliptical galaxies. There are some lenticular galaxies, which are somewhere in between a spiral and an elliptical galaxy. The remaining 5-10% of galaxies are irregular galaxies.
While the average distance between the stars in a galaxy is tens of millions of times larger than the stars themselves (not counting red giants), the average space between galaxies is only a few megaparsecs (say 3 to 5 million light years). So the average separation between galaxies is only about 50 times the diameter of the average galaxy (about 80,000 light years).
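For anyone who wants to check these ratios, here is a minimal back-of-the-envelope sketch in Python, using the round figures quoted above (the true values vary enormously from star to star and galaxy to galaxy):

    # Rough scale comparisons, in light years (1 ly ~ 9.46e15 m).
    sun_diameter_ly = 1.5e-7        # the Sun is about 1.4e9 m across
    star_separation_ly = 4.0        # typical separation of stars near the Sun
    galaxy_diameter_ly = 8.0e4      # the "average galaxy" quoted above
    galaxy_separation_ly = 4.0e6    # a few megaparsecs, as quoted above

    print(star_separation_ly / sun_diameter_ly)       # ~2.7e7: tens of millions
    print(galaxy_separation_ly / galaxy_diameter_ly)  # ~50: galaxies are crowded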
Imagine an average sized room filled with a hundred soap bubbles to get some idea of the density of galaxies in free space.
All this structure tells us an enormous amount about the Universe. We do not understand a lot of it. Theories abound but mysteries remain.
Elliptical Galaxies
Some astronomers/astrophysicists argue that elliptical galaxies are a result of interactions between spiral galaxies.
A simpler theory is that elliptical galaxies evolve from clouds of gas, dust and stars pulling together, but with low levels of aggregate angular momentum.
Bear in mind that the gravitational pull outside an elliptical galaxy is towards the centroid of the ensemble. However, within clouds of stars, many of the gravitational tugs and pulls work against each other and cancel out.
The stars form a massive n-body cluster, something like a swarm of bees. Elliptical galaxies do not have as high a degree of new star formation as spiral galaxies, so maybe they are descendants of spiral galaxies.
Lenticular Galaxies
The lenticular galaxies seem to be similar to spiral galaxies, but with minimal disc formation.
Irregular Galaxies
Irregular galaxies are understood to be the result of interactions between the other types of galaxies. Astronomers can see many examples of galaxies that are interacting closely with each other or even in the process of passing through each other.
Indeed, the Milky Way is thought to be interacting with a couple of nearby dwarf galaxies and is headed fairly rapidly towards a collision with the Andromeda galaxy in about 4 billion years time.
The Milky Way also seems to contain some particularly large clusters of stars that some astronomers consider to be dwarf elliptical galaxies which the Milky Way may have “swallowed”.
Spiral Galaxies
I think spiral galaxies are particularly interesting because they have a regular but complex shape that potentially tells us a lot about their dynamics and how they were formed. They might even reveal something new and fundamental about large scale physics.
In Q theory spiral galaxies begin when a cloud of gases, stars and dust forms into a discrete clump or band due to the effects of gravity in the turbulent matter creation phase and the resultant chaotic expansion.
The stars pull together and as they do so they start to swirl. They give up angular momentum to the galaxy as a whole. A central axis of rotation emerges. A disc forms. Stars in the halo migrate to the disc and bulge.
Spinning stabilizes a galaxy. Stars find an orbit where the gravitational pull from the rest of the galaxy is balanced by inertial tendencies to move outwards.
As per the virial theorem, the total kinetic energy equals half the magnitude of the total potential energy. It might not look like that, but it's true.
Note that the net gravitational pull is more or less zero in the middle of the galaxy. This can be explained by simple symmetry arguments.
Many spiral galaxies feature a bar of stars and dust lying across most of the disc in a symmetrical fashion. The bars look like two opposing spokes in a wagon wheel, but maybe the bar is not rotating like the spokes of a wheel at all. Maybe the bar is a dynamic effect created by a density wave in the distribution of stars in the disc. Think of a Mexican wave amongst the spectators in a football stadium. The wave travels but the spectators stay put.
The density of stars and gases and dust increases in the density wave and this creates a hot zone for the creation of new stars.
The density wave could be in the direction of the spin and travel faster than the rest of the disc. Or it could be moving slowly, manifesting itself as the rotating disc passes through that particular zone of space.
One theory is that the density wave is created because a lot of the stellar orbits are a bit elliptical and some of the orbits catch up to the ones in front. An attractive aspect of this theory is that it explains the symmetry of the bar – two equal arms in opposite directions.
Another possible avenue of investigation is that spiral galaxies formed when great filaments of matter in the proto Universe broke apart and that the bars are remnants of these spinning filaments.
The Rotation Curve Dilemma
Redshift data suggest that the tangential speeds of stars in spiral galaxies are much higher than expected. Classical dynamics says that, beyond the bulk of the visible mass, tangential speeds should fall off with distance from the centre (as one over its square root). Observations suggest that they do nothing of the sort, and that the graph of rotation speed versus distance from the galactic centre is more or less flat.
This is the rotation curve problem. When astronomers eventually agreed that the data was revealing a significant problem (~1970) they quickly seized upon the idea that the reason must be a lot of hitherto unknown and invisible missing matter in the halos of the galaxies. They called this imagined missing material Dark Matter.
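To make the mismatch concrete, here is a minimal sketch in Python of the Keplerian expectation, assuming (purely for illustration) a galaxy whose visible mass of about 1e11 Suns sits well inside the orbits considered:

    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_visible = 2e41       # roughly 1e11 solar masses, a stand-in for the visible mass
    kpc = 3.086e19         # one kiloparsec in metres

    # Keplerian prediction: v = sqrt(G*M/r), falling as 1/sqrt(r) outside the mass.
    for r_kpc in (5, 10, 20, 40):
        v = math.sqrt(G * M_visible / (r_kpc * kpc))
        print(f"r = {r_kpc:2d} kpc: Keplerian v ~ {v / 1000:3.0f} km/s")

    # Output falls from ~290 km/s at 5 kpc to ~100 km/s at 40 kpc, whereas
    # measured rotation curves typically stay roughly flat (~200 km/s or so).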
Dark Matter
It is easy to understand why theorists were drawn to imagine some hitherto unknown forms of matter must be holding spiral galaxies together. Nuclear physicists were discovering new particles every other year or so and winning lots of Nobel Prizes. Furthermore, inventing invisible fluids has a long tradition in human history – miasmas, spirits, humors, phlogiston etc.
Once the dark matter hypothesis became established, herd effects took over and minds started to close.
However, in Q theory, spiral galaxies have a lot less angular momentum than is commonly believed. This is described in an earlier blog in this series of Heretical Physics.
In Q theory the hypothesis is that inertia is a manifestation of the interaction between matter and the Q field. In a sense, matter is “sticky” within the Q. But the opposite is also true – Q tends to stick to matter.
When an enormous number of stars form into a giant gravitationally bound collection of stars turning slowly in space, they collectively drag some of the Q around with them, a bit like stirring a soup.
At the same time the rest of the Universe and the rest of the Q try to hold the Q field still. The net result is a three dimensional eddy in the Q field, with maximum movement close to the visible edge of the spiral galaxy.
The effect is clearly evident in the rotational velocities of stars as revealed by their red shifts. The stars are actually rotating at their correct Keplerian speeds in their local Q but because the Q is itself being spun around in that direction the stars look like they are going much faster.
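Purely as an illustration of this idea, here is a sketch in Python. The eddy profile v_eddy below is entirely hypothetical, with made-up numbers; it is just one shape such a Q eddy might take. But it shows how a co-rotating Q could prop up the apparent speeds far from the centre:

    import math

    KPC = 3.086e19         # one kiloparsec in metres
    GM = 1.335e31          # G times a visible mass of ~1e11 Suns (m^3/s^2)

    def v_kepler(r_kpc):
        # Standard Keplerian orbital speed (m/s) about the visible mass.
        return math.sqrt(GM / (r_kpc * KPC))

    def v_eddy(r_kpc, v_max=1.0e5, r_edge=15.0):
        # HYPOTHETICAL co-rotation of the Q, strongest near the visible edge.
        return v_max * min(r_kpc / r_edge, 1.0)

    for r in (5, 10, 20, 40):
        apparent = v_kepler(r) + v_eddy(r)   # what the redshift would report
        print(f"r = {r:2d} kpc: apparent v ~ {apparent / 1000:3.0f} km/s")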
If there is any merit in this idea at all, then there may be no need to imagine the existence of enormous amounts of exotic cold dark matter. Which could be a handy bit of theory to have in case the search for Cold Dark Matter continues to be as futile as it is improbable.
Expansion of The Universe (Contd.)
Around 1922, Aleksandr Friedmann applied Einstein’s equations to a model of the universe as a whole, treating it as a fluid with a more or less even distribution of mass and energy. He showed that there could be quite a few solutions to Einstein’s equation in this particular model, including an expanding universe.
In 1925-1929 Hubble showed that there are a lot of galaxies other than the Milky Way. Furthermore, they are all moving away from us at speeds more or less proportional to their distance.
Theorists eventually imagined this expansion model running backwards. They came up with the idea of an initial Big Bang. This eventually held sway over steady state interpretations of the Universe.
Let's consider the expansion more closely.
There is no doubt the Universe is expanding – possibly accelerating even. In the Q Theory creation myth the expansion is not caused by some sort of explosive big bang, but rather by the formation of atoms. Each atom is enormously larger than its constituent material. The formation of particles soaks up Q, but the formation of atoms creates relatively enormous bubbles in the Q. The formation of bubbles drives everything apart, a bit like the leavening agent in a cake.
The whole Universe is a fluffy, foamy pudding! Once matter is given a boost in one direction or another, it keeps on going until acted upon by an external force (usually gravity).
Is it just the space between galaxies that is expanding, or are galaxies expanding along with everything else? It seems to be the former. Gravity seems to be countering the expansion within galaxies.
Q theory would suggest that areas of hydrogen formation should be associated with particularly high dynamic turbulence.
The Big Bang?
It is an interesting mental exercise to try to imagine the whole evolution of the Universe. But, like a maze, it is fraught with blind alleys. Down the end on one such blind alley you will find a piece of science fiction called the Big Bang theory.
A worse name is hard to imagine. It is completely the wrong mental image. There was no time scale and no distance scale in the early stages of the Universe, so the concept of an explosion has no meaning. There was certainly no noise, so the word “bang” is pretty silly.
If everything emerged from a singularity then it would have been an infinitely dense black hole. We now know of thousands of black holes and there is no evidence of any black hole ever exploding.
If the Universe emerged from a singularity then it would have a centre. There is no evidence of any centre to our Universe.
If the expansion of the Universe came from an initial explosion, then it would be a decelerating Universe due to the omnipresence of gravity. There is no evidence of such a deceleration. If anything, the evidence is opposite.
The list of weaknesses goes on and on.
In fact the whole mental approach of somehow standing outside the Universe and watching it evolve as if it were a movie is invalid, possibly even arrogant. Physicists know that there is no absolute time, and no absolute time scale, so why do they stay silent when book writers and journalists pretend there is?
Very Large Scale Patterns
There seems to be a lot of structure in the universe as a whole. Periodicities in the rate of expansion. Large voids. Filaments of super-clusters of galaxies and so on. There is even some suggestion that spiral galaxies tend to line up with great voids in the macro structure of the observable Universe.
There is a project afoot to map the macro features of the galaxies in quite a lot of detail. A heroic and worthwhile project. It will be interesting to see if galaxies are closer together in the earlier/older Universe. It will also be interesting to check whether there are differences between the older galaxies and the galaxies in the later/newer/closer Universe. e.g. in the rate of star formation, proportion of spiral galaxies, metal ratios in old stars etc.
Is the Expansion Accelerating?
Type 1a supernovae occur when a white dwarf star's mass exceeds a certain stability limit (the Chandrasekhar limit), causing it to explode. This creates a 'standard candle' – an object with much the same intrinsic brightness everywhere it occurs. Hence the dimmer such a candle appears to us, the further away it is. This provides a very useful way of estimating the distances of faraway galaxies.
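The reasoning is just the inverse square law. A minimal sketch in Python, with purely illustrative numbers for the luminosity and the measured flux:

    import math

    def candle_distance(luminosity_watts, flux_watts_per_m2):
        # Inverse square law: flux = L / (4*pi*d^2), so d = sqrt(L / (4*pi*flux)).
        return math.sqrt(luminosity_watts / (4 * math.pi * flux_watts_per_m2))

    # A type 1a supernova peaks somewhere near 1e36 W; the flux here is invented.
    d_metres = candle_distance(1e36, 1e-16)
    print(d_metres / 9.46e15, "light years")   # ~3 billion ly for these numbers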
However, there are several other distance scales. One involves a clever set of steps. For nearby spiral galaxies we can measure both the intrinsic brightness (because we know their distances) and how fast they rotate (from the widths of their spectral lines). There is a very good correspondence between rotation speed and intrinsic brightness for a fairly common class of galaxies. This is the Tully–Fisher relationship.
We can measure the apparent brightness of galaxies further away, so if we can estimate their intrinsic brightness somehow (for example from their rotation speeds) and compare the two, we can estimate how far away they are. There are theories for how bright certain galaxies should be. Some of these involve the classic virial theorem.
The virial theorem shows that the total kinetic energy of the particles in any self-bound and stable system has a fixed ratio to the magnitude of the total potential energy (one half, for gravitationally bound systems). (If the kinetic energy gets bigger than this, the whole system simply gets bigger, reducing the kinetic energy while increasing the total potential energy, thus restoring the balance.) This theorem can help in estimates of the size of distant galaxies.
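A quick numeric check of the virial ratio for the simplest case, a small body on a circular orbit around a Sun-like mass:

    import math

    G, M, m = 6.674e-11, 1.989e30, 1.0e24   # Sun-like primary and a small orbiter, kg
    r = 1.496e11                             # one astronomical unit, metres

    v = math.sqrt(G * M / r)    # circular orbital speed
    K = 0.5 * m * v**2          # kinetic energy
    U = -G * M * m / r          # gravitational potential energy

    print(K / abs(U))           # exactly 0.5: K is half the magnitude of U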
If we compare the surface temperatures (colours) of stars with their brightness, most stars fall along a band of statistical correlation that is called The Main Sequence. We can match this to our theories of the creation, evolution and death of stars. We can test these theories by the nuances in the electromagnetic radiation that stars send in our direction, and wherever we can see them interacting with each other and with surrounding dust.
There is a special class of stars called Cepheid variables whose intrinsic brightness is tightly correlated with the period of their pulsations. We can observe the duration of these pulses and also the apparent brightness. Comparisons then give relative distances.
The light emitted from known sources (such as the spectral lines of hydrogen) has definite wavelengths. So if the observed wavelengths are longer, we can conclude that the sources were moving away from us at the time of emission (Doppler redshifts).
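A minimal sketch of the calculation, using the hydrogen-alpha line and an invented observed wavelength:

    c = 2.998e5   # speed of light, km/s

    def redshift(lambda_observed, lambda_emitted):
        # z = (observed - emitted) / emitted; for small z, recession speed ~ c*z.
        return (lambda_observed - lambda_emitted) / lambda_emitted

    # Hydrogen-alpha is emitted at 656.3 nm; suppose we observe it at 662.9 nm.
    z = redshift(662.9, 656.3)
    print(z, "->", c * z, "km/s")   # z ~ 0.01, roughly 3000 km/s of recession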
Studies of nearby galaxies suggested a constant rate of expansion. However, by the 1970s the estimate for this rate still varied by ±50%. Observations from the Hubble Space Telescope (fittingly) helped to pin the answer down to something close to 71 km/sec per megaparsec of (proper) distance away from us.
Comparisons with other methods and models further tightened the measurement of this Hubble expansion parameter.
At the turn of the 21st century, studies of the brightness of distant supernovae became possible using space telescopes. Comparing the distance estimates to redshift data threw up a surprise. Using the best estimates of the Hubble constant and the observed Doppler redshifts of the supernovae gives a measure of their distance. But this conflicted with the calculations using their dimness. The supernovae were about 15% dimmer than expected.
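Since flux falls off as the square of distance, a 15% dimming translates into a modest distance discrepancy. A couple of lines make the size of the effect clear:

    # Flux goes as 1/d^2, so a supernova 15% dimmer than expected sits
    # further away than its redshift-based distance suggested.
    dim_factor = 0.85           # observed flux / expected flux
    print(dim_factor ** -0.5)   # ~1.085: about 8.5% further away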
Conversely, if the dimness data was used to calculate distance, then the redshift/Hubble model calculations were wrong. Either the Hubble “constant” must be bigger in recent times than it was in older times, or the light from the distant supernovae has somehow become less red. Or something totally unexpected is going on.
The answer most astronomers/cosmologists prefer is that the rate of Hubble expansion was slower in the earlier/older (and hence now further away) time than it is “now”. In other words the rate of expansion of the Universe has increased since the light from those supernovae started travelling towards us.
Hence the need for theorists to conjecture how this might occur. Once again they leapt to the conclusion that some hitherto unknown substance was the culprit. They called it Dark Energy. The Dark Energy is supposed to be increasing the “stress/energy pressure” in the Universe, thus causing the expansion to accelerate.
However, the whole reasoning process contains a lot of assumptions, both hidden and explicit. An accelerating expansion might not be the only possible conclusion.
It is much too early to be absolutely positive that the expansion of the Universe is accelerating. So I think it is premature and foolish for all the world's scientific talent to get on board the mental bus called “Dark Energy”, shut the door behind them and exclude any non-believers.
The whole issue should be called the “Standard Candle Redshift Challenge.” It is a marvelous opportunity to learn something new.
Q Theory and the Accelerating Universe Conjecture
Q Theory is compatible with an accelerating expansion if the process of atom formation is continuing. In Q theory, the formation of atoms (mainly hydrogen) creates the pressure that drives the expansion.
However, Q theory likes to question hidden assumptions. Take the speed of light for instance.
Every known experiment to measure the speed of light in an inertial reference frame in a vacuum gives the same answer. But all such experiments are here and now. Who is to say that the speed of light has always been the same? If the speed of light was slower in an older (denser?) Universe, say 10 billion years ago, then all bets are off.
Furthermore, it seems that in spite of a century of special relativity, most humans keep on imagining that they can somehow imagine the evolution of the Universe as some sort of movie. But a movie has a steady external timescale. The Universe does not.
All we humans can see is a set of spherical photos of the Universe around us, each one further away and older. The frames do arrive at what appears to be a steady rate and in an orderly sequence, but as a movie they leave much to be desired because most of the events out there happen on timescales of millions of years. If we waited a few billion years we could watch some of the grander events unfold. But few of us are that patient!
Relationship of Q to Rotating Matter
Q gives matter both its inertia and its mass. All matter tries to maintain a stable consistent relationship with Q. The converse is also true. Q tries to maintain a stable consistent relationship with matter. However, it is an unequal contest. There is a lot of Q.
Except in a few astronomical situations – inside and near a massive rotating object, and inside a massive rotating system. What happens here is that the massive amounts of matter slowly and gently stir the Q.
The consequence of this is that the reference frame for inertia changes. For an object to be at rest near a massive rotating object, it has to find a compromise between the slight stirring effect of the nearby massive rotating object and the resistance to this from the Q everywhere else.
The effect may show up in the anomalous precession of the perihelion of Mercury. Although this can be modeled by imagining that spacetime is curved, in Q theory part of the advance of the perihelion of Mercury's orbit around the Sun may be due to a slight dragging of the inertial reference field by the Sun's own rotation.
This produces an immediate test for Q theory versus General Relativity. Q theory predicts that if Mercury went the other way around in its orbit then the perihelion precession effect would work backwards. In General Relativity it would still precess in the direction of the orbit. The author offers a $50 bet at even money on this, with the outcome to be decided by experiment.
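For reference, the standard General Relativity prediction can be computed from textbook values for Mercury's orbit. (The direction of the precession is the point in dispute above; the magnitude is not.)

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30         # mass of the Sun, kg
    c = 2.998e8          # speed of light, m/s
    a = 5.791e10         # semi-major axis of Mercury's orbit, m
    e = 0.2056           # eccentricity of Mercury's orbit
    T = 87.969           # orbital period, days

    # GR precession per orbit, in radians: 6*pi*G*M / (c^2 * a * (1 - e^2))
    dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))

    orbits_per_century = 36525 / T
    arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
    print(arcsec_per_century)   # ~43 arcseconds per century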
Summary
In summary, Q Theory tries to conjure up the whole Universe out of one primal substance, and in the process address many very fundamental questions in physics. There is deliberately no timeline, but there are multiple stages:
Q starts to become “denser”. Positive and negative grains form but are unstable.
Annihilations produce neutrinos.
Neutrinos spin-stabilise the grains and they form electrons and positrons.
The interaction of matter with Q gives rise to inertia and to gravity.
Inertia and gravity give matter its mass.
Other properties of Q give rise to basic dynamics and thermodynamics.
The Universe becomes hot.
A highly charged subatomic “soup” forms which includes quarks and mesons etc.
Neutrons and protons form, along with their anti-particles.
Competition between rival chain reactions sees matter triumph over anti-matter.
Light nuclei form.
The Universe becomes transparent. Photons can travel freely.
Hydrogen and other atoms form.
The Universe starts to expand.
Nebulae form, and stars within nebulae.
Top down and bottom up processes build the Universe as it is today.
Q Theory Differences From The Standard Model
The main point of difference in Q theory is the hypothesis of a primal universal substance/field called Q. There are three reasons Q was conceived: (i) the evidence points that way, (ii) it is very useful in explaining lots of things and (iii) just for fun.
Q theory:
provides an explanation for both gravity and inertia
explains that gravity and inertia define mass
resolves Mach’s Principle
explains Newton’s First Law of Motion
does away with ‘action at a distance’
posits an explanation for charge parity violation
reconciles gravity and electro-magnetism
provides a description of photons consistent with Special Relativity
resolves wave particle duality
does not require spacetime curvature
suggests a reason for the expansion of the Universe
supports a hypothesis that eliminates the need for exotic dark matter
gives a possible explanation if the expansion is indeed accelerating
is open to heretical ideas such as the possibility that the speed of light might have been different in the early Universe.
Concluding Remark
I’ve enjoyed making up this creation myth. Please feel free to borrow ideas from these speculative musings if you like, and to open up your own thinking just a little bit more.
In later blogs I’m planning to make some heretical criticisms of general relativity. Though this is a very clever theory it may not be as ideal, complete and final as the dyed-in-the-wool cognoscenti assume it to be. And I’d like to try to put gravity back into physics in a simple but modern way. But first I am going to take a short holiday. Thank you for reading these blogs.
#science#physics#cosmology#big bang theory#Q theory#evolution of the Universe#creation myths#spiral galaxies#rotation curve problem#dark matter#rotational inertia#expansion of the Universe#speed of light#heretical physics#BigBangTheory#Qtheory#SpiralGalaxies#DarkMatter#RotationalInertia#SpeedOfLight#HereticalPhysics#TheoreticalPhysics#ExpandingUniverse
19 Q Theory Part 3 - Evolution of the Universe (contd.) 31Aug17
Introduction
In the last two blogs I started describing a new creation myth, written largely for fun, but also to present some ideas and provoke some fresh thinking about the fundamentals of physics. The next few blogs will give a fanciful description of how the Universe evolved, and is still evolving, and what is driving its expansion.
The Heating Epoch
The electrons and positrons form a sea within the Q. Why don’t they all just annihilate each other? Partly because they do not encounter each other fast enough and partly because they are continually being recreated.
The Universe at this stage is cold. Kinetic energy is not yet very evident. The only things moving much are the neutrinos. But that is about to change. When electrons and positrons annihilate each other they release an intense disturbance in the Q. Electromagnetic radiation. Photons.
Photons become very important and there is much to say about them. There is a blog or two earlier in this series all about photons (or phots, as I renamed them to get away from old-fashioned mental traps).
Photons convey energy, spin and momentum from one place to another. But at this stage in the Universe the photons cannot travel very far. There are too many charged particles in the way. So they are absorbed again.
However, the photons have both energy and momentum. When they interact with electrons and positrons they create movement in those electrons and positrons. Lots of movement. Lots of kinetic energy. Heat. Lots of heat. The temperature rises dramatically.
The Sub-Atomic Soup
The Universe now contains an almost immeasurable number of electrons and positrons, photons and neutrinos, and a great deal of heat.
The electrons and positrons interact with each other in high energy events. These events give rise to a whole family of semi-stable clumps of Q called quarks. The quarks in turn form larger semi-stable clumps of matter called muons and mesons.
Eventually, a stable cluster of 1836 electrons and positrons forms. It has a mass close to 1837 (electron masses) because it requires some extra Q to glue it all together. The cluster has a neutral charge and is called a neutron. Or it might be an anti-neutron. Neutrons and anti-neutrons start to appear everywhere. Neutrons have a spin of ½.
Free neutrons are almost, but not entirely, stable. They have a mean lifetime of about 14 minutes and 40 seconds (using Earth time, which does not yet exist). The most common form of decay involves the ejection of an electron and an anti-neutrino. What is left behind is a proton. This has a mass of 1836, a charge of 1 and a spin of ½. It appears to be totally stable.
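The decay of free neutrons follows the usual exponential law. A minimal sketch using the measured mean lifetime:

    import math

    TAU = 880.0   # mean lifetime of a free neutron, seconds

    def surviving_fraction(t_seconds):
        # Exponential decay: N(t)/N(0) = exp(-t/tau).
        return math.exp(-t_seconds / TAU)

    print(surviving_fraction(TAU))   # ~0.37 after one mean lifetime
    print(surviving_fraction(610))   # ~0.50: the half-life is tau*ln(2), about 610 s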
It would be very interesting to find out what is special about the number 1836. Q theory has a lot of sympathy for the idea that a neutron and other subatomic particles of matter are “knots in the Q”. The hope is that there is a geometric explanation for the way that these are pieced together. But for now the best explanations are based on quantum chromodynamics and gauge theory mathematics.
In our Q theory creation myth, the Universe now consists of Q, stable particles (electrons, positrons, neutrons and anti-neutrons), lots of neutrinos (and antineutrinos), lots of photons, lots of small intermediate particles (quarks) and lots of intermediate particles (muons and mesons). Quite a soup! But this is still only the beginning.
The Universe has become filled with subatomic particles, both matter and antimatter. However, although many of the constituent elements are stable or at least semi-stable, the overall mix is not.
The stage is set for a great cataclysm – the destruction of half the Universe.
[20th century physicists have spent a lot of time smashing atoms to see what they are made of. Theorists then spend a lot of time developing mathematical representations to try to put all the bits into some sort of order. Particles are divided into bosons and fermions. The composite particles (hadrons) are divided into baryons and mesons, and the fundamental fermions into quarks and leptons. Particles are assigned colors, strangeness and charm. Occasionally the mathematical models reveal patterns with known particles occupying most of the slots but some slots still empty. The vacancies are later filled by discoveries of new particles.]
The Great Cataclysm – Charge Parity Violation
So far the material in this myth has symmetry – equal abundance of electrons and positrons, positive and negative particles, matter and anti-matter. And all the events in this early universe have mirror images.
But each interaction promotes more interactions. Largely due to neutrinos and anti-neutrinos careering all over the place. Cascading chain reactions. So the Universe becomes unstable. Its evolution reaches a cross-roads. It is on a knife edge. It could go one way or the other. A Universe as we know it, or an anti-Universe.
A cataclysm occurs. The positive matter chain reactions win out over the negative matter chain reactions. This might have been random. Next time it might go the other way. Or it might have been triggered by the lack of complete symmetry.
(Not everything is entirely symmetric. So things only work one way. Neutrinos carry energy, not anti-energy. Gravity is attractive, never anti-attractive. Inertia is resistive, never the opposite. And some properties of nature have a finite floor, but no ceiling. For example, temperature has no upper limit but can never fall below minus 273 degrees centigrade.)
The Universe becomes filled with matter. The negative charges end up in electrons. Other negative charges combine with positive charges and end up in neutrons and protons. Left over positrons and antiprotons and antineutrons more or less disappear, except for temporary appearances following nuclear collisions.
This explains the missing anti-matter in our Universe, otherwise known as the mystery of charge parity violation.
Strong Nuclear Forces
Protons and neutrons form a small set of stable light nuclei – the various isotopes of hydrogen and helium, and even lithium 7. Once these naked nuclei capture sufficient electrons they become atoms.
Q theory tries to keep everything as simple as possible. It is motivated by a desire for beauty and simplicity, order and harmony. If the storyline is becoming really complicated, with dozens of dimensions and weird mathematics, then the hope is that this is the fault of the observer and not of the creator.
Which is a long winded introduction to a short comment about the strong nuclear forces that hold atomic nuclei together. In short, Q theory is reluctant to add weak and strong forces to the panoply of creation. It already has pressure, charge, spin and the twin effects of gravity and inertia. It would like to be able to explain both strong and weak nuclear forces using no more than all of this. But that is a challenge too far at this stage.
The First Hydrogen Atoms
The next stage of this creation myth is that the small naked nuclei acquire electron shells. Eventually about 90% of the normal matter in the Universe ends up as hydrogen.
The fact that a proton and a passing electron do not simply snap together and destroy each other is of vital importance. And yet still something of a mystery.
The early concept of an electron as a little ball whirling around the proton like a satellite in orbit raises more questions than it answers. So how are we to think of the electron in a hydrogen atom?
In Q theory an electron is the tiniest stable bit of concentrated Q. It is spin stabilised and it has a “vectorizing” effect on the Q all around it, plus a miniscule gravity effect. A proton is 1836 times heavier than an electron, but it only has the same magnitude of charge and the same magnitude of spin as the electron.
What happens if a proton encounters an electron in free space?
The gravitational attraction is very much weaker than the charge attraction and I think it can be ignored for the formation of atoms. The Q fields around the electron and the proton meet each other. As a result the electron is drawn to the proton. The electron acquires more kinetic energy. The Q field between them intensifies. The whole process goes faster and faster.
Then snap. The electron disintegrates. In an instant it becomes a cloud of Q around the proton. A shell. All that is left of the electron is its Q, which is more or less the same as its energy, its charge and its spin.
An atom of hydrogen is born. No net charge, no net spin. Stable. And enormously bigger than a proton. The diameter of a hydrogen atom is roughly 60,000 times bigger than that of its nucleus. If the nucleus were the size of a tennis ball, the electron shell would be a couple of kilometres away.
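The scale factor is easy to check from the measured Bohr radius and proton charge radius:

    bohr_radius = 5.29e-11      # radius of the hydrogen electron shell, metres
    proton_radius = 8.4e-16     # charge radius of the proton, metres

    ratio = bohr_radius / proton_radius
    print(ratio)                        # ~63,000

    tennis_ball_radius = 0.0335         # metres
    print(tennis_ball_radius * ratio)   # ~2,100 m: the shell is ~2 km away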
Our Universe now has a reasonably good unit of distance – the size of a hydrogen atom.
Larger Atoms
A helium nucleus attracts a shell with two electrons. The two electrons have opposite spins. This is the only way the double shell can be formed.
As the nuclei get progressively bigger, the structure of their electron shells becomes more complicated. The most useful mathematical models involve spherical wave formations and harmonic oscillations in these. Note the Pauli Exclusion Principle – two identical electrons cannot exist in the same state.
The Universe Becomes Transparent
The formation of baryonic matter removes a lot of naked charges from the early Universe. The Universe becomes less like a hot plasma and more like the Universe we know today.
A new phenomenon appears – the Universe becomes transparent to the movement of photons. Light. In fact the whole electromagnetic spectrum. Photons stop being destroyed almost as soon as they are created. The sea of electrons and positrons in the primeval plasma has become a much less densely populated sea of nuclear material and atoms. This enables photons to travel extensively. “Let there be Light” (Genesis 1:3)
Bubbles in The Q
In this creation myth a remarkable thing happens when an atom of hydrogen is formed. Relative to the size of a proton, an atom of hydrogen is enormous. All of a sudden this new entity starts popping up all over the emerging Universe and it is much bigger than anything else so far.
Q theory suggests that the area between the nucleus of an atom and its electron shell(s) is a trapped zone of low pressure Q. As a result all the Q outside of the atom is put under extra pressure. The process of hydrogen formation creates pressure all over the Universe. As a result of this pressure the Universe expands.
The formation of hydrogen atoms creates ‘bubbles in the Q’ and this forces the Universe to expand, almost like the bubbles of carbon dioxide in the baking of a cake.
As with the rest of Q theory, the suggestion may have no merit. But then again it might. At least it provides a way to account for an expanding Universe without relying on some mythical big bang 13 billion years ago.
Furthermore, if the expansion of the Universe is accelerating, then the bubbles in the Q idea offers a potential explanation for why this might be so. Continuing formation of hydrogen and other simple atoms could be the source of the mysterious “dark energy”.
If the epoch of the formation of hydrogen happened quickly then maybe this epoch could be considered to be a sort of “big bang”. However, Q theory thinks the “big bang” label and the images it inspires are unfortunate.
Q theory is just a myth. But it is partly motivated by the suspicion that some of the Big Bang model is just a myth as well.
Future Expansion of The Universe
Q theory has a simple model for what causes the Universe to expand. It is just pressure from the creation of atoms. And once the matter in the Universe acquires motion there is nothing but the weak effects of gravity to stop this continuing. Until the Universe becomes stretched too far. Then maybe the pressure will become “normal” or even negative and the expansion will stop. In fact the process of expansion might eventually reverse.
I quite like the notion that the Universe might be “closed” or even that it might oscillate in the sense that this Universe might have been reborn out of the remnants of a previous Universe. However, this is a theoretical fancy of no practical use.
Anti-Gravity
For anti-gravity to exist there has to be a high pressure zone in the Q that tries to normalize itself by pushing away any surrounding matter. But concentrated Q tends to turn into matter, which is extremely concentrated Q, thus creating a low pressure zone again. If the process of Q turning into matter is prevented, anti-gravity might become possible.
Whereas gravity results from matter and energy trying to fill in low pressure areas in the Q, the expansion of the Universe is the result of Q pushing matter and energy away. So, in a sense, you could say that the expansion of the Universe is driven by anti-gravity.
How Old is the Universe?
The question “how old is the Universe?” is “ontologically naïve” and has no satisfactory meaning, and hence no satisfactory answer. You may as well ask “how heavy is beauty?”
In Q theory electromagnetic radiation can only start travelling at a secondary stage in the evolution of the Universe. The only carriers of information prior to that were the neutrinos and they are extremely difficult to detect. So, although when we look into deep space we are seeing back in time to when things were very much younger, we aren’t receiving any photons from before the epoch when the Universe became transparent.
The main problem however, is that the question “how old is the Universe” assumes and requires a notion of time which is invalid and therefore has no answer. It is not meaningful, even conceptually, to imagine the evolution of the Universe as a three dimensional movie which can be run backwards to find out what happened at the beginning.
See the essay on Special Relativity for a discussion that time is nothing more than sequential events. There is no universal or absolute time. There is no universal standard of time that has anything other than a local meaning. There is no well defined rate at which we can run the movie of the Universe’s evolution backwards.
The earliest stages of the Universe had no meaningful definition of time. The stages of the Universe after the creation of atoms exhibit a general expansion, and the rate of expansion does provide a sort of timeline. But it is too early to tell whether the rate of expansion has always been the same, or whether it is the same everywhere.
Summary
Okay, this Part 3 was just a bit of fun. But it did at least raise some big issues in physics – the mysteries of charge parity violation, the expansion of the Universe and why there is no anti-gravity. Hopefully it encourages others to question whether the current orthodox models solve everything – they don't.
#creation of atoms#Q Theory#formation of atoms#cosmological pressure#expansion of the Universe#Charge parity violation#creation myths#big bang model#Qtheory#ExpansionoftheUniverse#BigBangModel#ChargeParityViolation
18 Q-Theory Part 2 ... Inertia, Mass and Gravity 20Aug17
Introduction
In the last blog I started describing a fanciful creation myth. It is written largely for fun, but also because it provides a way to present some ideas. Its intention is to provoke some fresh thinking about the fundamentals of physics, which I think has become somewhat stuck in the mud. Mired in mathematics. Too much group-think and cognitive dissonance. Individual genius lost in the herd.
The first blog described the emergence of neutrinos, electrons, photons, spin and charge out of the primordial energy field called Q. This second blog will discuss origins for inertia, mass and gravity. It will suggest a rationale for Newton’s Laws of dynamics and why Mach’s Principle was nearly right.
Newton’s Laws of Motion
When matter is formed, it is born out of Q and it retains a relationship with Q. The essence of this relationship is that it does not like to be changed. It resists changes. It requires energy to change it. This gives matter its inertia.
Many of the properties of inertia are obvious. They have become axiomatic in classical dynamics, e.g. Newton’s first law of motion … Objects remain at rest or in a state of uniform rectilinear motion unless compelled by a force to change.
But we are interested in why this is so. Why is Newton’s First Law of Motion true? Newton did not decree or ordain that it was true – he just recognised that it was true and described it very neatly.
Just calling things “a law of nature” does not suffice. We are trying to explore the origins of inertia, mass and gravity. We want to know why certain observable features of Nature are always true – what makes them a law of physics?
Inertia
Matter is “sticky” within the Q. Matter refuses to accelerate unless it is provided with more energy. This gives an object its inertia when at rest.
Matter can move at a constant speed in a straight line within Q without any resistance. The Q makes way at the front of the moving matter and moves through and around the matter and fills in behind the matter when it has passed by. Linear momentum is maintained. Energy levels are maintained. Q has no ‘viscosity’ in the face of uniform rectilinear motion.
But any attempt to increase the speed of a material object immediately creates resistance from the Q. The resistance is overcome by supplying the object with more energy. An external force has to act on the object through a finite distance. Extra energy and extra momentum have to be transferred to the object. It is almost as if the object “mops up” Q as the force accelerates it to its new level of speed.
When the external force stops the object maintains its new increased level of speed, energy and momentum. Momentum has been transferred from whatever exerted the force. Momentum is conserved and so is energy.
An object that acquires momentum also acquires energy. Whatever contributed such energy to the object loses a corresponding amount of energy.
Any attempt to deviate a moving object away from a straight line also immediately creates a resistance. The resistance is overcome by applying a force to the object at right angles to the direction of motion. But when such a force stops, the object resumes travelling in a straight line. It is a different phenomenon to the previous example.
If the applied force is always at right angles to the direction of motion, the object maintains a constant level of energy. The object maintains the same magnitude of momentum as it had before, albeit in a continually changing direction.
So if you push an object from behind the object goes faster and becomes more energetic and the effects persist even after the force ceases. But if you push it from the side the object does not go any faster, the change in behaviour only lasts as long as the force is applied, and the new mode of movement does not persist when the force stops. Why the difference? What is going on?
Suspicion falls immediately on how the force is applied. In the first case it is parallel to the motion, but in the second case it is at right angles to the motion.
Consider the example of a rocket in space. A steering thruster starts ejecting gases at right angles to the direction of travel and persists until the rocket performs an entire circular loop. The thruster then stops. The rocket carries on as before. Some of the linear momentum of the rocket became angular momentum until the maneuver was completed, and then it became linear momentum again. The rocket is left with nearly the same energy (just a bit less due to the loss of fuel) and momentum as before. There is a circular spray of exhaust gases left behind in space, carrying away the chemical energy that was used up in the fuel burn.
Next consider a frozen lake with a smooth pole poking up through the ice. An ice skater approaches in a smooth straight glide, just to one side of the pole. Just as they pass by the pole they reach out and grab it with one extended hand. All at once they are swung around into a circular path, still at the same speed. But when they let go of the pole they head off in a straight line again.
What is so special about straight lines? Why do we not have a universe in which the natural path of moving particles is all curly and curved? Or just a random walk? It may seem to be a trivial or silly question, but I think it is fundamental. And the answer reveals something about the nature of Q.
[If you ask physicists why objects stay at rest or in a state of uniform rectilinear motion, most of them will say because objects have to obey Newton's First Law. But Newton was merely observing Nature, not ordaining its behaviour. Only a minority of physicists will see that the question is a deep one, and they will be regarded as being a bit weird.]
Applying a force to an object creates a zone of high pressure Q on one side, thus causing it to move. Once an object is moving in Q there is no further resistance to such movement, provided the object keeps on going in a straight line.
Imagine a person standing in a tram car. The tram car stops suddenly. The person experiences strong braking forces on the soles of their feet. But their bodies are flung forward relative to the tram car, especially if they are not holding on. Why?
In Q theory there is a zone of higher pressure Q behind every atom of matter in the person. When the tram car stops, the Q keeps pushing the person forward. It is not – as a contemporary of Ernst Mach is reported to have said – “the fixed stars” that push the person, but rather the Q all around the moving person.
In Q theory, inertia does not come from far away “fixed stars”. There is no mysterious magic action at a distance in Q theory.
Both static and moving inertia come from a direct proximate relationship between the object and the Q in which it, and everything else, is embedded.
(This part of the Q creation myth at least fits everyday observational evidence, which is more than can be said for the other explanations, or lack of explanations.)
Curved Motion
A moving object has no cause to move away from its straight line. To make it follow a curved path, a force needs to be applied orthogonally to the direction of travel.
Consider the motion of a small particle in a circular orbit around a massive object lying at the centre of an x-y reference plane. If we plot the momentum in the y direction against time we will see a sine curve. If we plot the momentum in the x direction we will see a cosine curve. It seems like the momentum of the particle is being passed seamlessly from the y direction to the minus x direction, and then to the minus y direction and then to x direction. No energy is gained or lost while all this is taking place.
The object is said to be accelerating constantly. But it is not like a linear acceleration. No work is being done. No energy is being gained or lost.
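A minimal numeric sketch of this hand-off between momentum components, for a particle on a circular path:

    import math

    # A particle of momentum magnitude p circling with angular frequency w:
    # the components trade off as sine and cosine while |p| stays fixed.
    p, w = 1.0, 2 * math.pi   # one revolution per unit time

    for t in (0.0, 0.125, 0.25, 0.375, 0.5):
        px = -p * math.sin(w * t)
        py = p * math.cos(w * t)
        print(f"t = {t:5.3f}: px = {px:+.3f}, py = {py:+.3f}, "
              f"|p| = {math.hypot(px, py):.3f}")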
What keeps a moving object travelling in a straight line? In Q theory the answer is simply the fact that the object has no reason to do anything else. But if a force is applied at right angles to the object, then Q will become higher in pressure at that point. The net effect is then that the object’s path bends.
Q theory keeps in mind that there are two types of acceleration – one type involves a transfer of energy to the affected object and the other type does not.
Spinning and Orbiting
The main types of persistent circular motion are spinning and orbiting. In spinning, an object revolves around its own axis and is held together by internal forces.
In orbiting, objects “circle” around each other and are held together by a force such as gravity, or something as simple as a piece of string. Nearly everything in the Universe is spinning or orbiting, or both.
Orbiting is a stable phenomenon that dominates our Universe. Some types of orbits are the basis for atoms. Other types are the basis for galaxies.
Orbiting can be thought of as gravity (or some other force) fighting with inertia and coming to a stable compromise in which neither wins. The overall energy level stays constant.
Thought experiment 1: Consider a large heavy gyroscope made of crystalline glass. It is spinning rapidly in deep space. All of a sudden the glass shatters. Tiny bits of gyroscope fly off in all directions. After a few years the angular momentum that was so obvious when the gyroscope was intact becomes lost in space. One can argue that there always remains an angular momentum vector where the gyroscope once existed, but that would seem to be a bit philosophical.
The issue arises in reverse when countless bits of dust are drawn together under the action of gravity. As the dust particles coalesce it is almost certain that there will be some net angular momentum. This could end up in the form of a spinning star, with or without a solar system. Or even as a whole galaxy. Or even as a super-cluster of galaxies.
Thought experiment 2: Consider a symmetric dumb-bell spinning in empty space. If the bar suddenly shatters into innumerable fragments the two balls will head off in opposite directions – in straight lines parallel to each other and separated by the length of the former bar. Each ball will have equal but opposite momentum. The angular momentum inherent in the spinning system will eventually become lost in space.
Consider the opposite case, two balls approach each other, but slightly off centre. They each have a grappling hook and these hooks snare each other as the balls attempt to pass by. The two balls start spinning around each other. The angular momentum that was hidden in the starting conditions has now become obvious and apparent.
Thought experiment 3: Consider a spinning top or gyroscope. Every atom is said to be undergoing constant acceleration. But no energy is required to maintain the spin. The energy of the top remains constant. However, increasing the rate of spin of the top does require the input of extra energy and once such energy is transferred to the top it stays transferred to the top.
Q Theory Explanation of Mass
Matter refuses to move at a faster rate unless it is provided with more energy. The more matter in the object, the more force is required to accelerate it. The ratio between the force and the acceleration is the measure of the property that we call the object's mass.
Mass is a property of matter. It is not the same as matter.
In spite of what is taught in classical dynamics, the mass of an object is not fixed. The object has a certain amount of rest mass that depends upon the Q embodied in it to begin with. But when an object moves in the Q it acquires more energy and this adds to the mass of the object. Not much at first but the effect compounds and becomes very significant at speeds approaching the speed of light.
As the speed of an object increases it gradually acquires more Q and thus requires a little bit of extra force to accelerate it. This process compounds until relativistic effects start to dominate. For example, at 87% of the speed of light, twice as much force is required to give the same amount of acceleration as when the object was at rest.
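The factor in question is the standard Lorentz factor of special relativity, and it is easy to tabulate:

    import math

    def gamma(beta):
        # Lorentz factor: 1 / sqrt(1 - (v/c)^2)
        return 1.0 / math.sqrt(1.0 - beta**2)

    for beta in (0.1, 0.5, 0.87, 0.99):
        print(f"v = {beta:4.2f} c: gamma = {gamma(beta):6.3f}")

    # At 87% of c the factor is about 2, matching the claim above; it diverges
    # as v approaches c, which is why matter can never reach light speed.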
The implication of this is that matter cannot be accelerated to reach the speed of light. To travel at the speed of light the object has to be free of matter. In short, it has to be a neutrino or a photon.
Be careful how you apply the contra-positive logical argument to the statement “If it contains matter it cannot travel at the speed of light”. The logically equivalent statement is not “If it travels at the speed of light it cannot have mass.”
The logically equivalent statement is “If it travels at the speed of light it cannot contain matter”. Mass and matter are not the same thing. Closely related yes, identical no.
Gravity
Gravity is the greatest force in the Universe. It is a weak force but its effects are additive and unlimited. Gravity is the great organising force of the whole Universe. It is responsible for the wondrous arrangements of countless stars within galaxies and between all the galaxies of the Universe.
It seems somewhat crass therefore, for Albert Einstein to try to get rid of gravity using mathematical trickery on mankind’s concepts of space and time.
Einstein's model/approach relies on a so-called principle (the principle of equivalence) equating localised gravity to linear acceleration, and it then goes on to reformulate Newton's beautifully simple approach to gravity into a set of ten non-linear differential equations in a four dimensional curved spacetime that is impossible to visualise and very, very difficult to solve.
What I think Einstein achieved was a very clever mathematical description of gravity and its effects, including the fact that it slows down time. But there are often different ways to describe the same thing. For example, a cone can be seen as a disc or a triangle depending on the observer’s point of view.
So is General Relativity the perfect, fully complete, final and best way to describe and think about gravity in all its many roles? I doubt it. For a start, I think that General Relativity is so mathematically complicated that many of its mathematical solutions do not correspond to physical reality.
(Here is a simple analogy. Consider the area of a disc. To calculate its radius divide by π and take the square root. There are two answers to this – one positive and one negative. But only one of the answers corresponds to reality. General Relativity has this issue in spades, especially in its cosmologies).
Here is a prosaic, more physical description of gravity using Q theory.
When the Q turns into particles, Q becomes highly concentrated, but outside of the particle it is depleted. The extent of the depletion diminishes according to the inverse square law.
Every piece of matter and every agglomeration of matter is surrounded by a lowered density of Q which gradually normalizes with increased distance away from the object.
The depleted Q creates an odd effect. It distorts time. It causes any timekeeping devices to slow down. Not through any particular physical effect, but because time as we perceive it is an illusion and is not subject to the rules we think it should obey.
The other properties are more familiar. The depleted Q creates a kind of spherical hole. Other material objects and even passing neutrinos and photons are then drawn to the hole and tend to fall into it unless they have enough inertia to escape being so caught.
Any matter encountering a zone of weak Q will tend to move into it. The rest of the Q in the Universe will try to push matter and energy into the low pressure zone. Gravity is as simple as that.
If you must use analogies, think of gravity as matter trying to vacuum up other matter. Matter creates a low pressure zone in the Q. The rest of the Universe then tries to push other matter into the low pressure zone.
In Q Theory there is no action at a distance. Matter moves under the influence of the Q immediately around it. The effects of gravity, inertia and electric charge all work through the omnipresent Q.
Nor are there any gravitons.
And gravitational waves are simply large scale disturbances in the Q. These disturbances travel through the Q at the speed of light. Unlike neutrinos and photons, which travel in particular directions, gravitational waves tend to spread out as they travel.
You might have noticed a resemblance between gravity and charge. Both obey an inverse square law relationship. But also note two fundamental differences between charge and gravity. Charge is stronger, but gravity is always attractive.
Because gravity is always attractive its effect is cumulative – the more matter there is in one place, the stronger its gravity. This makes gravity the most powerful, longest reaching influence in the Universe. It is the Great Organiser of the whole Universe.
Resisting it is inertia – the Great Resister. Without inertia everything in the Universe would have collapsed into amorphous Q long before we humans had a chance to evolve and become able to observe and question the whole marvelous panoply of creation.
Mass (continued)
Fundamental particles have three basic properties: charge, spin and mass. Each electron and positron has the same amount of mass. The amount of mass in an electron or positron is the smallest amount of mass possible.
Q theory argues that mass is not a fundamental intrinsic property of matter. It is a manifestation of the way that matter interacts with Q and with other matter.
Mass is a relational phenomenon, not an intrinsic fundamental property. Of course, seeing that the Universe is all about Q, there is no way that matter can escape having mass, so this is a fine point.
The two interactions that give matter its mass are inertia and gravity. If not for inertia and gravity there would not be any mass. Matter yes, but not mass.
Think about it. How can you tell if a piece of matter has mass? In essence you either weigh it in a gravitational field, or you find out how hard it is to accelerate. If neither method is convenient you can at least watch how it interacts with other bits of matter during movement.
Every scrap of matter has both gravitational mass and inertial mass. As far as we can tell from exquisitely sensitive experiments, gravitational mass and inertial mass are exactly the same.
Matter, Mass and Energy
Matter can be turned into pure energy and vice versa. That is because matter is pure energy. It is made out of Q.
In Q theory, photons and neutrinos have gravitational mass. They contain energy, and energy is Q, and concentrations of Q experience similar effects to matter, which after all consists of special concentrations of lots of Q.
In parts of the Universe that become crowded with material objects, there are lots of physical interactions and many different forms of energy. Kinetic energy, potential energy, chemical energy etcetera.
It is almost certain that any given material object at any given time will have some kinetic energy. Everything in the Universe is moving.
Quantum Mechanics
At a very small scale, dynamics and electrodynamics are no longer smooth and continuous. Basically this is because the spin of stable entities has to be plus or minus ½ or 1, and charge has to be plus or minus 1. Larger particles come in discrete specialised bundles and so do stable electron orbits. So interactions become uncertain and ‘jerky’.
On top of that there are a lot of observational difficulties because the entities are so small that simply detecting them usually involves their destruction.
The result is the weird and wonderful world of quantum mechanics, quantum electrodynamics, quantum chromo-dynamics and so on.
All of which has a bearing on the nature of Q at its most fundamental level. And vice versa.
Summary
Q is responsible for the existence of static, linear and rotational inertia in matter.
Inertia and gravity give rise to the property of matter we call its mass.
The properties of Q are also the reason why Newton’s Laws of Motion are as they are, and why mass increases as the speed of an object through the Q increases.
Gravity also works through the Q and can be thought of as matter/energy being pushed into zones of distorted or depleted Q. You can model the effects by imposing an imaginary spacetime grid and then distorting it all, but that is not necessarily the only valid approach to understanding it.
Gravity and Inertia are the Yin and Yang of the whole Universe – the Great Organizer and the Great Resister. All working with concentrations of Q in a sea of Q.
#Q-theory#origin of mass#origin of inertia#linear and rotational acceleration#Newton's First Law of Motion#action at a distance#Qtheory#OriginofMass#LawofMotion#ActionataDistance#RotationalAcceleration#SirIsaacNewton#OriginofInertia