journalep-blog
Journal EP
7 posts
Journal explorations in physics and philosophy.
Don't wanna be here? Send us removal request.
journalep-blog · 7 years ago
On Parallel Universes - Bob’s Burgers S6E01
“Fate is entirely real. But, it is completely random.”
At first glance, it sounds like a 21st-century hipster cliché. But it probably isn’t, since it comes from Tina, a pre-teen to teenage character in the TV show Bob’s Burgers, which has aired since 2011. Tina is an interesting, oddball character with some very unique (and often funny) personality traits. When she has moments of anxiety she gets short of breath and audibly makes a huffing noise, sometimes in unending repetition, lasting across multiple scenes. Her parents usually start out worried, but after several hours pass they just get tired of it.
Tina comes to this conclusion about fate after sitting down with her family to discuss how her mom and dad met. Linda (the mom) describes it as a chance encounter in a bar with Bob (the dad), who was accidentally struck by Linda’s wild hand gestures. Bob’s mustache protects him from the blow, and Linda describes it as Tom Selleck-ian (a joke before my time). They instantly fall in love. After hearing this, Tina, Gene, and Louise (the Belcher kids) each give their version of what would have happened if Bob didn’t have a mustache.
Gene imagines a world where Bob doesn’t have a mustache, and after getting swatted in the face with Linda’s arm, his lip bleeds profusely causing him to go to the E.R. After passing out he wakes up as Robo-Stache: a cyborg with an artificial mustache that mostly makes everyone’s lives slightly worse, through small inconveniences. They never get married.
Louise imagines a world where Bob doesn’t have a mustache: he runs into Linda and is instantly attracted to her. After he tries his best to flirt with her, she’s just not into it and doesn’t want anything to do with him, saying, “maybe if you had a mustache”. It was an awkward encounter at best. Having fallen hard for her, he tries a mustache-growing cream. The cream actually works, but consequently covers his entire body in hair. Despite these setbacks, Linda and Bob still try to get together, but after a while it just doesn’t work out. Louise tells us: “Dad lived out the rest of his life as a freak. Mom developed an allergy and left him and became a nun. But she got kicked out. Now she’s in jail.”
Tina immediately pipes up and says that it doesn’t matter: they get married anyway, because of fate! After this series of other, less believable scenarios, it seems increasingly likely that on average Bob and Linda do not get together. Tina proceeds to have one of her anxiety moments and starts huffing/grunting “heh” repeatedly. She spirals down into a world where everything feels wrong. Linda marries the man she was engaged to before Bob. In this universe Bob has a series of failed restaurants, so he also doesn’t fulfil his destiny as a cook (it is very clear in the show that he loves being a cook). Tina hates her “new dad” in this alternate universe, while she loves her dad in the existing one.
Tina eventually has an epiphany: “I get it, guys, I get it! Fate is entirely real it’s just completely random!” After Linda and Bob trade a quick glance, Bob says “Okay, yup, that’s it. Good talk guys”, and sort of shuffles the kids out of the living room to disperse them. What’s interesting here is that Bob and Linda really do love each other and have a functional marriage. Across eight seasons, Bob and Linda have always supported each other through various rough times. They bicker at times, but never aggressively or with any tone of malice. They also have moments of deeply caring for each other, and they express it. In addition, the reason the family discussion happened in the first place was that Bob began losing his mustache hair. Louise, remembering that mom said his mustache was a big part of them getting together, provoked the conversation and got Linda to tell the truth about what happened.
Bob comes to find out that the hair loss is associated with testicular failure (the show doesn’t really explain it), brought on by his use of an exercise bike. After he quits the bike his mustache grows back and the episode ends. All is well.
In analyzing Tina’s statement that fate is entirely real and random, we can see that Tina’s interpretation of her reality combines her existing view that fate is real with the caveat that it must be completely random. This pleases her at the end of the conversation, and you can tell that she feels more confident in her understanding of the whole concept. It fascinates me to no end that the basis of her statement is completely verifiable, and even scientifically falsifiable. We learn from the stories that Gene, Louise, or even Tina could create a world that might be described as a parallel universe.
Mathematically, we can say that the number of these worlds is infinite. Since we know of only one world where this scenario truly works out (the current one), the probability of Bob and Linda getting married effectively drops to zero. There is one caveat here, though: there is a singularity, the one universe where Bob and Linda do get together. In that universe, the probability of them getting together is 100%, and so Tina’s statement that fate is real is, in this quantum-physics-esque sense, verifiable.
At this singularity, the universe where Bob and Linda get together, scientists say that the wave function collapses. This always happens when scientists make a measurement on a particle that exhibits qualities of quantum mechanics. So just like a scientist making a measurement, Tina, in her own existence will become aware of this fact that she only knows of one universe where Bob and Linda get together. At some instant this will be perfectly true for her. In theory, she can construct as many universes as she wants where they either do get together or don’t. She can trace lines of logic to the past, altering things that would have surely prevented the chance encounter. Scientifically, all of these other universes are unknowable, exactly like the position or momenta for a quantum particle when you measure it. When we assign the likelihood of Linda and Bob getting together as,
Probability = (# of universes where Bob and Linda get together) / (# of conceivable universes)
Where,
# of conceivable universes = (# of unknowable universes) + 1
We find that the probability drops to zero everywhere except in Tina’s reality, where she can only conceive of the universe that is her own. There the probability is 1, and Bob and Linda certainly do get together.
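The limiting behaviour can be sketched numerically. This toy snippet (hypothetical counts, nothing from the episode) just evaluates the formula above as the number of conceivable universes grows:

```python
# Toy illustration: as the count of conceivable universes grows without
# bound, the probability of any one particular outcome tends to zero --
# except from Tina's vantage point, where only one universe is conceivable.
def probability(favorable: int, conceivable: int) -> float:
    """P = (# universes where Bob and Linda get together) / (# conceivable universes)."""
    return favorable / conceivable

# An outside observer can keep conceiving of new universes:
for n in (10, 10_000, 10_000_000):
    print(f"{n:>12} conceivable universes -> P = {probability(1, n):.2e}")

# Tina's vantage point: one conceivable universe, her own.
print("Tina's P =", probability(1, 1))  # → 1.0
```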
For the second part of her statement, that fate is random, we can’t do much other than look at the analogy between this situation and quantum physics. Quantum physics (and specifically the standard model) is the most accurate scientific theory of nature that we have. The current understanding is that the states of quantum systems are random. Additionally, if we accept our definition of probability above, then it matches the way that randomness is defined in quantum mechanics.
It’s certainly an interesting viewpoint on fate. I think people often land, for various reasons, on one side or the other: fate is either pre-determined or random. But in this episode, Tina wants to tell us that it’s not one or the other, that it’s actually both. Amazingly, for over one hundred years physicists have been developing a theory in which, in our most thorough understanding of matter, quantum particles are most generally in two states at once (superposition).
My Coffee Problem
The Navier-Stokes Problem
My immediate apologies for the starbucks-white-girl blog post title. But I won’t lie: the inspiration for this did come from a coffee shop.
Coffee is great. But it often creates problems for me. You see, when by myself, coffee motivates me to get deeply lost in the idea that we can understand the world around us. I mean, it’s pretty amazing, right? We sit around and we think, yeah, awesome, I got this. It all makes sense. Until it doesn’t. Which is pretty much the current state of technology, modern physics, and math. All subjects which fascinate and infuriate me to no end.
One of the most interesting problems that I have come across is the Navier-Stokes problem. The Navier-Stokes equations describe a relationship between the velocity and pressure of a fluid at every position and time, for a system that might be described as undergoing “changes away from thermodynamical equilibrium”. From what I know of mathematics and physics, this is considered a huge discipline that needs more looking into. There’s even a prize for solving it: it is one of the Clay Mathematics Institute’s Millennium Prize Problems, worth a million dollars.
This gigantic problem has to do with coffee. When you look at a cup of coffee and mix milk in very slowly, each observed particle (well, as far as humans can observe) appears to be doing a special dance. Mmm, coffee. I’m kind of a coffee fiend. I have a whole ritual where I grind up the beans and put them in a sealed bag and then French press the coffee. It’s great. What the milk does is allow us to see some contrast in this dance, so that our eyes can better get a grasp of what’s happening in front of us.
It’s all pretty simple, and fairly intuitive. Pour some stuff into your coffee, it swirls around, right? No problem. Well, the problem is actually proving that the solutions of our known relationship (given by the Navier-Stokes equations) are smooth with respect to position and time. In fact, you could instead prove that they’re not smooth and you’d still win the prize. That’s how nice math people are.
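The smoothness question is hard to show directly, but a one-dimensional cousin of the equations makes the tension concrete. Below is a minimal sketch (my own toy code, not the prize problem itself): the viscous Burgers’ equation u_t + u*u_x = nu*u_xx, where the nonlinear term tries to steepen the flow and the viscous term tries to smooth it.

```python
# Explicit finite-difference solve of 1-D viscous Burgers' equation on a
# periodic domain. All parameter values here are illustrative choices.
import math

N, L, nu = 200, 2 * math.pi, 0.1           # grid points, domain length, viscosity
dx = L / N
dt = 0.2 * dx * dx / nu                    # conservative explicit time step
u = [math.sin(i * dx) for i in range(N)]   # smooth initial velocity profile

for _ in range(500):
    un = u[:]
    for i in range(N):
        im, ip = (i - 1) % N, (i + 1) % N  # periodic boundaries
        conv = un[i] * (un[ip] - un[im]) / (2 * dx)        # nonlinear steepening
        diff = nu * (un[ip] - 2 * un[i] + un[im]) / dx**2  # viscous smoothing
        u[i] = un[i] + dt * (diff - conv)

# Viscosity wins here: the profile decays smoothly instead of blowing up.
print("max |u| after 500 steps:", max(abs(v) for v in u))
```

In one dimension, viscosity is known to keep solutions smooth; the open question is whether the same always holds for the full three-dimensional equations.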
What I want to think about are the physical processes, from start to finish, that we observe when we interact with this phenomenon. First imagine that you are sitting in a brightly lit coffee shop, probably on a Saturday afternoon. The light that comes in through the window and illuminates the room will be light of many wavelengths, moving at all angles and in every possible polarization state. What’s interesting is that we can polarize the light and our observations of the coffee swirls are still present, so we can further restrict the problem to light of a particular polarization. In any case, light comes in and is either transmitted, absorbed, or reflected by the materials inside, and therefore we can see around the shop. We also might come to the conclusion that it’s completely awkward to make eye contact with someone whilst so deeply in thought about coffee.
The swirls are no different than the people. We only see them in action because light is bouncing off of the particles that comprise said swirls. Our eyes and then our brain perform quite a magic trick to piece together this information so that we see the swirling coffee. As far as the physics is concerned, there is a stream of light particles entering your eye, travelling at the speed of light. They all have different wavelengths, which basically means they all carry different information. In the world of physics, relativity comes into play whenever you say “light”, thanks to Einstein, and interpreting such relativistic information becomes very hard.
The real issue with this problem is not that the mathematics is particularly impossible. It’s that the viewpoint from which we are trying to use mathematics becomes very muddled. Some might even go so far as to say that this is a paradox to which we do not know the answer. If you try to imagine particles of light (photons) streaming into your eyes at the speed of light, c, then you are probably picturing something that doesn’t travel at the speed of light. In other words, you are picturing objects in your earthly experience that move in time. But remember: our coffee cup has particles swirling around in it that are certainly not travelling at the speed of light. In fact, they are moving around just like we expect things to move around!
Without going too crazy into the details, you simply cannot do this when it comes to light particles. They are not like a stream of water, with molecules flowing through it. They do not move like a cloud does through the sky. They travel at the universal speed limit, where we have observed that no particle with mass may travel this quickly. This is a very real conundrum that we as humans do not have an answer for. There are a few different ways to visualize and think of relativity, but I’m not going to go into it here.
So you’re sitting in this coffee place, and the photons (of some wavelength) that enter your eye supposedly carry the information that contains a solution to the coffee cup problem. This just means that we have seen this dance with our own eyes, and from the elegance and even uniformity with which this happens, we surmise that we must be able to generate some math to describe it.
Why wouldn’t we? In this modern age of deterministic science and engineering, we often believe that anything that we can observe and test must be perfectly describable by mathematics. It is a rule of the modern age. This is how technology advances, through the scientific method. Data collection allows us to draw conclusions, and modelling such data with statistics provides us a “launching off point” to make further conclusions. These models are fundamentally mathematical.
Unfortunately, observations about coffee are pretty hard to make. I mean, you can obviously smell it, and feel the warmth of the cup. You can see it and you can taste it. You can even hear it as it drips or brews or pours from whatever coffee contraption you use. So these observations are pretty easy. But they are hard to write down on paper. They are hard to quantify. Smell and taste are hardly understood at all on a physical level, and sound and sight are the complicated business that we’ve been talking about up to now (the physics of matter).
You see, it’s not just a coffee cup this applies to. It also applies to changing weather patterns. A hurricane on Jupiter that has been spinning for hundreds of years. Even the changing climate on our own blue dot. It really applies to any system moving away from its equilibrium of temperature, pressure, volume, and matter.
It’s fair to say that temperature, pressure, and volume have been very concretely described in modern science; they’re taught in science curricula around the world. Temperature is proportional to the average kinetic energy per particle (multiply by the number of particles and you get the total thermal energy), pressure is the average force per unit area that the particles exert on a surface, and volume is space, something we intuitively understand.
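For a monatomic ideal gas, the textbook version of that statement can be made exact: the average kinetic energy per particle is (3/2)·k_B·T. A small sketch (the numbers are purely illustrative):

```python
# Invert <KE> = (3/2) k_B T, the monatomic ideal-gas relation between
# temperature and average kinetic energy per particle.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def temperature_from_mean_ke(mean_ke_joules: float) -> float:
    """Temperature in kelvin from mean kinetic energy per particle."""
    return (2.0 / 3.0) * mean_ke_joules / k_B

# Sanity check: at T = 300 K, <KE> = (3/2) k_B * 300 ≈ 6.2e-21 J.
mean_ke = 1.5 * k_B * 300.0
print(temperature_from_mean_ke(mean_ke))  # → 300.0
```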
There are symmetries all over the place in this problem. We see that the position is smooth with respect to time in our observations of the coffee. The dance is elegant. We can then figure out that there must be a similar relationship between momentum and force, and sure enough, there is (force is the time derivative of momentum). Since the derivative exists, we can say, mathematically, that these two things are smooth as well.
We also see a symmetry in the equilibrium that this thing is moving away from. Temperature is a physical unit of a single dimension (we don’t have multiple “directions” of degrees), pressure has two dimensions (force per unit area), and volume is clearly three dimensions.
These are all observations that may be written down mathematically. And many of them actually are described mathematically.
Matter is actually the trickiest thing to have moving out of equilibrium. Does matter consist of nuclei, as biology would suggest? Does it consist of neutrons, protons, and electrons, as the physics of the early 1900s would suggest? Or does it follow our modern-day understanding, which uses the standard model but completely fails to describe gravity?
Whichever we choose, we must then look at this matter through the lens of relativity, which is truly what our eyes see. The biggest questions are here. And coffee is to blame.
On the Complexity of Nature
What I want to write about here is the complexity of nature. But I don’t want to do it in a way that’s indescribable, artistic, or even too philosophical. I want to come at this in a different way. I want to create a game. A game for which the rules are entirely simple and completely understandable. And then I want to show you just how complex nature really is by virtue of it playing this game.
If you think about a large glass box filled with air, you can model that box as filled with a bunch of little spheres (particles) bouncing around. If we ignore gravity, the problem is even simpler, as each ball can only have the energy of its own motion, with no attraction. The purpose of this story is to tell you that this simplicity is an illusion, one concocted by our macroscopic view of reality. The thought experiment is not wrong. Simplicity is just another way of stating the generalization of the problem at hand.
You see, when you really think about it, nature is calculating quite a lot. It has to know so much about each particle. So let’s say the box is a meter cubed, and the position of any particle can be between zero and one for each of its x, y, and z coordinates. Now, if we keep this box in reality, then we must assume that there is a fineness to which we can move one of these particles. This is the first very subtle rule of the game. A better way to think about this is by visualizing a very small three-dimensional mesh such that the joints of the mesh represent the possible locations of any particle. There has to be some smallest step between positions.
Our smallest known length is the Planck length, around ten to the minus thirty-five meters. That’s quite a lot of zeros. In any case, each of the particles in the box will have a precise position that fits neatly on one of those increments.
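As a quick back-of-envelope sketch (my own arithmetic, using the approximate value of the Planck length), the number of allowed positions in a one-meter box is already staggering:

```python
# Count the positions in a one-meter box if positions come in
# Planck-length increments along each axis.
import math

PLANCK_LENGTH = 1.616e-35            # meters, approximate value
sites_per_axis = 1.0 / PLANCK_LENGTH
sites_in_box = sites_per_axis ** 3   # three independent axes

print(f"positions per axis: ~10^{math.log10(sites_per_axis):.0f}")
print(f"positions in the box: ~10^{math.log10(sites_in_box):.0f}")
```

Roughly 10^104 possible positions for a single particle, before we even start counting arrangements of many particles.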
Now that we have nailed down what position is, we know that the placement of each particle must be unique in the box, otherwise particles would be merged or fused- which is not allowed. Rule number two. Moreover, we can safely assume that the diameter of each of those little balls will be significantly larger than the smallest increment of motion. So not only do each of the particles have a unique location, but they cannot enter a location that is within a radius of any other particle. The closest they can get is “touching” another particle, and even then their centers are two radii apart.
This is all fine, you say, still relatively simple. But then we realize that within this box of particles there must have been an initial velocity assigned to at least one of the particles. Otherwise we get the degenerate case where all the particles are not moving, which isn’t very interesting. So, now that the box has some initial energy, the system knows how to progress to the next state. And when I say it knows, I more so mean that it simply progresses to the next state. Time doesn’t freeze just because of a potential philosophical hang-up.
This is something that is intuitively obvious and can be easily overlooked. Of course the system goes to the next state, that’s just time unfolding! Well, fortunately for you, human observer, time only goes in one direction. We see the evolution of a system with an arrow that seems to be fixed in the time-forward direction. The physicist Arthur Eddington called this the arrow of time, building on Boltzmann’s statistical ideas. If you break an egg into a pan then there’s no undoing that process, and if you watched a video of it in reverse you would be able to tell some trickery was happening. Well, I hate to burst your bubble, but for individual particles there is absolutely nothing in the laws of physics that dictates motion only in the time-forward sense. A particle moving along one path in the time-forward sense appears to be moving in completely the opposite direction in the time-reverse sense. Thermodynamics (and more specifically, statistical mechanics) has taught us that this phenomenon is better modeled by looking at a complete collection of states. The only thing we can say for certain is that the box is currently in “one state” and will evolve into “another state”. This is how scientists avoid the philosophical issue of discussing time.
Now, for it to move into any other state it would have to have information about all the states of the system in the past, at a minimum. In fact, it somehow needs information about the future as well, and I’m taking it upon myself to convince you of that. In our simple example this future information includes all particle positions for each state. Everything is arranged just so that the rules of this game are not broken, which, I will reiterate, are just that all the balls have unique positions and that the actual “matter” of the particles does not overlap. Alas, it seems the box must be omniscient. The all-knowing box, if you will. Hey, maybe we could start a cult?
As a rebuttal to these sentiments, you might argue, “well, a particle can’t simply teleport to another location, right? Won’t this eliminate a lot of the possibilities for which state is next?” Sadly, this is another thing that our macroscopic experience tricks us into thinking. We have to remember that our one-meter-cubed box is divided into space by really tiny units. Some might say, “infinitely small”. However, even pure mathematicians do not view a differential calculus term as an “infinitely small space”. They take it to mean a quantity that approaches zero: you keep making it smaller and smaller, but never zero. In a sense, our space is divided by the smallest version of this quantity. Now, when it comes to the “teleporting” notion, let’s recall that the particles move at some speed. One rule of the game is that the speed is not infinite. If it were, we could deduce that we have infinite space to work with, which makes even less sense. Despite this, the speed can be very, very large. Even trying to imagine something like ten kilometers per second inside a one-meter box is nearly impossible, but in our game the speed of a particle could easily have this value. So yes, between any two states a particle could “teleport” between two seemingly far positions. And, when we account for this, it means that any particle can go to any location in the box that will ultimately be unoccupied in the next state.
In the normal thermodynamical language, we say the “system evolves until it reaches equilibrium”. This is a fancy, scientific way of saying that the motion inside the box is essentially random, but its macroscopic characteristics are not random. What’s interesting is that the system stops evolving in equilibrium, but the particles inside are still vigorously moving. I think we’re ready to try to imagine the work that nature has to do to follow the rules of this game under this condition.
Try to compute all of the values we have described. In this day and age you would write a computer program that would loop, say, a million times and create a million “particle” objects. Each particle could have a mass, radius, position, and velocity, and we could easily assign them randomly generated values. Next, we proceed to create another state, ensuring that we aren’t in conflict with our rules. Then, repeat. You will have to just trust me when I say that the number of states is absurdly large, on the order of at least ten to the thirty-five for a normal-size box.
The computer algorithm is very simple: randomly create new particle values for each state, ensuring that they do not overlap in position with other particles, and all is well. Easy, right? Not so fast. We actually also have to ensure that every collection of particles is unique. If you end up generating a set that has the exact same values as one you previously made, then that’s completely useless and you are wasting time. It does not describe a new state. In computer-programming speak, you would have to iterate over the entire set every time to ensure that you produce a “new” box of particles. That is the only guaranteed method to produce all the states, even on an infinite timeline.
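The algorithm above can be sketched in a few lines (a toy version with a coarse grid and a handful of particles, nothing like Planck resolution or a million particles):

```python
# Generate unique box states: random particle placements with no overlap,
# and reject any state the run has already produced.
import random

random.seed(0)
GRID = 50          # positions per axis (stand-in for the fine mesh)
RADIUS = 2         # particle radius, in grid units
N_PARTICLES = 5

def new_state():
    """One box state: unique, non-overlapping particle positions."""
    placed = []
    while len(placed) < N_PARTICLES:
        p = tuple(random.randrange(GRID) for _ in range(3))
        # Rule: centers must stay at least two radii apart.
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= (2 * RADIUS) ** 2
               for q in placed):
            placed.append(p)
    return frozenset(placed)

seen = set()
while len(seen) < 1000:
    seen.add(new_state())   # duplicate states are silently rejected by the set

print(len(seen))  # → 1000 distinct states
```

Even at this toy scale the duplicate check means comparing against everything seen so far; scale the grid and particle count up and the bookkeeping quickly becomes hopeless.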
The computational complexity here is absurd. Figuring out every possible, unique configuration for the positions of every particle, down to an accuracy of a Planck length, would take longer than the entire perceived age of the universe (with current computing power). If you don’t believe me, bring this set of conditions to a programmer friend you might have, and they will tell you. It can’t be done.
And yet the universe makes this absurd, complex calculation happen. The system evolves, and on our macroscopic scale we can see the neatness with which it does so. An example of this would be a coffee cup cooling to room temperature. Once the temperature stabilizes, the liquid stays at the temperature of the room, and all of the particles inside will still be moving and bouncing around randomly to maintain the average energy. How does nature prescribe the motion of each particle in the liquid? I have no idea. As discussed above, it is actually impossible to know the precise position and momentum of each particle. Well, with current technology at least (upon re-reading this I am realizing this statement sounds eerily similar to the Heisenberg uncertainty principle. I really wasn’t thinking about that at all, and this is purely a coincidence. Well, as far as I can tell).
Scientifically we have known the answer to this problem for over a hundred years. The only way we can objectively analyse this type of system is statistically, and through a field aptly named statistical mechanics. Some of the most beautiful and profound equations in science have come from this study. It’s applicable in thermodynamics, chemistry, optics and even light and radiation. I find those equations and connections to be deeply profound and I often just wish I could communicate with words what has been laid out by some of the mathematical geniuses of the past.
Thermionic Emission: A Missing Link
I first heard about thermionic emission from my boss, as he was explaining to me one of the (many) physical processes that occur at work. As he described the physical setup, in the back of my mind I couldn’t help but wonder whether this might be a missing link between electrons and thermodynamics. The technical details were interesting, no doubt, but I wanted to know in a fundamental way how the thing worked.
I later came to find out that the setup he was describing was nearly identical to that of the setup used by experimenters in the early 1920′s when thermionic emission was being rigorously investigated by scientists. Basically you have to imagine a chamber that is at very low pressure, often called vacuum pressure. This means that most of the matter is actually pumped out of the chamber. Inside the chamber there is a small device called an ion gun- and it is aptly named, as it literally fires out charged particles into the chamber. The principle of operation of this thing is called thermionic emission, and was described to me quite well by my boss on that day. A somewhat condensed description of the phenomenon looks something like this:
When the thermal energy in a metallic object reaches a certain point, some of that energy causes electrons within its physical structure to escape the surface of the material, overcoming what is called the work function. Once the electrons escape the material, they ionize gaseous particles and form an ionization current in the gas surrounding the heated metal (small caveat here: I said we were in a vacuum chamber, but the metallic rod component of the ion gun is surrounded by a tube containing a gas, such as argon). It is a fantastic case where we can clearly see the statistical (or average) motion of the electrons: they start in the metal, and afterward they have moved out through the gas (as ionized gas particles). Moreover, their momentum even gets carried out of the metal and into the gas!
After diving into the theory on this subject I have now found (what I consider to be) one of the most well-written theoretical physics books. Owen Richardson was a British physicist who won the Nobel Prize in Physics for his work on thermionic emission, and the author of “The Emission of Electricity from Hot Bodies”. A very talented experimenter, writer, and communicator, he does an incredible job of binding together many theories that existed at the time to try and explain this phenomenon.
Understand that in the early 1900′s it still wasn’t entirely decided if both positive and negative electricity “existed” and if the internal structure of matter contained actual electrons or not. At the end of the 1800′s many scientists were solely convinced of the wave nature of light, thanks largely to Maxwell’s equations. Perhaps as shocking as electrons being viewed as particles, this book by Richardson was a revelation for me. The first chapter is dedicated to describing in detail the experimental setup as well as the findings of many researchers in the field; Richardson spares no detail in explaining these points. Eventually he gets to the end of chapter one and describes the “Electron Theory”, shortly after which he writes on “Thermodynamical Considerations”. He makes many assertions in his book, and I want to talk about a few here.
Firstly, he firmly solidifies the point of view that electrons are the mobile charge carriers. Said another way: protons and neutrons are stationary in solids, and electrons have the ability to move through a material and “carry” charge with them. In addition, many of the experimental issues that arose in the early 1900′s can be resolved by “accepting” the electron as a fundamental particle and following the line of logic that comes with it. In doing so, he provides a platform for solid-state physics, semiconductors, and even some of quantum physics. I realize that this is a bold statement, but I think it is well founded. This problem of thermionic emission sits on the very boundary between the odd motion of electrons and how it contributes to the energy changes of the overall system.
One of the most beautiful things about the derivation of Richardson's Law (the law which describes thermionic emission) is that it is entirely based on thermodynamics. In analyzing a closed system at fixed temperature, one can use the 2nd law of thermodynamics to calculate the energy and entropy changes that are associated with these ionization currents and what kind of work they do on the surrounding environment. He treats electrons statistically (in the thermodynamic sense) as particles with mass, and when you do so, you can derive the steady-state equation for the electrons which are emitted from the hot body.
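The resulting steady-state equation is Richardson’s law, J = A·T²·exp(−W/(k_B·T)). A hedged sketch of evaluating it (the work function value is a typical figure for tungsten, not something taken from Richardson’s book):

```python
# Richardson's law for thermionic emission current density.
import math

A0 = 1.20173e6      # theoretical Richardson constant, A m^-2 K^-2
k_B = 8.617333e-5   # Boltzmann constant, eV/K
W = 4.5             # work function, eV (approximate value for tungsten)

def current_density(T: float) -> float:
    """Emission current density in A/m^2 at absolute temperature T (K)."""
    return A0 * T * T * math.exp(-W / (k_B * T))

# Emission turns on very steeply with temperature:
for T in (1500, 2000, 2500):
    print(f"T = {T} K -> J = {current_density(T):.3e} A/m^2")
```

The exponential dependence on W/(k_B·T) is the thermodynamic heart of it: only the high-energy tail of the electron distribution can pay the work-function cost.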
The fact that the derived equation comes out in the correct form and matches experiment is a verification of the following points (and certainly more besides):
1) On our macroscopic scale, it is completely accurate to understand electrons as particles, just as much as an atom of hydrogen is a particle
2) From this macroscopic/statistical vantage point, electrons have properties such as mass and charge, and follow all of the normal field laws
3) This problem is on the border of statistical and quantum mechanics: within it, we can conclude that statistical mechanics is a limiting case of quantum mechanics when we study something on a certain scale
At first glance it may appear that these notions reject quantum ideas. If all this thermionics stuff seems lackluster, that’s just because it didn’t *disprove* anything. The problem captures the essence of thermodynamics and what it is capable of describing- even the statistical action of electrons en masse.
Another thought that I am tacking on at the end here: for me, this is another amazing example of a scientific problem where we have bridged two fields. This problem provides us a link between statistical mechanics and quantum mechanics. We often like to describe one as the limiting case of the other. From a more philosophical standpoint, I'd like to say that reality is all the same thing. We just have particular words, phrases, and boxes to describe the reality we observe. I really do believe that there is a set of mechanics that describes all of these phenomena, and a set of general formulas that can describe all of these conditions regardless of scale.
Why does the boundary of statistical and quantum mechanics depend on what is considered “macroscopic”? How is it that “macroscopic” provides this set point to use a different set of theories, as opposed to the quantum ones? Maybe asked a better way... on the scale of stars, do we insignificant humans just operate on the laws of quantum physics? Now that’s a hard ticket to sell because of causality, the smoothness to which we perceive things, and so on...
journalep-blog · 7 years ago
Thermodynamics and the Counting Problem
As an introduction to what is to come in this post, I'd like to provide a bit of background on what exactly I am writing about. When deriving the equations of thermodynamics I always assume a closed system. This typically involves a theoretical closed box with gaseous particles inside. For simplicity, we can imagine that they are all the same, say hydrogen gas molecules. Each particle has a position and a momentum and carries on in the typical Newtonian fashion, moving and colliding with other particles inside the box. Another assumption is that the box is in a heat bath, meaning that it can instantaneously transfer energy in and out of the system at the boundary, at will.
Following Josiah Willard Gibbs's method for figuring out the various energy distributions as a function of state, an important concept is the density of states. The density of states, or density function, is a theoretical measure of how "many" system states can fit into a unit of phase volume. More and more definitions seem to be needed!
A singular system state is a unique configuration of all the p's and q's of the system particles (p = momentum, q = position). Imagine them as a paired coordinate system: if there are 5 particles, for example, one system state could be represented as 5 pairs of p's and q's:
(0, 1) (0.2, 2) (2, 2) (2.5, 1) (4.4, 0)
The above represents a unique configuration, with ordering being important. So simply switching one pair of p and q values changes the state entirely. A phase volume is a theoretical volume measurement that uses all of the changes in the p’s and q’s to comprise it:
Δp1 Δq1 Δp2 Δq2 ...
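To make this bookkeeping concrete, here's a small sketch of my own (not Gibbs's notation) of a five-particle state and a phase-volume cell:

```python
# A sketch of the bookkeeping above (my own construction, not Gibbs's
# notation): a system state is an ordered list of (p, q) pairs, and a
# phase-volume element is the product Δp1·Δq1·Δp2·Δq2·...

state = [(0.0, 1.0), (0.2, 2.0), (2.0, 2.0), (2.5, 1.0), (4.4, 0.0)]

def phase_volume(deltas):
    """Product of the Δp and Δq widths defining a cell in phase space."""
    vol = 1.0
    for dp, dq in deltas:
        vol *= dp * dq
    return vol

# Ordering matters: swapping two (p, q) pairs yields a *different*
# state even though the collection of values is unchanged.
swapped = [state[1], state[0]] + state[2:]
cell = phase_volume([(0.1, 0.1)] * 5)  # a tiny cell for 5 particles
```

The swap illustrates the point made above: exchanging one pair of p and q values changes the state entirely, even though the underlying numbers are the same.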
The purpose of this post is not to dive into the density function, but rather to explore the question, "is the number of states countably or uncountably infinite?". It's a question with fairly serious ramifications for the development of thermodynamics. It is an a priori assumption that the number of states is countably infinite. Questioning this assumption may render the thermodynamic limit false when considering a situation on the quantum level.
The Counting Problem
Does the counting problem imply the quantization of energy? If we have a real system of n particles, each with p, q values, we can apply the following bounds in a very straightforward manner,
0 < q_i < L
0 < ϵ < ϵ″, where ϵ″ is finite and ϵ = f(p's and q's)
We effectively want to give every state a unique name; the easiest way to do this is by numbering. Decimal labels would imply a hierarchy, but integer labels are unique. A state is composed of n (p, q) coordinates. Since p and q are continuous, there are infinitely many states. However, there must be more than countably many of them, because the p's and q's are uncountably infinite. Another realization is that a specific energy may be associated with multiple states. This implies a counting problem: the number of states is countably infinite (since they are labelled by integers), yet the number of energies is uncountably infinite.
However, if the system energy band is divided into discrete chunks, then the list is not infinite. To make it infinite once again, the upper bound must be removed.
If there is degeneracy and we combine like-energied states, once again we have the issue of there being more energies than states. However, there is no mapping problem anymore. Once degeneracy is accounted for, a state and the energy associated with it are synonymous. Now we can clearly see that the system energy is "more" countably infinite than the states are, due to the amalgamation of like-energied states.
Some statements:
- The largest amount by which q can change with state = L
- The smallest change in momentum is not equal to zero in this construction; therefore momentum is discretized
- How one state differs from another is only by a measure of time, when you really think about what constitutes a property of state
If we allow the infinitesimals of p, q and energy to be the same as their respective quantization intervals, then we may alternatively define momentum:
p = m · dq/dS_j
The denominator infinitesimal represents the “change in state that is smaller than anything measurable”. The smallest change of state that is possible is equal to one. This is by virtue of states being given integer labels.
Closing a Chapter
I’m not entirely sure, but I may be closing a chapter in my life of physics studies. My goal in studying thermodynamics was to find the “gap” in theory that would lead me to the quantum physical understanding of reality. I feel that I have personally found a gap. The gap in logic is revealed in what I call the counting problem.
For a real, bounded and continuous system energy, there are infinitely many energies that the system can have. This number is uncountable (and infinite). Since states are labelled with integers, the states must be countable. The first issue is that, degeneracy aside, there is an incompatibility in the pairing of a state and an energy. There are not enough states to represent all the possible energies; they cannot all be accurately labelled. Given that many states may have the same energy (due to p, q ordering), this worsens the problem.
The second issue is that we should not be able to apply index integers to a list whose elements themselves are un-countably infinite (states being the lists, p’s and q’s being the elements). The initial assumption in thermodynamics is that the p’s and q’s are continuous variables. So even when they are perfectly well bounded, there are an uncountable number of possible p and q values.
Summing over N total states therefore makes no sense (there is no way that the sum, over an integer index, could encapsulate every state).
A simple but effective resolution to this problem is to quantize the system energy and remove its upper bound. This makes the list of energies infinite and countable. As a result, we find that p and q are also quantized to varying degrees. With n real particles in a spatially bounded system, we find that the momentum is quantized but unbounded (both positively and negatively). It follows that the states are infinite and indeed countable. Under this construction, the summation is once again valid.
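As a hedged illustration of how such a quantization restores countability, consider the textbook particle-in-a-box result (borrowed here as an assumption, not derived in this post): in a box of width L, momentum comes in discrete steps p_n = n·h/(2L), so a single integer n enumerates every state.

```python
# Hedged illustration using the standard particle-in-a-box result
# (borrowed as an assumption, not derived in this post): in a box of
# width L, momentum comes in discrete steps p_n = n*h/(2L). A single
# integer n then enumerates the states -- countable, but unbounded above.

H = 6.62607015e-34  # Planck constant, J s
L_BOX = 1e-9        # box width, m (arbitrary illustrative value)
M_E = 9.109e-31     # electron mass, kg

def momentum(n):
    """Quantized momentum magnitude of the n-th state."""
    return n * H / (2 * L_BOX)

def energy(n):
    """Kinetic energy of the n-th state."""
    return momentum(n) ** 2 / (2 * M_E)

# Integer labels walk through every state in order; a sum over an
# integer index now genuinely covers them all.
first_levels = [energy(n) for n in range(1, 6)]
```

Spatial bounding plus quantized momentum gives exactly the situation described above: an infinite but countable list of states, indexed by an integer.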
A mapping can be made clearly between the probability of state j and the energy of state j. More simply, an energy can be directly associated with a state number, j. Many energies and probabilities will be identical due to degeneracy (p, q ordering), yielding several states that are effectively the same. Probability of state and energy of state do not care about the ordering.
In my definition, an infinitesimal is simply something smaller than anything measurable. This is the final nail in the coffin: this number is not zero, but it follows several arithmetic rules similar to zero because we have no way of quantifying it. The quantization of energy, position and momentum, on the most fundamental level, must be on a scale smaller than anything we can measure. I say this a bit brashly, as it is quite a bit of theoretical conjecture, but mostly for my own psychological ease. What I ask myself is this: do the rules of algebra apply to a non-zero number that is smaller than anything measurable? Personally, I don't think so.
This yields a correct formulation of the mathematics involving these quantized values, better known as calculus. This moves the question in a new (and more precise) direction: what are the orders of magnitude of the infinitesimals of p, q and energy? And this, I feel, is a direct segue into quantum mechanics.
journalep-blog · 8 years ago
The Fundamental-ness of Gravity
One of the most common questions I ask myself in regard to the study of physics is, "which law is more fundamental?". In the case of gravity, I find this question to nearly always be on my mind. Skipping past some of the historical progressions on gravity, we land on the notion that the objects in our solar system follow elliptical paths around our sun, with the sun at one of the foci. This is the observed pattern, but it provides no explanation as to "why" it happens. Kepler was the first guy to sort out the how, and not the why.
Through a now well understood line of reasoning, Newton essentially asked: if an apple falls when we drop it near the surface of the earth, does the moon also fall? The answer of course is yes; the moon is "falling" toward earth constantly, trapped in an elliptical orbit. This was one of the most significant discoveries in modern science, and it can be most simply stated as the inverse square law. It simply means that two massive objects will attract each other with a force that is proportional to the inverse of the square of the distance between them. Yup, it's a mouthful. It's very easy to communicate with mathematics; slightly more difficult with words.
Another way of stating this law is that if two objects are, say, a distance X apart, then if you move them to a distance of 2X apart, the force of attraction between them is only 25% of the first case (rather than 50%, which is more intuitive). The amazing simplicity of this statement (the inverse square law) is geometrically equivalent to objects travelling in ellipses. To be more precise, the orbits are conic sections: ellipses for bound objects, parabolas and hyperbolas otherwise. Simple mathematics and geometry can be used to prove this.
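The doubling claim is easy to check numerically. A toy sketch (the masses are arbitrary illustrative values, nothing physical):

```python
# A toy numeric check of the claim above (arbitrary illustrative
# masses, nothing physical): doubling the separation cuts an
# inverse-square attraction to a quarter, and tripling it to a ninth.

G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def inverse_square_force(m1, m2, r):
    """Magnitude of the Newtonian attraction between two point masses."""
    return G * m1 * m2 / r ** 2

f_x = inverse_square_force(5.0, 5.0, 1.0)
f_2x = inverse_square_force(5.0, 5.0, 2.0)
f_3x = inverse_square_force(5.0, 5.0, 3.0)
# f_2x / f_x is 0.25 -- not the "more intuitive" 0.5.
```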
However, one of the major issues of this law is that it is "non-local". If object A and object B have an attractive force between them, each object needs to know exactly how far it is from the other to determine this attractive force. Since force explains how objects move in time (how their velocity changes), the objects need to somehow "know" the location of another object at each instant in time to determine the magnitude of the force that is moving itself.
This is a very tricky concept. How can all the objects in space "know" where every other object is in space, relative to themselves? This is called action at a distance, and it is very troublesome when you start thinking about the rate at which information can travel. As a brief segue: it had been known since the late 1600s, from Rømer's timing of Jupiter's moons, that light does not travel instantaneously (that it has a finite speed), and the value of its speed was even known. Using a "schedule" for the motions of the planets based on Newton's laws, a consistent discrepancy was noticed that was outside the margin of error for such equations. For example, one of Jupiter's moons could be seen because of the shadow it cast while travelling in front of the surface of Jupiter. Depending on whether Jupiter was on the far side of the sun or the near side in its orbit (relative to earth), this schedule of motion would either be ahead of time or behind time by a very predictable value.
The only logical conclusion is that it simply takes light longer (when Jupiter is on the far side of the sun) to leave the surface of the sun, reflect off of Jupiter, have one of its moons cast a shadow, and then ultimately reach the surface of the earth where we observe it. As it turns out, applying the inverse square law in conjunction with this idea is very, very troublesome. An object would have to know about the positions of massive objects that are potentially light-years away, and supposedly have an instantaneous reaction to the forces exerted on it by those masses. If light traveled instantaneously, this would not be a problem: the instantaneous force on an object could always be known. But since we know that light has a finite speed, the problem becomes nearly impossible. And here we are going to make another segue, into electricity.
One of the most interesting things in physics to study (in my opinion) is Gauss' Law. It's a law that is studied in the field of electrostatics, and it is based on the field theory of electricity. The field theory is quite simple: for a positively charged object, field lines "leave" the surface of that object and point outward. For a negatively charged object, the field lines "come into" the surface of the object. The field strength is proportional to the inverse of the square of the distance between the source of the field and any point in space. Positively charged objects are pushed in the direction of the field lines, while negative charges move in the opposite direction. The rules are very simple.
In this way, the electric field is defined for all points in space, irrespective of what particles are actually out there. The field then becomes the object of interest as opposed to the two particles that are either attracting or repelling each other. Gauss' Law takes this observation one step further and says that if you perform a closed surface integral around an electric field source then the result is equivalent to the charge inside that enclosed space multiplied by a constant. It's quite simple to prove and even first year physics students would come across it in an introductory course on electrostatics.
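A minimal numeric sketch of that claim (a standard textbook check with real SI constants, not tied to any particular derivation): the flux of a point charge's field through a centred sphere comes out to q/ε0 regardless of the sphere's radius.

```python
import math

# A minimal numeric sketch of Gauss's law for a point charge (standard
# textbook setup with real SI constants): the flux through a sphere
# centred on the charge is q/ε0 no matter how big the sphere is.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def flux_through_sphere(q, radius):
    """Surface integral of E over a sphere centred on the charge."""
    e_field = q / (4 * math.pi * EPS0 * radius ** 2)  # Coulomb field
    return e_field * (4 * math.pi * radius ** 2)      # field is uniform on the sphere

q = 1.602e-19  # one elementary charge, C
flux_small = flux_through_sphere(q, 0.01)   # 1 cm sphere
flux_large = flux_through_sphere(q, 100.0)  # 100 m sphere
# Both equal q/EPS0: the enclosure's size drops out entirely.
```

The 1/r² falloff of the field exactly cancels the r² growth of the surface area, which is why only the enclosed charge matters.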
Although the statement and the math are simple, this reformulation has entirely changed the problem. The field itself is completely defined only by the shape of the enclosure and the charge held within (and a constant, I suppose, but it doesn't change with time). A further definition in this field theory is that you can easily obtain the force on any particle in the field by simply multiplying the particle's charge by the value of the field at that point.
All of a sudden the problem is localized, and what I mean by that is that we no longer need to know the distance between any two particles, just the value of the field at a single point. The interesting observation is that this exact same line of reasoning can be used in the case of gravity, with one exception: in the case of gravity (for some unknown reason), there are only "negatively charged masses". This means that field lines always point in towards massive objects. The localization of this problem is in fact what allowed Einstein to generalize the theory of gravity with the notion that light has a finite speed. That part is fascinating all on its own; however, it is not primarily what I am concerned with here.
I am now thinking about the fact that there are two perfectly equivalent (mathematical) ways to describe gravity. Both of them indeed produce the elliptical orbits of the celestial bodies that we may observe with a telescope. However, the question now is: which one is more fundamental? Is it the inverse square law? Or is it the field theory?
Both have implications that, in my opinion, are quite substantial. The inverse square law is certainly easily describable, and even now is the standard way to teach kinematics in high school (the variation with distance between the earth and any falling objects near its surface is so small that we can take the force on the object to be constant). A perfectly equivalent way of describing this is that the field varies by a very small amount over the distances we study in kinematics; therefore the changes in the field can be neglected (constant field). Edit: After re-reading I realize I have this backwards. Essentially, F = ma is the "field" version of gravity, using an acceleration "field", which is what they teach in high school.
Abandoning the inverse square law for a moment, a question comes to mind: what actually is a field? Luckily there are perfectly reasonable examples from reality that we can pull from. A field could be a literal field: if you imagine a grassy field that's got all sorts of humps and valleys in it, then we could write down the height of the field with respect to sea level (for instance) at each point in the plane. Then the value of the field would simply represent the height of the field above sea level at each point.
There is a seamless transition from this example to wave theory. Applying the same logic to water waves, we can represent the height of a water wave using a field, and we also notice that it varies in time in a very predictable fashion, given certain boundary conditions. For instance, if you imagine a rectangular swimming pool, you'll notice that the waves (roughly speaking) have a closed boundary at the edges. The wave height won't be perfectly "zero" at the edges, but there will be some tendency for the water waves to reflect there. This being the case, we can mathematically describe the water "field" using wave theory quite accurately.
The crests and troughs of the wave at every point (the value of the field) will vary sinusoidally with time. Due to surface tension, each point on the wave is linked to the next as well. This creates very uniform motion with respect to both position and time. The reason this example is so significant is that light (which Maxwell determined in the 1800s is an electromagnetic wave) is subject to these exact same rules.
If you allow an electromagnetic wave to enter a box that has perfectly reflective walls, then standing waves will form inside, much like our water waves in the pool. The only caveat is that the electric field varies in all directions, as well as in time. This is impossible to visualize accurately, because the field value is not simply the z-coordinate (spatially) of the wave. However, the fluctuation of the wave at each point can be measured, as described earlier, simply by dividing the force that a particle (an electron) experiences by its charge.
There are many issues with this example, which are basically the exploration of quantum physics and the black-body problem; however, it illustrates the closely connected nature of field theory and wave theory.
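As a hedged sketch of where those quantum issues come from, the standing-wave modes of a perfectly reflecting cubic box can be counted by brute force. The frequency formula is the standard cavity result; the box size is an arbitrary assumption of mine:

```python
import math

# A brute-force sketch of cavity-mode counting (the frequency formula
# is the standard result for a perfectly reflecting cubic box; the box
# size is an arbitrary assumption): allowed standing waves have
#   nu = (c / 2L) * sqrt(nx^2 + ny^2 + nz^2)
# for positive integers nx, ny, nz.

C = 2.998e8  # speed of light, m/s
L = 0.01     # box side, m (1 cm, illustrative)

def count_modes_below(nu_max, n_search=25):
    """Count (nx, ny, nz) triples whose mode frequency is below nu_max."""
    base = C / (2 * L)
    count = 0
    for nx in range(1, n_search):
        for ny in range(1, n_search):
            for nz in range(1, n_search):
                if base * math.sqrt(nx**2 + ny**2 + nz**2) < nu_max:
                    count += 1
    return count

base = C / (2 * L)
low = count_modes_below(10 * base)
high = count_modes_below(20 * base)
# The count grows roughly like nu^3 -- the seed of the ultraviolet
# catastrophe, since classically each mode carries energy k_B * T.
```

Doubling the frequency cutoff multiplies the mode count by roughly eight, and that runaway growth is exactly what the black-body problem is about.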
Bringing the story back to gravity, we are now faced with somewhat of a philosophical problem: if it is indeed true that the field theory of gravity is more fundamental, then what relation does it have to wave theory? Recent gravitational-wave detections strongly suggest that the relation is real. In this sense we may view the wave front of the "gravitational field" in the exact same sense that we view the wave front of an electromagnetic wave: both can only carry information at the speed of light.
With ever-mounting evidence that the field theory of gravity is more fundamental than the inverse square law, what does this mean? How does it guide future gravitational physics? These are all questions I ask myself. Of course, I don't have any of the answers, but I think about them nonetheless.
journalep-blog · 8 years ago
How a Lecture and a few Pints Changed my Life
For a bit of insight here, this blog is going to be a place where I dump some of my ponderings and writings on physics and the philosophy of nature. The idea sort of popped into my head and so... well here we are. But before I do that I feel like a bit of background is needed to understand what my viewpoint is here.
Two or three years ago, I had just finished studying Maxwell's equations in about the fourth year of my engineering degree. I was sort of happy with physics at this point in my life. 'These beautiful equations that tie together electricity and magnetism must fundamentally explain reality', I would tell myself. Actually, I wasn't even quite finished with the course when I randomly attended a lecture on quantum physics given at my university. It wasn't a class; it was a nighttime lecture given specifically on some of the recent findings by Dr. somebody or other... I could probably go back and figure out who. I attended the lecture and was completely confounded. The lecturer had us all convinced that quantum physics was simultaneously this mysterious field all about wave functions and uncertainty and probability, and yet the most accurate physical theory in existence. I remember that he was really into supersymmetry. At the end I was even brave enough to take the microphone and ask a question. I don't remember exactly what it was, but the gist was basically, 'there's no way you can do anything with this information'. He responded by pulling up an experiment that showed a measurement down to an order of magnitude of ten to the minus twenty. I shut up pretty fast at that point.
After the lecture, a few of my engineering friends and I went to the grad bar to discuss things. After a couple pints I found myself in an intense argument with one of my computer engineering colleagues. The details of the argument are a bit blurry, but basically he was suggesting that quantum computing will happen and is just the next logical progression of technology, while I was arguing that it shouldn't happen; that we are apparently toying around with reality on the smallest scale. We agreed to disagree.
Shortly thereafter I committed myself to studying this new-fangled quantum physics. I told myself I would learn it and understand what all the racket was about. It didn't take long before I found myself completely overwhelmed by the philosophy behind it, and underwhelmed by the mathematics of it. When you go out and ask for an instruction manual on quantum physics, you learn about discrete states, discontinuous math and combinatorics. What they don't tell you is that it is really much more complicated than that. I remember being frustrated.
Quite a bit of research later (let's say a few months), I found a problem that fit my explanation of the world through electromagnetics: the black-body radiation problem, also known as the ultraviolet catastrophe. Jumping into things head first is pretty much the only way I know how to do things, so that's what I did. I read everything I could find on it. Soon my understanding of the problem through the lens of electromagnetics proved very unsatisfactory, and I found myself once again puzzled by a mess of ideas that I knew nothing about. Fast-forward through a year and a half of struggling with the problem, and I'm finally able to derive the equations, beginning to end, diagrams, math and all. The result (which is not news, by the way; it was figured out in the early 1900s by some very smart people like Einstein and Planck) startled me. To put it in a nutshell: the only solution to this supposed catastrophe is to assume that the energy of a system can only change by a discrete non-zero amount. When you do this, the result comes out beautifully accurate, but basically tells calculus and smooth functions to go f*ck themselves.
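Here's a hedged numeric sketch of that catastrophe and its resolution, using the standard Rayleigh–Jeans and Planck spectral energy densities (the 5000 K temperature is an arbitrary illustrative choice):

```python
import math

# A hedged sketch of the catastrophe (standard Rayleigh-Jeans and
# Planck spectral energy densities; the 5000 K temperature is an
# arbitrary illustrative choice, not a value from the text).

H = 6.62607015e-34  # Planck constant, J s
K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.998e8         # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral energy density: grows without bound in nu."""
    return 8 * math.pi * nu ** 2 * K_B * T / C ** 3

def planck(nu, T):
    """Quantized version: agrees at low nu, rolls over at high nu."""
    return (8 * math.pi * H * nu ** 3 / C ** 3) / math.expm1(H * nu / (K_B * T))

T = 5000.0
agree = planck(1e12, T) / rayleigh_jeans(1e12, T)    # ~1 at low frequency
diverge = planck(1e16, T) / rayleigh_jeans(1e16, T)  # ~0 at high frequency
```

The two formulas match at low frequency, but the classical one keeps climbing forever while Planck's quantized version collapses to essentially zero: exactly the "discrete non-zero amount" fix described above.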
There were all kinds of phrases inherent in the solution of this problem that threw me off whenever I read about it. Writers would talk about "probability distributions" and "canonical ensembles", and on top of that the Boltzmann constant and temperature were part of the solution (I had known about these from my education in science, but very little else). Why did temperature come into play? This theoretical box just has E-M waves bouncing around inside? Having no answer, I ended up travelling down the road of thermodynamics.
I quickly found out that scientists had known the laws of thermodynamics since the 1700s (maybe earlier?) purely through empirical methods. In other words, we observed the phenomena and made very clever mathematical deductions from said observations. What I began to learn is that around the turn of the 20th century, physicists were asking why these laws are the way they are, not simply noting that they are. Enter: statistical mechanics, and a new field for me to study.
I started, again, reading everything I could on it until I drilled down to the most fundamental source (which Einstein deferred to, having basically said his own earlier thoughts on the subject weren't quite right). This source was the book Gibbs produced in 1902, and it constitutes over another year of my life of studying. This studying was hard; it required a level of rigor that I had never experienced. It is intensely mathematical (and unfortunately very dry), but it provides a clear derivation of an idea that bridges the gap between our understanding of particle-like phenomena and the world that we experience on the macroscopic scale. As in the case of the black-body problem, the equations come out beautifully and give theoretical explanations for things like the ideal gas law and the first law of thermodynamics. It even tells us something that may or may not be intuitive: that temperature is fundamentally the average energy per particle (multiplied by some stuff).
One engineering degree later, and about a million more questions, here we are. I am still nowhere near "comprehending" quantum physics. I do not know why the laws are the way they are, but I have realized that after filling notebooks (and notebooks and notebooks...) with physics, I have made some progress and had personal revelations along the way.
At this point I think it's an appropriate disclaimer to say that this blog is not entirely scientific, and if you are now shaking your head because you think it should be, you’re in the wrong place. As I said before, this is essentially going to be the digitization of things that were otherwise going to stay hidden in journals.