>The Dirac equation can be therefore interpreted as a purely geometric equation, where the mc^2 term directly relates to spacetime metric. There is no need to involve any hypothetical Higgs field to explain the particle mass term.
What happens to the Higgs field excitation and the Higgs boson, given the experiments confirming their existence? If this paper explains phenomena more effectively, does it require us to reinterpret these findings?
Any good theory probably needs to explain or reinterpret past phenomena that have already been proven by experiments under the previous paradigm.
Part of what made Einstein's theories so good is that they reinterpreted past theories while providing new explanations that fixed major unknowns in the science of the time. Both for GR and QFT.
Am I missing something, or is the whole point of gauge theory (connections on a principal bundle) that this is true? U(1) gauge theory already gets you electromagnetism as a purely geometric result, right?
Yes, but the "geometry" in question is not the geometry of spacetime, it's the geometry of spacetime plus an abstract space that's sort of "attached" to spacetime. (In the original Kaluza-Klein viewpoint, it was viewed as an extra 5th spacetime dimension, basically a circle at every point of spacetime.)
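For concreteness (standard textbook material, nothing specific to this paper): that U(1) "geometry" shows up through the covariant derivative acting on charged fields, with the gauge potential A_\mu playing the role of a connection on the attached space and the electromagnetic field strength appearing as its curvature:

    D_\mu \psi = (\partial_\mu - i e A_\mu) \psi
    [D_\mu, D_\nu] \psi = -i e F_{\mu\nu} \psi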
What this paper appears to be doing (although I can't make complete sense of it) is to somehow derive Maxwell's Equations (or more precisely a nonlinear generalization of them--which seems to me to mean that they aren't actually deriving electromagnetism, but let that go) as a property of the geometry of spacetime alone, without any abstract spaces or extra dimensions or anything of that sort.
Why would nonlinear generalizations be an issue? Wouldn't adding some constraints get us Maxwell's Equations? It seems significant that this can be done at all, even if it's not complete, but maybe I'm missing something. It reminds me of Einstein's original geometrization, possibly even a breakthrough if it turns out to have uses in further development of theory.
You're right that he is just rederiving electromagnetism through local U(1) gauge symmetry. He defines his metric as g_{\mu\nu} = A_\mu A_\nu, a gauge-dependent metric that gives you Maxwell's equations in the covariant formulation once you identify the gauge field A_\mu with the vector potential. Sprinkling geometric algebra in gives a feel of novelty, but these results are at least one hundred years old.
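For reference, the covariant formulation being alluded to (standard material, not this paper's construction): once you define the field strength from the potential, Maxwell's equations split into an inhomogeneous pair (Gauss + Ampère) and an identity (no monopoles + Faraday):

    F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
    \partial_\mu F^{\mu\nu} = \mu_0 J^\nu
    \partial_\alpha F_{\mu\nu} + \partial_\mu F_{\nu\alpha} + \partial_\nu F_{\alpha\mu} = 0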
Purely geometric, except I suppose you still need Coulomb's law and relativity. Both of which can be easily put in a geometric framework.
The rest is just how magnetism emerges from this, and Einstein already figured it out. This guy explains it pretty well in layman's terms: https://www.youtube.com/watch?v=sDlZ-aY9GN4
This is upstream of that: geometric in the sense that it doesn't involve gluing an additional U(1) field to spacetime at every point (from which Coulomb's law etc. emerge).
Related question:
What resources are there that might teach one about Maxwell's equations and the electromagnetic field tensor arising from relativity? The magnetic field is a description of the electric field with relativistic effects. Is there a way of describing electromagnetism without the magnetic field?
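For what it's worth, the standard special-relativity statement that makes "magnetism is the electric field plus relativity" precise (textbook material, not from the paper): under a boost with velocity v, the field components along the boost are unchanged while the transverse ones mix,

    E'_\parallel = E_\parallel, \qquad B'_\parallel = B_\parallel
    E'_\perp = \gamma (E + v \times B)_\perp
    B'_\perp = \gamma (B - \tfrac{1}{c^2} v \times E)_\perp

so a purely electric field in one frame shows up as a mix of E and B in another. You cannot transform B away globally for generic fields, though: the invariants E \cdot B and E^2 - c^2 B^2 limit what a frame change can do, which is why the frame-independent object is the tensor F_{\mu\nu} rather than E alone.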
Forgive my ignorance, but isn't this proven to be a dead end? There is the Kaluza-Klein theory that proposes EM as a fifth dimension, which has been ruled out, and Einstein spent a large part of his later years trying to integrate EM into the GR geometric framework, with no success, mainly because he didn't know about the strong and weak nuclear forces as the other two fundamental forces besides EM and gravity.
"As the electrodynamic force, i.e. the Lorentz force can be related directly to the metrical structure of spacetime, it directly leads to the explanation of the Zitterbewegung phenomenon and quantum mechanical waves as well."
Cool, because traditional QM wave-function waves are not electromagnetic waves, even though they seem to be the same thing in a double-slit experiment.
For people wondering what "geometric" means here, they say: "the electromagnetic field should be derived purely and solely from the properties of the metric tensor".
I'm not sure if that's exactly it.
Question: Is there any relationship between this and Axiomatic Thermodynamics? I recall that also uses differential geometry.
AFAICT the idea is that there are no "fields" or "forces" acting "in space", but the space itself bends just so that the normal mechanical motion through it looks the way the electromagnetic phenomena look. That is, the same deal as with gravity in GR.
But it can't be quite "the same deal", because gravity obeys the equivalence principle, and electromagnetism does not. (Nor do the other known fundamental interactions.) The paper does not appear to address this at all.
Not "same" as in unifying EM and GR, but rather "same" in that both can be described as geometric regimes in spacetime (though GR is metric-compatible, while EM in this formulation requires a relaxation to semi-metricity).
From the conclusion:
>Charge is therefore to be understood as a local compression of the metric in the spacetime, which relates to longitudinal waves as described in [12]. This provides some aesthetical features into the model, as electromagnetism seems to be orthogonal to gravity in the sense that current theory of gravity is a theory based on metric compatible connections.
From what I can see, this is just a particularly obfuscatory way of saying the same thing I said in response to philipov upthread, where I described how there are different tensors in GR to model gravity and EM. It's not in any way a theory that derives EM from properties of the metric tensor. It adds additional degrees of freedom that aren't in the metric tensor, and then tries to obfuscate what it's doing.
The classic GR line is "the stress-energy tensor tells spacetime (i.e. the metric tensor) how to bend and spacetime tells the stress-energy tensor how to move".
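For reference, the equation behind that slogan:

    G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

The left-hand side is built entirely from the metric and its derivatives; the right-hand side is everything else.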
Okay, so this is another attempt to unify quantum field theory and gravity. By using gravity to get quantum fields, rather than by trying to quantize gravity.
If the paper is attempting to express electromagnetism in terms of the metric tensor, then it is putting it into a form that makes it potentially compatible with gravity, which is also a metric tensor. Quantum theories use a completely different type of math, and trying to express gravity in that way (quantizing gravity) results in a bunch of broken equations. If both systems can be described using differential geometry, that is a step in the direction of unifying the theories, even if it's not a hole-in-one.
> If the paper is attempting to express electromagnetism in terms of the metric tensor, then it is putting it into a form that makes it potentially compatible with gravity, which is also a metric tensor.
But the metric tensor of spacetime, in General Relativity, which is our best current classical theory of gravity, only explains gravity. Gravity, by itself, uses up all of the degrees of freedom in the metric tensor. There aren't any left for electromagnetism or anything else.
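A quick count (standard material): the symmetric 4x4 metric g_{\mu\nu} has 10 independent components, of which 4 can be removed by coordinate (diffeomorphism) freedom and 4 more are fixed by constraints in the field equations:

    10 - 4 - 4 = 2

That leaves exactly the two propagating polarizations of gravitational waves. The photon field A_\mu carries its own two physical polarizations, and those degrees of freedom are simply not present in the metric.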
To get classical electromagnetism, you need to add another, different tensor to your model--a stress-energy tensor with the appropriate properties to describe an electromagnetic field. Of course doing this in standard GR is straightforward and is discussed in GR textbooks; but it does not involve somehow extracting electromagnetism from the metric tensor. It involves describing electromagnetism with the stress-energy tensor, i.e., with different degrees of freedom from the ones that describe gravity. (And if you want to describe the sources of the field, you need to add even more degrees of freedom to the stress-energy tensor to describe charged matter.)
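For reference, the standard electromagnetic stress-energy tensor that plays this source role in the Einstein equations:

    T^{\mu\nu} = \frac{1}{\mu_0} \left( F^{\mu\alpha} F^{\nu}{}_{\alpha} - \tfrac{1}{4} g^{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right)

Note that it is built from the field strength F_{\mu\nu}, a separate field living on spacetime, not from the metric itself.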
The paper does not, as far as I can see, address this issue; the authors don't even appear to be aware that it is an issue. Which makes me extremely skeptical of the paper's entire approach.
The most irritating kind of junior devs to work with are the ones who refactor code into abstraction oblivion that nobody can decipher in the name of code deduplication or some other contrived metric.
That phenotype is well-represented in mathematical physics.
> Mathematicians and computer programmers use abstraction to opposite ends
I claim to be qualified in both disciplines. With this background, I disagree.
If you are very certain about what you want to model, abstractions are often very useful to shed light on "what really happens in the system" (in mathematics and computer science, but also in physics).
The problem with applying abstractions in computer programs (in this way) lies somewhere else: in business, users/customers are often very "volatile" about what they want from the computer program, instead of pondering deeply about that question (even though doing so would be a very good idea). Thus (certain kinds of) abstractions in computer code make it much harder to adjust the program if new, very different requirements come up.
In math (I am not a mathematician), abstractions are a base to build on. You define some concept (e.g. a group, a set, whatever), then you prove things about it, building ever more complexity around your abstraction.
This works great because in math your abstractions don't change. You are never going to redefine what a group is. If you need something different, maybe you define some related concept, a ring, a semigroup, or whatever, but you never change your original abstraction. It is the base you build on.
As a result you can pack a lot of complexity. E.g. if something is a group, what are all the logical consequences of that? Probably so many you can't list them all, and that's ok. The whole point of math is to pick out some pattern and figure out what that entails.
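A toy sketch of that "fixed base, build on top" pattern (illustrative names, not from any particular library): once the abstraction is pinned down, a generic consequence like logarithmic-time repeated combination holds for every instance you ever define.

    from typing import Protocol, TypeVar

    T = TypeVar("T", bound="Monoid")

    class Monoid(Protocol):
        """The fixed abstraction: an identity element and an associative combine."""
        @classmethod
        def identity(cls): ...
        def combine(self: T, other: T) -> T: ...

    def fast_power(x: T, n: int) -> T:
        """A consequence valid for every monoid: combining x with itself
        n times takes O(log n) steps, because combine is associative."""
        result = type(x).identity()
        while n > 0:
            if n & 1:
                result = result.combine(x)
            x = x.combine(x)
            n >>= 1
        return result

    class ModMult:
        """One concrete instance: multiplication modulo a large prime."""
        MOD = 1_000_000_007
        def __init__(self, value: int) -> None:
            self.value = value % self.MOD
        @classmethod
        def identity(cls) -> "ModMult":
            return cls(1)
        def combine(self, other: "ModMult") -> "ModMult":
            return ModMult(self.value * other.value)

    print(fast_power(ModMult(3), 20).value)  # 3^20 mod p, via the generic result

The point is that fast_power was written once, against the abstraction, and ModMult (or any later instance) inherits it for free.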
In contrast, in computer programming the goal of abstraction is largely isolation. You want to be able to change something in the abstraction without it affecting the system very much. The things the abstraction entails should be as limited as reasonably possible, as everything it entails is a potential dependency that will get messed up if anything changes. Ideally you should be able to understand what the abstraction does by only looking at the abstraction's code and not the whole system.
Just think about the traditional SOLID principles in OOP design. It's all about ensuring abstractions are as isolated as possible from each other.
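A minimal sketch of the isolation idea (hypothetical names, not from any real codebase): the rest of the system depends only on a deliberately narrow interface, so swapping the implementation touches nothing else.

    from typing import Optional, Protocol

    class Storage(Protocol):
        """The abstraction's whole surface area: two methods, nothing more."""
        def get(self, key: str) -> Optional[str]: ...
        def put(self, key: str, value: str) -> None: ...

    class InMemoryStorage:
        """One implementation; replaceable by a file or database backend
        without callers noticing, since they only see Storage."""
        def __init__(self) -> None:
            self._data: dict = {}
        def get(self, key: str) -> Optional[str]:
            return self._data.get(key)
        def put(self, key: str, value: str) -> None:
            self._data[key] = value

    def greet_user(store: Storage, user_id: str) -> str:
        # Depends only on the interface, not on any concrete backend.
        name = store.get(user_id) or "stranger"
        return f"hello, {name}"

    store = InMemoryStorage()
    store.put("u1", "Ada")
    print(greet_user(store, "u1"))  # hello, Ada

Here greet_user can be understood, tested, and kept working by looking at Storage alone, which is exactly the isolation being described.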
To summarize, I think in math abstractions are the base of the complexity pyramid. All the complexity is built on top of them. In computers it's the opposite: they should be the tip of the complexity pyramid.
P.S. My controversial opinion is that this is the flaw in a lot of the reasoning Haskell fans use.
I disagree with the idea that computer science has an inverted use of abstractions. Unfortunate naming aside, computer science is basically mathematics applied to computation and data (still theoretical!) and software engineering (a good name, if only more people followed it) is applied computer science. Abstractions (models) must be the basis of the codebase. The JVM is an abstraction. Assembly is an abstraction. Threads are an abstraction. And so on. Of course, software engineering adds the complication of changing specifications and hence changing abstractions. Don't confuse poor abstractions for a reason to not have abstractions. Indeed, we have abstractions everywhere.
To be clear, when I say abstractions in computer programming, I mean things like classes and polymorphism: abstractions that are used to structure code bases.
I think abstractions in CS (Turing machines, etc.) or other building blocks in computer systems (OS interfaces, computer languages, etc.) are a different story and much more similar to how abstractions are used in math.
I think abstractions for structuring code are just a different kind of CS-math abstraction. Particularly, a programming language is a heavyweight abstraction that provides the ability to create further abstractions. Good abstractions for structuring code have a lot to do with good abstractions elsewhere; the goals and scope are different, but the principles are the same.
I have a feeling that our arguments are not that different (though not identical), but just phrased in very different words:
> This works great because in math your abstractions don't change.
This is just a different formulation of the "volatility" of a lot of software requirements coming from users/customers.
> In contrast, in computer programming the goal of abstraction is largely isolation. You want to be able to change something in the abstraction without it affecting the system very much.
Here my opinion differs: isolation is at best just one aspect of abstraction (and I would even claim that these concepts are only tangentially related). I claim that better isolation is rather a (very useful) side effect of some abstractions that are very commonly used in software development. On the other hand, I don't think it is really hard to come up with abstractions for software development that would be very useful but don't lead to better isolation.
The central purpose of abstraction in computer programs is to make it easier to reason about the code, and to avoid having to write "related" code multiple times. Similar to mathematics: you want to prove one general theorem (e.g. about groups) instead of having to prove one theorem about S_n, one theorem about Z_n, etc.
You actually partly write about the reasoning-about-code aspect yourself:
> Ideally you should be able to understand what the abstraction does by only looking at the abstraction's code and not the whole system.
In this sense using more abstractions is a particular optimization for the goals:
- you want to make it easier to reason about the code abstractly
- you want to avoid having to duplicate code (i.e. save money, since fewer lines have to be written)
But this is not a panacea:
- If the abstraction turns out to be bad, you either have to re-engineer a lot, or you will have a maintenance nightmare (my "volatility of customer requirements" argument). Indeed, I claim that the question of "do we really use the best possible abstractions in our code for the problem that we want to solve" is nearly always neglected in software projects, because the answer is nearly always very inconvenient, necessitating lots of re-engineering of the code.
- low-level optimizations become harder, so making the code really fast gets much more complicated
- since abstractions are more "abstract", (depending on the abstraction) you might need "smarter" programmers (who can be more expensive). For an example, consider some of the complicated metaprogramming libraries in Boost (C++): in the hands of really good programmers such abstractions can become "magic", but worse programmers will likely be overwhelmed by them.
- fighting about the "right" abstraction can become very political (for low-level code there is often less of such a fight, because here "what is more performant is typically right").
---
Concerning
> To summarize, I think in math abstractions are the base of the complexity pyramid. All the complexity is built on top of them.
This is actually not a bad idea to organize code (under my stated specific edge conditions! When these specific edge conditions are violated, my judgment might change). :-)
---
> P.S. My controversial opinion is that this is the flaw in a lot of the reasoning Haskell fans use.
I am not a particular fan of Haskell, but I think the Haskell fans' flaw lies in a very different point: they emphasize very particular aspects of computer programming, and, admittedly, often come up with clever solutions for these.
The problem is: there exist aspects of software development that are, in my opinion, far more important, but don't fit into the kind of structures that Haskell fans appreciate. The difficulty, in my experience, is thus convincing Haskell fans that such aspects actually matter a lot instead of being unimportant side aspects of software development.
> This is just a different formulation of the "volatility" of a lot of software requirements coming from users/customers.
Yes. I would say that this is a defining feature of computer programming - change. The complexity of change is what computer programmers primarily want to use abstractions to deal with (a concern that is essentially absent for a mathematician; everything is immutable to them).
And yes, reasoning about the code is part of that too, but in computer programming it's often in the form of being able to reason about a code base that is slowly shifting under you as other programmers make changes in other parts (in the context of a large project with many devs; I suppose it's a different story for a solo project).
To bring it back to the original start of the thread, I guess what I'm saying is that what makes an abstraction good for math is different from what makes it good for a computer program, so naturally they are going to look a little different.
And my original point is related to the idea that the kind of abstractions that are useful for math are sometimes harmful when applied to code, especially in the hands of unseasoned developers.
I think sometimes you have to build the abstraction hell to completion and live with it for a while to truly realize it is in fact inferior. And even then, in science sometimes it never dies fully but lives on in some niche where it has desirable qualities.
From a cursory reading I think they're referring to the idea that an electron traveling at v < c is a superposition of an electron traveling at c and a positron traveling at c. Which also is where the zitterbewegung comes from, because the two waves don't "cleanly" overlap, there's a bit of (not-yet-experimentally-verified) interference.
https://en.wikipedia.org/wiki/Feynman_checkerboard
and the work of David Hestenes:
Zitterbewegung in Quantum Mechanics
https://davidhestenes.net/geocalc/pdf/ZBWinQM15**.pdf
Zitterbewegung structure in electrons and photons
https://arxiv.org/abs/1910.11085
Zitterbewegung Modeling
https://davidhestenes.net/geocalc/pdf/ZBW_mod.pdf
"Collective Electrodynamics: Quantum Foundations of Electromagnetism" https://www.amazon.com/Collective-Electrodynamics-Quantum-Fo...
You ignore the reality of nature at your own peril.
Besides, you can just use computers to automate the wrangling of this hell. It's what they are good at, after all.
Uh ...
And then I got a bot check on ResearchGate, the first time that's happened, and I download a lot of papers from them.
Nothing like that for me. I just clicked the big "article pdf" button at the bottom of the page.
Direct link to full pdf:
https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...