
johncarlosbaez

@johncarlosbaez@mathstodon.xyz

I'm a mathematical physicist who likes explaining stuff. Sometimes I work at the Topos Institute. Check out my blog! I'm also a member of the n-Category Café, a group blog on math with an emphasis on category theory. I also have a YouTube channel, full of talks about math, physics and the future.


johncarlosbaez, (edited ) to random

When Maxwell realized in 1862 that light consists of waves in the electromagnetic field, why didn't anyone try to use electricity to make such waves right away? Why did Hertz succeed only 24 years later?

According to 𝘛𝘩𝘦 𝘔𝘢𝘹𝘸𝘦𝘭𝘭𝘪𝘢𝘯𝘴:

"Since he regarded the production of light as an essentially molecular and mechanical process, prior, in a sense, to electromagnetic laws, Maxwell could elaborate an electromagnetic account of the propagation of light without ever supposing that ether waves were produced purely electromagnetically."

In 1879, a physicist named Lodge realized that in theory one could make "electromagnetic light". But he didn't think of creating waves of lower frequency:

"Send through the helix an intermittent current (best alternately reversed) but the alternations must be very rapid, several billion per sec."

He mentioned this idea to FitzGerald, who believed he could prove it was impossible. Unfortunately FitzGerald managed to convince Lodge of this. But FitzGerald later realized his mistake:

"It was FitzGerald himself who found the flaws in his "proofs." He then proceeded to put the subject on a sound theoretical basis, so that by 1883 he understood quite clearly how electromagnetic waves could be produced and what their characteristics would be. But the waves remained inaccessible; FitzGerald, along with everyone else, was stymied by the lack of any way to detect them."

In 1883, FitzGerald gave a talk called "On a Method of Producing Electromagnetic Disturbances of Comparatively Short Wavelengths". But he couldn't figure out how to 𝘥𝘦𝘵𝘦𝘤𝘵 these waves. Hertz figured that out in 1886.

johncarlosbaez, to random

Is there a chance that the physicist Oliver Heaviside was really Wolverine?


johncarlosbaez,

@TruthSandwich 😆

Heaviside had a tough life. One small example:

"In those days the Royal Society would publish in its proceedings virtually anything one of its fellows submitted, and in 1893 Heaviside sent in the first two installments of a paper “On operators in physical mathematics.” But pure mathematicians objected to the cavalier way he handled divergent series, and when he submitted a third installment, it was sent to a referee and rejected. Heaviside was incensed; if handled properly, he said, his methods gave demonstrably right answers, and that ought to be justification enough. “Shall I refuse my dinner,” he said, “because I do not fully understand the process of digestion?” Despite the objections of “rigorists,” Heaviside’s operator methods later came into wide use, especially among engineers, and probably influenced the thinking of Paul Dirac, who learned them during his initial training as an electrical engineer."

Now I understand why Heaviside said "This series is divergent, therefore we may be able to do something with it."

johncarlosbaez,

@TruthSandwich - the mathematician knows math 𝑖𝑠 reality and the physical world is just a distraction devised by the devil.

johncarlosbaez, (edited )

@NickPizzoOceans - I'd enjoy 𝑇ℎ𝑒 𝑀𝑎𝑥𝑤𝑒𝑙𝑙𝑖𝑎𝑛𝑠, since I'm interested in the history of electromagnetism.

I've been working in Maxwell's house at 14 India Street in Edinburgh lately, and it has a picture of Heaviside in it... so he is respected there.

johncarlosbaez,

@highergeometer - I tried, failed and gave up.

MathOfSecrets, to random

Does anyone know what the “more complicated possibility” referred to here is? (It’s from https://www.quantamagazine.org/strangely-curved-shapes-break-50-year-old-geometry-conjecture-20240514/ ) I assume it’s referring to something compact, since there are topological planes with positive curvature everywhere, like the paraboloid, which don’t seem particularly complicated?

johncarlosbaez, (edited )

@MathOfSecrets - probably they mean a manifold that's topologically a sphere but not a "round" sphere (one with constant curvature). I assume that's what "standard surface of a sphere" means here - a round sphere.

codyroux, to random

Ideas for side projects? I'm very bored.

johncarlosbaez,

@codyroux - people seem to be telling you to do things quite similar to what you're already doing. If you're bored, maybe it's because you're already doing too many things quite similar to what you're already doing! So how about learning some chemistry, or music theory... or not learning something, but just playing an instrument, or going on a walk with a specific purpose, or gardening?

https://www.youtube.com/watch?v=a6d7dWwawd8

dmm, to random

On May 17, 1902, Valerios Stais discovered the Antikythera Mechanism in a wooden box recovered from the Antikythera shipwreck, off the coast of the Greek island of Antikythera. The Mechanism is the oldest known mechanical computer and can accurately calculate various astronomical quantities.

As Tony Freeth says, "It is a work of stunning genius" [1].

A few of my notes on the Mechanism are here: https://davidmeyer.github.io/astronomy/prices_metonic_gear_train.pdf. The LaTeX source is here: https://www.overleaf.com/read/ndpvkytkhmbv.
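
As a taste of the arithmetic the gearing has to encode, the Metonic relation behind Price's gear train is the near-coincidence (standard modern values, rounded)
\[ 235 \times 29.5306 \ \text{days} \;\approx\; 6939.7 \ \text{days} \;\approx\; 19 \times 365.2422 \ \text{days}, \]
i.e. 235 synodic months match 19 solar years to within about a tenth of a day.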

As always, questions/comments/corrections/* greatly appreciated.

References

"The Antikythera Mechanism: A Shocking Discovery from Ancient Greece", https://www.youtube.com/watch?v=xWVA6TeUKYU

johncarlosbaez,

@dmm - there must be still more to learn about this. But I don't even know all the stuff in the Wikipedia article! I'll read your page.

ngons, to random

Decagon dissection.

#Tiling #Geometry #MathArt #MathsArt

johncarlosbaez,

@ngons - beautiful! It's got an internal glow.

dpiponi, to random

Looked up speed of snails on Google to see if my USPS package "moving through network" from San Francisco is literally going at a snail's pace. Looks like snails would have to be 3 times faster to beat my package.

johncarlosbaez,

@dpiponi - when the fantasy of package delivery by drones fell through, they switched to snails.

johncarlosbaez, (edited ) to random

The precise location of the boundary between the knowable and the unknowable is itself unknowable. But we 𝑑𝑜 know some details about 𝑤ℎ𝑦 this is true, at least within mathematics. It's being studied rigorously in a branch of theoretical computer science called 'meta-complexity theory'.

For some reason it's hard to show that math problems are hard. In meta-complexity theory, people try to understand why.

For example, most of us believe P ≠ NP: merely being able to 𝑐ℎ𝑒𝑐𝑘 the answer to a problem efficiently doesn't imply you can 𝑠𝑜𝑙𝑣𝑒 it efficiently. It seems obvious. But despite a vast amount of work, nobody has been able to prove it!
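
To make the "check versus solve" contrast concrete, here's a tiny sketch of my own (a toy example, not anyone's research code), using subset sum, one of the standard NP problems:

```python
# Toy illustration of "check vs. solve" for subset sum, a standard NP problem.
from itertools import combinations

def verify(numbers, target, certificate):
    # Checking a proposed subset takes time polynomial in the input size.
    # (Ignores repeated elements, to keep the sketch short.)
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve(numbers, target):
    # Naive search: tries up to 2**len(numbers) subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # [4, 5]
print(verify(nums, 9, [4, 5]))  # True
```

Subset sum is NP-complete, so if P ≠ NP there is no polynomial-time algorithm for it - while verifying a proposed certificate, as above, is easy.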

And in one of the founding results of meta-complexity theory, Razborov and Rudich showed that if a certain attractive class of strategies for proving P ≠ NP worked, then it would be possible to efficiently crack all codes! None of us think 𝑡ℎ𝑎𝑡'𝑠 possible. So their result shows there's a barrier to knowing P ≠ NP.

I'm simplifying a lot of stuff here. But this is the basic idea: they proved that it's probably hard to prove that a bunch of seemingly hard problems are really hard.

But note the 'probably' here! Nobody has 𝑝𝑟𝑜𝑣𝑒𝑑 we can't efficiently crack all codes. And this, too, seems very hard to prove.

So the boundary between the knowable and unknowable is itself shrouded in unknowability. But amazingly, we can prove theorems about it!

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/

johncarlosbaez,

@davidsuculum - that's nice; now I want to follow the precise proof of this "paradox" and its assumptions. But beware: in English "any" means both "some" (∃) and "all" (∀). From your description I thought the paradox was claiming

"If some truth can be known then it follows that every truth is in fact known"

which is crazy, but in fact it claims

"If every truth can be known than it follows that every truth is in fact known"

This seems interesting and perhaps reasonable, since I believe not every truth can be known.
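
One way to formalize the two readings (my own shorthand, writing K for "is known" and ◇ for "possibly"):
\[ \exists p\,\bigl(p \wedge \Diamond K p\bigr) \;\Rightarrow\; \forall p\,(p \to K p) \qquad \text{versus} \qquad \forall p\,\bigl(p \to \Diamond K p\bigr) \;\Rightarrow\; \forall p\,(p \to K p), \]
and it's the second implication that the knowability paradox is usually taken to establish.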

johncarlosbaez,

@franchesko - the article I linked to mentioned meta-meta-complexity:

.....

Given the truth table of a Boolean function, determine whether it has high or low circuit complexity. They dubbed this the minimum circuit size problem, or MCSP.

[....]

MCSP is a quintessential meta-complexity problem: a computational problem whose subject is not graph theory or another external topic, but complexity theory itself.

Kabanets knew that he and Cai weren’t the first to consider the problem they had dubbed MCSP. Soviet mathematicians had studied a very similar problem beginning in the 1950s, in an early attempt to understand the intrinsic difficulty of different computational problems. Leonid Levin had wrestled with it while developing what would become the theory of NP-completeness in the late 1960s, but he couldn’t prove it NP-complete, and he published his seminal paper without it.

After that, the problem attracted little attention for 30 years, until Kabanets and Cai noted its connection to the natural proofs barrier. Kabanets didn’t expect to settle the question himself — instead he wanted to explore why it had been so hard to prove that this seemingly hard problem about computational hardness was actually hard.

“It is, in a sense, meta-meta-complexity,” said Rahul Santhanam, a complexity theorist at the University of Oxford.

But was it hardness all the way down, or was there at least some way to understand why researchers hadn’t succeeded in proving that MCSP was NP-complete? Kabanets discovered that, yes, there was a reason.

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/

johncarlosbaez, (edited )

@aadmaa - of course, the question about what we can know about nonmathematical questions is infinitely harder than for mathematical ones. It may also be more important. But having studied it as a youth, I eventually decided it was too intractable to be worth spending more time on. I like working on things where I can make progress.

The question of unknowability for math is by comparison so clear and self-contained that I find it fascinating how even here we sink into deep quicksand of a self-referential sort. But since math is so clear and self-contained, we can prove amazing theorems about this quicksand. For example we can sometimes rigorously 𝑝𝑟𝑜𝑣𝑒 that for certain propositions, if we are unable to prove them, we are also unable to prove that we are unable to prove them. And so on.

The recent rise of meta-complexity theory is showing that such questions are relevant to cryptography, the study of efficient algorithms, etc. That's pretty amazing!

https://simons.berkeley.edu/programs/Meta-Complexity2023

johncarlosbaez,

@TruthSandwich - Alas, that doesn't parse, because settling P ≠ NP is a single task, while the terms P and NP make sense only for 𝑓𝑎𝑚𝑖𝑙𝑖𝑒𝑠 of tasks, like multiplying numbers or factoring numbers into primes. There are 𝑙𝑜𝑡𝑠 of numbers, and we say multiplying numbers is in P because the time it takes grows polynomially as the numbers get bigger.

I could go on, but this is what happens when you make a joke to a mathematician.

johncarlosbaez,

@leemph - my go-to formalization of "knowability" in math (and only in math) is "provability". For example, we'd know P ≠ NP if we could prove it. What I'm hinting at in my post is that P ≠ NP might be unprovable. But while Goedel and many subsequent logicians were able to prove many interesting statements are unprovable (starting from various standard axioms), so far people have only been able to show that certain approaches to proving P ≠ NP (so-called "natural proofs") would have dizzying consequences, like the ability to efficiently tell the difference between random numbers and pseudorandom numbers.

It's completely possible that P ≠ NP is unprovable and it's also impossible to prove it's unprovable, and it's also impossible to prove this.

For example, we know that if Goldbach's conjecture is true but unprovable, it's also impossible to prove that it's unprovable. So there are cases where unknowability shrouds itself in unknowability.

So, anyway, this is where my thoughts are on this - not broader concepts of "knowing".

johncarlosbaez,

@leemph - regarding 1, you're exactly right: in math most of us have accepted that "mathematical knowledge" is relative to a set of axioms and deduction rules. This started with the discovery of non-Euclidean geometry, where "truths" of Euclidean geometry turned out to be consequences of axioms that no longer hold if you switch to a different sort of geometry. Later Goedel showed that any sufficiently powerful consistent finitely axiomatizable theory could never prove or disprove all statements formulated in its own language: there's always some statement P such that we can add either P or not(P) to the axioms and get a new such set of axioms that's still consistent.

  1. In constructivist logic, to prove something exists means that you can, at least in principle, exhibit an example. In classical logic there are cases where you can prove something exists but not exhibit an example. So your attitude to this question is closely allied to whether you prefer classical or constructivist logic. Most really smart mathematicians realize that this, too, is another case of the relativity in part 1. I.e., neither constructivism nor classical logic is "really true": they are just alternative sets of axioms, and it's worth exploring both.

  2. "If we prove T then do we have a proof that it is provable?" In the logics I know, exhibiting an example of something counts as a proof that it exists. So yes, in both classical and constructivist logic, we can get from a proof of P to a proof that P is provable. Part 2 was about the more problematic converse.

"In which deductive system should that meta-proof be carried out?" There are many choices. This is another instance of the relativity in 1.

(1/2)

johncarlosbaez,

@leemph wrote: "<<For example, we know that if Goldbach's conjecture is true but unprovable, it's also impossible to prove that it's unprovable. So there are cases where unknowability shrouds itself in unknowability.>>"

Sorry, this was a stupid sentence, and you were right to be confused. Here's what I was trying to say:

"For example, if Goldbach's conjecture is true but unprovable, it's also impossible to prove that it's unprovable. So there are cases where unknowability shrouds itself in unknowability."

And normally I avoid using the word "true" in this context, since it doesn't really mean much to say a mathematical statement is "true" except as a shorthand for it being provable. If I were trying to be precise, I would have said this:

"For example, if neither Goldbach's conjecture nor its negation is provable, it's also impossible to prove that either of those is unprovable. So there are cases where unknowability shrouds itself in unknowability."

(2/2)

johncarlosbaez, to random

Chemistry is like physics where the particles have personalities - and chemists love talking about the really nasty ones. It makes for fun reading, like Derek Lowe's column "Things I Won't Work With". For example, bromine compounds:

"Most any working chemist will immediately recognize bromine because we don't commonly encounter too many opaque red liquids with a fog of corrosive orange fumes above them in the container. Which is good."

And that's just plain bromine. Then we get compounds like bromine fluorine dioxide.

"You have now prepared the colorless solid bromine fluorine dioxide. What to do with it? Well, what you don't do is let it warm up too far past +10C, because it's almost certainly going to explode. Keep that phrase in mind, it's going to come in handy in this sort of work. Prof. Seppelt, as the first person with a reliable supply of the pure stuff, set forth to react it with a whole list of things and has produced a whole string of weird compounds with brow-furrowing crystal structures. I don't even know what to call these beasts."

https://www.science.org/content/blog-post/higher-states-bromine

johncarlosbaez,

@gregeganSF - they should make up a vileness scale, like the Mohs hardness scale.

johncarlosbaez, (edited ) to random

Tolstoy: "Happy families are all alike; every unhappy family is unhappy in its own way."

Mathematics: "Real tori are all alike; every complex torus is complex in its own way."

To be precise, an 'n-dimensional real torus' is a real manifold of the form V/Λ where V is an n-dimensional real vector space and Λ ⊆ V is a lattice of rank n in this vector space. For each n, they are all isomorphic.

An 'n-dimensional complex torus' is a complex manifold of the form V/Λ where V is an n-dimensional complex vector space and Λ ⊆ V is a lattice of rank 2n in this vector space. These are not all isomorphic, because there are different ways the lattice can get along with multiplication by i. For example we might have iΛ = Λ or we might not.

And so, it's possible to write a whole book - and indeed a fascinating one - on complex tori. For example a 1-dimensional complex torus is an elliptic curve, and there are whole books just about those.
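
To spell out the lowest-dimensional case (a standard fact, just for concreteness): every 1-dimensional complex torus is isomorphic to
\[ E_\tau = \mathbb{C}/(\mathbb{Z} + \tau\mathbb{Z}), \qquad \mathrm{Im}\,\tau > 0, \]
and \(E_\tau \cong E_{\tau'}\) exactly when \(\tau' = \frac{a\tau + b}{c\tau + d}\) for integers a, b, c, d with ad − bc = 1. So already in complex dimension 1 the isomorphism classes form a whole moduli space, while all real tori of a given dimension are the same.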

johncarlosbaez,

@battaglia01 - sorry to take so long to reply. Are you still interested in this question?

"Eisenstein series give us a moduli space for 2D lattices, which can also parameterize the subgroups of a 2D free abelian group. But is a similar representation possible for 3D, 4D, etc lattices?"

johncarlosbaez, (edited )

@battaglia01 - Lattices in ℂⁿ give complex tori, and the nice ones give 'abelian varieties': complex tori that are projective varieties. Only the latter really act like generalizations of elliptic curves. The moduli space of elliptic curves, and the theory of modular forms, generalizes to higher dimensions this way. But people discovered you must work with abelian varieties equipped with an extra structure, a 'polarization', to get this to work. When we do, we're led to study 'Siegel modular forms'. But unfortunately it seems the Eisenstein series trick for getting modular forms doesn't work in higher dimensions, because this sum over points ℓ in a lattice Λ⊂ℂⁿ:
\[ \sum_{\ell \in \Lambda} \frac{1}{(z-\ell)^n} \]
only makes sense when \(n = 1\). You might hope that some trick would save us, but I haven't seen any analogue of Eisenstein series in higher dimensions. I'm just learning this theory, so I might have missed something, but I've looked around.
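
For comparison, the sum that does work in the classical n = 1 case is the weight-k Eisenstein series (recalled here just for contrast): for a lattice \(\Lambda \subset \mathbb{C}\) and even \(k \geq 4\),
\[ G_k(\Lambda) = \sum_{0 \neq \ell \in \Lambda} \frac{1}{\ell^k} \]
converges and gives a modular form of weight k. It's the division by a nonzero lattice vector ℓ that has no obvious meaning once Λ sits inside ℂⁿ with n > 1.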

https://en.wikipedia.org/wiki/Siegel_modular_form

pschwahn, to random German

Non-semisimple Lie groups are so weird. Weyl's unitarian trick does not work for them. So I need to constantly remind myself that:

  1. representations of GL(n,ℂ) are not determined by their character,
  2. not every finite-dimensional representation of GL(n,ℂ) is completely reducible,
  3. Finite-dimensional GL(n,ℂ)-representations are not in 1:1-correspondence with finite-dimensional U(n)-representations.

However, these issues go away when you look only at irreducible representations, or when you replace GL by SL (and U by SU). The archetypical counterexample is given by the (reducible but indecomposable) representation
\[\rho: \mathrm{GL}(1,\mathbb{C})=\mathbb{C}^\times\to\mathrm{GL}(2,\mathbb{C}):\quad z\mapsto\begin{pmatrix}1&\log |z|\\0&1\end{pmatrix}.\]
(Example shamelessly stolen from: https://math.stackexchange.com/questions/2392313/irreducible-finite-dimensional-complex-representation-of-gl-2-bbb-c)
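
A quick check of why this works, for my own benefit: since log|zw| = log|z| + log|w|,
\[ \rho(z)\,\rho(w) = \begin{pmatrix}1 & \log|z| \\ 0 & 1\end{pmatrix} \begin{pmatrix}1 & \log|w| \\ 0 & 1\end{pmatrix} = \begin{pmatrix}1 & \log|z| + \log|w| \\ 0 & 1\end{pmatrix} = \rho(zw), \]
so ρ really is a representation. The line spanned by the first basis vector is invariant, but it has no invariant complement, so ρ is reducible yet not completely reducible.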

Turns out that entire StackExchange threads can be wrong about this (for example https://math.stackexchange.com/questions/221543/why-is-every-representation-of-textrmgl-n-bbbc-completely-determined-by), so be wary!

johncarlosbaez, (edited )

@pschwahn @AxelBoldt

"However for a group such as GL(𝑛,ℝ) it looks like the issue persists - a representation such as 𝑔↦|det(𝑔)|ʷ for non-integer w will be real-analytic, but it will not correspond to a representation of U(n)."

U(n) is not a maximal compact subgroup of GL(n,ℝ) - it's not even contained in GL(n,ℝ). So there's no way to restrict a representation of GL(n,ℝ) to U(n), and you shouldn't expect an equivalence (or even a functor) from the category of representations of GL(n,ℝ) to those of U(n).

The maximal compact subgroup of GL(n,ℝ) is O(n), so you can restrict representations of GL(n,ℝ) to O(n). But a bunch of different real-analytic representations of GL(n,ℝ) restrict to the same representation of O(n), like all the representations 𝑔↦|det(𝑔)|ʷ, since |det(𝑔)| = 1 for 𝑔 ∈ O(n). If I remember correctly this particular example is the "only problem". Of course it has spinoffs: you can tensor any representation of GL(n,ℝ) by a representation 𝑔↦|det(𝑔)|ʷ and get a new one which is the same on O(n).

I hope I'm remembering this correctly: every finite-dimensional smooth representation of GL(n,ℝ) is completely reducible, and every irreducible smooth representation comes from one described by a Young diagram, possibly tensored by a representation 𝑔↦det(𝑔)ʷ where w is some real number, possibly also tensored by a representation 𝑔↦|det(𝑔)|ʷ where w is some real number.

It's a lot easier to find treatments of the 'algebraic' representations of GL(n,ℝ), and it's even easier to find them for SL(n,ℝ).

johncarlosbaez,

@pschwahn @AxelBoldt - Ugh! I should have stuck with rational representations, which is what people usually talk about when studying representations of linear algebraic groups.

I'm pretty sure that every finite-dimensional rational representation of GL(n,ℝ) is completely reducible, and every irreducible rational representation comes from one described by a Young diagram, possibly tensored by a representation 𝑔↦det(𝑔)ᵏ where k is some integer.
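
For instance (if I have the bookkeeping right), for GL(2,ℝ) this says the irreducible rational representations are exactly
\[ \mathrm{Sym}^m(\mathbb{R}^2) \otimes {\det}^k, \qquad m \geq 0, \; k \in \mathbb{Z}, \]
where Sym^m corresponds to the Young diagram with a single row of m boxes.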

(There is some overlap here, since the nth exterior power of the tautologous representation, described by a Young diagram with a single column of n boxes, is just the representation 𝑔↦det(𝑔).)

It's annoying that the basic facts about finite-dimensional representations of GL(n,ℝ) aren't on Wikipedia! Someday I'll have to put them on there... once I get enough references to make sure I'm not screwing up!

johncarlosbaez, (edited )

@pschwahn @AxelBoldt - right, that's one way to proceed. I've been doing a lot of work lately with representations of GL(n,𝔽) for 𝔽 an arbitrary field of characteristic zero. For subfields of ℂ this trick of complexifying and reducing to the case 𝔽 = ℂ works fine. But in fact the representation theory works exactly the same way even for fields of characteristic zero that aren't subfields of ℂ!

It's not that I really care about such fields. I just find it esthetically annoying to work only with subfields of ℂ when dealing with something that's purely algebraic and shouldn't really involve the complex numbers. So I had to learn a bit about how we can develop the representation theory of GL(n,𝔽) for an arbitrary field of characteristic zero. Milne's book 𝐴𝑙𝑔𝑒𝑏𝑟𝑎𝑖𝑐 𝐺𝑟𝑜𝑢𝑝𝑠 does this, and a preliminary version is free:

https://www.jmilne.org/math/CourseNotes/iAG200.pdf

but unfortunately it's quite elaborate if all you want is the basics of the representation theory of GL(n,𝔽).

(For 𝔽 not of characteristic zero everything changes dramatically, since you can't symmetrize by dividing by n!. Nobody even knows all the irreps of the symmetric groups over such fields.)
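
To be concrete, the symmetrization in question is the usual averaging operator on n-fold tensors,
\[ e = \frac{1}{n!} \sum_{\sigma \in S_n} \sigma, \]
which only makes sense when n! is invertible in 𝔽.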
