
johncarlosbaez

@johncarlosbaez@mathstodon.xyz

I'm a mathematical physicist who likes explaining stuff. Sometimes I work at the Topos Institute. Check out my blog! I'm also a member of the n-Category Café, a group blog on math with an emphasis on category theory. I also have a YouTube channel, full of talks about math, physics and the future.


johncarlosbaez, (edited ) to random

The precise location of the boundary between the knowable and the unknowable is itself unknowable. But we 𝑑𝑜 know some details about 𝑤ℎ𝑦 this is true, at least within mathematics. It's being studied rigorously in a branch of theoretical computer science called 'meta-complexity theory'.

For some reason it's hard to show that math problems are hard. In meta-complexity theory, people try to understand why.

For example, most of us believe P ≠ NP: merely being able to 𝑐ℎ𝑒𝑐𝑘 the answer to a problem efficiently doesn't imply you can 𝑠𝑜𝑙𝑣𝑒 it efficiently. It seems obvious. But despite a vast amount of work, nobody has been able to prove it!
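
Here's a toy illustration of the check-vs-solve gap (my own sketch, nothing deep). Checking a proposed satisfying assignment for a Boolean formula takes one quick pass over the clauses, while the only solver shown here is brute force over all 2ⁿ assignments - and nobody has proved that every correct solver must take exponential time in the worst case.

```
# Toy example of "easy to check, apparently hard to solve" (SAT).
from itertools import product

# A formula in conjunctive normal form: a list of clauses, each a list of
# literals; +i means "variable i is true", -i means "variable i is false".
formula = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
num_vars = 3

def check(assignment):
    # Fast: one pass over the clauses suffices to verify a proposed solution.
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)

def solve():
    # Slow: brute force over all 2^n assignments.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(assignment):
            return assignment
    return None

print(solve())   # prints the first satisfying assignment it finds
```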

And in one of the founding results of meta-complexity theory, Razborov and Rudich showed that if a certain attractive class of strategies for proving P ≠ NP worked, then it would be possible to efficiently crack all codes! None of us think 𝑡ℎ𝑎𝑡'𝑠 possible. So their result shows there's a barrier to knowing P ≠ NP.

I'm simplifying a lot of stuff here. But this is the basic idea: they proved that it's probably hard to prove that a bunch of seemingly hard problems are really hard.

But note the 'probably' here! Nobody has 𝑝𝑟𝑜𝑣𝑒𝑑 we can't efficiently crack all codes. And this, too, seems very hard to prove.

So the boundary between the knowable and unknowable is itself shrouded in unknowability. But amazingly, we can prove theorems about it!

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/

johncarlosbaez,

@davidsuculum - that's nice; now I want to follow the precise proof of this "paradox" and its assumptions. But beware: in English "any" means both "some" (∃) and "all" (∀). From your description I thought the paradox was claiming

"If some truth can be known then it follows that every truth is in fact known"

which is crazy, but in fact it claims

"If every truth can be known than it follows that every truth is in fact known"

This seems interesting and perhaps reasonable, since I believe not every truth can be known.

johncarlosbaez,

@franchesko - the article I linked to mentioned meta-meta-complexity:

.....

Given the truth table of a Boolean function, determine whether it has high or low circuit complexity. They dubbed this the minimum circuit size problem, or MCSP.

[....]

MCSP is a quintessential meta-complexity problem: a computational problem whose subject is not graph theory or another external topic, but complexity theory itself.

Kabanets knew that he and Cai weren’t the first to consider the problem they had dubbed MCSP. Soviet mathematicians had studied a very similar problem beginning in the 1950s, in an early attempt to understand the intrinsic difficulty of different computational problems. Leonid Levin had wrestled with it while developing what would become the theory of NP-completeness in the late 1960s, but he couldn’t prove it NP-complete, and he published his seminal paper without it.

After that, the problem attracted little attention for 30 years, until Kabanets and Cai noted its connection to the natural proofs barrier. Kabanets didn’t expect to settle the question himself — instead he wanted to explore why it had been so hard to prove that this seemingly hard problem about computational hardness was actually hard.

“It is, in a sense, meta-meta-complexity,” said Rahul Santhanam, a complexity theorist at the University of Oxford.

But was it hardness all the way down, or was there at least some way to understand why researchers hadn’t succeeded in proving that MCSP was NP-complete? Kabanets discovered that, yes, there was a reason.

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/
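
To make the problem concrete, here's a toy Python sketch (my own illustration, not from the article) that computes the minimum circuit size by exhaustive search over small circuits, for functions of very few variables. Roughly speaking, nobody knows how to do dramatically better than this kind of brute force, and nobody can prove that it's necessary either - that's the meta-complexity puzzle.

```
# Brute-force MCSP for tiny truth tables: find the fewest 2-input gates
# (AND, OR, XOR, NAND) needed to compute a given Boolean function.
from itertools import product

def min_circuit_size(truth_table, n):
    mask = (1 << (1 << n)) - 1                    # truth tables as bitmasks
    # Truth table of each input variable x_i: bit r is the i-th bit of r.
    inputs = [sum(((r >> i) & 1) << r for r in range(1 << n)) for i in range(n)]
    target = truth_table & mask
    if target in inputs or target in (0, mask):   # a bare variable or constant
        return 0
    ops = [lambda a, b: a & b,                    # AND
           lambda a, b: a | b,                    # OR
           lambda a, b: a ^ b,                    # XOR
           lambda a, b: ~(a & b) & mask]          # NAND (gives us negation)
    for size in range(1, 7):                      # try 1 gate, then 2, ...
        gate_options = [product(range(n + k), range(n + k), range(len(ops)))
                        for k in range(size)]     # each gate: 2 earlier wires + an op
        for shape in product(*gate_options):
            wires = list(inputs)
            for a, b, o in shape:
                wires.append(ops[o](wires[a], wires[b]))
            if wires[-1] == target:
                return size
    return None                                   # not found within the size bound

xor_tt  = 0b0110    # truth table of x0 XOR x1 (bit r = value on input r)
xnor_tt = 0b1001    # x0 XNOR x1
print(min_circuit_size(xor_tt, 2))    # 1: a single XOR gate
print(min_circuit_size(xnor_tt, 2))   # 2: XOR, then NAND it with itself
```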

johncarlosbaez, to random

Chemistry is like physics where the particles have personalities - and chemists love talking about the really nasty ones. It makes for fun reading, like Derek Lowe's column "Things I Won't Work With". For example, bromine compounds:

"Most any working chemist will immediately recognize bromine because we don't commonly encounter too many opaque red liquids with a fog of corrosive orange fumes above them in the container. Which is good."

And that's just plain bromine. Then we get compounds like bromine fluorine dioxide.

"You have now prepared the colorless solid bromine fluorine dioxide. What to do with it? Well, what you don't do is let it warm up too far past +10C, because it's almost certainly going to explode. Keep that phrase in mind, it's going to come in handy in this sort of work. Prof. Seppelt, as the first person with a reliable supply of the pure stuff, set forth to react it with a whole list of things and has produced a whole string of weird compounds with brow-furrowing crystal structures. I don't even know what to call these beasts."

https://www.science.org/content/blog-post/higher-states-bromine

johncarlosbaez,

@ChateauErin - sounds fun, I will check out the YouTube video when pedaling at the gym. But I am glad that as a mathematician the things I work with can only demolish my brain from the inside.

johncarlosbaez,

@gregeganSF - they should make up a vileness scale, like the Mohs hardness scale.

TruthSandwich, to math
johncarlosbaez,

@TruthSandwich - nice! I hadn't seen that.

johncarlosbaez, (edited ) to random

Fun article by John Psmith featuring some ferociously competitive mathematicians and physicists. A quote:

.....

In the 1696 edition of Acta Eruditorum, Johann Bernoulli threw down the gauntlet:

"I, Johann Bernoulli, address the most brilliant mathematicians in the world. Nothing is more attractive to intelligent people than an honest, challenging problem, whose possible solution will bestow fame and remain as a lasting monument. Following the example set by Pascal, Fermat, etc., I hope to gain the gratitude of the whole scientific community by placing before the finest mathematicians of our time a problem which will test their methods and the strength of their intellect. If someone communicates to me the solution of the proposed problem, I shall publicly declare him worthy of praise.

Given two points A and B in a vertical plane,
what is the curve traced out by a point acted on only by gravity,
which starts at A and reaches B in the shortest time."

This became known as the brachistochrone problem, and it occupied the best minds of Europe for, well, for less time than Johann Bernoulli hoped. The legend goes that he issued that pompous challenge I quoted above, and shortly afterward discovered that his own solution to the problem was incorrect. Worse, in short order he received five copies of the actually correct solution to the problem, supposedly all on the same day. The responses came from Newton, Leibniz, l’Hôpital, Tschirnhaus, and worst of all, his own brother Jakob Bernoulli, who had upstaged him yet again.

(1/2) (The fun part about Newton comes in part 2.)

https://www.thepsmiths.com/p/review-the-variational-principles

johncarlosbaez, (edited )

What’s fun about this story is that if it’s true, then it provides us with a nice rank-ordering of the IQs of early 18th century scientists. He may have gotten all the responses on the same day, but back then letters took very different amounts of time to travel to different cities. So his brother Jakob, right next door in Italy, had weeks to work on the problem while Johann’s challenge slowly made its way to London and Newton’s response slowly made its way back. As it happens, one of Newton’s servants left a journal entry stating that one night the master arrived home from the Royal Mint, found a letter from abroad, flew into a rage, stayed up all night writing a response, and sent it out in the next morning’s post. If Newton really did solve in one night a problem that took Bernoulli weeks and Leibniz and l’Hôpital at least a few days, then this gives a sense of the fearsomeness of his powers. Newton’s own comment on the topic was simply: “I do not love to be dunned and teased by foreigners about mathematical things.”

.....

For the rest of the story, go here:

• John Psmith, Review: 𝑇ℎ𝑒 𝑉𝑎𝑟𝑖𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑃𝑟𝑖𝑛𝑐𝑖𝑝𝑙𝑒𝑠 𝑜𝑓 𝑀𝑒𝑐ℎ𝑎𝑛𝑖𝑐𝑠, by Cornelius Lanczos, October 20, 2023,
https://www.thepsmiths.com/p/review-the-variational-principles

(2/2)

johncarlosbaez,

@dougmerritt - I definitely think about it! I don't tend to think of Fermat as an otherworldly genius, I think of him as a very smart lawyer who corresponded with lots of mathematicians and physicists. But looking at his biography, I realize I may be underestimating him. Newton was an otherworldly genius of an articulate and versatile sort: the sort who changes the course of civilization. Ramanujan seems like an inarticulate, specialized genius with a completely incomprehensible intuitive ability to spot fascinating formulas - perhaps a mere curiosity in the grand scheme of things, but extremely puzzling.

johncarlosbaez,

@pieter - Then I won't read that.

chrisamaphone, to random

probability: ability to be probed

johncarlosbaez,

@chrisamaphone - that's probably true.

johncarlosbaez, (edited ) to random

There's a dot product and cross product of vectors in 3 dimensions. But there's also a dot product and cross product in 7 dimensions obeying a lot of the same identities! There's nothing really like this in other dimensions.

We can get the dot and cross product in 3 dimensions by taking the imaginary quaternions and defining

v⋅w = -½(vw + wv), v×w = ½(vw - wv)

We can get the dot and cross product in 7 dimensions using the same formulas, but starting with the imaginary octonions.
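
If you want to play with this concretely, here's a small Python sketch (my own, using one standard Cayley-Dickson convention to multiply octonions as pairs of quaternions). It builds the 7d dot and cross product from the formulas above and spot-checks one of the usual identities, (u × v) × u = (u ⋅ u)v - (u ⋅ v)u, on random vectors.

```
# The 7d dot and cross product from imaginary octonions, numerically.
import random

def quat_mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def quat_conj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

def oct_mul(x, y):
    # Cayley-Dickson doubling: (a,b)(c,d) = (ac - d*b, da + bc*), * = conjugation.
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    left  = tuple(s - t for s, t in zip(quat_mul(a, c), quat_mul(quat_conj(d), b)))
    right = tuple(s + t for s, t in zip(quat_mul(d, a), quat_mul(b, quat_conj(c))))
    return left + right

def to_oct(v):                # the imaginary octonion with components v
    return (0.0,) + tuple(v)

def cross7(v, w):             # v x w = 1/2 (vw - wv), again imaginary
    p, q = oct_mul(to_oct(v), to_oct(w)), oct_mul(to_oct(w), to_oct(v))
    return tuple(0.5*(s - t) for s, t in zip(p, q))[1:]

def dot7(v, w):               # v . w = -1/2 (vw + wv), a real number
    p, q = oct_mul(to_oct(v), to_oct(w)), oct_mul(to_oct(w), to_oct(v))
    return -0.5*(p[0] + q[0])

u = [random.gauss(0, 1) for _ in range(7)]
v = [random.gauss(0, 1) for _ in range(7)]
lhs = cross7(cross7(u, v), u)
rhs = [dot7(u, u)*vi - dot7(u, v)*ui for vi, ui in zip(v, u)]
print(max(abs(a - b) for a, b in zip(lhs, rhs)))   # ~ 0, up to rounding
```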

The following stuff is pretty well-known: the group of linear transformations of ℝ³ preserving the dot and cross product is called the 3d rotation group, SO(3). We say SO(3) has an 'irreducible representation' on ℝ³ because there's no linear subspace of ℝ³ that's mapped to itself by every transformation in SO(3).

Much to my surprise, it seems that SO(3) also has an irreducible representation on ℝ⁷ where every transformation preserves the dot product and cross product in 7 dimensions!

It's not news that SO(3) has an irreducible representation on ℝ⁷. In physics we call ℝ³ the spin-1 representation of SO(3), or at least a real form thereof, while ℝ⁷ is called the spin-3 representation. It's also not news that the spin-3 representation of SO(3) on ℝ⁷ preserves the dot product. But I didn't know it also preserves the cross product on ℝ⁷, which is a much more exotic thing!

In fact I still don't know it for sure. But @pschwahn asked me a question that led me to guess it's true:

https://mathstodon.xyz/@pschwahn/112435119959135052

and I think I almost see a proof, which I outlined after a long conversation on other things.

The octonions keep surprising me.

https://en.wikipedia.org/wiki/Seven-dimensional_cross_product

johncarlosbaez,

@internic - what do you mean by "mean"? Only in 3 and 7 dimensions can we define a dot product and cross product obeying the usual identities. In 3 dimensions we can define the cross product using the exterior product and Hodge duality as you say, but in 7 dimensions we cannot: the only way I know uses octonions.

johncarlosbaez,

@dougmerritt - you've got the right idea. There's an equivalence between "normed division algebras" (of which there are only four: ℝ, ℂ, ℍ and 𝕆) and "vector product algebras" (which are vector spaces equipped with a "dot product" and "cross product" obeying some familiar axioms).

To get from a normed division algebra to a vector product algebra you take its vector space of "imaginary" elements - something you can define systematically - and define a dot product and cross product on that space of imaginary elements using the formulas I gave:

v⋅w = -½(vw + wv), v×w = ½(vw - wv)

Conversely there's a way to go back from a vector product algebra to a normed division algebra.

So, just as there are only four normed division algebras, there are only four vector product algebras. The normed division algebras have dimensions 1, 2, 4 and 8. The vector product algebras have dimensions 0, 1, 3, and 7. Only the last two are interesting!

johncarlosbaez, (edited )

@dougmerritt - the vector cross product in 7 dimensions is a very mysterious thing, but luckily it's just a spinoff of the octonions: to the extent we understand the octonions we understand it. I've been thinking about it for a long time. I thought I understood it pretty well.

The thing that's interesting me now - the thing I learned just yesterday! - is that just as 3d rotations act on 3d vectors in a way that preserves their dot product and cross product, 3d rotations also act on 7d vectors in a way that preserves their dot product and cross product. That's bizarre.

The "irreducibility" business is a way of saying that we're not getting this to happen using a cheap trick. If we dropped the irreducibility condition, we could think of 3d rotations as 7d rotations that just happen to only mess around with 3 of the coordinates. We are not doing that here!

So this is weird. By the way, in general, 7d rotations DON'T act on 7d vectors in a way that preserves their dot product and cross product. Only certain special ones do.

(It takes 21 numbers to specify a general rotation in 7 dimensions, since dim SO(7) = 7·6/2 = 21, and they all preserve the dot product of 7d vectors. Far fewer preserve the cross product too: those form the 14-dimensional group G₂, so they can be specified using only 14 numbers.)

johncarlosbaez, (edited )

@internic - The cross product is not associative! But you got the idea. Let me spell it out in painful detail. A "vector product algebra" is a vector space with an inner product I'll call the dot product and denote by

⋅: V² → ℝ

and also a bilinear antisymmetric operation called the cross product

× : V² → V

obeying

u ⋅ (v × w) = v ⋅ (w × u)

and

(u × v) × u = (u ⋅ u) v - (u ⋅ v) u

These are what I meant by "the usual identities". They imply a bunch more identities.
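
For example, dotting the second identity with v and using the first, cyclic identity gives

(u × v) ⋅ (u × v) = v ⋅ ((u × v) × u) = (u ⋅ u)(v ⋅ v) - (u ⋅ v)²

so |u × v|² = |u|²|v|² - (u ⋅ v)², the familiar formula saying the length of the cross product measures the angle between u and v.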

There exist only four vector product algebras! The only interesting ones are the 3d and 7d ones, since in the 0d and 1d examples the cross product is zero.

For more, try:

https://www.math.uni-bielefeld.de/~rost/data/vpg.pdf

johncarlosbaez,

@dmm - don't be shy about asking questions, and don't be scared to ask 'elementary' questions. I really like explaining math. Whatever might help you, I'm willing to give it a try.

johncarlosbaez, (edited )

@pschwahn - In our previous conversation I conjectured a representation of SO(3) on the imaginary octonions, and by now I've checked it really is a representation. Once we know it's irreducible, we'll know it's the spin-3 irrep.

To get this rep we start by picking a basic triple of imaginary octonions, say i, j, ℓ. They span a 3d space. Any rotation of this 3d space rotates our basic triple to some other basic triple. This gives an automorphism of the octonions and thus a transformation of the space of imaginary octonions. So we get a rep of SO(3) on the 7d space of imaginary octonions! That's all there is to it.

We could either prove this is irreducible, or check by computation that it's the spin-3 rep. I can explain more explicitly how rotating our basic triple defines an automorphism of the octonions. Showing this gives the spin-3 rep may be easier at the Lie algebra level.

johncarlosbaez,

@internic @ppscrv - it's painful to type.

johncarlosbaez,

@pschwahn - Oh my god, how dumb I am! span{i,j,ℓ} is definitely invariant because I'm rotating those vectors into linear combinations of themselves. So... back to the drawing board.

By the way, it was not completely trivial to check that if we rotate i, j, ℓ we get a new basic triple, because the concept of basic triple is defined using octonion multiplication. So when I succeeded, I thought I was being clever. I seem to be getting a bunch of SO(3) subgroups of G₂ (each basic triple gives me one), but none that act irreducibly on the imaginary octonions.

I suspect that a 'generic' SO(3) subgroup of G₂ will act irreducibly. Sometimes 'generic' things are the ones for which there's no simple formula. But I still hope there's something nice going on here. I want to connect 3d geometry to 7d or 8d geometry in a nice way.

There's a nice way to build the octonions from ℝ³ where you take the exterior algebra Λ(ℝ³), which is 8-dimensional, and give it a multiplication. SO(3) acts on Λ(ℝ³) in the usual way, and this preserves the octonion multiplication, but unfortunately it acts reducibly since it preserves each grade Λⁱ(ℝ³).

So we need something more gnarly.

monsoon0, to random

A proof is an amazing wonderful 🎉 thing… So I am wondering why the word “only” is in this sentence: “Researchers have obtained only mathematical proofs that quantum computers will offer large gains over current, classical computers” https://www.nature.com/articles/d41586-023-01692-9

johncarlosbaez, (edited )

@monsoon0 - yes, the math of quantum computing is very helpful. I believe the article was trying to say, in a clumsy way, that the math of quantum algorithms is far ahead of our technology for implementing these algorithms, and there's no guarantee that we'll succeed in implementing them.

As for @drdunk's statement, it's a fact that nobody has proved really good lower bounds on the asymptotic complexity of classical computations. For this reason, nobody can prove that quantum algorithms are definitively faster, asymptotically, than classical ones could ever be. What people do instead is prove that some quantum algorithms are faster, asymptotically, than the current 𝑏𝑒𝑠𝑡 𝑘𝑛𝑜𝑤𝑛 classical algorithms.

Consider Shor's algorithm for prime factorization:

https://en.wikipedia.org/wiki/Shor%27s_algorithm

This is one of the main examples of a quantum algorithm that seems to beat what we can do classically. The Wikipedia article states what's known about it. Shor's algorithm for factoring an integer N takes time that's bounded by a polynomial in log N (Wikipedia states a very precise bound that's below O((log N)³)). The current best known classical algorithm takes time that grows faster than any polynomial in log N. But nobody has proved that it's impossible to use classical algorithms to factor integers in a time that's bounded by a polynomial in log N. It's widely believed to be true, but not proved.
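
To make the division of labor concrete, here's a toy Python sketch (mine, and exponentially slow) of the 𝑐𝑙𝑎𝑠𝑠𝑖𝑐𝑎𝑙 part of Shor's algorithm. The quantum computer is only needed to find the multiplicative order r of a random number a mod N; here a brute-force loop stands in for that step, so this shows the structure of the reduction, not the speedup.

```
# Factoring via order-finding, with a brute-force stand-in for the quantum step.
from math import gcd
import random

def order(a, N):
    # Stand-in for quantum order-finding: smallest r > 0 with a^r = 1 mod N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order_finding(N):
    while True:
        a = random.randrange(2, N)
        g = gcd(a, N)
        if g > 1:                     # lucky guess: a already shares a factor
            return g, N // g
        r = order(a, N)
        if r % 2:                     # need an even order
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:                # a^(r/2) = -1 mod N tells us nothing
            continue
        p = gcd(y - 1, N)             # y^2 = 1 mod N with y != +-1, so this is a proper factor
        if 1 < p < N:
            return p, N // p

print(factor_via_order_finding(15))  # e.g. (3, 5)
```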

To my mind, the most interesting thing about this is the question of why it's so hard to prove good lower bounds on the complexity of algorithms! There's something very deep about this, which "meta-complexity" theory is trying to tackle:

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/

BartoszMilewski, to random

I'm struggling with the definition of the category of elements - the direction of morphisms. Grothendieck worked with presheaves \(C^{op} \to \mathbf{Set}\), with a morphism \((a, x) \to (b, y)\) being an arrow \(a \to b\) in \(C\). The question is, what is it for co-presheaves? Is it \(b \to a\)? nLab defines it as \(a \to b\) and doesn't talk about presheaves. Emily Riehl defines both as \(a \to b\), which makes one wonder what it is for \((C^{op})^{op} \to \mathbf{Set}\), not to mention \(C^{op} \times C \to \mathbf{Set}\).

johncarlosbaez, (edited )

@madnight - whoops, I should change "prehistory.pdf" to "history.pdf" in the URL I gave... but Bartosz gave a better link: the arXiv is always better.

Okay, I see what you mean about "weeds".

pschwahn, to random German

Non-semisimple Lie groups are so weird. Weyl's unitarian trick does not work for them. So I need to constantly remind myself that:

  1. representations of GL(n,ℂ) are not determined by their character,
  2. not every finite-dimensional representation of GL(n,ℂ) is completely reducible,
  3. Finite-dimensional GL(n,ℂ)-representations are not in 1:1-correspondence with finite-dimensional U(n)-representations.

However these work when you look only at irreducible representations, or when you replace GL by SL (and U by SU). The archetypical counterexample is given by the (reducible but indecomposable) representation
\[\rho: \mathrm{GL}(1,\mathbb{C})=\mathbb{C}^\times\to\mathrm{GL}(2,\mathbb{C}):\quad z\mapsto\begin{pmatrix}1&\log |z|\\0&1\end{pmatrix}.\]
(Example shamelessly stolen from: https://math.stackexchange.com/questions/2392313/irreducible-finite-dimensional-complex-representation-of-gl-2-bbb-c)

Turns out that entire StackExchange threads can be wrong about this (for example https://math.stackexchange.com/questions/221543/why-is-every-representation-of-textrmgl-n-bbbc-completely-determined-by), so be wary!
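
A quick numerical sanity check of that counterexample (a toy sketch, not from the original post): z ↦ [[1, log|z|], [0, 1]] is a homomorphism precisely because log|zw| = log|z| + log|w|, the line spanned by e₁ = (1, 0) is invariant, and ρ(z) sends (a, 1) to (a + log|z|, 1), so no complementary line is invariant - reducible but indecomposable.

```
# Check rho(zw) = rho(z) rho(w) on random nonzero complex numbers.
import math, random

def rho(z):
    return [[1.0, math.log(abs(z))], [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for _ in range(5):
    z = complex(random.uniform(0.5, 2), random.uniform(0.5, 2))
    w = complex(random.uniform(0.5, 2), random.uniform(0.5, 2))
    lhs, rhs = rho(z * w), matmul(rho(z), rho(w))
    assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("rho(zw) = rho(z) rho(w) holds on random samples")
```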

johncarlosbaez, (edited )

@pschwahn @AxelBoldt

"However for a group such as GL(𝑛,ℝ) it looks like the issue persists - a representation such as 𝑔↦|det(𝑔)|ʷ for non-integer w will be real-analytic, but it will not correspond to a representation of U(n)."

U(n) is not a maximal compact subgroup of GL(n,ℝ) - it's not even contained in GL(n,ℝ). So there's no way to restrict a representation of GL(n,ℝ) to U(n), and you shouldn't expect an equivalence (or even a functor) from the category of representations of GL(n,ℝ) to those of U(n).

The maximal compact subgroup of GL(n,ℝ) is O(n), so you can restrict representations of GL(n,ℝ) to O(n). But a bunch of different real-analytic representations of GL(n,ℝ) restrict to the same representation of O(n), like all the representations 𝑔↦|det(𝑔)|ʷ. If I remember correctly this particular example is the "only problem". Of course it has spinoffs: you can tensor any representation of GL(n,ℝ) by a representation 𝑔↦|det(𝑔)|ʷ and get a new one which is the same on O(n).

I hope I'm remembering this correctly: every finite-dimensional smooth representation of GL(n,ℝ) is completely reducible, and every irreducible smooth representation comes from one described by a Young diagram, possibly tensored by a representation 𝑔↦det(𝑔)ʷ where w is some real number, possibly also tensored by a representation 𝑔↦|det(𝑔)|ʷ where w is some real number.

It's a lot easier to find treatments of the 'algebraic' representations of GL(n,ℝ), and it's even easier to find them for SL(n,ℝ).

johncarlosbaez,

@pschwahn @AxelBoldt - Ugh! I should have stuck with rational representations, which is what people usually talk about when studying representations of linear algebraic groups.

I'm pretty sure that every finite-dimensional rational representation of GL(n,ℝ) is completely reducible, and every irreducible rational representation comes from one described by a Young diagram, possibly tensored by a representation 𝑔↦det(𝑔)ⁿ where n is some integer.

(There is some overlap here since the nth exterior power of the tautologous representation, described by a Young diagram, is also the representation 𝑔↦det(𝑔).)

It's annoying that the basic facts about finite-dimensional representations of GL(n,ℝ) aren't on Wikipedia! Someday I'll have to put them on there... once I get enough references to make sure I'm not screwing up!

johncarlosbaez, (edited )

@pschwahn @AxelBoldt - right, that's one way to proceed. I've been doing a lot of work lately with representations of GL(n,𝔽) for 𝔽 an arbitrary field of characteristic zero. For subfields of ℂ this trick of complexifying and reducing to the case 𝔽 = ℂ works fine. But in fact the representation theory works exactly the same way even for fields of characteristic zero that aren't subfields of ℂ!

It's not that I really care about such fields. I just find it esthetically annoying to work only with subfields of ℂ when dealing with something that's purely algebraic and shouldn't really involve the complex numbers. So I had to learn a bit about how we can develop the representation theory of GL(n,𝔽) for an arbitrary field of characteristic zero. Milne's book 𝐴𝑙𝑔𝑒𝑏𝑟𝑎𝑖𝑐 𝐺𝑟𝑜𝑢𝑝𝑠 does this, and a preliminary version is free:

https://www.jmilne.org/math/CourseNotes/iAG200.pdf

but unfortunately it's quite elaborate if all you want is the basics of the representation theory of GL(n,𝔽).

(For 𝔽 not of characteristic zero everything changes dramatically, since you can't symmetrize by dividing by n!. Nobody even knows all the irreps of the symmetric groups.)
