leemph

@leemph@mathstodon.xyz

(σ+1)0=σ(σ+1)

Sometimes the beauty hidden inside a random equation can drive you across incredible landscapes. My journey started from the mesmerizing cloudy cerulean sky of the philosophy of mathematics and led me to the breathtaking earthly Eden of category theory.


johncarlosbaez, (edited ) to random

In the real world, the rope in a knot has some nonzero thickness. In math, knots are made of infinitely thin stuff. This lets mathematical knots be tied in infinitely complicated ways - ways that are impossible for knots with nonzero thickness! These are called 'wild' knots.

See the wild knot here? There's just one point where the stuff it's made of needs to have zero thickness. So we say it's wild at just one point. But some knots are wild at many points.

There are even knots that are wild at every point! To build these you need to recursively put in wildness at more and more places, forever. This is hard to draw. I'd really like to see a good try.

Wild knots are extremely hard to classify. This is not just a feeling - it's a theorem. Vadim Kulikov showed that wild knots are harder to classify than any sort of countable structure that you can describe using first-order classical logic with just countably many symbols!

Very roughly speaking, this means wild knots are so complicated that we can't classify them using anything we can write down. This makes them very different from 'tame' knots - knots that aren't wild. Yeah, tame knots are hard to classify, but nowhere near that hard.

(1/3)

https://www.youtube.com/watch?v=o7U3yvMF8Sw

leemph,

@johncarlosbaez this seems to be adjacent* to the theme of the last post about metacomplexity. Are you secretly following an invisible Ariadne's thread?

*(stupid comment) Like, imagine studying computability problems using Borel reducibility on a suitable space of algorithms.

johncarlosbaez, (edited ) to random

The precise location of the boundary between the knowable and the unknowable is itself unknowable. But we 𝑑𝑜 know some details about 𝑤ℎ𝑦 this is true, at least within mathematics. It's being studied rigorously in a branch of theoretical computer science called 'meta-complexity theory'.

For some reason it's hard to show that math problems are hard. In meta-complexity theory, people try to understand why.

For example, most of us believe P ≠ NP: merely being able to 𝑐ℎ𝑒𝑐𝑘 the answer to a problem efficiently doesn't imply you can 𝑠𝑜𝑙𝑣𝑒 it efficiently. It seems obvious. But despite a vast amount of work, nobody has been able to prove it!

And in one of the founding results of meta-complexity theory, Razborov and Rudich showed that if a certain attractive class of strategies for proving P ≠ NP worked, then it would be possible to efficiently crack all codes! None of us think 𝑡ℎ𝑎𝑡'𝑠 possible. So their result shows there's a barrier to knowing P ≠ NP.

I'm simplifying a lot of stuff here. But this is the basic idea: they proved that it's probably hard to prove that a bunch of seemingly hard problems are really hard.

But note the 'probably' here! Nobody has 𝑝𝑟𝑜𝑣𝑒𝑑 we can't efficiently crack all codes. And this, too, seems very hard to prove.

So the boundary between the knowable and unknowable is itself shrouded in unknowability. But amazingly, we can prove theorems about it!

https://www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/

leemph, (edited )

@johncarlosbaez what would be a precise enough definition of knowability?
The article seems to take a complexity-theoretic POV. I'd like to make it a little broader and less formal. Do I know P(x) iff I can compute whether P(x) is true or false for x? Often in math a problem seems obscure and hard; then, once you understand the solution, it feels quite easy. Then there is that old von Neumann quote: "in math you don't understand things, you get used to them". I'm half joking here, but maybe I'm just confusing myself by blurring what it means "to know" something vs "to understand" something.

In category theory I get that feeling reading Lawvere or thinking about Yoneda: another possible perspective on "knowability" seems to exist. You know an object by knowing its relationships (arrows) to other objects. Sometimes it is enough to know an object X just by knowing how it relates to a small collection of basic "already known" objects. I believe that's what Lawvere refers to as the "subject", what is already familiar/known, interacting with the "objective", what is to be known, i.e. unknown. From this POV, take the disjoint union of two categories, say 1+1: for one object, the other is "unknowable" in some sense.

I wonder if there is some deep connection between various naive mathematical ways of understanding knowability.

leemph,

@johncarlosbaez I see, knowability := provability; I like this choice. I'm not a philosopher, nor a mathematician, but this seems problematic and exciting for the same reason.
Let's call a proof of T a finite sequence of formal expressions leading from an expression P₀ that is given (an assumption) or already proved, to an expression P_n = T, such that each P_k is obtained from the previous ones using a finite set of deduction rules. A proof becomes a tangible object.

Assume knowable(T) := "there exists a proof of T"; then the problems I see are:
1- "T is knowable" becomes a relative concept: it depends on the assumptions (axioms) and the deduction rules. So everything becomes knowable by assuming the right set of axioms;

2- is T known if we can exhibit a proof of T in a finite amount of time, or if we can merely show that at least one proof exists, even if we cannot point to it? Does knowable imply known?

3- if we prove T, do we then have a proof that it is provable? In which deductive system should that meta-proof be carried out?

In brief, I believe that "knowable" means something stronger than "there exists a proof", but I don't know what. Also, the fact that proofs are relative to the axioms seems interesting for this interpretation of knowability.

When you say the following, I'm completely lost

<<For example, we know that if Goldbach's conjecture is true but unprovable, it's also impossible to prove that it's unprovable. So there are cases where unknowability shrouds itself in unknowability.>>

If in this phrase "we know" means "it is provable" then my mind stops working. If it is impossible to prove it is unprovable, how can we know it is "true but unprovable"?

boarders, to random

Reasons I dislike formalism in the philosophy of mathematics:

  1. Those that follow it tend to have the view that mathematics as a human practice should cause no philosophical puzzlement, that this perennial question about the nature of mathematics as “synthetic a priori” is not just a question in need of disillusion - but one that couldn’t possibly rouse some deep sense of mysticism.

  2. Explaining mathematical practice as the making of certain meaningless marks in certain orders is like explaining language as certain movements of air caused by certain manipulations of the throat.

  3. Formalism ultimately seeks to find some “autonomous discursive practice” - a language game one could play though one plays no other - which emerges spontaneously from nothing; this is deeply misguided to anyone who has read Quine's or Sellars's criticism of logical positivism.

  4. Taking formalism seriously means that, perhaps, someone like Frege is the first mathematician - what were Gauss, Euler and Galois doing before him? Funnily enough, Bertrand Russell says something like this somewhere (which one should probably take as no endorsement)

leemph,

@boarders I remember a quick description of the formalist position as one of the two possible answers to the question of what is mathematical knowledge about: math is a science (content conception) vs math is language (formalistic conception).
So ultimately math is, for the formalist, language about language, the science of the form.

But this seems a hasty conclusion; your point 2 seems to be about this.
Math seems more like, in essence, the science of the connection between content and form.

Maybe I don't understand the formalist position at all, but it seems like the claim is: even when doing semantics, we interpret symbols, forms, as other symbols. So all we are doing is interpreting symbols as abstract things in terms of symbols as concrete things. I feel there is a kind of monism hidden in this claim.

BartoszMilewski, to random

Sufficiently advanced determinism is indistinguishable from free will.

leemph,

@BartoszMilewski ... sufficiently free, free will is indistinguishable from chaotic behaviour.

julesh, to random

Idle thonk: the fundamental theorem of arithmetic, in the form "the free commutative monoid on the free monoid on a point is the multiplicative monoid of positive integers", is, like, the simplest not-boring consequence of the periodic table of monoidal n-categories
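The concrete content here is unique factorization: under multiplication, the positive integers form a free commutative monoid generated by the primes, i.e. each positive integer is exactly a finite multiset of primes. A quick sketch of that correspondence (my own illustration, not julesh's; `prime_factors` is a hypothetical helper name):

```python
from collections import Counter

def prime_factors(n: int) -> Counter:
    """Factor n > 0 into a multiset (Counter) of primes by trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

# The monoid isomorphism: multiplying integers corresponds to taking the
# multiset sum (adding exponents) of their prime factorizations.
a, b = 360, 98
assert prime_factors(a * b) == prime_factors(a) + prime_factors(b)
assert prime_factors(1) == Counter()  # the unit maps to the empty multiset
```

Freeness is exactly the uniqueness of the factorization: two multisets of primes with the same product must coincide.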

leemph,

@johncarlosbaez I love to see positive rationals in the same way!
Firstly because I like to wonder: what would a world be like where multiplication came first and knows nothing about addition? And then ask myself how addition behaves on the generating elements of Q, i.e. the primes. I know number theory hammers hard on something like this, but I'm totally ignorant of it.

Secondly, I like this because of iteration of set-theoretic functions. Given a monoid M, I'd switch to viewing its elements as embedded in the totality of submonoids of M, ordered by inclusion. Then we somehow pass from the nature of M to something with a multiplicative flavour... idk, this is handwavy... it's just some pseudo-philosophical talk for me.

Lastly, and connected to discrete dynamics: viewing it like that is natural, since we can display the information of each M-set X in the form of the action category X//M (weak quotient). And this is how I usually visualize them.

johncarlosbaez, (edited ) to random

What happens in math if there are no equations? All you have are things, processes that turn one thing into another, meta-processes that turn one process to another, and so on... forever!

If this is too scary you can truncate it at the nth level. Then you're dealing with an 'n-category'. This has things (called 'objects'), processes (called 'morphisms'), meta-processes (called '2-morphisms') and so on up to n-morphisms. You can use equations... but only between n-morphisms.

In this talk I explain the periodic table of n-categories - a fundamental structure that emerges when you think about this stuff.

I put a lot of work into making it fun and easy to follow... and I think it worked!

(Alas, the video quality is still not great, but it's better than last week's lecture where I introduced n-categories. The volume is low so you have to really crank up your speaker... and the only way I have to boost the volume of a video also makes the file a lot bigger.)

https://www.youtube.com/watch?v=X1PkkqDwf8Y

leemph,

@johncarlosbaez @JPQ And are these the same anyons @DavidJaz talked about in a recent video, linked with Schreiber and Sati's work on quantum computation? Is something known about Microsoft's effort and the math it is based on?

johncarlosbaez, (edited ) to random

Does reality make sense - or does it just feel that way most of the time? Jamais vu, a kind of opposite of déjà vu, is when something familiar loses its usual meaning and starts seeming strange.

You can bring on jamais vu by staring at a word. For example, psychologists told people to look at the word 'blood' for 3 minutes. Some reports:

24 seconds: “… b and d look like each other turned backwards, hence meaningless.”
60 seconds: “o's look unfamiliar, staring.”
72 seconds: “b and d look like p and q upside down.”
179 seconds: “a collection of letters”.

Has this happened to you? It happens to me quite easily.

It's also been found that repeatedly checking something can eventually lower your confidence in what you're checking. Like if you check to see if a stove is off 20 times in a row.

Some psychologists call this 'semantic satiation', which hints at a beginning of a theory for what's going on. But there's been much less study of jamais vu than déjà vu, so little is known about it.

The paper I'm reading has a cute title: "The the the the induction of jamais vu in the laboratory: word alienation and semantic satiation." You can read it for free here:

https://hal.science/hal-03257557/

leemph,

@johncarlosbaez I have experienced this many times, even on purpose while doing Buddhist meditation... so I believe this is well known in ancient Eastern meditative traditions. It is also a particular case of a state of mind, or of consciousness, that is one of the central concepts in the saga of Carlos Castaneda, the once-famous writer (novelist/anthropologist) back in the '70s.

I'm not aware of the academic literature on this, but the point is that once you unlock this state, you somehow understand first hand that the shapes, structure and meaning of perception are superimposed in a second stage, after the raw perception data is presented to our inner eye. I guess medieval philosophers discussed this, and later the Gestalt school too... maybe also Heidegger's distinction between the ontic and being, but I'm not sure.

If you think about it, the Gestalt shifts, e.g. the rabbit-duck image, the old lady-young lady drawing and so on, cause the coexistence of two different "structures" over the same set of "raw perceptual elements". So to see the other meaning of the image you actually have to forget, "unsee", the old meaning. This reminds me of forgetful functors and dual objects.

Maybe perception comes in degrees, and the phenomenon you describe is just going down that ladder of forgetting structure. We go down the ladder to more concrete and less structured perception until we may hit a meaningless stream of uninterpreted and unstructured perception... this reminds me of Mach somehow, but it is what Castaneda called "seeing".

johncarlosbaez, (edited ) to random

When etiquette expert Emily Post was asked about the polite way to eat spaghetti, she replied:

"There is no polite way to eat spaghetti."

Similarly when we want to write a fraction that's bigger than 1, we have two main choices: use an improper fraction, or a vulgar fraction.

(The pedants can now step in and correct me. It was just a joke.)

leemph,

@johncarlosbaez like eating pizza. If you eat it as it's meant to be eaten, it's not very classy and feels goofy. But if you do it the polite way, "knifing and forking"... you'll look like a fool, at least here in Italy x).

leemph,

@johncarlosbaez I'm a nobody, so I'm not an etiquette expert. Decades ago in Italy the use of a spoon to help twist spaghetti onto the fork was accepted and looked quite elegant. I suspect that now, galateo-wise, it is deprecated.

But cutting spaghetti... it's painful. That's admissible for kids only hahah.

boarders, to random

In a category, the generalized elements of a limit are given by the limit in Set of the generalized elements. For example, the T-elements of a product A × B (t ∈_T A × B) are the same as pairs of a T-element of A and a T-element of B (i.e. the product, as sets, of the T-elements of A and the T-elements of B).

This is not true of colimits, and it is one sense in which limits are easier, despite being a dual concept.
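boarders' first sentence, written symbolically (a sketch; T is any object, D a diagram with limit lim D):

```latex
\mathrm{Hom}(T,\ \lim_i D_i)\ \cong\ \lim_i\ \mathrm{Hom}(T,\ D_i),
\qquad\text{e.g.}\quad
\mathrm{Hom}(T,\ A\times B)\ \cong\ \mathrm{Hom}(T,A)\times\mathrm{Hom}(T,B).
```

Dually, Hom(colim D, T) ≅ lim Hom(D_i, T): the contravariant hom turns colimits into limits, so generalized elements of a colimit are not computed as a colimit in Set - one source of the asymmetry.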

leemph,

@boarders It often seems like there is a broken symmetry between dual concepts. I still don't know the source of this difference; maybe it is only apparent. Any hints on what I should look for, keywords etc.?

BartoszMilewski, to random

My intuition tells me that a set has the least structure (it's a discrete category). Conversely, the category of sets has the most structure, since its morphisms don't have to preserve any structure. Does it make sense?

leemph, (edited )

@BartoszMilewski I feel it's the opposite. In Set we have empty structure, so the maximal number of maps, because to be a map you don't have to satisfy any condition. The more structure you add, the fewer solutions there are to the problem "a map X—>Y that respects the structure".

(1/2)
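leemph's point - empty structure admits the most maps, and each added condition cuts the count down - can be checked by brute force on a two-element example (my own sketch, using ℤ/2 as both a bare set and a monoid under addition):

```python
from itertools import product

Z2 = [0, 1]
add = lambda x, y: (x + y) % 2

# All set maps Z2 -> Z2: no condition to satisfy, so there are |Y|^|X| of them.
all_maps = [dict(zip(Z2, image)) for image in product(Z2, repeat=2)]

# Monoid homomorphisms must preserve the unit and the operation: fewer survive.
homs = [f for f in all_maps
        if f[0] == 0
        and all(f[add(x, y)] == add(f[x], f[y]) for x in Z2 for y in Z2)]

assert len(all_maps) == 4   # every function counts when there is no structure
assert len(homs) == 2       # only the identity and the constant-0 map remain
```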

leemph,

@johncarlosbaez @BartoszMilewski are there formal criteria for deciding what counts as structure and what counts as property? For example, we can define groups to be monoids with additional structure i:M—>M, i.e. an inverse map, satisfying some property, or we can treat the existence of inverses as a property.

Then how can some kinds of properties magically upgrade to structure?
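One way to see why "having inverses" can be treated as a mere property: in a monoid, inverses are unique when they exist, so the inverse map i carries no extra data beyond the monoid itself. A small check (my own illustration, on ℤ/4 under addition):

```python
def inverses(elements, op, unit):
    """Return {x: inverse of x} for every invertible x in a monoid."""
    inv = {}
    for x in elements:
        candidates = [y for y in elements
                      if op(x, y) == unit and op(y, x) == unit]
        assert len(candidates) <= 1  # uniqueness: at most one inverse each
        if candidates:
            inv[x] = candidates[0]
    return inv

# Z/4 under addition: every element is invertible, and the inverse map
# i(x) = -x mod 4 is completely forced by the monoid operation.
inv = inverses(range(4), lambda x, y: (x + y) % 4, 0)
assert inv == {0: 0, 1: 3, 2: 2, 3: 1}
```

Since the structure i is uniquely determined whenever it exists, "being a group" adds a property to a monoid rather than a genuine choice of structure.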

dpiponi, to random

New Chomsky paper on Hopf algebras. Seriously.

https://arxiv.org/abs/2306.10270

leemph,

@johncarlosbaez @dpiponi @MartinEscardo I must be missing the point here, but the definition in the picture seems completely reasonable to me. Being out of context, I don't know what notation/terminology was established in the book, but if I had to define a "multiplication law" starting from zero, I could ask it to be a binary relation on (G×G)×G s.t. for every pair (a,b) there exists a unique c. The text in the quote is basically asking the relation to be a function.

More philosophically, closure seems interesting to me: given a binary function * on a set G, it extends to a binary function on the powerset P(G). An element S in P(G) is *-closed iff

S² ⊆ S in P(G)

Put like that, I find closure suddenly becomes very cool, rhyming with idempotency.
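The powerset extension above can be made concrete (my own example, using multiplication mod 6 as the binary operation):

```python
def setwise_product(op, A, B):
    """Extend a binary operation on G to subsets: A·B = {op(a, b) : a in A, b in B}."""
    return {op(a, b) for a in A for b in B}

mul_mod6 = lambda x, y: (x * y) % 6

S = {0, 2, 4}
S2 = setwise_product(mul_mod6, S, S)
assert S2 <= S    # S is *-closed: S² ⊆ S
assert S2 == S    # here S is even idempotent in the powerset monoid: S² = S

T = {2, 3}
assert not setwise_product(mul_mod6, T, T) <= T   # 2·2 = 4 escapes T
```

The idempotency rhyme is visible in the example: for this S the inclusion S² ⊆ S is actually an equality, so S is an idempotent element of (P(G), ·).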

leemph,

@boarders Now, phrased like this, it makes me feel stupid. Are we really including the definition of function in the definition of group?
It seems to me as if we are just building into the definition of group that the group law is a function, instead of assuming it.

Probably this has to do with formal logic and what the standard practice and terminology are nowadays. Right now I'm not lucid enough to remember how the difference between relational and functional symbols is handled formally at the syntactic level, but when we define what a group is by saying it is a set equipped with a function, we are already interpreting the theory of groups by giving a model. And we assume the law to be a function.

Maybe I'm just getting very confused here.

leemph, (edited )

@johncarlosbaez @boarders @OscarCunningham
"(For example, once we know this machinery, we can instantly see posets are not described by a one-typed algebraic theory by noting that the forgetful functor from posets to sets is not faithful!)"

Can you clarify this? I see you are alluding here to the non-algebraic nature of some theories/monads. If I remember correctly, some structures asking for infinitary operations are inherently non-algebraic (like topological spaces).

The theory of posets asks for a set (of stuff?) equipped with a binary relation (structure in the Bourbaki sense) satisfying some axioms (not equations, so probably not properties?). Yet I really don't see instantly why posets are non-algebraic in the technical sense you mention, even if I know it in my heart, let's say. Sure, it is not an equational theory and there are no functional symbols in its signature.

Also, let Poset be a category where objects are sets equipped with a binary relation that is a partial order. A morphism (X,r)—>(Y,s) is a set-theoretic function compatible with the partial orders. So shouldn't the forgetful functor be faithful? It shouldn't forget stuff... only structure (at most).
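The failure being discussed can be made concrete - a sketch, assuming the standard two-element counterexample: the identity map from the discrete order on {0,1} to the chain 0 ≤ 1 is a monotone bijection whose inverse is not monotone, so the forgetful functor from Poset to Set does not reflect isomorphisms:

```python
# Posets encoded as sets of pairs (x, y) meaning x ≤ y.
discrete = {(0, 0), (1, 1)}            # only reflexivity: 0 and 1 incomparable
chain    = {(0, 0), (1, 1), (0, 1)}    # the chain 0 ≤ 1

def monotone(f, P, Q):
    """Is the dict f a monotone map from poset P to poset Q?"""
    return all((f[x], f[y]) in Q for (x, y) in P)

ident = {0: 0, 1: 1}
assert monotone(ident, discrete, chain)       # a monotone bijection...
assert not monotone(ident, chain, discrete)   # ...whose inverse is not monotone
```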

leemph,

@johncarlosbaez @boarders @OscarCunningham
"But it doesn't reflects isomorphisms"

Addendum: for ppl that are not so bright, here an example of what should be "obvious"...I wasn't able to come up with a counterexample on my own and now I'm wondering if this is the minimal one.

https://math.stackexchange.com/q/3594857

leemph, (edited )

@johncarlosbaez @boarders @OscarCunningham
"if you're wondering how we get this 'reflects isomorphisms' criterion"

Sure: if T is a theory whose models in Set are of the form (X, F_i), where each F_i is n-ary for some n, let φ be a bijection
(X,F_i)—>(Y,G_i) respecting each F_i. The structure attached to our stuff being made of functions, the "proposition of homomorphism" translates to

G_i • (φx...xφ) = φ • F_i

where • is composition in Set and φx...xφ is obtained by the universal property of products. Let ψ be the inverse of φ; pre- and post-composing,

G_i • (φx...xφ) • (ψx...xψ) = φ • F_i • (ψx...xψ)
G_i = φ • F_i • (ψx...xψ)
ψ • G_i = F_i • (ψx...xψ)

so ψ respects the structure too, and φ is an isomorphism.

Overall this was quite a wild ride, and I don't think I can go back to the initial point connecting all the dots. So one point is that if U:C—>Set is algebraic - a property that can be expressed equivalently in the monad-theoretic, the Lawvere-theoretic and the universal-algebra way (as stuff equipped with functional structure satisfying equations) - then U must reflect isos of Set back to C.

All of this was to make the point that it is good, and more interesting, to define algebraic structure as stuff equipped with functions, after I tried to play devil's advocate claiming it was also reasonable, as Dan Piponi's book did, to define groups as sets equipped with a 3-ary relation satisfying functionality (closure).

I hope I was able to follow everything, but that leaves me with an immediate doubt. Are we saying that when a functor to Set forgets structure (is not full) without reflecting isos, the structure it forgets is non-algebraic in nature?
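As a sanity check of the derivation in this post (my own brute-force sketch, not from the thread): for the monoid (ℤ/4, +), every bijective homomorphism to itself has an inverse that is again a homomorphism - the algebraic forgetful functor reflects isos:

```python
from itertools import permutations

Z4 = (0, 1, 2, 3)
add = lambda x, y: (x + y) % 4

def is_hom(f):
    """Does the dict f preserve the unit and the operation of (Z/4, +)?"""
    return f[0] == 0 and all(f[add(x, y)] == add(f[x], f[y])
                             for x in Z4 for y in Z4)

# Check every bijection Z4 -> Z4: whenever it is a homomorphism,
# its set-theoretic inverse is automatically a homomorphism too.
for perm in permutations(Z4):
    f = dict(zip(Z4, perm))
    if is_hom(f):
        f_inv = {v: k for k, v in f.items()}
        assert is_hom(f_inv)
```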

leemph, (edited )

@johncarlosbaez yeah, but your last remark must have to do, I suspect, with the fact that the lattice operations subsume, contain, all the information of the order, in such a way that the order structure becomes redundant: it is encoded in the algebra and you can recover it.
It is weird... if you add the right extra properties to a non-algebraic structure, sometimes you turn the old structure into a new algebraic one.
I should study more examples of this... I'm sure I can recall some of them if I put some time into it... and come back here when I have done it.

Anyway, just my last thought before I stop. Apparently, when dealing with multi-sorted algebraic structures, e.g. modules, the story about reflecting isos seems to be more delicate, isn't it? Projecting onto one of the two sorts could be a source of forgetful functors that don't reflect isos (not faithful and not full)...
If this is correct, then there must be categories of algebraic structures that are not exhibited as algebraic over Set by their naive forgetful functors.

leemph,

@johncarlosbaez about the two possible categories of lattices, thx! That was great! Now it really makes sense... I realize I was a bit sloppy and confused about it.

"If you do things right" is the key point.

About forgetful functors to Set^n: this trick is pretty clever, actually; I wish I had been able to see it myself. Thanks for the patience.

johncarlosbaez, (edited ) to random

I'm very glad I left Twitter. But if Twitter was like heroin for me, Mastodon is like methadone. It doesn't give me enough of a high to keep me addicted. But it could be a useful step on my road to recovery!

With not enough people to talk to here, I've been spending less time on general-purpose social media and more time on specialized sites like the nLab and the Category Theory Community Server. I'm getting into more conversations that actually help me with my work, and I hope I'm helping students more. I miss the lively diversity of community, but I don't know what to do about that.

Mind you, I'm very grateful for all the nice replies I get to my posts here! I'm especially happy that some experts are replying to my amateur posts about music theory. But these replies feel few and far between.

To use another analogy: if Twitter felt like an echo chamber, Mastodon feels like an anechoic chamber... or padded cell. That's probably why most of the friends I eagerly welcomed here are gone now:

https://mathstodon.xyz/@johncarlosbaez/109262546588286591

I'm not sure where they went. Back to Twitter? That's not an acceptable option for me. Back to real life? That would be interesting.

I'm not planning to "quit" Mastodon. But if you've noticed my posts here slacking off, this is why.

leemph,

@johncarlosbaez a bit off topic. I also tried to start a kind of diary of my math posts on the internet (mails, forums, social networks), in particular of the answers I got from experts. Not because I feel they are things of absolute value, but for backup reasons: call it a melancholy drive. In, say, ten years I'd like to read my old questions and the answers to them, to see my progress and to have a bookmark of the little things that interest me.

Do you have some practical tips on how you manage this backup process, e.g. in your diary, without going mad or making a tremendous effort? Some kind of workflow. If you already posted about it on your site, I'm sure I missed it.

Thank you in advance.

johncarlosbaez, (edited ) to random

The 20th century was the century of fundamental physics. While we saw immense progress toward discovering the basic laws governing matter, space, and time, this has slowed to a crawl since 1980, despite an immense amount of work. Luckily, there's plenty of exciting progress in other branches of physics: for example, using the fundamental physics we already know to design surprising new forms of matter! Like all other sciences in the 21st century, physics must also embrace the challenges of the Anthropocene: the era in which humanity is a dominant influence on the Earth’s climate and biosphere.

Last week in Santa Fe I gave a public lecture on the future of physics. You can watch it here on YouTube:

https://www.youtube.com/watch?v=Pu8jkCqKHpY&list=PLuAO-1XXEh0ZiJlRKz7EuODAdIOjC5-1l&index=1

Since my talk was announced on the marquee of a theater, and I myself misread it as a concert by Joan Baez, I asked the audience how many had been expecting her. About 10, it seems! I planned to sing a bit of a song, but thought better of it at the last second.

leemph, (edited )

@johncarlosbaez I like what seems to be an attempt to sketch an alternative classification of kinds of civilizations: from matter-based to energy-based and now to information-based. This suggests, as you do, a progression where meaning first, and wisdom later, could be the next steps.

I believe it has become very common to use energy as the meter of a civilization's advancement, along with mastery over matter. More raffinate and large-scale control over matter, i.e. resource harvesting and production, requires more energy. The Kardashev scale is an example of this mode of thought, which has penetrated sci-fi and speculative discourse.

So the idea implicit in your talk seems very suggestive and original to me. Is this paradigm already proposed somewhere? Do you think that a measurable, mathematically founded access to and control over elusive concepts/qualities like meaning and wisdom may come from category-theoretic approaches or from somewhere else?
Hearing your talk, the work of Tai-Danae Bradley and also the use of modalities in a Lawvere/Schreiber-like way to model qualities popped into my mind.
But maybe I'm completely misreading your words in the talk.

leemph, (edited )

@johncarlosbaez I'm sorry, now that I think about it I guess I was tricked by a "false friend". English is not my language. In mine, the word has the same metallurgical literal meaning, but we also use it in a more figurative way, an analogy I guess, something that is missing in English usage.

Btw, I'm not kidding: if you combine this idea of progression with all the parts about quasiparticles, you have enough ingredients to make an elegant and beautiful sci-fi novel out of your talk. Imagine a civilization that not only measures its functioning and outputs in terms of meaning, but also imitates the highly optimized designs you find in nature to inspire its technology.
Maybe such a civilization would measure every project, machine, process and human activity not only in terms of the energy needed to harvest resources, sustain the project, or run the machine or the activity, i.e. its cost in money units. It would also take into account some kind of energy, information and "meaning" footprint.

It sounds good, but also like a good sci-fi story. Not sure if that is good or bad :/.

leemph, (edited )

@johncarlosbaez Raffinate, as an adjective, applies to a substance, a liquid, that has undergone a specific process. In my language we can use it in its figurative meaning. I believe the closest English term is refined, e.g. a refined person, usually referring to their manners I guess.

Now I'm pretty sure that in English raffinate cannot be used in a figurative way, but refined didn't sound adequate to my ears either. I used raffinate as an adjective applied to "control" to mean that the control, or better the technology, was the result of a long "purification" and refining process.

Ps: I agree with your last sentence. I often focus on dystopian visions of the future, and I believe many of them are realistic. But the future has, must have, some degrees of freedom left... and imagining better, credible futures may be one important ingredient in making them a little more possible. I believe your talk was able to paint such a picture very well.

johncarlosbaez, to random

Hardcore math post: Morita equivalence.

(For people who just followed me, this is a signal that you may want to skip this post if you're not into math.)

Roughly, two rings are said to be 'Morita equivalent' if they have equivalent categories of modules. For example, any ring R is Morita equivalent to the ring of n×n matrices with entries in R. This is really a key example of Morita equivalence. But you'll notice that if R is commutative, this ring of matrices is not - unless n = 1, when it's just R. So you might wonder: if two commutative rings are Morita equivalent, do they have to be isomorphic?

And the answer is yes! This turns out to be well known, but it was fun figuring it out myself after struggling to understand an annoying proof in a book.

My proof goes like this: for commutative rings R, you can recover R up to isomorphism starting from its category of modules! In this case, every element of R acts as an endomorphism of every module, in a natural way. So we get a map from R to the ring of natural transformations of the identity functor on RMod. And this map is an isomorphism:

https://ncatlab.org/nlab/show/center+of+an+additive+category#examples

So if you hand me the category of modules I can get back the ring, up to isomorphism - if that ring is commutative. So Morita equivalent commutative rings have to be isomorphic.

Why did I say "roughly" above? Because every time I mentioned categories I really meant Ab-enriched categories: that is, categories where the homsets are really abelian groups. If you've got a mere category, then the natural transformations of the identity functor form a monoid. But if you've got an Ab-enriched category, then the natural transformations of its identity functor form a ring.
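The recovery map described in the post can be written out; a sketch in standard notation:

```latex
R \ \xrightarrow{\ \cong\ }\ \mathrm{End}(1_{R\mathrm{Mod}}),
\qquad r \ \longmapsto\ \eta^{r},
\qquad (\eta^{r})_{M}(m) \;=\; r\,m .
```

Each component (η^r)_M is R-linear precisely because R is commutative: r(sm) = s(rm). Naturality says f(rm) = r f(m) for every linear f: M → N, and evaluating a natural transformation η at the module R on the element 1 recovers the element η_R(1) ∈ R, which gives the inverse map.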

leemph, (edited )

@johncarlosbaez "someone else must have added that."
'Twisted equivariant laughs in the background...'

Anyways... thank you for the Yoneda hint. I believe you already knew how to find a connection between Z(C) and Z(Psh(C)), but discovering a bit of it on my own was really a journey.

It took me a while to realize the real meaning of having a monoid map going from an arbitrary M to Z(C); I was lost in this for most of my short amount of free time.

Since what follows must be something like exercise 3, left to the reader, of Grothendieck's Category Theory for the Kindergarten, I'll try to be short (and sloppy). If this doesn't work, experts will more readily dismiss a low-effort post than a wall of text with extra-polished notation.

The strategy goes like this. Given a functor F:C—>D we get, among other pretty exciting things I'm still investigating, functors
F*:[D,D]—>[C,D]
Fʻ:[C,C]—>[C,D]
By restriction to the identity component we obtain two monoid morphisms
F₁:Z(D)—>End(F)
F₂:Z(C)—>End(F)

Question. During my exploration I discovered a new word: whiskering. By whiskering, I believe we get two actions of the centers on F, from the left and from the right: we can horizontally compose the identity transformation of F with elements of Z(C) and of Z(D). My sixth sense tells me that these are exactly the same actions I defined earlier.
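If it helps, the two whiskerings can be written componentwise - a sketch, for α: 1_D ⇒ 1_D in Z(D) and β: 1_C ⇒ 1_C in Z(C):

```latex
(\alpha \ast F)_{c} \;=\; \alpha_{F c},
\qquad
(F \ast \beta)_{c} \;=\; F(\beta_{c}),
\qquad c \in C .
```

These components agree with the restrictions of F* and Fʻ to the identity components, which would confirm the guess.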

1/3
