smallcircles,
@smallcircles@social.coop avatar

The question of "Why use JSON-LD?" has never really been answered. There should be clear merits that justify wading through all the complexity this choice brings, right?

Yes, it's ultra flexible, you can define your own semantic vocabulary, and in theory it could provide a robust extension mechanism for the AP protocol. Except that right now it doesn't.

What's the vision of a Linked Data Fediverse? What great innovation would it bring that makes it worthwhile?

happyborg,
@happyborg@fosstodon.org avatar

@smallcircles I don't know about AP, but in general, the great hope I have for RDF/Linked Data is interoperability.

It gets us some of the way there, but due in part to limited take-up outside niche areas, this potential hasn't yet been realised.

smallcircles,
@smallcircles@social.coop avatar

@happyborg yea, LD in general faces a long, hard struggle to find widespread adoption, except in these niche areas.

evan,
@evan@cosocial.ca avatar

@smallcircles

Using JSON-LD lets us mix in existing vocabularies like vcard, schema.org, and foaf.

It also lets us build and add extensions without name conflicts.
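
For illustration, a rough sketch of a mixed @context (the vocabulary prefixes are standard, but the specific terms are just an example, not a prescribed profile):

  {
    "@context": [
      "https://www.w3.org/ns/activitystreams",
      {
        "schema": "http://schema.org/",
        "vcard": "http://www.w3.org/2006/vcard/ns#"
      }
    ],
    "type": "Person",
    "name": "Alice",
    "schema:jobTitle": "Gardener",
    "vcard:locality": "Utrecht"
  }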

smallcircles,
@smallcircles@social.coop avatar

@evan yes, that much is clear. That is the level at which things make sense: reusing well-understood vocabularies.

Except that my mix'n'match of common JSON-LD props doesn't match yours, and we can only understand each other by going the full monty on linked data support.

Or we avoid defining the same semantics differently by having a specs site somewhere, agreeing to do things the same way to a certain extent, and getting reliable expectations when interfacing/integrating our apps.

evan,
@evan@cosocial.ca avatar

@smallcircles standardizing extension vocabulary definitions is a good idea, yes. Either as SocialCG reports, or with ad hoc standards on other sites.

smallcircles,
@smallcircles@social.coop avatar

@evan indeed. As you know I advocate for the bottom-up 3-stage Standards Process of Ecosystem --> FEP/SocialHub --> W3C SocialCG/WG

Here we should avoid the situation where what first emerges in the ecosystem is so non-standard that it'll never rise to the 3rd stage of more formal specs.

That calls for a set of best-practice guidance, best provided by the SocialCG/WG for the entire fedi to use.

evan, (edited )
@evan@cosocial.ca avatar

@smallcircles I don't think SocialHub has that privileged place as a clearinghouse. But in general I think that developing and implementing extensions and then bringing them into the main context through the new extension policy is the right series of steps.

smallcircles,
@smallcircles@social.coop avatar

@evan that's fine. SocialHub is currently the default place where FEPs are discussed.

For each FEP the editor team ensures that a discussion thread on the forum is present. Other than that, there is a tracking issue in the FEP repo on Codeberg that can track other places where discussions take place and where feedback can be collected.

SocialHub isn't important, but the FEP Process is, as 2nd-stage in the Standards Process. The place that sits between no formality and formality.

smallcircles,
@smallcircles@social.coop avatar

@evan

Key to the proposal is that the ecosystem itself is decentralized while it works in many independent hubs and communities to evolve the decentralized technology.

There are already a bunch of hubs that recognize neither the FEP/SocialHub nor the W3C, like the Podcasting Index.

Their work is just as valid as what other people do. It is the grassroots ecosystem, driven by the Commons, that determines the general direction of the Fediverse.

evan,
@evan@cosocial.ca avatar

@smallcircles There's a lot to like about FEPs. I agree, having disparate groups working on focused areas is very useful. I don't think that an extension needs to pass through SocialHub to get included in the AS2 context.

smallcircles,
@smallcircles@social.coop avatar

@evan indeed. The SocialHub is just a default discussion place and not required in the process, yet discussing there ensures it reaches a community with many devs active in AS/AP and knowledge is also conserved in one place for the longer term.

This in contrast to huge amounts of insights being passed around in toots in numerous exchanges between people.

The 3-stage Standards Process should encourage discussion on SocialHub for FEPs, just as it should encourage further follow-up towards the W3C.

steve,
@steve@social.technoetic.com avatar

@smallcircles @evan One of my concerns about the FEP process is that it is actually too tightly coupled to SocialHub for the purpose of FEP discussions. It's natural to comment on an FEP "issue" in the issue tracker. The current approach means the discussion is sometimes fragmented between SocialHub and Codeberg. I think SocialHub is very valuable. I'm just not convinced this is the best way to use it for FEPs.

smallcircles,
@smallcircles@social.coop avatar

@steve @evan

A valid concern, which might be discussed, for the time being, on SocialHub/FEP.

As I see it, it's no different from any of the other boundaries that having multiple tools always throws up, e.g. the SocialCG / W3C orgs, their various repos, and the mailing list.

A forum, esp. @Discourse, is very well suited to long-form discussion and has way more functionality for organizing it than an issue tracker.

Looking at "busy" trackers, eg. Mastodon I'm glad not having to deal with that. And they have forum too.

steve,
@steve@social.technoetic.com avatar

@smallcircles @evan I've already discussed it on SocialHub. I personally have benefited greatly from seeing W3C and Mastodon discussions about issues in the issue comments (same for many other projects). But that is a tangent. My point is that SocialHub is a required element of the current FEP process.

smallcircles,
@smallcircles@social.coop avatar

@steve @evan

It's not required, just part of the default procedure. An author who submits an FEP can just as well choose to collect feedback in other channels, update insights on the tracking issue, and improve the text with PRs, without hitting up the forum.

A decentralized ecosystem will evolve in many different ways. The bottom-up Standards Process needs good procedures, and needs to see them popularized so they become natural. There's much to improve, both in the FEP Process and elsewhere.

steve,
@steve@social.technoetic.com avatar

@smallcircles @evan Ah, thanks. I see now it's not explicitly required by the process meta-FEP. I had a different impression based on related discussions on SocialHub. So the FEP process allows the FEP discussions to happen anywhere: SocialHub, the FEP issue comments, the SocialCG mailing list, Matrix, here, any combination of these, ... good to know. 😉

smallcircles,
@smallcircles@social.coop avatar

@steve @evan

I feel that it is important that SocialHub does not assert authority. As a dev community on the Fediverse it should be an attractive place to be part of and to interact with others. That's all. So it is just one hub in the decentralized ecosystem.

We seek collab with other communities, and have a liaison with the W3C SocialCG.

We facilitate the FEP Process, but the process itself stands on its own.

Hence the parentheses in the 3-stage Standards Process: Ecosystem --> FEP (SocialHub) --> W3C.

luceos,
@luceos@fosstodon.org avatar

@smallcircles @steve @evan no one is taking ownership of a grey area that needs a massive amount of work to make it less grey. I wish there were a team, regardless of its official status, that would work on standards for the fediverse without it being Mastodon or otherwise prioritizing its own interests over standards.

smallcircles,
@smallcircles@social.coop avatar

@luceos
@steve @evan

Well, yes. These teams exist, but they are volunteer-driven: at the SocialHub, at the W3C SocialCG, but also in other places. The FEP Process is an ideal one where anyone can participate and address their grey area of interest, whether it's niche or not, then mature it and let it go further to the W3C. Or go directly to the W3C for major stuff. What's important is to spread awareness of the 3-stage Standards Process and encourage participation.

indieterminacy,

@steve @smallcircles @evan Personally I resent the need to interact with GitHub's world (and by symmetry I empathize with others needing keys to numerous other git forges to participate).

I consider it a positive thing that SocialHub emerged and would like to think that things intersect in nicer ways than the silos which repos can easily become.

Also, I don't think there is a need to be dependent on SH, but I would encourage cross-posting (working across fediverse tools and instances is the pinnacle).

evan,
@evan@cosocial.ca avatar

@smallcircles there are other ways to do this. But the conjunction of JSON, namespaces, and W3C made JSON-LD the obvious choice.

smallcircles,
@smallcircles@social.coop avatar

@evan if we can define some intuitive best-of-both-worlds good practices for JSON-only and LinkedData folks, then the problem is solved. Except that JSON-only impls probably can't deal with the more flexible output coming from a more advanced Linked Data based endpoint, but that is a tradeoff for a dev to choose, I guess.

evan,
@evan@cosocial.ca avatar

@smallcircles JSON-only implementations usually work fine. I think having a context doc that incorporates extensions and external vocabularies and untangles naming conflicts will help out a lot.
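
A rough sketch of how such a shared context doc could untangle a naming conflict; the two extension namespaces here are made up purely for illustration:

  {
    "@context": {
      "@vocab": "https://www.w3.org/ns/activitystreams#",
      "ext1": "https://example.org/ns/reviews#",
      "ext2": "https://example.net/ns/ratings#",
      "reviewRating": "ext1:rating",
      "starRating": "ext2:rating"
    }
  }

Both extensions define a "rating" term, but consumers using this context see two unambiguous names.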

smallcircles,
@smallcircles@social.coop avatar

@evan the thing is that there are so many ways to create one's own sauce in a @context document. I once took (a bit messy) notes on PeerTube, looking at its msg formats and context; they were the first 'colonizers' of that semantic space, like any new app type free to invent without consequences. It is with the second and third and Nth app in a similar space that things devolve into whack-a-mole driven development to get all these apps to interoperate.

https://notes.smallcircles.work/pVj9KvBwSh6_it1jNjSxiw#

steve,
@steve@social.technoetic.com avatar

@smallcircles The data is literally "linked data". Although there are other options for working with distributed linked data than using Linked Data tech, I'm wondering what alternative you have in mind. Many developers use ad hoc implementation-specific techniques to manage it, but it's not clear to me that this is an improvement over LD.

smallcircles,
@smallcircles@social.coop avatar

@steve I merely observe the huge struggle with LD. Where experts "get it" and non-experts hate it. Where there's a ton of discussion on how it should be used correctly. How it's so very easy to use some LD flexibility that makes things hard for JSON-only folks.

AP at the protocol level is about msg exchange, mostly Activity{Object}. Yes, technically it is all linked data, but it may only need to become usable semantic graphs once it enters your own data store, and it can be closed-world, concrete msg definitions until then.

edsu,
@edsu@social.coop avatar

@smallcircles @steve just curious, does this come up when implementers need to read AP and AS2, and since there are potentially multiple ways of saying the same thing, they need to add a JSON-LD parser to their application?

smallcircles,
@smallcircles@social.coop avatar

@edsu @steve

Sort of. When writing your own app you can invent your own msg formats. If you are JSON-only, just include a @context that covers what you send over the wire. You then also only accept msgs that conform exactly to what you defined, and not JSON-LD-valid variations of them. So now you have effectively created a dialect of ActivityPub.

The next project in the same field may do things similar'ish but not exactly the same, and interop becomes 1-to-1 app integrations. Whack-a-mole driven development 😬
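
A small illustration of the dialect problem (hypothetical IDs): both messages below are valid AS2/JSON-LD ways to express the same Like, but a JSON-only parser written against the first shape will reject the second.

  {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://a.example/users/alice",
    "object": "https://b.example/notes/1"
  }

  {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": { "type": "Person", "id": "https://a.example/users/alice" },
    "object": { "id": "https://b.example/notes/1" }
  }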

edsu,
@edsu@social.coop avatar

@smallcircles @steve ok I can see that. I guess dialects aren't necessarily a bad thing though, if something like that 3-stage standardization process you proposed is in place, and if there is agreement about how to express context objects?

smallcircles,
@smallcircles@social.coop avatar

@edsu @steve

They aren't. @helge came up with the term 'dialect' and also mentioned today that some kind of minimal subset of linked data might suffice. I guess it is a focus on increasing chances / lowering barriers to interop, and that we can come a long way by specifying what is minimally required plus a set of best-practices, the do's and the don'ts, for designing extensions.

bhaugen,
@bhaugen@social.coop avatar

@smallcircles @edsu @steve @helge
We're working on https://www.valueflo.ws/ as a vocabulary for economic networks, and one of our future target implementations is ActivityPub and the Fediverse. So to the extent that ValueFlows gets some traction, it will be all about interop.

> What great innovation would it bring that makes it worthwhile?

That will be the equivalent of million-dollar ERP systems but for P2P networks of small nodes.

Could be other interesting collaborative apps.

bhaugen,
@bhaugen@social.coop avatar

@smallcircles @edsu @steve @helge
One angle about economic networks is that the relevant standards would be set in practice by the network participants.
They would all need to agree to interoperate, and nobody else matters.

It would be good if they published their own standard.

evan,
@evan@cosocial.ca avatar

@smallcircles @steve Could you explain a problem for JSON-only folks? I think the main problem I'd see is if publishers use namespaced terms when they're not necessary -- like <https://www.w3.org/ns/activitystreams#Like> or as:Like instead of plain old Like. And the Postel principle (sorry Steve!) would say, don't do that, publishers. (And, also, be better prepared for that, consumers!)
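
In JSON terms (a trimmed-down, hypothetical activity): to a JSON-LD processor all three type values below mean the same thing, assuming the standard AS2 @context plus an "as" prefix mapping, but only the first is friendly to a plain JSON consumer.

  { "type": "Like" }
  { "type": "as:Like" }
  { "type": "https://www.w3.org/ns/activitystreams#Like" }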

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

For JSON-only folks it is not so much a problem as a question of merit: "Why would I use it?" In practice they don't; they just throw in some @context for good measure.

This context, while minimally helpful, does not say which props are required/optional in msg exchanges to a particular endpoint and what exchanges to expect.

Actual practice is that there's not yet a robust extension mechanism, and people study the codebase + issue trackers to figure out integration with some app.

evan,
@evan@cosocial.ca avatar

@smallcircles @steve

> Actual practice is that there's not yet a robust extension mechanism

This is untrue.

https://www.w3.org/TR/activitystreams-core/#extensibility

Here is the extension mechanism:

  • You define terms in a namespace and a context document.
  • You document those terms.
  • You use those terms in your published Activity Streams 2.0 documents.

JSON-only developers can just watch for terms that they are interested in; if there are naming conflicts, they can check the @context.

evan,
@evan@cosocial.ca avatar

@smallcircles @steve so, in this FEP, I defined two new properties, pendingFollowers and pendingFollowing:

https://codeberg.org/fediverse/fep/src/branch/main/fep/4ccd/fep-4ccd.md

There's a namespace, a JSON-LD context document, and documentation for how to use the terms.

If you see an actor on the fediverse with the pendingFollowers property, you know it's probably following these guidelines. If you want, you can check that the @context of the actor document includes "https://purl.archive.org/socialweb/pending" to make sure.

https://onepage.pub/person/OpgJTNDppzYIDfl94BrAW
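
Roughly, such an actor document might look like the sketch below (the actor and collection URLs are hypothetical; see the FEP for the normative definitions):

  {
    "@context": [
      "https://www.w3.org/ns/activitystreams",
      "https://purl.archive.org/socialweb/pending"
    ],
    "type": "Person",
    "id": "https://example.social/users/evan",
    "inbox": "https://example.social/users/evan/inbox",
    "outbox": "https://example.social/users/evan/outbox",
    "pendingFollowers": "https://example.social/users/evan/pendingFollowers",
    "pendingFollowing": "https://example.social/users/evan/pendingFollowing"
  }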

evan,
@evan@cosocial.ca avatar

@smallcircles @steve as more implementers include these properties, it can become part of the standard context for Activity Streams 2.0.

https://w3c.github.io/activitystreams/draft-extensions-policy.html

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

What I mean by "robust extension mechanism" is more than a context. It comprises the entire set of tools and practices for writing quality extensions defining data formats, msg exchange patterns, business logic, etc., so that I have the biggest chance of writing extensions that interoperate well. All this may include a way to discover which extensions an endpoint supports and to find their docs/specs.

phiofx,

@smallcircles @evan @steve the lack of a killer app in the Linked Data / Semantic Web context derives from not having a gee-wow use case in any context. Bioinformatics is the most avant-garde field here (https://www.nature.com/articles/s41746-019-0162-5), and whenever there is a delightful surprise in tooling it is motivated by this niche (e.g. https://owlready2.readthedocs.io/en/latest/).

If it's good enough for physical health it should be good enough for social health, but it may take a long while to get there.

smallcircles,
@smallcircles@social.coop avatar

@phiofx @evan @steve

Yes, indeed. Suggesting great use cases for Linked-Data-based federated apps can be done in issues at: https://codeberg.org/fediverse/fediverse-ideas

I think a more general case than yours would be connecting various tools to the Fediverse, related to open publishing. A field where there's a lotta interest. CC @jonny

PS. Though the list is dormant (I'm not a qualified curator), I co-maintain https://delightful.club/delightful-open-science

evan,
@evan@cosocial.ca avatar

@smallcircles @steve

We have robust ways of providing everything you're asking for.

  • data formats = extension types and properties
  • exchange patterns = documentation
  • business logic = documentation
  • interoperation = namespacing

Distribution is built into the protocol; you just have to make sure the addressing properties are set.

evan,
@evan@cosocial.ca avatar

@smallcircles @steve I think the one thing that's interesting in what you've said is determining if a server will manage the side-effects expected when you send an activity to an inbox or outbox. For example, if an actor posts an Arrive activity to their outbox, will the server automatically update the actor's location? I'd answer, design extension vocabularies such that there is a reasonable fall-back if the server doesn't implement side effects.
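
For example (a sketch with hypothetical IDs), an Arrive activity from the Activity Vocabulary carries its location inline, so a server that implements no side effects can still store and distribute a self-describing activity, while a server that does implement them could additionally update the actor's location:

  {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Arrive",
    "actor": "https://example.social/users/alice",
    "location": {
      "type": "Place",
      "name": "Home"
    }
  }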

evan,
@evan@cosocial.ca avatar

@smallcircles @steve I also think it would help to provide some more feedback in the response to a post to the inbox or outbox. When a server gets an Accept for an Invite to an Event, for example, it might be able to give a response that says, "I understand this and will implement its side-effects", or "I'm going to distribute this, but I don't know what it means, so don't expect any side-effects", or "I'm not going to even distribute it."
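
No such response format exists today; purely as a hypothetical sketch (all property names invented), the body of the server's reply could carry something like:

  {
    "status": "accepted",
    "distribution": "will-distribute",
    "sideEffects": "not-implemented",
    "summary": "I'm going to distribute this, but I don't know what it means, so don't expect any side-effects."
  }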

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

These are examples of business logic that a particular app or service may have in a particular domain / use case.

In a heterogeneous Fediverse someone might build a msg exchange for water starting to boil in their teapot. It will likely not be a FEP anytime soon, and will most probably never enter into W3C artifacts.

Still, ideally, one should be able to discover from the endpoint how this Teapot service works. Simplest is discovering where the docs/specs live, e.g. via "Compliance Profiles".
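
A hypothetical sketch of such discovery (the property name and URLs are invented for illustration): the Teapot service's actor document could simply point at the specs it claims to implement.

  {
    "@context": [
      "https://www.w3.org/ns/activitystreams",
      { "complianceProfiles": "https://example.org/ns#complianceProfiles" }
    ],
    "type": "Service",
    "id": "https://teapot.example/actor",
    "complianceProfiles": [
      "https://www.w3.org/TR/activitypub/",
      "https://teapot.example/specs/boiling-notifications"
    ]
  }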

evan,
@evan@cosocial.ca avatar

@smallcircles @steve I think having machine-readable specs is possible, but it's a lot of work. I think human-readable documentation is a better starting point.

I've got a goal to add FEPs for Event management, geo-social, and relationships. All covered by the Activity Vocabulary but not ActivityPub. I'd be happy to extend the FEP template with specifics for activity types, expected responses, and object types and try these application areas out.

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

Indeed. I do not think that machine-readable specs are feasible.

The only thing that Compliance Profiles (or whatever the name ends up being) do is say on an endpoint "I comply with these specs/FEPs/W3C artifacts, and here is where you can find the docs if you wanna integrate". See the links below for a sketch of this idea.

In the discussion I started on having an integration guide, there's an example of the info a profile may contain: https://socialhub.activitypub.rocks/t/initiative-activitypub-step-on-board-integration-guide/3542/19

See also: https://socialhub.activitypub.rocks/search?q=compliance%20profiles

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

> I do not think that machine-readable specs are feasible.

Umm.. well, it depends how you interpret that. I posted in reply to @hrefna about two approaches, both of which give more structured docs and tools for validation, codegen, etc.

https://social.coop/@smallcircles/111688239856456366

That's not exactly machine-readable specs in the Semantic Web sense of the word, but it goes a step further than wholly free-form docs.

smallcircles,
@smallcircles@social.coop avatar

@evan @steve @hrefna

FYI I just bumped into this kinda interesting https://datacontract.com site:

"A is a document that defines the structure, format, semantics, quality, and terms of use for exchanging data between a data provider and their consumers [..] It follows and conventions".

smallcircles,
@smallcircles@social.coop avatar

@evan @steve

In theory everything may be there; in practice there are too many ways / too much flexibility / too many choices, which have led to the current fedi of ad-hoc interop and whack-a-mole driven development.

We are improving. Guidance, tools, and more are coming. Still a bit fragmented. Procedures are becoming better known.. slowly, and best practices are easier to discover.

Further organizing things.. ongoing. The bottom-up 3-stage Standards Process goes from chaos towards order organically, supports emergence.

RyunoKi,
@RyunoKi@layer8.space avatar

@smallcircles That's something I discussed with @sl007 last year.

smallcircles,
@smallcircles@social.coop avatar

This morning via HN I bumped into this article on "Content as a Graph" and it muses about the different ways to present this content, other than falling back to using hierarchies.

When I imagine a Fediverse based on Linked Data, then not only does the content shape itself in interesting ways based on the semantic context, but so do all the dynamic functionalities that act on that content.

What could that look like? A call to inspire us devs with some radical innovation.. 😃

smallcircles,
@smallcircles@social.coop avatar

The article is here:

https://thisisimportant.net/posts/content-as-a-graph

And this is the HN discussion (with folks being ultra-critical of Linked Data, displaying the typical "either love it, or hate it" attitude towards this technology ecosystem).

https://news.ycombinator.com/item?id=38834780

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles This feels like a case where you have a solution in search of a problem.

I'm sure linked data is great if what you want to do is compose together a lot of disparate and uncoordinated data sets into your own specialized meta graph. It's hot garbage as the basis for a messaging wire format. Because these are wildly different tasks.

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles A social media service isn't composing a specialized meta view of some universal data graph. It's taking actions. It's performing work. And the lack of any constraint on what work is even possible to request is killing development in this space.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus

Yes, I think so too. #LinkedData becomes interesting only when you get to the advanced usages of LD. It is not suitable for the minimal case of defining msg formats.

The chicken/egg of the #SemanticWeb is that its glorious magic will only become apparent once the SW exists in all its glory, and the ecosystem tooling and software exists to make it easy for devs to wield the magic wand.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus

> A social media service isn't composing a specialized meta view of some universal data graph.

That said, I feel it's worthwhile to ponder this. Doesn't this assumption relate to a particular notion of what social networking entails?

When I think of social networking I think of any indirect or direct interaction people have online. That's a vast scope.

GitHub for instance is a social network. It happens to offer a code forge app, but its domain is Collaborative Software Development.

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles

To me the key words in the sentence are "composing" and "universal"

In a theoretical sense a lot of this becomes a graph at a high enough level, but that isn't practically how you design it because it simply isn't efficient to do so.

In the microcosm of a single platform run by a single company, which can solve or alleviate the problems of colocation and control the ontologies, it can make sense.

But that's not visualizing the great linked data continuum.

@jenniferplusplus

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles

This is why I think one of the areas where JSON-LD has seen success is… SEO, and by extension things like recipes.

Because you are dealing with a major provider who defines which ontologies are acceptable, people have an incentive to play by the rules; that provider has invested a large amount in solving the data graph problem and making it more universally acceptable, and it controls where that metadata is ultimately stored and how it is referenced.

@jenniferplusplus

smallcircles,
@smallcircles@social.coop avatar

@hrefna @jenniferplusplus

Yes, true. But if you take e.g. Google Knowledge Graph.. it might also just have been a definition of some JSON snippet to include in your website with required and optional properties. It is Google's crawler and the subsequent aggregation in the data center where an actual graph is formed and used in interesting ways in the Search product.
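
That is essentially what schema.org markup for SEO is: a small, self-contained JSON-LD snippet embedded in the page (in a script tag of type "application/ld+json"), for example a minimal recipe:

  {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Plain pancakes",
    "recipeIngredient": ["flour", "milk", "eggs"],
    "recipeInstructions": "Mix the ingredients and fry."
  }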

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles @hrefna The fact that google is crawling blogs it finds on the open internet rather than processing messages that bloggers send directly to them is the critically different point. The crawler doesn't have any latency constraints. It doesn't have to do anything in response to finding a new document. No one gets notified. It doesn't have to make authorization decisions. It can try again in an hour if it has a parsing failure.

It's cataloging, not dialog.

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles @hrefna None of that is true in a messaging system. And LD makes all of that much much more difficult. For essentially no benefit.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus @hrefna

Oh, I totally agree, and it's why I asked about the LD merits :D

For now I think I favor continuing with AP as a JSON-first protocol - without breaking compat - where daring experts may go the extra mile and venture into LD extensions.

Posted about that here: https://socialhub.activitypub.rocks/t/activitypub-a-linked-data-spec-or-json-spec-with-linked-data-profile/3647

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles @hrefna I recall. It kind of got derailed though. Anyway, there are real needs that LD at least seems to address in theory. Supporting arbitrary extensions without relying on a coordinating body is a big one. And the wire format is one of the hardest parts, which doesn't change much just by dropping the @context property.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus @hrefna

The easiest way to at least solve some of that might be Compliance Profiles (or whatever they end up being named), which allow people to look up the specifications and also communicate the guarantee "My app/service is compliant with A, B, X, Y", so other apps/services can have proper expectations when sending particular msgs to them.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus @hrefna

Besides the SocialHub link above, I posted about this in several other topics e.g. https://socialhub.activitypub.rocks/t/best-practices-for-ap-vocabulary-extensions/3162/5

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus

When you consider the entire domain of Software Development, you see a huge data model and all kinds of processes / workflows working on it, which might be visualized in all kinds of interesting ways other than e.g. a timeline.

You see this also on GithubNext, e.g. where they explore Collaborative Workspaces: https://githubnext.com/projects/workspaces

ActivityPub is able to model the msg exchanges for rich domain models and heterogenous services that interoperate and combine into innovative UX.

jenniferplusplus,
@jenniferplusplus@hachyderm.io avatar

@smallcircles But it's not just a modelling task. ActivityPub has to actually effect that message exchange. And it has to do it hundreds of times per second on even fairly small nodes.

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus

Yes. A model, msg formats, msg exchange patterns, and business logic. In a closed-world perspective, when not using LD.

If you want that to be interoperable, it requires writing specs and agreeing on the design with your peers who are also creating software in the same domain, forming communities and developer hubs for these specialized areas.

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles In a lot of ways I think we keep circling back to the same point:

  • There is a lot of aesthetic nicety to a lot of the linked data concepts.

  • That aesthetic nicety has a lot of theoretical benefits and in theory there is no difference between theory and practice.

  • Realizing that benefit and making it usable is an incredibly difficult problem, to the degree where it may not even be solvable in a generalized sense.

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles I said at one point wrt the fediverse that linked data concepts make perfect sense if you assume that everything is on a single database under a single owner and we all agree on a single ontological representation.

But as was pointed out by @pluralistic in Metacrap: that's not what we are actually dealing with. Getting everyone to agree on even basics here is incredibly difficult and, even if you can solve that, getting people to use it consistently is just not happening.

smallcircles,
@smallcircles@social.coop avatar

@hrefna @pluralistic

Yes, I agree.

I imagine that when thinking innovatively one should pick particular use cases, look at how they currently map to very familiar, common patterns, and then redesign them so that some Linked Data qualities come into play.

Just to serve as an inspiration, and a motivator for others to explore more of these concepts. Without incentives for LD, there won't be much further adoption (except in niche areas, where LD is in more common use already).

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles I think the article hits on this, but I also think it misses a core part of what makes this hard.

Let's look at their example (attached).

If we look at this in the scope of, say, Western pop music this analysis makes perfect sense.

But if we're studying Gregorian chants or, say, how Sappho wrote music (she was a lyricist in the Greek sense, https://en.wikipedia.org/wiki/Greek_lyric: she wrote music for a lyre) some of these concepts don't even apply any longer, or they look quite different.
