IHChistory, to history
@IHChistory@masto.pt avatar

🔗 On 11 March, there's a new #REWIND workshop.

This month, it will be dedicated to the methodology, vocabularies and tools involved in creating #LinkedData, in the context of humanistic studies.

Jorge Juan Linares Sánchez is our guest trainer.

ONLINE and FREE

ℹ️ https://ihc.fcsh.unl.pt/en/events/connecting-homer/

@litstudies @histodons
@digitalhumanities

#Histodons #DigitalHumanities #FreeWorkshop #Humanities #LitStudies #Literature #HumanidadesDigitais #EstudosLiterários #Literatura #Humanidades

jonny, to random
@jonny@neuromatch.social avatar

Huh TIL Library of Congress maintains an authoritative linked data representation of all the states, so the "official" way to refer to "Ohio" is https://id.loc.gov/vocabulary/countries/ohu

But also each state is actually a "Country", which has a broader authority of the same type, "United States." Each is also a skos "Concept", and the United States is a "broader" concept. That seems to be inherited from the way MARC handles location/authority, filtered through the imperfections of SKOS, resulting in the MADS vocabulary. Some interesting background here: https://www.loc.gov/standards/mads/rdf/index.html#t12
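The broader/narrower structure is easy to walk once you have the record. A minimal sketch in Python, using an illustrative SKOS-style fragment (the field names follow SKOS, but the exact JSON-LD payload that id.loc.gov serves differs; `broader_chain` is a hypothetical helper):

```python
# Illustrative SKOS-style record for Ohio. "xxu" is the MARC country
# code for the United States; the shape of this dict is simplified
# compared to the real id.loc.gov JSON-LD.
ohio = {
    "@id": "http://id.loc.gov/vocabulary/countries/ohu",
    "@type": ["skos:Concept"],
    "skos:prefLabel": "Ohio",
    "skos:broader": {
        "@id": "http://id.loc.gov/vocabulary/countries/xxu",
        "skos:prefLabel": "United States",
    },
}

def broader_chain(concept: dict) -> list[str]:
    """Walk skos:broader links upward, collecting labels."""
    chain = []
    node = concept.get("skos:broader")
    while node:
        chain.append(node.get("skos:prefLabel", node["@id"]))
        node = node.get("skos:broader")
    return chain
```

So `broader_chain(ohio)` yields `["United States"]`: a "Country" whose broader concept is another "Country".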

#LinkedData is just a big stack of history that only ever makes sense if you consider it in context with literally every piece of metadata that has ever happened

witchescauldron, to fediverse
@witchescauldron@kolektiva.social avatar

@serapath

Have been looking at again, do you think we can build this:

The Open Media Network is a trust-based, human-moderated project that builds a database shared across many peers (both and server). The project is more important for what it DOES NOT DO than for what it does, using technology to build human networks. There are ONLY 5 main functions:

• Publish (object to a stream of objects) – to publish an object (text, image, link)

• Subscribe (to a stream of objects) – to a person or organization, a page, a group, a hashtag subject etc.

• Moderate (stream or object) – you can say you like/dislike (push/pull, yes/no) an item, and you can comment.

• Rollback (stream) – you can remove from your flow (instance database) untrusted historical content by publishing flow/source/tag.

• Edit (meta data of object/stream) – you can edit the metadata in any site/instance/app you have a login on.
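The five functions above can be sketched as a single-peer, in-memory store. This is purely illustrative: the class and method names (`Peer`, `Item`) are my own, not from any OMN codebase, and real peers would replicate the database rather than hold it locally.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    stream: str
    content: str
    meta: dict = field(default_factory=dict)
    votes: list = field(default_factory=list)

@dataclass
class Peer:
    items: list = field(default_factory=list)
    subscriptions: set = field(default_factory=set)

    def publish(self, stream: str, content: str) -> Item:   # 1. Publish
        item = Item(stream, content)
        self.items.append(item)
        return item

    def subscribe(self, stream: str) -> None:               # 2. Subscribe
        self.subscriptions.add(stream)

    def moderate(self, item: Item, vote: str) -> None:      # 3. Moderate
        item.votes.append(vote)

    def rollback(self, stream: str) -> None:                # 4. Rollback
        # drop untrusted historical content for a whole flow
        self.items = [i for i in self.items if i.stream != stream]

    def edit(self, item: Item, **meta) -> None:             # 5. Edit
        item.meta.update(meta)
```

Everything else (trust, federation, front-ends) would be layered on top of these five verbs.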

We would build in the moderation tools of the .

This is the back-end of the project to build a trust-based, grassroots, semantic web. The front-end may be anything you like, from regional-/city-/subject-based sites to a distributed archiving project.

The data cauldron and the golden ladle. The technology we call the .

happyborg,
@happyborg@fosstodon.org avatar

@witchescauldron Another platform to consider is which is coming together right now. (For example, I'm looking into porting my earlier proof of concept for an LDP interface ( / ) to the new API, and then some apps.)

I think what you propose would be feasible, certainly worth looking into because the platform has some unique characteristics and I think now would be the perfect time to start building for it. Fully autonomous .

jonny, to random
@jonny@neuromatch.social avatar

having uh fun specifying any-shaped arrays as lists of lists with @pydantic for @linkml 's new array syntax... how do you specify a recursive python type that can generate recursive JSON schema and do recursive type checks that can use pydantic's fast rust core validators and not upset the type checker????

https://github.com/linkml/linkml/pull/1887#issuecomment-1936814514
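The pure-Python core of the problem, stripped of pydantic's rust-core validators, is a recursive check over lists of lists. A minimal sketch (the function name is mine, and this deliberately ignores the JSON-schema-generation and type-checker concerns the post is wrestling with):

```python
from typing import Any

def check_any_shape(value: Any, leaf_type: type = float) -> bool:
    """Validate an 'any-shaped' array represented as nested lists:
    every element is either a leaf of leaf_type or another list,
    nested to arbitrary depth."""
    if isinstance(value, list):
        return all(check_any_shape(v, leaf_type) for v in value)
    return isinstance(value, leaf_type)
```

The hard part isn't this recursion; it's expressing the same thing as a recursive *type* that pydantic can compile to its fast validators while still producing a recursive JSON schema and satisfying static type checkers.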

jonny, (edited )
@jonny@neuromatch.social avatar

@SnoopJ that is exactly what we are doing with @linkml by being able to specify arrays in the schema. it's a downright authorable schema language, and i'm working on different interfaces now to be able to actually use that. That's historically been one of the major problems with #LinkedData / #RDF array specifications - you might be able to describe it, but what's the point, that description is totally removed from how i actually use my data.

so if instead you could just start with some schema, use that to generate a bunch of models (that don't suck to use, having used other schema -> code generator tools before) that you can use in your analysis/whatever code and then also publish the data in some standardized format, that would be an astronomically better situation than what most scientists have to do now.

This lists of lists version is just the default one if you want to add zero dependencies to whatever you're doing (aside from pydantic, it is the pydantic version of the schema after all). It's a little more clumsy but it works out of the box. I'm also cooking up a tiny lil package (also with minimal deps) with a type that lets you use whatever the heck else array format you want to, right now just got numpy, dask, and hdf5, but going to split those out into plugins and make hooks for any additional formats too ( https://github.com/p2p-ld/numpydantic )

jonny, to random
@jonny@neuromatch.social avatar

After discussing how to represent arbitrary dimensionality with some specified dimensions for like 5 hours, @linkml does in fact, after 20-some years of the , have a usable schema construct for specifying arrays. For ur consideration: https://github.com/linkml/linkml-model/pull/181

Pydantic model generator comes tomorrow, schema for specifying encoding comes next week. There shall be content addressed linked data arrays.
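For a feel of what such a schema might look like, here is an illustrative YAML sketch modeled on the PR discussion — the exact keywords and final syntax are in the linked PR, not here, so treat every key below as an assumption:

```yaml
# Hypothetical LinkML array attribute: a 2-D temperature grid with
# one fixed axis and one unbounded time axis. Check the linked PR
# for the actual accepted syntax.
classes:
  TemperatureDataset:
    attributes:
      temperatures:
        range: float
        array:
          dimensions:
            - alias: x
              exact_cardinality: 100
            - alias: t   # unbounded time axis
```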

pantierra, to random

I recommend the online book "Spatial Linked Data Infrastructures" - https://linked-sdi.com/ - by @luis_de_sousa. It's an excellent resource for understanding Linked Data and the Semantic Web. Even if geospatial topics aren't your focus, you'll find the concepts clearly explained. A must-read for anyone diving into these technologies!

happyborg, to SafeNetwork
@happyborg@fosstodon.org avatar

I've a bit more to do on #vdash which has given me more time to wonder about what next.

As #SafeNetwork is getting pretty exciting r.n. I'm veering towards something to help Devs with #p2p apps, and feeling a buzz around compiling the client API for #WASM, and showing how to build native cross-platform mobile and desktop apps using your web framework of choice (e.g. #SvelteKit), #Rust/WASM and #Tauri.

Then an LDP containers API so existing #Solid apps become Safe Apps in this setup. #LinkedData

smallcircles, (edited ) to fediverse
@smallcircles@social.coop avatar

#LinkML may be a nice way to bridge #JSON-only and #LinkedData-based #ActivityPub

With LinkML, schemas are written in YAML and can then be converted into a wide range of different formats.

Very interesting. I added it to the #SocialHub topic pondering whether AP should be considered JSON-first..

https://socialhub.activitypub.rocks/t/linkml-for-definition-of-activitypub-extension-schemas/3838

jonny, (edited ) to random
@jonny@neuromatch.social avatar

so we have been batting around the idea of some kinda paper bot for a while re: the question "how do we track discussions around scholarly work," and I am starting to think this paper-feeds project is the way to do it.

So say it is an AP instance with one primary bot user; you follow it and it follows you back. When you make a post with something that resolves to a DOI, that post is linked to that work. Any hashtags used in that post are added to that paper's keywords (assuming some basic moderation and word ban lists). Then keyword feeds are also represented as AP actors that can be followed and make a post per paper. I wonder if we can spoof the "in reply to" field to present all those posts as replies to that paper.
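The "resolves to a DOI" step could start with something as simple as a regex scan of post text. A minimal sketch (the pattern is a common loose DOI match, not what paper-feeds actually uses; real resolution should hit https://doi.org/ and follow redirects):

```python
import re

# Loose DOI pattern: "10." + registrant code + "/" + suffix.
# Good enough for a demo; it will occasionally over- or under-match.
DOI_RE = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_dois(text: str) -> list[str]:
    """Return all DOI-looking substrings found in a post's text."""
    return DOI_RE.findall(text)
```

Anything extracted this way would then be verified against a resolver before linking the post to a work.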

So say the bot also has some simple microsyntax for linking your account to an ORCID - either directly in a profile field, or by @'ing the bot and checking a rel=me, or hell even oauth. Then you could also relate when the authors of given works talk about other works and use that as another proximity measure. Then you could make an author RSS feed/AP actor that is just the works someone publishes and optionally those they mention - so eg I could make an aggregate feed for the papers my friends are reading.

Then you could have instances of this feed generator follow one another and broadcast aggregated similarity information at a paper level not linked to personal information, plus opt-in info like the fedi account <-> ORCID link. Since you're on AP already you basically get that for free.

Thinking about what would be useful for social discovery of scholarly works, there are a lot of really interesting ideas once you start actually doing it from a place of not having a product to sell or a platform to run, so you avoid some of the scale and liability probs.

Edit: prior post here: https://neuromatch.social/@jonny/111688727690129033
And repo here: https://github.com/sneakers-the-rat/paper-feeds/
And ill start tagging these with #PaperFeeds but that last post has too many interactions to edit now

jonny,
@jonny@neuromatch.social avatar

@hochstenbach yes yes - i have ultimately come to the conclusion that the LDP needs to be #P2P in order for it to do the things it wants to do. Fluid ontologies indexed by DNS have basically never worked, and so most of RDF world just treats them like non-dereferencing IRIs, which is sad - it's just intrinsically fragile, and really the only #LinkedData vocabularies you can really rely on still being there are the ones that w3c hosts because they're the only ones that really care about URLs staying the same forever.

I really like the design of what you're working on here - just operating on files is great, the rules syntax took a bit to read but makes sense and seems amenable to interface design, and i especially like the plugin approach to 'just pull and push from anywhere'. The problems i have with thinking about the longevity or deployability of things like this are not really intrinsic to your project at all, but about the imo naive assumptions that LD makes about DNS: it is genuinely expensive and complicated to put something on the 'net for your average bear (timbl said as much). All the (necessary) placeholder example.com's in the demos are a reflection of that - since of course the rule isn't actually at example.com, presumably it isn't actually dereferencing there, and so it becomes just an IRI slug that is simultaneously necessarily bound to a URL but can't use it.

my longest lasting question in studying LD is "where is #SOLID?" I have tried and failed dozens of times to just run something from the project and have never managed to do it and have never heard of someone actually using it day-to-day. millions of people run bittorrent clients though, so it's not just an intrinsic "people don't want to run software" problem. The barrier to 'how do i actually put my stuff online' has to be a lot lower than 'rent a domain, manage a bunch of paths, and run an always-on server forever'.

The federated approach like the fedi and eg. institutions hosting pods is promising for many things, but it is sort of a nonstarter for anything with arbitrary clearweb user-generated content for liability and security reasons, so I think that would be super dope for things like notifications for scholarly work, but I think institutions will balk at an eventing framework that requires arbitrary code to run on an institutionally managed server, and especially can result in arbitrary content being available on their domain.

I think we should take advantage of existing infrastructure though - eg. i like how you're using npm to host and version vocabularies, and that federated infrastructure could (and imo should) serve some backstop role of preserving availability and providing bootstrap entrypoints for a p2p swarm. I think that has to look like using different protocols than HTTP though, and following along that line you pretty rapidly get to needing social infrastructure at the base in order to have comprehensible namespacing (rather than a bunch of long hashes, even with some naming system patched over the top, as IPNS demonstrates doesn't really work that well). I think your moves towards integration with email and masto and whatnot from a local client are a nice set of steps towards personal web tooling, and i'm gonna keep this bookmarked for when i get closer to working on something related :)

happyborg,
@happyborg@fosstodon.org avatar

@hochstenbach
All the info and links are collected in a post on the #SafeNetwork forum, including the presentation video, slides etc: https://safenetforum.org/t/devcon-talk-supercharging-the-safe-network-with-project-solid/23081?u=happybeing

The demos no longer work on Safe Network as the APIs have changed but the key elements demonstrated were:

  • hosting #Solid apps on a decentralised #p2p network with just one library swapped
  • using p2p storage via LDP API in standard Solid apps
  • no dependence on ephemeral centralised DNS, web server or serverside code
@jonny #LinkedData
KlausBulle, to random German
@KlausBulle@nfdi.social avatar

A team from Amsterdam reviewed several solutions for synchronisation of up-to-date information:
1️⃣ OAI-PMH
2️⃣ ResourceSync
3️⃣ Git
4️⃣ Notifications
5️⃣ Linked Data Event Streams
6️⃣ Change Discovery

Read in this blogpost which solution they chose for the Colonial Collections Datahub: https://theartofinformationblog.wordpress.com/2023/12/30/synchronising-colonial-heritage-a-linked-data-approach/

smallcircles, to fediverse
@smallcircles@social.coop avatar

The question of "Why use ?" has never been answered. There should be clear merits to wade through all the complexity that this choice brings, right?

Yes, it's ultra-flexible, you can define your own semantic , and theoretically it could provide a robust extension mechanism for the AP protocol. Except that right now it doesn't.

What's the vision of a Linked Data ? What great innovative would it bring, that makes it worthwhile?

smallcircles,
@smallcircles@social.coop avatar

This morning via HN I bumped into this article on "Content as a Graph", which muses about different ways to present content other than falling back on hierarchies.

When I imagine a based on , then not only does the content shape itself in interesting ways based on the semantic context, but so do all the dynamic functionalities that act on that content.

What could that look like? A call to to inspire us devs with some radical innovation.. 😃

smallcircles,
@smallcircles@social.coop avatar

The article is here:

https://thisisimportant.net/posts/content-as-a-graph

And this is the HN discussion (with folks being ultra-critical of #LinkedData, displaying the typical "either love it or hate it" response to this technology ecosystem).

https://news.ycombinator.com/item?id=38834780

smallcircles,
@smallcircles@social.coop avatar

@hrefna @pluralistic

Yes, I agree.

I imagine that when thinking about innovative #UX, one should pick particular use cases, look at how they currently map to very familiar, common #UI patterns, and then redesign them where some #LinkedData qualities come into play.

Just to serve as an inspiration, and a motivator for others to explore more of these concepts. Without incentives for LD, there won't be much further adoption (except in niche areas, where LD is in more common use already).

smallcircles,
@smallcircles@social.coop avatar

@jenniferplusplus

Yes, I think so too. #LinkedData becomes interesting only when you get to the advanced usages of LD. It is not suitable for the minimal case of defining msg formats.

The chicken/egg of the #SemanticWeb is that its glorious magic will only become apparent once the SW exists in all its glory, and the ecosystem tooling and software exists to make it easy for devs to wield the magic wand.

phiofx,

@smallcircles @evan @steve the lack of a killer app in the / context derives from not having a gee-wow use case in any context. Bioinformatics is the most avant-garde here (https://www.nature.com/articles/s41746-019-0162-5) and whenever there is a delightful surprise in tooling it is motivated by this niche (e.g. https://owlready2.readthedocs.io/en/latest/).

If it's good enough for physical health, it should be good enough for social health, but it may take a long while to get there.

smallcircles,
@smallcircles@social.coop avatar

@phiofx @evan @steve

Yes, indeed. Suggesting great use cases for #LinkedData-based federated apps might be done in #FediverseIdea issues at: https://codeberg.org/fediverse/fediverse-ideas

I think more general than your case would be connecting various #OpenScience tools to the #Fediverse, related to open publishing. A field where there's a lot of interest. CC @jonny

PS. Though the list is dormant (I'm not a qualified curator), I co-maintain https://delightful.club/delightful-open-science

lysander07, to random

Have you ever thought specifically about the graph in knowledge graphs? What can we learn from the graph structure of a knowledge graph? To answer these questions, Ann Tan and I will dive a little deeper into graph theory in this section of our lecture.
video: https://open.hpi.de/courses/knowledgegraphs2023/items/4tvRVAcst4kQE0N4g0L2TK
youtube video: https://www.youtube.com/watch?v=n2Q8of_Q26E&list=PLNXdQl4kBgzubTOfY5cbtxZCgg9UTe-uF&index=60
slides: https://zenodo.org/records/10185251
@fiz_karlsruhe @fizise @tabea @sashabruns

uclab_potsdam, to random
@uclab_potsdam@vis.social avatar

Today at the EFHA International Conference 2023 in Utrecht, S. de Günther shares a prototype combining narration and exploration – a collaborative and cross-disciplinary research project on digitizing fashion history by @SabineDG, Giacomo Nanni, @dielindada, @ikyriazi, @nrchtct:

https://uclab.fh-potsdam.de/refa/

More about the conference: https://fashionheritage.eu/efha-international-conference-2023-future-heritage/

avhuffelen, to random Dutch
@avhuffelen@social.overheid.nl avatar

Digitalisation has a major impact. We need to handle data differently. The data held by the government must be easier to find, link, and use. The legitimacy of government must grow stronger. That is why we are meeting today to arrive at that government-wide standard.

janvlug,
@janvlug@mastodon.social avatar

@avhuffelen Fully agreed. In this context, see also the Thesauri en Ontologieën voor Overheidsinformatie (Thesauri and Ontologies for Government Information, #TOOI):

#linkeddata #overheid #FAIR

https://standaarden.overheid.nl/tooi
