In this new episode, with @maxlath we talk about:
👉 the genesis of the project
👉 its development, how it works, and its funding
👉 its community, tools, and interactions
👉 inventaire's relationship with @wikidata
👉 the challenges and limitations around the Fediverse and the implementation of #activitypub
👉 plenty of other things!
The #transcription is already available on the site.
How can provenance researchers in museums, archives, libraries, or private collections use free and open projects like #Wikidata, #Wikibase, or #Wikipedia to organize, link, and make cultural data accessible?
That was the topic of "Provenance Loves Wiki", explored in hands-on talks and around 15 barcamp sessions over two days.
Two candidates have contacted me about Google Summer of Code 2024 to work on #Lingualibre, so I am handling their follow-up and onboarding. I also realize that we are witnessing, I hope, the arrival of a 4th generation of fellow developers on the Lingualibre project, after:
@WikibaseCommunity manager Valerie Wollinger interviewed me a few months back about my early experience with Wikibase and my ongoing work with the stakeholder community.
We had a frank conversation, focusing on what works well & what can still be improved.
hi #wikibase peeps. Has anyone done either of these two things in Wikibase or #Wikidata? I'm looking for #howto guidance, examples, and data models. 1. Build a glossary. 2. Map an academic working group's activity.
To end the day, the keynote speaker Adam Anderson presented an ontology for cuneiform artefacts.
He showed how Midjourney can produce... interesting renditions of ancient architecture like the Ishtar gate, but is hopeless if asked to reproduce cuneiform text.
How can we teach AI cuneiform?
With an ontology! FactGrid's cuneiform ontology is implemented in #Wikibase. It maps inscriptions directly to cuneiform characters in #Wikidata, not transcriptions.
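For readers who want to poke at such an ontology themselves, here is a minimal Python sketch of how one might query a Wikibase SPARQL endpoint for the items of a given class. The class QID and the sample result below are placeholders of my own, not the actual FactGrid cuneiform identifiers; only `P31` ("instance of") and the label-service idiom are standard Wikidata conventions.

```python
# Sketch: list items of a given class from a Wikibase SPARQL endpoint.
# The class QID passed in is an assumption, not FactGrid's real ontology.

def build_sign_query(class_qid: str, limit: int = 10) -> str:
    """Build a SPARQL query for items that are instances of class_qid."""
    return f"""
    SELECT ?sign ?signLabel WHERE {{
      ?sign wdt:P31 wd:{class_qid} .  # P31 = "instance of" on Wikidata
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """

def extract_qids(sparql_json: dict) -> list[str]:
    """Pull bare QIDs out of a SPARQL JSON result set."""
    return [
        row["sign"]["value"].rsplit("/", 1)[-1]
        for row in sparql_json["results"]["bindings"]
    ]

# A canned response in the shape a Wikibase query service returns:
sample = {"results": {"bindings": [
    {"sign": {"value": "http://www.wikidata.org/entity/Q87153463"}}
]}}
print(extract_qids(sample))  # ['Q87153463']
```

The query string can be POSTed to any Wikibase query service endpoint with `Accept: application/sparql-results+json`; the parsing helper then works unchanged across Wikidata, FactGrid, or a self-hosted instance.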
Domain names are not #PersistentIdentifiers; their referent can change over time as people stop paying for domains and new owners come in.
The Internet Domains #Wikibase is an editable database tracking domain names, datasets regarding them, and other features like their ownership over time.
@egonw Seems a bit overkill: who's going to crosslink proprietary identifiers other than Wikidata? (They could even be personal data, so it helps to have some indication of notability to override that.)
But maybe someone who's already collecting them (or has and doesn't mind losing #TwitterAPI access) could try #Wikibase (https://socialblade.com ?). Waive the #DatabaseRights on the SPARQL endpoint and see how long it takes until the source of 99 % of the PD data sues. Useful precedent!
I am in the early stages of setting up Librarybase, a #Wikibase for all the publication and document metadata in the world.
I am looking for pilot projects. If you have a large corpus of bibliographic metadata (messy is fine!) and would like to experiment with incorporating it in an #RDF dataset and/or cleanup through a collaborative interface, let me know.
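To illustrate what "messy is fine" could look like in practice, here is a rough Python sketch of my own (not Librarybase tooling) that turns a loose bibliographic record into QuickStatements-style statements. It reuses Wikidata property IDs (P1476 title, P356 DOI, P577 publication date) purely as stand-ins for whatever properties Librarybase ends up defining.

```python
# Illustration only: map a messy bibliographic record onto
# QuickStatements-style lines. Property IDs are Wikidata's
# (P1476 title, P356 DOI, P577 publication date), used as stand-ins.

def to_quickstatements(record: dict, item: str = "LAST") -> list[str]:
    """Return QuickStatements lines for the fields we can recover."""
    lines = []
    title = (record.get("title") or "").strip()
    if title:
        # Monolingual text values carry a language prefix.
        lines.append(f'{item}\tP1476\ten:"{title}"')
    doi = (record.get("doi") or "").strip().removeprefix("https://doi.org/")
    if doi:
        # DOIs are conventionally stored uppercase.
        lines.append(f'{item}\tP356\t"{doi.upper()}"')
    year = str(record.get("year") or "").strip()
    if year.isdigit():
        # /9 marks year precision in the QuickStatements date format.
        lines.append(f"{item}\tP577\t+{year}-00-00T00:00:00Z/9")
    return lines

messy = {"title": "  On Cuneiform Ontologies ",
         "doi": "https://doi.org/10.1234/xyz",
         "year": "2023"}
for line in to_quickstatements(messy):
    print(line)
```

The point of the sketch: even a record with stray whitespace, a DOI wrapped in a URL, and a year stored as a string can be normalized into clean statements, which is exactly the kind of collaborative cleanup a shared Wikibase interface makes pleasant.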