DigitalHistory, to LLMs German
@DigitalHistory@fedihum.org

Next week, the 🎉 kicks off again!

We're pleased to present a varied programme once more for the 2024 summer semester.
Featured are talks on , , with , , , & much more!

👉 To the programme: https://dhistory.hypotheses.org/digital-history-forschungskolloquium/programm-sommersemester-2024

The colloquium takes place via Zoom & is open to all who are interested in & .


@histodons

metabelgica, to Belgium
@metabelgica@fedihum.org

Hello 👋

We are four Belgian Federal Scientific Institutes that want to share FAIR data about entities related to Belgian cultural heritage by the end of 2026.

Here you will find multilingual updates on our project!

More info:
➡️ https://www.kbr.be/en/projects/metabelgica/
➡️ https://github.com/metabelgica
➡️ https://zenodo.org/communities/metabelgica

📣 Please spread the word and follow us!

jamesravey, to django
@jamesravey@fosstodon.org

How would you parse the ingredients in a recipe? We could burn a bunch of energy using an LLM, or, as I show in this blog post, we could use to build a robust and efficient little parser that could be replaced with an model later. More fun with - my WIP, based recipe app: https://brainsteam.co.uk/2023/11/19/parsing-ingredient-strings-with-spacy-phrasematcher/
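The linked post's URL points at spaCy's PhraseMatcher. A minimal sketch of that approach, assuming illustrative unit and ingredient vocabularies (the terms and the `parse_ingredient` helper below are my own assumptions, not from the post):

```python
import spacy
from spacy.matcher import PhraseMatcher

# A blank pipeline is enough: phrase matching needs no statistical model.
nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching

# Illustrative vocabularies; a real app would load these from a gazetteer.
units = ["cup", "cups", "tbsp", "grams"]
ingredients = ["olive oil", "plain flour", "sugar"]
matcher.add("UNIT", [nlp.make_doc(t) for t in units])
matcher.add("INGREDIENT", [nlp.make_doc(t) for t in ingredients])

def parse_ingredient(text):
    """Return (label, matched span text) pairs found in an ingredient string."""
    doc = nlp(text)
    return [(nlp.vocab.strings[mid], doc[s:e].text) for mid, s, e in matcher(doc)]

print(parse_ingredient("2 cups plain flour"))
```

Because matching is dictionary-driven, the parser is fast and fully predictable, and the same function signature could later be backed by a trained model, as the post suggests.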

elmerot, to digitalhumanities
@elmerot@mastodon.nu

And it's out 😃 My colleague Ondřej Pekáček and I have published our first article on the representation of in the news, 2015–2023, using NER and collocations and looking at presences and absences in the 2015 vs. 2022 periods. On paper first (as here), but it will be soon.
@corpuslinguistics
@digitalhumanities
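For readers unfamiliar with the collocation analysis the article mentions, a toy sketch of scoring bigram collocations with pointwise mutual information (PMI); the corpus and scoring choice are illustrative assumptions, not taken from the article itself:

```python
import math
from collections import Counter

# Tiny illustrative corpus; a real study would use a large news corpus.
tokens = ("the border opened and refugees crossed the border "
          "while refugees waited at the border").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = len(tokens), len(tokens) - 1

def pmi(w1, w2):
    # PMI = log2( P(w1,w2) / (P(w1) * P(w2)) ): higher means the pair
    # co-occurs more often than its words' frequencies would predict.
    p_xy = bigrams[(w1, w2)] / n_bi
    return math.log2(p_xy / ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))

# "the border" recurs as a unit, so it scores well above zero.
print(pmi("the", "border"))
```

In practice, toolkits such as NLTK's collocation measures offer PMI alongside more robust scores (e.g. log-likelihood), since raw PMI overweights rare pairs.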


futurebird, to random
@futurebird@sauropods.win

Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I do get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

hobs,
@hobs@mstdn.social

@msh
Not true. All the say otherwise. You have to look past the hyped to the bread-and-butter BERT and BART models, but the trend is undeniable:

https://paperswithcode.com/area/natural-language-processing

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. The times they are a-changin'.
@futurebird
