ErikJonker,
@ErikJonker@mastodon.social avatar

My view is that, in general, AI will not cause massive job losses; it will change work. However, in some specific niches jobs will disappear. Translation is a good example.
https://www.politico.eu/article/translators-translation-european-union-eu-autmation-machine-learning-ai-artificial-intelligence-translators-jobs/

asoska,

@ErikJonker Not sure what came first - AI or the urge to cut staff and costs.

Jigsaw_You,
@Jigsaw_You@mastodon.nl avatar

@asoska @ErikJonker Good point. However, I think the impact of AI will not be very large in the short term. The current set of solutions will probably not survive, or be used professionally on a large scale, due to (planned) regulations. The technology in its current form, however promising, is still too unreliable, badly designed (a black box), and not energy-efficient. These issues need to be addressed first.

ErikJonker,
@ErikJonker@mastodon.social avatar

@Jigsaw_You
...I think LLMs as a "co-pilot" for humans in coding, medicine, law, writing, research etc. are already being used now and are changing work. That's the reality.
@asoska

Jigsaw_You,
@Jigsaw_You@mastodon.nl avatar

deleted_by_author

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @ErikJonker @asoska Moreover, the tool landscape is still very volatile: new tools every day, and not many standards yet.

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @asoska @ErikJonker We only read about the ‘spectacular’ success stories. But most likely, just as with other hypes, most organizations are still just playing with it, conducting pilots and PoCs, finding out what the added value is, and definitely not relying on these tools for crucial processes, decisions, production code etc. So the peak of productivity has not been reached. Moreover, these tools are definitely not used in strictly regulated environments.

    ErikJonker,
    @ErikJonker@mastodon.social avatar

    @Jigsaw_You @asoska ..I agree with most of this article; it's only the scepticism about the added value of LLMs that I don't share.

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @asoska @ErikJonker The technology is certainly usable, but only if certain fundamental conditions are fulfilled: transparency with regard to training data, bias elimination, no copyright issues with the training data used…

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @asoska @ErikJonker And of course fixing the reliability, i.e. drastically reducing the level of hallucination.

    ErikJonker,
    @ErikJonker@mastodon.social avatar

    @Jigsaw_You @asoska Of course they can be used in strictly regulated environments (that's where I work), but only for certain tasks (not for decisions, formal tasks etc.). I see many opportunities in supporting roles, also in law.

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @ErikJonker @asoska Hmm, you can always use it for small tasks, I guess? But then the impact is ultimately relatively small. My guess: the future belongs to tailored, custom-made tools using high-quality, curated and transparent datasets. LLM-based tools, but not simply trained on everything. To draw a parallel: smart data instead of big data. 😀

    ErikJonker,
    @ErikJonker@mastodon.social avatar

    @Jigsaw_You @asoska I am a big fan of smart data, but for text generation and language understanding, using a lot of data in the training set makes sense; the same goes for code completion and other tasks. Of course "smart" choices should be made with regard to the datasets for pretraining.

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @ErikJonker @asoska There are a couple of issues with the current set-up of (too) large models. Firstly, bias: the larger the training set, the more difficult it is to reduce bias, i.e. the more data curation is needed. Next, larger datasets mean longer training time, and longer training time means larger environmental impact. We should develop models that are fit for purpose, not naively build (too) large models simply because we can.
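    The cost argument above can be made concrete with the common rule of thumb from the scaling-law literature that training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D training tokens; it is a heuristic, not an exact figure, and the model/dataset sizes below are illustrative only:

```python
# Rough training-compute estimate using the C ≈ 6 * N * D heuristic
# (N = parameters, D = training tokens). Illustrative numbers only.

def train_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

small = train_flops(7e9, 1e12)  # hypothetical 7B-parameter model, 1T tokens
large = train_flops(7e9, 2e12)  # same model, twice the data

print(f"{small:.2e} FLOPs vs {large:.2e} FLOPs")
print(f"cost ratio: {large / small:.1f}x")  # doubling data doubles compute
```

    Since energy use scales with compute, this is the quantitative core of the "more data, more environmental impact" point.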

    ErikJonker,
    @ErikJonker@mastodon.social avatar

    @Jigsaw_You @asoska True, fit for purpose, but if I want a model that can summarize text, write good letters etc., more data is often better. The question is what the optimal size is for various functions, also with energy use and costs in mind 🤔
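    On the "optimal size" question, one widely cited reference point is the Chinchilla finding that, for a fixed compute budget, training data should scale with model size (roughly 20 tokens per parameter). A minimal sketch, assuming the C ≈ 6·N·D heuristic and treating the 20:1 ratio as a rule of thumb rather than exact guidance:

```python
import math

def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Return (params N, tokens D) with D = 20 * N under C = 6 * N * D."""
    # C = 6 * N * (20 * N) = 120 * N^2, so N = sqrt(C / 120)
    n = math.sqrt(compute_flops / 120)
    return n, 20 * n

n, d = chinchilla_optimal(1e21)  # hypothetical 1e21-FLOP budget
print(f"~{n:.2e} params, ~{d:.2e} tokens")
```

    The takeaway matches the discussion: for a given energy/cost budget there is a balance point, and simply growing either the model or the dataset past it is wasteful.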

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @ErikJonker @asoska More data means more costs and risks. Thus, indeed, we need to carefully determine whether the benefits outweigh the costs and risks. I don’t think this evaluation is being made in this age of extremes 😀

    ErikJonker,
    @ErikJonker@mastodon.social avatar

    ..I am slightly optimistic, also because the big platforms want to make (a lot of) money, so they too have an interest in efficient, cheaper models. That's also why you see a lot of innovation around this in open source, but also at the big platforms (Google, Microsoft, AWS). We will see.

    Jigsaw_You,
    @Jigsaw_You@mastodon.nl avatar

    @ErikJonker As long as big platforms are cutting staff at ‘responsible’ teams, I don’t see a lot of reason to be optimistic. 🤔
