@TheServitor@sigmoid.social avatar

TheServitor

@TheServitor@sigmoid.social

Daniel Detlaf (he/him)
One-man flea circus, writer, sci-fi nerd, news junkie and AI tinkerer.

No one who is carefully curating their feed should follow me, I boost whatever catches my eye

I mostly post about:
#Writing #SciFi #Politics #AI #Climate #Science #Python #Space #TimeTravel #History #LLMs #Dystopia #OpenAI #AIEthics #Journalism #NewMexico #DemocraticSocialism

Albuquerque, NM
Avatar: Doodle of robot wearing a tie and name tag.


marcel, to ai
@marcel@waldvogel.family avatar

Modern text generators create randomized output with no prior planning. They resist being quality-checked by the tools and processes established in the software industry.

Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.

This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.


https://www.theverge.com/2024/5/24/24164119/google-ai-overview-mistakes-search-race-openai

TheServitor,
@TheServitor@sigmoid.social avatar

@marcel

I would not be surprised if LLMs could get us to 99% correctness. Which is still too low for automated processes but plenty good for manual work.

You can have one check another's work, and it works to a reasonable degree, because LLMs are stronger evaluators and classifiers than truth generators. They are better at telling whether an answer is correct than giving a correct answer.

LLMs aren't AGI themselves, but they may end up a tool used by a theoretical AGI.
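A minimal sketch of that generate-then-verify pattern. Everything here is hypothetical: `call_llm` is a stub standing in for whatever chat-completion API you use, with canned replies so the example runs offline.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model)."""
    # Canned replies so the sketch runs without network access.
    if prompt.startswith("Answer:"):
        return "Paris"
    return "CORRECT"

def generate_and_verify(question: str) -> tuple[str, bool]:
    # First call generates an answer; second call only has to evaluate it,
    # which is the easier task for an LLM.
    answer = call_llm(f"Answer: {question}")
    verdict = call_llm(
        f"Question: {question}\nAnswer: {answer}\nReply CORRECT or INCORRECT."
    )
    return answer, verdict.strip().upper() == "CORRECT"

answer, ok = generate_and_verify("What is the capital of France?")
```

In practice the verifier prompt would go to a second model (or the same model with a different system prompt), and you would retry generation when the verdict comes back INCORRECT.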

stooovie, to ai
@stooovie@mas.to avatar

I have yet to see a SINGLE tool that successfully carries out a complex task.

Mimic anything? No problem. Actually do something that comprises several steps? ALWAYS fails. In 100% of cases.

TheServitor,
@TheServitor@sigmoid.social avatar

@stooovie

I have an article-writing tool that makes about 20 different API calls. Most of them are for generation, but several use the LLM for reasoning tasks: for example, matching each keyword to the article heading it would be most appropriate to write about it under, then returning the result as JSON.

I'm only a hobbyist but I'd say a couple of the prompts are pretty complex.
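That keyword-to-heading step might look roughly like this. This is a sketch, not the poster's actual tool: `call_llm` is a stub in place of a real API call, and the filter at the end guards against the model inventing headings.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder response standing in for a real model call.
    return '{"solar panels": "Energy", "composting": "Gardening"}'

def match_keywords(headings: list[str], keywords: list[str]) -> dict[str, str]:
    prompt = (
        "Match each keyword to the heading it best belongs under.\n"
        f"Headings: {headings}\nKeywords: {keywords}\n"
        "Return a JSON object mapping keyword -> heading."
    )
    mapping = json.loads(call_llm(prompt))
    # Drop any heading the model hallucinated.
    return {k: h for k, h in mapping.items() if h in headings}

result = match_keywords(["Energy", "Gardening"], ["solar panels", "composting"])
```

A real tool would also need to handle malformed JSON from the model (retry or re-prompt), which is where much of the prompt complexity tends to live.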

parismarx, to tech
@parismarx@mastodon.online avatar

“The certain knowledge that Kevin Roose is a credulous dumbass who makes a jingle-bell sound if he nods his head real fast only does so much to moderate the obscenity and offensiveness of his ascribing ‘playful intelligence’ and ‘emotional intuition’ to a predictive text generator.”

https://defector.com/if-kevin-roose-was-chatgpt-with-a-spray-on-beard-could-anyone-tell

#tech #ai #chatgpt #gpt4o

TheServitor,
@TheServitor@sigmoid.social avatar

@parismarx

I laughed and I'm a Times reader with no especial love of the tech section, but I'm not sure personally eviscerating Roose in a blog post shows any particular moral superiority on the part of OP.

donwatkins, to ai
@donwatkins@fosstodon.org avatar

Any recommendation for #AI detector apps? Asking for a friend.

TheServitor,
@TheServitor@sigmoid.social avatar

@donwatkins

They're all pretty unreliable. I used one of the best ones on a site of mine with a range of AI and human content. It was more accurate than a coin flip, but had a lot of false positives.
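For illustration (with made-up numbers, not the actual test results): a detector can beat a coin flip on overall accuracy while still wrongly flagging a large share of human-written text, because accuracy and the false-positive rate are different metrics.

```python
def detector_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute accuracy and false-positive rate from a confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        # Fraction of human-written texts wrongly flagged as AI.
        "false_positive_rate": fp / (fp + tn),
    }

# Hypothetical numbers: 60 AI texts caught, 40 missed,
# 75 human texts cleared, 25 wrongly flagged.
m = detector_metrics(tp=60, fp=25, tn=75, fn=40)
# accuracy = 135/200 = 0.675, clearly better than a coin flip,
# yet 25% of human authors get falsely accused.
```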

kellogh, to LLMs
@kellogh@hachyderm.io avatar

Let’s be honest, if you’re a software engineer, you know where all this compute and power consumption is going. While it’s popular to blame AI, y’all know how much is wasted on microservices, overscaled infrastructure, spark/databricks and other unnecessary big data tech. It’s long past time we’re honest with the public about how much our practices are hurting the climate, and stop looking for scapegoats.

https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/

TheServitor,
@TheServitor@sigmoid.social avatar

@kellogh

"Today, the electricity utilized by data centers accounts for 0.3 percent of overall carbon emissions, and if we extend our accounting to include networked devices like laptops, smartphones, and tablets, the total shifts to 2 percent of global carbon emissions."

That's actually much, much lower than I imagined. Considering how much use I get out of the internet, I find this almost a relief?

TheServitor,
@TheServitor@sigmoid.social avatar

@kellogh

Dear God, don't confront me with the footprint of the steak in my freezer. It's probably as much as my laptop's emissions in a year. Knowledge is power but self-awareness can be a tough row to hoe sometimes.

smallcircles, to meta
@smallcircles@social.coop avatar

It is weird and discouraging to see how many people choose Threads for its looks and reach.

Forgetting that the sole reason this app exists is for a humongous moloch to get at even more of their personal data. All to increase their dominance and power and ability to enrich themselves.

Forgetting that this dominance is already at highly problematic proportions, where Meta and other tech giants have the power to influence society as a whole.

TheServitor,
@TheServitor@sigmoid.social avatar

@smallcircles

Are that many people choosing Threads? That's not a rhetorical question, I haven't been following this.

I took a peek at Threads when it came out, looked around, saw the same general functionality as everywhere else, shrugged, and left. I'm a little surprised if it has been getting traction from federation. It seemed like just another social media app in a crowd.

dragfyre, (edited ) to ai
@dragfyre@mastodon.sandwich.net avatar

Y'all need to remember there's a natural endgame to all of this indiscriminate application of generative AI. The more ubiquitous LLMs become, the more they'll start to learn from each other. Feeding LLM output back into an LLM essentially poisons it and makes output degrade. Same thing with AI art generators. It's like making a copy of a copy; quality naturally trends downward.

The upshot is that most* organizations that bank on generative AI will eventually, and naturally, fail.
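A deterministic toy model of that copy-of-a-copy effect, assuming (crudely) that each training generation only sees the tokens the previous model made reasonably likely. Rare tokens vanish, the distribution gets more peaked, and its entropy can only go down:

```python
import math

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy in bits of a token distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist: dict[str, float], cutoff: float = 0.1) -> dict[str, float]:
    # Rare tokens are unlikely to appear in sampled training data,
    # so drop them and renormalize: the distribution gets more peaked.
    kept = {t: p for t, p in dist.items() if p >= cutoff}
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

dist = {"the": 0.4, "a": 0.3, "cat": 0.2, "dog": 0.06, "axolotl": 0.04}
entropies = [entropy(dist)]
for _ in range(3):
    dist = next_generation(dist)
    entropies.append(entropy(dist))
# entropies is non-increasing: diversity lost in one generation never comes back.
```

This is of course a caricature of model collapse, but it captures the one-way direction of the loss.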

TheServitor,
@TheServitor@sigmoid.social avatar

@dragfyre

What makes that inevitable? I'm not disputing the effect you describe. Model collapse makes sense.

But developers are rational actors. I think the last year has already seen considerable movement towards more curated data sets. (After finding CSAM in LAION, for example.)

The foundational models like GPT4 may have needed to consume everything in order to reach where they are at, but that's not necessarily true of models broadly.
