miklo, to random Polish

"Are you afraid of artificial intelligence?
I can't imagine any scenario in which this does NOT end badly for us." -

so answers the question Andrzej Dragan: theoretical physicist, composer, filmmaker, and populariser of science.
Since I have exactly the same fears, watching the pace at which this technology develops while the ethics and safety aspects are completely ignored, I recommend listening to this man's thoughts on this (and not only this!) topic; he certainly has something wise to say:
invidious.citizen4.eu/watch?v=…


we are breeding a species that will dominate us intellectually: prof. andrzej dragan - didaskalia #7

FeralRobots, to random
@FeralRobots@mastodon.social avatar

If you want to know why people don't trust Microsoft or Google to fix a broken faux-AI, consider that using suicidal teens for A/B testing was regarded as perfectly fine by a Silicon Valley "health" startup developing "AI"-based suicide prevention tools.

(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)

https://www.vice.com/en/article/5d9m3a/horribly-unethical-startup-experimented-on-suicidal-teens-on-facebook-tumblr-with-chatbot

kellogh, to random
@kellogh@hachyderm.io avatar

imo the user experience of GitHub Copilot stinks. Generating code is one of the tasks I trust an LLM with least. I’d rather have a chat interface so I can ask it to:

  1. Refactor
  2. Generate files
  3. Move files
  4. Navigate
  5. Ask questions about code
  6. Understand a new code base

Sure, writing little bits of code is kinda cool, but also ehh 🤨 I’d rather just type it myself. Feels like a lot of opportunity left on the table

stancarey, to generativeAI

"It's not lying, it's not telling the truth, because both of those would require some intentionality and some communicative intent, which it doesn't have."

@emilymbender talks to Michael Rosen about chatbots and the synthetic text they produce: https://www.bbc.co.uk/programmes/m001l97m

philiphubbard, to llm
@philiphubbard@fediscience.org avatar

Can an LLM solve esoteric problems, like camera motion in Blender? So far, I'm seeing only limited success. The GPT-4 (advanced) model of phind.com can solve: "Write Python using the Blender API to animate orbiting of the camera by 90 degrees around the Z axis centered at object 'A' from frame 1 to 24." It parents the camera to a pivoting empty node, a good approach. But the GPT-3.5 (standard) model gets confused. And the GPT-4 model fails on a more useful version of this problem. (1/3)
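The pivot-empty trick amounts to rotating the camera about a Z axis through the target object and keying one position per frame. A minimal pure-Python sketch of that keyframe math, outside Blender (function names here are mine for illustration, not the Blender API):

```python
import math

def orbit_position(cam, pivot, angle_deg):
    # rotate point cam around pivot, about the Z axis, by angle_deg
    ang = math.radians(angle_deg)
    dx, dy = cam[0] - pivot[0], cam[1] - pivot[1]
    x = pivot[0] + dx * math.cos(ang) - dy * math.sin(ang)
    y = pivot[1] + dx * math.sin(ang) + dy * math.cos(ang)
    return (x, y, cam[2])  # Z-axis orbit leaves height unchanged

def orbit_keyframes(cam, pivot, total_deg=90.0, frame_start=1, frame_end=24):
    # linearly interpolate the orbit angle across the frame range,
    # producing one camera position per frame
    frames = {}
    for f in range(frame_start, frame_end + 1):
        t = (f - frame_start) / (frame_end - frame_start)
        frames[f] = orbit_position(cam, pivot, total_deg * t)
    return frames
```

In Blender itself the empty-parent approach keys only the empty's rotation, and the constraint system does this arithmetic for you; the sketch just shows what is being computed.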

sofia, to ai
@sofia@chaos.social avatar

i love how much the current wave of AI fucks with the cultural tropes we had about computer intelligence. instead of cold, rigid, logical planners, they are moody lumps of intuitions, who respond differently when you ask nicely, have trouble staying on topic, and may just make up stuff that "sounds about right".

i think many false expectations are because we can't quite imagine computers doing things like that.

they still can't make good jokes tho 😅.

ovid, to ai
@ovid@fosstodon.org avatar

What is fascinating about the new AI revolution is that a storm is coming, the experts are telling us, we can see it, and it will be interesting to watch how industry reacts.

In short, programming as a profession is going to largely die. I hear numbers like "in ten years" being bandied about, though I'm skeptical of the timeframe.

Developers are the 21st-century version of the well-paid textile workers, except we have years of advance warning.

What are your plans?

kellogh, to llm
@kellogh@hachyderm.io avatar

a lot of takes today about how anything you can do with an LLM would be better done with “ML”. yes, always from people who never did ML in their life.

see a claim. check their bio, they’ve never even heard of a binary classifier, yet they’re suddenly ML experts

nickanderson, to emacs
@nickanderson@fosstodon.org avatar

I asked Khoj (https://khoj.dev/) to summarize how I used org-mode in 2023, speaking in the style of Dr. House MD.

> Ah, another human seeking to understand the mysteries of... adjusts glasses ...org-mode! chuckles Well, well, well. Let me tell you something about this "exocortex" and "second brain" business.

petersuber, (edited ) to ai
@petersuber@fediscience.org avatar

Japan will allow AI tools to train on any material “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise."
https://technomancers.ai/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/

PS: For better or worse, this is a bold, unnuanced move that pushes other jurisdictions to justify any nuance they wish to recognize.

** Please see the update below for a correction or clarification.

TNLNYC, (edited ) to random
@TNLNYC@mastodon.social avatar

As far as I'm concerned, the best explanation of ChatGPT came from @cwebber:

"ChatGPT is Mansplaining as a Service"

https://octodon.social/@cwebber/109462369670144568

The more I think about the statement, the more it strikes me as correct for most LLMs: confidently asserting truthiness on every topic without having any expertise in any.

danilo, to ai
@danilo@hachyderm.io avatar

One criticism of LLM and ChatGPT-type technologies we’d hear about a LOT more if the amateur critic set actually engaged with and learned about the technology:

“AI” tools quietly, subtly, act as brakes against innovation. That is, because they are best suited to replay existing, statistically well-worn paths to solving a problem, they will actually add friction when attempting something novel.

They are handy for reproducing existing maps, and stealth saboteurs for blazing new trails.

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

ML systems can leak confidential data in their training set even with a very silly attack. This is a direct and clear issue that applies well beyond the ChatGPT case.

https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html

fredbrooker, to ai Czech
@fredbrooker@witter.cz avatar

How do you see the current LLM boom? 🤔

I see the biggest problem in the fact that the training data are based on the work of people who, in most cases, never consented to the processing of their authorial or artistic work. The biggest data theft in the world! And therefore a business model founded on piracy.

aral, to ArtificialIntelligence
@aral@mastodon.ar.al avatar

Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.

Vivaldi, (edited ) to ai
@Vivaldi@vivaldi.net avatar

Should we commit to using the latest trend in AI, large language models (LLMs)? No, we won't.

This current AI trend comes with a list of limitations and challenges.

For starters, LLMs are complex and resource-intensive. The energy needed to train and operate these models is excessive and could potentially harm the environment.

For more reasons to question the use of AI, read further to understand its practical, ethical, and technical drawbacks.

@NearToTheSky

https://vivaldi.com/blog/technology/vivaldi-wont-fallow-the-current-ai-trend/

john, (edited ) to ChatGPT
@john@sauropods.win avatar

I've been using ChatGPT over the last few months, mostly for tinkery type stuff, idle questions and so on. But my usage is increasing, and I'm starting to use it in place of a search engine. Given that, and the whole ... thing, I need to move to a locally-running LLM. I didn't realise it is actually really easy, and some of the models are pretty good!

I'm using FreeChat for Mac, which runs out of the box (it also lets you choose a model from a local file): https://www.freechat.run

SirTapTap, to ai
@SirTapTap@mastodon.social avatar

Bard, fucking chill jesus christ

syntaxseed, to ChatGPT
@syntaxseed@phpc.social avatar

A chat with my Dad, inspired by his use of ChatGPT to choose aesthetically pleasing wood combos, turned into an interesting thought exercise.

How much assistance in art does it take for it to cross the line into being no longer a work of human creativity?

And do we place the same value (tangible & intangible) on art created by the human mind and art simulated by a machine? Do we care?

nazgul, to llm

I’ve been reading a lot of non-computer-related informational and how-to sites the past few months, and I’m starting to realize why LLM generators have such a verbose and roundabout description style. They didn’t make it up; that’s become the voice of the web, and they adopted it.

The other day I was looking for tips on reducing back pain while washing dishes and the site went on for pages before saying “use a step stool”.

The old style web, similar to old style newspaper articles, was to immediately get to the point, and then provide increasing levels of detail. This allowed the reader to immediately learn what they needed to know, and then get more information if they wanted it, up to the depth they wanted.

The new style is to make the article large to increase SEO, and to put the answer at the end to increase advertising revenue.

I hate it.

Although at least nobody will ever suspect my writing to be generated by a generic LLM.

There’s a great printer review on the web where the author was recommending the best laser printer for a home office. The answer was clearly (and I happen to agree) a Brother laser printer. The author didn’t care about the additional ad revenue, but they couldn’t ignore the SEO issue. So they answered the question in the first paragraph, then told the reader to stop reading, and then let an LLM generate fill for the rest. (Read it, it’s amusing, in a dystopian way. https://www.theverge.com/23642073/best-printer-2023-brother-laser-wi-fi-its-fine).

The down side of LLM-generated content of course is that now every time you see something odd in an article, you start to wonder if it’s generated and shouldn’t be trusted. And while there’s something to be said for trusting stuff less, this isn’t the right path.

Today I was reading a helpful article about plumbing S-Traps vs. P-Traps and I hit this gem.

> Do you live in a very dry climate?
>
> Then it would be great to check the level of water within the trap.
>
> There’s a chance all the water from there will evaporate.
>
> If this happens, flush a large amount of money through the line and refill the trap.

Human mistake? Easter egg? LLM. I don’t know, but I laughed at least.

Random thought: If you want to make something useful from “AI”, make a browser plugin that reformats articles to work the old way.

ai6yr, to ai

Arstechnica: "Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

YouTube videos of 6K celebrities helped train AI model to animate photos in real time."
https://arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

Crell, to ai
@Crell@phpc.social avatar

I reached out to a vendor about their AI code assistant.

They verified:

  1. They use OpenAI, which means my GitHub OSS has almost certainly been used in training data.
  2. They rely on OpenAI's promise to not ingest any code that is used for "context".
  3. They specifically do not disclaim that their tool could result in me violating someone else's copyright, and they could suggest the same code to someone else, too.

Uninstall this crap, now. It's dangerous and irresponsible.

kellogh, to llm
@kellogh@hachyderm.io avatar

i low key don't want to see a big jump in AI or LLM capabilities anytime soon. rn they're capable enough that my mom wants to use them, but bad enough that even she has an intuitive sense for when they're wrong

that's how you build "AIQ", the skill of using it. Lots of people toying with them, to feel out their capabilities and limitations

kjr, to llm
@kjr@babka.social avatar

I am trying to build a RAG with LLAMA 3 and... getting really crazy with the strange formats I get in the response....
Not only the response, but additional text, XML tags...
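For what it's worth, a blunt post-processing pass can rescue a lot of this: strip the XML-ish tags the model wraps its answer in and collapse the leftover whitespace before using the text. A minimal regex sketch (the helper name is mine, not part of any Llama tooling):

```python
import re

def strip_model_wrapping(raw):
    # remove XML-ish tags the model sometimes wraps answers in,
    # e.g. <answer>...</answer> or <result>...</result>
    no_tags = re.sub(r"</?[a-zA-Z][^>]*>", "", raw)
    # collapse the whitespace the removed tags leave behind
    return " ".join(no_tags.split())
```

It won't help with extra chatter around the answer (that usually needs a stricter prompt or a grammar-constrained decoder), but it makes the tag noise harmless.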

kellogh, to llm
@kellogh@hachyderm.io avatar

there’s a big need for something stronger than prompting alone, but more flexible and cheaper than a giant all-knowing LLM.

A great part about RAG is that it’s just a database. You just issue INSERT/UPDATE and yeah, that’s how you maintain knowledge. No million dollar training process
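That "knowledge is just rows" idea can be shown with a toy store, using a stand-in bag-of-words embedding in place of a real model (all names here are mine, for illustration):

```python
import math

def embed(text, vocab=("cat", "dog", "fish")):
    # toy embedding: word counts over a tiny fixed vocabulary
    # (a real system would call an embedding model here)
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class TinyRAGStore:
    """Knowledge base maintained with plain inserts/updates, no training run."""
    def __init__(self):
        self.rows = {}  # id -> (text, vector)

    def insert(self, doc_id, text):
        self.rows[doc_id] = (text, embed(text))

    def update(self, doc_id, text):
        # overwriting the row IS the knowledge-maintenance story
        self.insert(doc_id, text)

    def query(self, question, k=1):
        # rank stored texts by similarity to the question
        qv = embed(question)
        ranked = sorted(self.rows.values(),
                        key=lambda r: cosine(qv, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Swap the dict for a vector-capable database and the toy `embed` for a real model, and updating what the system "knows" really is just an INSERT or UPDATE.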
