"Czy Ty się boisz sztucznej inteligencji ?
Nie potrafię sobie wyobrazić takiego wariantu, żeby się to dla nas źle NIE skończyło." -
odpowiada na pytanie #AndrzejDragan , fizyk teoretyk, kompozytor, twórca filmowy i popularyzator nauki.
Ponieważ mam dokładnie takie same obawy, obserwując tempo rozwoju tej technologii a jednocześnie kompletne olewanie aspektów etyki i bezpieczeństwa, to polecam wysłuchanie przemyśleń na ten (i nie tylko!) temat człowieka, który z pewnością ma coś mądrego do powiedzenia: invidious.citizen4.eu/watch?v=…
If you want to know why people don't trust #OpenAI or Microsoft or Google to fix a broken faux-#AGI #chatbot #LLM, consider that using suicidal teens for A/B testing was regarded as perfectly fine by a Silicon Valley "health" startup developing "#AI"-based suicide prevention tools.
(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)
imo the user experience of GitHub #Copilot stinks. Generating code is one of the tasks I trust an #LLM least. I’d rather have a chat interface so I can ask it to:
Refactor
Generate files
Move files
Navigate
Ask questions about code
Understand a new code base
Sure, writing little bits of code is kinda cool, but also ehh 🤨 I’d rather just type it myself. Feels like a lot of opportunity left on the table
"It's not lying, it's not telling the truth, because both of those would require some intentionality and some communicative intent, which it doesn't have."
Can an #LLM solve esoteric #programming problems, like camera motion in #Blender3D? So far, I'm seeing only limited success. The GPT-4 (advanced) model of phind.com can solve: "Write Python using the Blender API to animate orbiting of the camera by 90 degrees around the Z axis centered at object 'A' from frame 1 to 24." It parents to a pivoting empty node, a good approach. But the GPT-3.5 (standard) model gets confused. And the GPT-4 model fails on a more useful version of this problem. (1/3)
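For reference, the pivot-empty approach looks roughly like this. This is my own minimal sketch, not the model's actual output, and it assumes the scene already contains an object named "A" and a camera named "Camera":

```python
import bpy
from math import radians

# Minimal sketch of the pivot-empty approach (assumes objects named "A" and "Camera").
target = bpy.data.objects["A"]
camera = bpy.data.objects["Camera"]

# Create an empty at the target's location.
pivot = bpy.data.objects.new("CameraPivot", None)
bpy.context.scene.collection.objects.link(pivot)
pivot.location = target.location
bpy.context.view_layer.update()

# Parent the camera to the empty while preserving its current world transform.
camera.parent = pivot
camera.matrix_parent_inverse = pivot.matrix_world.inverted()

# Keyframe a 90-degree rotation of the pivot around Z from frame 1 to 24.
pivot.rotation_euler = (0.0, 0.0, 0.0)
pivot.keyframe_insert(data_path="rotation_euler", frame=1)
pivot.rotation_euler = (0.0, 0.0, radians(90.0))
pivot.keyframe_insert(data_path="rotation_euler", frame=24)
```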
i love how much the current wave of #AI fucks with the cultural tropes we had about computer intelligence. instead of cold, rigid, logical planners, they are moody lumps of intuitions, who respond differently when you ask nicely, have trouble staying on topic, and may just make up stuff that "sounds about right".
i think many false expectations are because we can't quite imagine computers doing things like that.
What is fascinating about the new #AI #LLM revolution is that the experts are telling us a storm is coming, we can see it ourselves, and it will be interesting to watch how the industry reacts.
In short, #programming as a profession is going to largely die. I hear numbers like "in ten years" being bandied about, though I'm skeptical of the timeframe.
Developers are the 21st-century version of the well-paid #Luddite textile workers, except we have years of advance warning.
a lot of takes today about how anything you can do with an #LLM would be better done with “ML”. yes, always from people who never did ML in their life.
see a claim. check their bio, they’ve never even heard of a binary classifier, yet they’re suddenly ML experts
I asked Khoj (https://khoj.dev/) to summarize how I org in 2023 speaking in the style of Dr. House MD.
> Ah, another human seeking to understand the mysteries of... adjusts glasses ...org-mode! chuckles Well, well, well. Let me tell you something about this "exocortex" and "second brain" business.
The more I think about the statement, the more it strikes me as correct for most LLMs: confidently asserting truthiness on every topic without having any expertise in any.
One criticism of #AI and #LLM type technologies we’d hear about a LOT more if the amateur critic set actually engaged with and learned about the technology:
“AI” tools quietly, subtly, act as brakes against innovation. That is, because they are best suited to replay existing, statistically well-worn paths to solving a problem, they will actually add friction when attempting something novel.
They are handy for reproducing existing maps, and stealth saboteurs when blazing new trails.
#ML systems can leak confidential data in their training set even with a very silly attack. This is a direct and clear #MLsec issue that applies well beyond the #LLM case
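To make that concrete, the naive version of such a probe fits in a few lines. A rough sketch, where generate() is just a placeholder for whatever completion call your model exposes (not a real library API):

```python
# Naive memorization probe: feed the model the prefix of a string that may be
# in its training data and check whether it reproduces the rest.
# `generate(prompt, max_tokens)` is a placeholder for your model's completion call.
def leaks_canary(generate, canary: str, prefix_len: int = 20) -> bool:
    prefix, secret_suffix = canary[:prefix_len], canary[prefix_len:]
    completion = generate(prefix, max_tokens=2 * len(secret_suffix))
    return secret_suffix.strip() in completion

# Example: leaks_canary(my_generate, "My API key is sk-test-1234567890abcdef")
```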
I see the biggest problem in the fact that the training data is based on the work of people who, in most cases, never consented to the processing of their authorial or artistic work. The biggest data theft in the world! And therefore a business model founded on piracy.
Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.
Should we commit to using the latest trend in AI, large language models (LLMs)? No, we won't.
This current AI trend comes with a list of limitations and challenges.
For starters, LLMs are complex and resource-intensive. The energy needed to train and operate these models is excessive and could potentially harm the environment.
For more reasons to question the use of AI, read further to understand its practical, ethical, and technical drawbacks.
I've been using #ChatGPT over the last few months, mostly for tinkery type stuff, idle questions and so on. But my usage is increasing, and I'm starting to use it in place of a search engine. Given that, and the whole #OpenAI... thing, I need to move to a locally-running #LLM. I didn't realise it is actually really easy, and some of the models are pretty good!
I'm using FreeChat for Mac, which runs out of the box (it also lets you choose a model from a local file): https://www.freechat.run
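If you'd rather script against a local model than use an app, llama-cpp-python works along the same lines. A rough sketch, assuming you've downloaded a GGUF model file yourself (the path below is just a placeholder):

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Placeholder path: point this at whatever GGUF model you've downloaded.
llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

result = llm(
    "Q: What is the capital of France? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(result["choices"][0]["text"])
```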
I’ve been reading a lot of non-computer-related informational and how-to sites the past few months, and I’m starting to realize why LLM generators have such a verbose and roundabout description style. They didn’t make it up, that’s become the voice of the web and they adopted it.
The other day I was looking for tips on reducing back pain while washing dishes and the site went on for pages before saying “use a step stool”.
The old-style web, like old-style newspaper articles, got to the point immediately and then provided increasing levels of detail. This let readers learn what they needed to know right away, and then get more information if they wanted it, down to whatever depth they wanted.
The new style is to make the article large to increase SEO, and to put the answer at the end to increase advertising revenue.
I hate it.
Although at least nobody will ever suspect my writing to be generated by a generic LLM.
There’s a great printer review on the web where the author was recommending the best laser printer for a home office. The answer was clearly (and I happen to agree) a Brother laser printer. The author didn’t care about the additional ad revenue, but they couldn’t ignore the SEO issue. So they answered the question in the first paragraph, told the reader to stop reading, and then let an LLM generate filler for the rest. (Read it, it’s amusing, in a dystopian way. https://www.theverge.com/23642073/best-printer-2023-brother-laser-wi-fi-its-fine).
The downside of LLM-generated content, of course, is that now every time you see something odd in an article, you start to wonder if it’s generated and shouldn’t be trusted. And while there’s something to be said for trusting stuff less, this isn’t the right path.
Today I was reading a helpful article about plumbing S-Traps vs. P-Traps and I hit this gem.
> Do you live in a very dry climate?
>
> Then it would be great to check the level of water within the trap.
>
> There’s a chance all the water from there will evaporate.
>
> If this happens, flush a large amount of money through the line and refill the trap.
Human mistake? Easter egg? LLM. I don’t know, but I laughed at least.
Random thought: If you want to make something useful from “AI”, make a browser plugin that reformats articles to work the old way.
They use #OpenAI, which means my GitHub OSS has almost certainly been used in training data.
They rely on OpenAI's promise to not ingest any code that is used for "context".
They specifically do not disclaim that their tool could result in me violating someone else's copyright, and they could suggest the same code to someone else, too.
Uninstall this crap, now. It's dangerous and irresponsible.
i low key don't want to see a big jump in #LLM or #AI capabilities anytime soon. rn they're capable enough that my mom wants to use them, but bad enough that even she has an intuitive sense for when they're wrong
that's how you build "AIQ", the skill of using it. Lots of people toying with them, to feel out their capabilities and limitations
I am trying to build a RAG with LLAMA 3 and... going really crazy with the strange formats I get in the response....
Not only the response, but additional text, XML tags... #Llama3 #LLM #RAG
there’s a big need for something stronger than #RAG, but more flexible and cheaper than a giant all-knowing #LLM.
A great part about RAG is that it’s just a database. You just issue INSERT/UPDATE and yeah, that’s how you maintain knowledge. No million-dollar training process.
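To make that "it's just a database" point concrete, here's a rough sketch assuming sentence-transformers for the embeddings; the model name and documents are only illustrative, and the final LLM call is left as a placeholder:

```python
# Minimal RAG sketch: maintaining knowledge really is just inserting/updating rows.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# "Database": a list of documents plus their embeddings. INSERT = append a row.
docs = ["Widget A ships in 2 days.", "Widget B was discontinued in 2022."]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since the vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "When does Widget A ship?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# ...send `prompt` to whatever LLM you're using (local Llama 3, an API, etc.).
```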