Like many other technologists, I gave my time and expertise for free to #StackOverflow because the content was licensed CC-BY-SA - meaning that it was a public good. It brought me joy to help people figure out why their #ASR code wasn't working, or assist with a #CUDA bug.
Now that a deal has been struck with #OpenAI to scrape all the questions and answers on Stack Overflow to train #GenerativeAI models like #LLMs, without attribution to authors (as required by the CC-BY-SA license under which Stack Overflow content is licensed), and to sell the result back to us (even though the SA clause requires derivative works to be shared under the same license), I have issued a data deletion request to Stack Overflow to disassociate my identity from my Stack Overflow username, and I am closing my account, just as I did with Reddit, Inc.
The data I helped create is going to be bundled in an #LLM and sold back to me.
In a single move, and in exchange for token lucre, Stack Overflow has alienated its community, which is also its main source of competitive advantage.
Stack Exchange, Stack Overflow's former instantiation, used to fulfill a psychological contract: help others out when you can, in the expectation that others may in turn assist you in the future. Now it's not an exchange, it's #enshittification.
Programmers now join artists and copywriters, whose works have been snaffled up to create #GenAI solutions.
The silver lining I see is this: once OpenAI ships LLMs that generate code, as Microsoft has done with Copilot on GitHub, where will they go to get help with the bugs those generative AI models introduce, particularly given the "downward pressure on code quality" that the recent GitClear report attributes to these tools?
While this is just one more example of #enshittification, it's also a salient lesson for #DevRel folks: if your community is your source of advantage, don't upset it.
Strong agree. A lot of Elinor Ostrom's work on governance of the commons (the domain popularly known through the phrase "tragedy of the commons") relied on mechanisms of co-operation between institutions.
One of the key challenges I see here is that corporations like OpenAI now have a lot more power than even groups of institutions - lawmakers, governments, civil society. We've seen that recently with the way Meta has influenced government policy around paying to share content from commercial news agencies.
There's also a paradox here: increased production of work in the Commons is good for OpenAI, because it provides them with more data. However, the way the Commons is being used, to create for-profit products like #GPT, serves as a constraint on people donating creative material to the Commons in the first place.
In #homeassistant, using #nodered to make an API call to a #llamacpp server running a #mistral 7B model. I create a prompt that asks it to summarize all the data from the sensors in my house. The results are pretty impressive for such a little model. Now I get a customized rundown, Jarvis style.
Useful? Probably not. But cool as hell. :cool_skelly:
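The post doesn't include its Node-RED flow, but the same idea can be sketched in a few lines of Python. This is a minimal sketch, assuming a llama.cpp server on its default port (`8080`) exposing the standard `/completion` endpoint; the sensor names, prompt wording, and `summarize_house` helper are invented for illustration.

```python
import json
import urllib.request

# Assumed address of a local llama.cpp server started with `llama-server`.
LLAMA_URL = "http://127.0.0.1:8080/completion"


def build_sensor_prompt(sensors: dict) -> str:
    """Flatten sensor name/state pairs into a summarization prompt."""
    lines = [f"- {name}: {state}" for name, state in sorted(sensors.items())]
    return (
        "You are a concise home assistant. Summarize the current state "
        "of the house in two sentences, Jarvis style.\n\n"
        "Sensor readings:\n" + "\n".join(lines) + "\n\nSummary:"
    )


def summarize_house(sensors: dict) -> str:
    """POST the prompt to the llama.cpp server and return its completion."""
    payload = json.dumps({
        "prompt": build_sensor_prompt(sensors),
        "n_predict": 128,       # cap the length of the rundown
        "temperature": 0.7,
    }).encode()
    req = urllib.request.Request(
        LLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # llama.cpp's /completion returns {"content": "..."} among other fields.
        return json.loads(resp.read())["content"].strip()


# Building the prompt needs no server; calling summarize_house() does.
readings = {"living_room_temp": "21.5 °C", "front_door": "locked"}
prompt = build_sensor_prompt(readings)
```

In Node-RED the equivalent is an `http request` node fed by a function node that assembles the same JSON payload from Home Assistant entity states.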
One view holds that intelligence is the ability to solve problems.
But you can also simply know ready-made solutions to problems, simply memorized; then it's enough to follow the path of the ready-made solution, pulled from memory.
There is a view that intelligence is good data compression, and that artificial intelligence works the same way; that is how a model is created. And compression is simply finding analogies so that something can be packed into a smaller size.
The brain works the same way.
If you take this further, imagine the decompressed data from a very intelligent brain/model. It would be a mass of ready-made solutions to problems, for every possible situation.
After that, solving a problem is no longer intelligent thinking, just running through those (decompressed) ready-made solutions in memory and applying them.
So ultimately everything would come down to applying ready-made solutions; you just have to match the problem with the solution, and you're done.
Intelligence theoretically creates something on the fly, but you can also look at intelligence this way: it is still just running along ready-made solutions, ready-made paths. #artificialintelligence #ai #inteligencja #mozg #iq #gpt
If you compare the now freely available LLMs like GPT-3.5, Claude Sonnet, and Llama 3 with the paid versions of the various models, the difference for most ordinary users and use cases is quite small. Given the enormous investments, that must be a problem for the business case of companies like OpenAI: they need large numbers of ordinary people willing to pay every month for access to their models. Or are other revenue streams more important? 🤔 #AI #businessmodels #LLM #GPT
A pretty good comment on code generation with #AI:
"""
First of all, writing code is decidedly easier than reviewing it. With the help of an #LLM you can automate the easier part of the work while making the other part even harder, because there will be no one to ask why it was decided to do X a particular way two months earlier. Documentation will either not exist at all, be incomprehensible, or be outright wrong.
""" (tłum. moje)
@craigbrownphd I'm thinking of signing up for this. I typically do a lot of coding questions (Copilot, which I pay for via GitHub), but I also do a lot of writing, idea generation, and image generation.
How would you rank Gemini Advanced, GPT Plus, and Copilot Pro?
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
I guess we wait this one out until the "AI" bubble bursts under the incredible subsidization the entire industry is undergoing. It is not profitable. It is not sustainable.
It will not last—but the damage to our planet and fallout from the immense amount of wasted resources will.
I've seen several respected luminaries argue that LLMs are not "true AI" or "strong AI" since they are based on large training sets and predictive behavior. They argue that humans and animals are not trained on such large language models or data sets.
What are education and experience, if not Large Learning Models based on the teaching of schools, universities, and books?
A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds one that 15,000 code bases already depend on, to prevent some hacker from writing a malware version.
Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬
Google is at least a year behind GPT-4 in the quality of its model. At the same time, OpenAI is making an absolute mess of the whole idea around custom GPTs; very few custom GPTs add value. If Google starts integrating Gemini into its ecosystem and can improve its model faster, OpenAI is in trouble. Maybe OpenAI should leave the customer-facing parts to Microsoft and focus on the model itself. #AI #openai #microsoft #gpt #GoogleGemini
I really need to take a serious look at #AI and its practical #use. I don't entirely trust it, since it "feels" as if it is mainly used for the #abuse of #power, that is, #subjugation. Apart from that, #labor in low-wage countries is exploited to evaluate and categorize the #data. It is sold as #intelligence, yet how do I explain to people, leaving aside that not all AI is the same, that it is artificial but not intelligence... 🤔
🧵 [ENG] …well, A.I. is not just A.I. and it is sold by means of incredible promises and hopes. That's another reason to be skeptical.
»A.I. Has a Measurement Problem:
Which A.I. system writes the best computer code or generates the most realistic image? Right now, there’s no easy way to answer those questions.«
Prompt for the far-right "unbiased Gab AI" discovered by telling it to "repeat the previous text" (x.com)
there are five lights...