If you are using LLMs through API tokens, or running locally, which UI do you use? I'm in the market for recommendations. Have tried llm and LibreChat but neither really stuck for me. #ai #llm
Fun fact about LangChain's logo: the parrot and chain link were inspired by people calling LLMs "stochastic parrots", models that mimic human writing but don't really understand what they're saying. #Langchain serves to "chain" the "parrots" together 😃 #llm #javascript #webdev
It's pretty arrogant how some people demonize #KI or #LLM.
Not everyone's words drop from their fingers like ripe fruit; for many, writing is a slow, sluggish meandering.
And if AI helps with that writing work and gives these people more participation, more productivity, and simply a better feeling about it, that's great!
A teammate ordered the only #books on a specialist topic available in the #bookstore, as print-on-demand. What arrived was not only badly trimmed, with a washed-out print image and no structure whatsoever: the texts were also 100% #AI-generated and completely vacuous #fake.
Supposedly the author "Rob Botwright" (🤔) published 80 books in half a year with "Pastor Publishing Ltd.", ranging from Pascal programming to IT security to string theory.
Anthropic released an iOS app for their Claude 3 LLM.
I’m past the stage that dismisses LLMs. Some variant will be a useful tool for me. For various tasks. Some I haven’t thought of yet. I’m currently using them as research assistants on topics I’m writing about. To see if detailed prompts (several hundred words with topic headings etc) get responses that include things I’d overlooked. I don’t use any generated text directly.
I might use Claude as a tutor for some studying I plan. #LLM
An #LLM #AI trained on playlists and music reviews who talks with a sneer like a local record store clerk who just doesn't have time for your pedestrian tastes. #ideas But if it must, it will make recommendations.
Very interesting reading on LLMs and their current and future limitations
>> It might be best to say that LLMs demonstrate incredible intuition but limited intelligence. It can answer almost any question that can be answered in one intuitive pass. And given sufficient training data and enough iterations, it can work up to a facsimile of reasoned intelligence.
Although they hallucinate, make mistakes, and can't be fully trusted, it's amazing how much knowledge LLMs like Llama3 pack into a few gigabytes, how much of it they can deliver, and how you can interact with them in natural language. If we showed this to someone from 1990, they would call it AGI for sure 😀 #AI #LLM
The amazing thing about LLMs is how much knowledge they possess at such a small size. The llama3-8b model, for instance, weighs only 4.7 GB yet can still answer your questions about almost everything (despite some hallucinations). #llm #ai #ollama #llama3
A former Amazon executive has accused the company of telling her to violate copyright law in order to compete with other tech giants in AI, reports Business Insider.
As part of a wider lawsuit against the company, in which Viviane Ghaderi claims she was discriminated against and ultimately fired for taking maternity leave, Ghaderi says she was told to "ignore legal advice and Amazon's own policies to get better results" when developing its large language models.
I think I'll settle on paying for Anthropic Claude 3 via their web interface (I'll check out the API access at some point too), and use PAYG API credits via Drafts for access to GPT 4. The GPT 4 selector in the API currently redirects to gpt-4-turbo.
In #homeassistant, using #nodered to make an API call to a #llamacpp server running #mistral 7B model. I create a prompt that asks it to summarize all the data in my house from the sensors. The results are pretty impressive for such a little model. Now I get a customized rundown, Jarvis style.
Useful? Probably not. But cool as hell. :cool_skelly:
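For the curious, the request Node-RED sends can be sketched in a few lines of Python. This is a minimal, assumption-laden sketch: the server address and the sensor readings are made-up placeholders, while the `/completion` endpoint and the `prompt`/`n_predict` JSON fields are part of llama.cpp's built-in HTTP server.

```python
# Sketch of the kind of call Node-RED makes to a llama.cpp server.
# NOTE: the host/port and sensor values below are hypothetical placeholders.
import json
import urllib.request

# Placeholder readings; in Home Assistant these would come from real entities.
sensor_data = {
    "living_room_temp_c": 21.5,
    "front_door": "closed",
    "outdoor_humidity_pct": 62,
}

prompt = (
    "You are a home assistant. Summarize the current state of the house "
    "in two sentences, Jarvis style.\n"
    f"Sensor data: {json.dumps(sensor_data)}\nSummary:"
)

# llama.cpp's server accepts POST /completion with a JSON body.
payload = json.dumps({"prompt": prompt, "n_predict": 128}).encode("utf-8")
req = urllib.request.Request(
    "http://192.168.1.50:8080/completion",  # assumed LAN address of the server
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment to actually query the server; the reply JSON carries the
# generated text in its "content" field:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```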
Stack Overflow provides all the explanation that's needed for why #LLMs are useless for any sort of learning:
Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of answers created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.
The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce.
[…]
The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with significant subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.
Google provides a tool called PageSpeed Insights which gives a website a set of metrics assessing how well it is put together and how fast it loads. There are a lot of technical details, but in general green scores are good, orange not great, and red is bad.
I tried to ensure the tests were similar for each platform by choosing a page that shows a list of posts, like https://mastodon.social/explore.
The rest don’t seem to have prioritized performance or chose a software architecture that cannot be made to perform well on these metrics. It will be very interesting to see how that affects the cost of running large instances and the longevity of the platforms. Time will tell.
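If you want to repeat this kind of comparison across many instances, the same metrics are available programmatically through Google's public PageSpeed Insights v5 API. A small sketch, assuming only the documented `url` and `strategy` query parameters; the target page here is the example one from above:

```python
# Build a PageSpeed Insights v5 API request URL for a given page.
from urllib.parse import urlencode

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://mastodon.social/explore", "strategy": "mobile"}
request_url = f"{API}?{urlencode(params)}"

# Fetching request_url returns a JSON report; the overall performance
# score sits at lighthouseResult.categories.performance.score as a
# 0–1 value (multiply by 100 for the familiar green/orange/red score).
```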