So #Steeve got a major upgrade recently. He moved from a #gptneo (2.4B) model to a #llama2 (7B) model. Trained on 300k messages from our private chat history, Steeve is way more capable of following the conversation now. He used to have some "favorite phrases" he would say a lot, and I'm seeing less of that. His vision and reading models also got upgraded, so he gets more detail about the links and memes we share. Long live Steeve! :steeve:
LLM Agents can Autonomously Hack Websites.
"Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs."
But open-source models will reach GPT-4 levels in the very near future, so be prepared. https://arxiv.org/html/2402.06664v1?s=09 #ai #llm #cybersecurity #gpt
#Shaarli: gparted - How to prepare a disk on an EFI-based PC for Ubuntu? - Ask Ubuntu - How to prepare a disk to boot in EFI mode (hence formatted as GPT instead of MBR).
TL;DR: GPT partition table, a 512 MB FAT32 partition with the esp+boot flags, a system partition, then any other partitions.
Optionally leave another empty partition for Windows, plus boot-repair after the Windows installation.
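The layout above can be sketched with parted; this is an illustrative, destructive sketch only, where /dev/sdX is a placeholder device name you must replace with your actual disk:

```shell
# Illustrative only -- /dev/sdX is a placeholder, double-check your device name.
# 1. GPT partition table
sudo parted /dev/sdX mklabel gpt
# 2. 512 MB FAT32 EFI system partition with the esp+boot flags
sudo parted /dev/sdX mkpart ESP fat32 1MiB 513MiB
sudo parted /dev/sdX set 1 esp on
sudo parted /dev/sdX set 1 boot on
sudo mkfs.fat -F32 /dev/sdX1
# 3. System partition (rest of the disk, or stop earlier to leave room for Windows)
sudo parted /dev/sdX mkpart root ext4 513MiB 100%
```

On GPT disks, parted treats the boot flag as tied to esp, so setting both mirrors the "esp+boot" wording above.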
Does anyone have a good list of logical questions to judge large language models ability to reason?
Questions like "if it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?"
I'm playing around with Mistral's leaked 70b Miqu LLM and want to test its reasoning skills for a project I'm working on. I've been really impressed so far. It's slower than Mistral & Mixtral, but it's been producing the best-reasoned answers I've seen from an LLM. And it's running locally!
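A minimal sketch of a harness for scoring a model on trick questions like the towel one, where naive proportional reasoning gives the wrong answer; `ask_model` here is a hypothetical stand-in for however you call your local LLM:

```python
# Trick questions where the "proportional" answer is wrong.
# Each entry: (question, keyword expected in a correct answer).
QUESTIONS = [
    ("If it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?", "3 hours"),
    ("If 5 machines take 5 minutes to make 5 widgets, how long do 100 machines take to make 100 widgets?", "5 minutes"),
]

def score(ask_model):
    """ask_model: callable(str) -> str. Returns the fraction of questions answered correctly."""
    correct = sum(expected.lower() in ask_model(q).lower() for q, expected in QUESTIONS)
    return correct / len(QUESTIONS)

# Toy "model" that always reasons proportionally (and so gets everything wrong):
naive = lambda q: "9 hours" if "towels" in q else "100 minutes"
print(score(naive))  # 0.0
```

Keyword matching is crude but good enough to flag models that fall for the proportional trap; a stricter harness would parse the final answer.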
I'm trying to figure out if this person has blatantly copied part of my blog post without any attribution or if all their posts are GPT generated and they don't even bother to read them before publishing.
Smaug-72B, a Qwen-72B-based open-source #LLM released by Abacus #AI, tops the Hugging Face Open LLM leaderboard and outperforms #GPT-3.5 on several benchmarks
#OpenAI's #GPT-4 finally meets its match: Scots Gaelic smashes safety guardrails - https://www.theregister.com/2024/01/31/gpt4_gaelic_safety/ "The safety guardrails preventing OpenAI's GPT-4 from spewing harmful text can be easily bypassed by translating prompts into uncommon languages – such as Zulu, Scots Gaelic, or Hmong."
ChatGPT violated European privacy laws, Italy tells chatbot maker OpenAI
Italian regulators say they informed OpenAI that its ChatGPT artificial intelligence chatbot has violated the European Union's stringent data privacy rules
You can do research without using large language models and without putting large language models into existing systems you're building. You're still allowed. It's OK and everything will be fine, I promise.
Among the many reasons we should resist the widespread application of generative #AI an important, if less concrete, one is to preserve the freedom to change. This class of method crystallizes the past and present and re-generates it over and over again. The net result, if it's used en masse, is foreclosing the future.
If you're stats-poisoned: human flourishing requires the joint distribution of the future to be different from that of the past and present. We, collectively, form a non-stationary system, and forcing the human system to be stationary is a kind of violence.
Next week I'll be starting a pretty ambitious project—50 Days of LIT Prompts. Every weekday for 10 weeks, I'll be sharing prompt patterns along with my thoughts and readings relating to Large Language Models like those behind #ChatGPT. Follow the link below, and this thread, for updates: https://sadlynothavocdinosaur.com/posts/50-days-of-lit-prompts/
I figured generating a headline is kind of the apotheosis of this week's prompts. I mean, ideally, it feels like a headline is a distillation of a text's essence.