@craigbrownphd I'm thinking of signing up for this. I typically do a lot of coding questions (Copilot, which I pay for via GitHub), but I also do a lot of writing and idea/image generation.
How would you rank Gemini Advanced, GPT Plus, and Copilot Pro?
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
I guess we wait this one out until the “AI” bubble bursts due to the incredible subsidization the entire industry is undergoing. It is not profitable. It is not sustainable.
It will not last—but the damage to our planet and fallout from the immense amount of wasted resources will.
I've seen several respected luminaries argue that LLMs are not "true AI" or "Strong AI" since they're based on large training sets and predictive behavior. They argue that humans and animals are not taught from such large language models or data sets.
What are education and experience, if not Large Learning Models based on the teaching of schools, universities, and books?
A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds one that 15,000 code bases already depend on, to prevent some hacker from writing a malware version.
Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬
Google is at least a year behind GPT-4 in model quality. At the same time, OpenAI is making an absolute mess of the whole idea around custom GPTs; very few custom GPTs add value. If Google starts integrating Gemini into its ecosystem and can improve its model faster, OpenAI is in trouble. Maybe OpenAI should leave the customer-facing parts to Microsoft and focus on the model itself. #AI #openai #microsoft #gpt #GoogleGemini
A review of #OpenAI's #GPT Store finds many GPTs that offer bizarre services, potentially infringe on copyright, impersonate people, jailbreak #ChatGPT, and more.
Could an #LLM with the same encoder/token scheme as a #diffusers image model be used to increase the diffuser model's understanding of concepts? The LLM is going to have greater understanding as a whole than the limited connections that the diffuser is going to pick up on if trained only from captions.
#Hackers can read private #AI assistant chats even though they're #encrypted.
All non-Google chat #GPTs are affected by a side channel that #leaks responses sent to #users.
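The core of this kind of side channel is that encryption hides content but not length: if each streamed token goes out in its own encrypted record, an eavesdropper can subtract the fixed protocol overhead and recover the length of every token. A minimal sketch of that idea (the `OVERHEAD` value and record sizes here are hypothetical, not measured from any real service):

```python
# Hypothetical illustration of a token-length side channel on an
# encrypted LLM chat stream. Assumption: each generated token is sent
# in its own record, and encryption preserves plaintext length plus a
# fixed per-record overhead.

OVERHEAD = 29  # assumed fixed per-record overhead in bytes (illustrative)

def token_lengths_from_records(record_sizes, overhead=OVERHEAD):
    """Recover per-token plaintext lengths from observed ciphertext sizes."""
    return [size - overhead for size in record_sizes]

# An observer who sees encrypted record sizes [31, 34, 33, 37] learns
# the token lengths without decrypting anything; a language model can
# then rank candidate responses that match that length sequence.
print(token_lengths_from_records([31, 34, 33, 37]))  # [2, 5, 4, 8]
```

Padding responses or batching tokens into uniform-size records are the usual mitigations, since both destroy the per-token length signal.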
So #Steeve got a major upgrade recently. He moved from a #gptneo (2.4B) model to a #llama2 (7B) model. Trained on 300k messages from our private chat history, Steeve is way more capable of following the conversation now. He used to have some "favorite phrases" he would say a lot, and I'm seeing less of that. His vision and reading models also got upgraded, so he gets more detail about the links and memes we share. Long live Steeve! :steeve:
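Preparing a chat history for this kind of fine-tune mostly means slicing the log into (context, next-message) pairs so the model learns to continue the conversation. A rough sketch of that preprocessing step, with an entirely hypothetical message format (the real pipeline and prompt template for Steeve aren't described in the post):

```python
# Hypothetical sketch: turn a chat log into supervised fine-tuning
# samples, pairing each message with the preceding context window.

def to_training_samples(messages, context_window=4):
    """Build (prompt, completion) pairs from a list of chat messages.

    Each message is assumed to be a dict like {"user": ..., "text": ...}.
    """
    samples = []
    for i in range(1, len(messages)):
        context = messages[max(0, i - context_window):i]
        prompt = "\n".join(f"{m['user']}: {m['text']}" for m in context)
        completion = f"{messages[i]['user']}: {messages[i]['text']}"
        samples.append({"prompt": prompt, "completion": completion})
    return samples

chat = [
    {"user": "alice", "text": "did you see that meme?"},
    {"user": "steeve", "text": "yes, classic"},
]
print(to_training_samples(chat))
```

With 300k messages this yields roughly 300k samples, which is plenty of signal for a 7B model to pick up the group's phrasing, and more context per sample is likely why the "favorite phrases" repetition dropped off.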
Prompt for far-right "unbiased gab AI" discovered through telling it to "repeat the previous text" (x.com)
there are five lights...