People who believe in all of the #ai hype have the same problem as people who live in denial of the technology being as powerful as it is: Thinking the purpose of #generativeAI is to create unique works
Consider a programmer who has lost their hands. An AI tool could be made using a #llm to generate keystrokes based on what the programmer says
User: “Clear the terminal”
GPT: generates clear command
That is the power of #gpt that everyone who knows what they’re talking about is excited for
Tales from the jar side: JVM Weekly is great, AI tools for Java devs, Spring office hours, and the usual assortment of toots and skeets, by @kenkousen #Java #AI #GPT #midjourney #spring
Man, watching this insane video with the ex-Google AI guy who keeps claiming AI is sentient, etc. No proof. No evidence. Just statements.
Show me the ML/AI that's proactively doing things, unprompted, the way every other sentient being operates. Show me the math that shows current ML eventually becomes sentient.
So far it doesn't do jack unprompted, and it chokes on its own shit when fed it instead of going off and doing its own thing. Anthropomorphism at its finest.
So I tried something out today: getting GPT4 to describe videos. Now, this is with the GPT4-turbo vision thing, so it's not gonna be 100% right or anything. You basically break the video down into image frames. I tried one FPS, but I still got nearly 100 images from a minute or so of video. So maybe do a frame every 2 seconds or something, which is what I ended up doing. Anyway, here's the video:
So, as you can see, it's somewhat different depending on what it's given, even though it's the same material. 95 images just wouldn't go through, though, so yeah, 1 FPS is just too much for anything bigger than like 30 seconds. Anyway, I wouldn't rely on this for anything even approaching important, but for like a TikTok or something it'd probably work well. Oh, and sorry for all the empty space in the files, I copied and pasted right from the terminal. It shouldn't affect reading too much, though, I hope.
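For anyone who wants to replicate the frame-sampling step described above, here's a rough sketch. It assumes ffmpeg is installed, and the 20-images-per-request batch size is a guess based on the "95 images wouldn't go through" experience, not a documented API limit:

```python
# Sketch of the pipeline above: sample one frame every 2 seconds with ffmpeg,
# then split the frames into batches small enough for one vision request.
def ffmpeg_sample_cmd(video: str, out_dir: str, every_s: float = 2.0) -> list[str]:
    """Build (but don't run) an ffmpeg command grabbing one frame every `every_s` seconds."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps=1/{every_s:g}",   # 0.5 fps when every_s=2
        f"{out_dir}/frame_%04d.jpg",
    ]

def batches(frames: list[str], per_request: int = 20) -> list[list[str]]:
    """Chunk the frame list; each chunk would become one vision-API request."""
    return [frames[i:i + per_request] for i in range(0, len(frames), per_request)]

# Run the command with subprocess.run(ffmpeg_sample_cmd("clip.mp4", "frames")),
# then base64-encode each batch and send it as image parts of a chat request.
frames = [f"frames/frame_{i:04d}.jpg" for i in range(30)]  # ~60 s of video
print(len(batches(frames)))   # 2
```

At 1 FPS the same minute of video would be 60 frames and 3 requests, which matches why halving the rate made things manageable.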
I've seen several respected luminaries argue that LLMs are not "true AI" or "Strong AI" since they're based on large learning sets and predictive behavior. They argue that humans and animals are not taught on such large language models or data sets.
What are education and experience, if not Large Learning Models based on the teaching of schools, universities, and books?
Yep, having access to that Bing image recognition thing has changed things a lot for me. Imagine you can barely see without your glasses, but there are no glasses. Everything is a bit blurry to you. Would you rather see even in that situation, or be totally blind? That's what AI image recognition is for me, and it's something I doubt any sighted person can understand.

You think people are gonna describe YouTube videos? How about old TV shows? Or random images on the Internet? Memes on Twitter about toilet paper and programming? People's daily surroundings? Book covers? Did you know the cover of Automate the Boring Stuff is something like a robot mowing a lawn? Yeah. Or graphs in a book? All these things were given to me by AI image recognition. And as long as I can understand the general feel of the image, or the meme, or decoration, I have been given something that I have never had before.

And that's why this "beta" flight, Be My Eyes access pending, on-one-device-but-not-another rollout type stuff is so frustrating. And yes, I know this is a corporation doing what corporations do. But for me, for my accessibility needs, it's huge.
Solving causal #reasoning tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how #GPT models perform on causal discovery, actual causality, and causal judgments.
I asked #GPT 4 about "TextKit 2", and you won't believe it. It's delusional. Making up APIs. Making up features. I even followed up on the API it invented, and with a straight face 😏 it made up properties and example usage.
You must know your craft to get helpful information from today's AI 🧐
There's no such thing as NSTextAnnotation in the whole wide world. #iOSDev #macdev
#AI #hotTake: you only care about robots looking at your content now because they started generating their own content. If they had just kept looking at it to better direct #search users to you, you'd still be fine with it.
People overestimate what "AI" of today is and what it can actually do, because memes are spreading that describe any machine-assisted process in a highly glossed-over form, ignoring the required human effort to make it work.
The naive impression is that you just gave some generative engine a prompt and the result came out fully formed, when the actual process is that the people behind the project used multiple purpose-built engines, for each of the engines they iterated on prompts that would output something semi-coherent, and then they used human efforts to tie the result together.
This is currently spreading as "this AI-generated pizza commercial" with no further explanation, but Tom's Hardware interviewed the actual people who made it work:
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
Like okay, so look. I use ChatGPT. I have a tab open right now. But like, once you start having it try to be this super hip social media manager thing that makes your posts have the feeling of a chocolate cake with way too much icing, where it's literally gross to read your posts, where it's so sparkly it's disgusting... Then like, why read your posts rather than just tell ChatGPT to generate 10 super successful social media posts and just drink a cup full of whipped cream? Because my goodness, that's just... Where's the humanity in that? I mean, I'm not here to read what bots write. And no, I'm not naming names, because I'm sure I've seen a few people post on forums using GPT, or write blog posts with it or whatever.
Now you, on the other end, don't misunderstand me. If someone uses ChatGPT to clean up their writing, or to make it less blunt, or correct grammar or spelling issues, I 100% support that. It's totally, completely, about what people use this stuff for.
If you have GPT write your book, it's gonna suck. If you have GPT give you an outline, or look for continuity errors in your fanfic, or make sure your dialog isn't all mixed up because you're having to context switch between two different people, that's fine. You'd still want to look at the results, but goodness there comes a freaking point where if you let GPT talk for you, in some of the most cringeworthy ways imaginable, I just don't have any words. Like I'm so glad most people aren't doing that. Or hell, I can even understand if you have GPT4 correct content for you, and yes I do mean content, because you're the only one working on that thing, and you have other duties to attend to. I get that, and before anyone says hire other people, not everyone is a manager. And yeah, it sucks. But if it's a personal social media account, or a personal blog, Jesus just write. That's not where you make a brand, or at least it shouldn't be. Make your brand on LinkedIn, lol.
Built on the cool "Sagittarius" demo from https://greg.technology/, I quickly hacked together a Gemini-esque Waste Sorting Hat tool that works way better than expected... and it's funny and has a real purpose (helping kids figure out what to do with their trash at school). Here's my kid trying it out: https://www.youtube.com/watch?v=-9Ya5rLj64Q #openai #gemini #gpt
I wanted to reduce the amount of apps I use for email. So I’ve been looking for apps to keep everything in a single place and it feels a bit bleak?
EVERYTHING is AI now. And a lot of the time it feels like just an excuse to charge a subscription instead of a one-time payment.
Apple Mail, or even Outlook, seem like the best options out there. Canary Mail also looks alright. Even with its “AI Copilot” it at least seems to have a lifetime purchase.
Idea for evaluating students' work given #GPT and AI-generated submissions:
Finish an essay or code or other submission with a kind of reflective quiz/chat (also run by GPT). Kind of like a prompted reflection on the assignment.
If you had AI generate your essay, but you read and reviewed the results, then you can do fine... and maybe that is fine.
Still easy to cheat on unless it's proctored. But the secondary eval doesn't feel punitive or like wasted time.
In #homeassistant, using #nodered to make an API call to a #llamacpp server running #mistral 7B model. I create a prompt that asks it to summarize all the data in my house from the sensors. The results are pretty impressive for such a little model. Now I get a customized rundown, Jarvis style.
Useful? Probably not. But cool as hell. :cool_skelly:
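For anyone curious what that Node-RED flow boils down to, here's a rough Python equivalent. The sensor names are made up, and the payload fields follow the llama.cpp server's `/completion` endpoint as commonly used; treat the whole thing as a sketch, not the author's actual flow:

```python
import json

# Rough equivalent of the Node-RED flow: fold sensor readings into a
# Mistral-style [INST] prompt, then POST it to a local llama.cpp server.
SERVER = "http://localhost:8080/completion"   # example llama.cpp endpoint

def build_prompt(sensors: dict[str, str]) -> str:
    """Fold Home Assistant sensor states into one instruction prompt."""
    readings = "\n".join(f"- {name}: {value}" for name, value in sensors.items())
    return (
        "[INST] You are Jarvis. Summarize the state of the house in two "
        f"sentences from these sensor readings:\n{readings} [/INST]"
    )

payload = {
    "prompt": build_prompt({"living_room_temp": "21.5 C",
                            "front_door": "closed",
                            "humidity": "48%"}),
    "n_predict": 128,     # cap the length of the rundown
    "temperature": 0.7,
}
# In Node-RED this is an HTTP Request node; in Python it would be roughly:
#   summary = requests.post(SERVER, json=payload).json()["content"]
print("front_door" in payload["prompt"])   # True
```

The nice property of a 7B model here is that everything stays on the local network; no sensor data ever leaves the house.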
OpenAI’s New GPT-4o Model Revolutionizes AI Landscape, Draws Mixed Reviews (articleshubspot.com)
The GPT-4o Model is an upgraded version of the highly successful GPT-4, which has been widely used in OpenAI's flagship product, ChatGPT.
What are your thoughts on potential AI-integration into future Linux distros?
There has been a noticeable shift over the last few months on other operating systems like Android, iOS, and Microsoft Windows....
Entirely Automated Political AI-Powered Newsletter, Free from Human Bias
Hello Everyone!...