Helping someone debug something; he said he'd asked ChatGPT what a series of bit shift operations was doing. He thought it was actually evaluating the code, y'know, like it presents itself as doing. Instead its example was a) not the code he put in, with b) incorrect annotations, and c) even more incorrect sample outputs. He'd been doing this all day and had just started considering that maybe ChatGPT was wrong.
I was like, first of all, never do that again, and explained how ChatGPT wasn't doing anything like what he thought it was doing. We spent two minutes isolating that code, printing out the bit string after each operation, and he immediately understood what was going on.
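A minimal sketch of that debugging approach, in case it's useful: isolate the operations and print the bit string after each one. The specific shifts and starting value here are hypothetical stand-ins, since the original code isn't in the post.

```python
def show(label, value, width=16):
    """Print a value as a fixed-width bit string so each step is visible."""
    print(f"{label:>10}: {value:0{width}b}")

x = 0b0000000010110101  # hypothetical starting value (181)
show("start", x)

x = x << 3              # shift left by 3: multiply by 8, low bits fill with 0
show("<< 3", x)

x = x & 0xFFFF          # mask back to 16 bits (Python ints are unbounded)
show("& 0xFFFF", x)

x = x >> 5              # shift right by 5: discard the low 5 bits
show(">> 5", x)
```

Two minutes of this makes the behavior concrete in a way no generated explanation can, because you're watching the actual bits move.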
I fucking hate these LLMs. Empowerment is learning how to figure things out, how to make tools for yourself and how to debug problems. These things are worse than disempowering, teaching people to be dependent on something that teaches them bullshit.
Edit: too many people are reading this as "this person is bad at programming" - not what I meant. The criticism is of the deceptive presentation of LLMs.
@PiTau @jonny #LargeLanguageModels are exquisitely bad at anything for which there is very little human-curated training data. I asked GPT-4 via #BingChat to generate Vult-DSP code for a fairly basic MIDI synthesizer, and it very confidently spat out nonsense.
Since leaving Twitter, I've been trying to figure out what my journalism-y socials look like. (Threads: A sad Potemkin village where I just share articles. Bluesky: A fun, bitchy hotel cafeteria that still seems ill-suited to news.) I've been trying to use Mastodon for more hard news stuff, but I'm having a hard time getting my feed to be both active and relevant.
So I'm trying a few experiments to see if I can rejig my feed. If you've got good journalism follow suggestions, send 'em my way.
Generative AI answers are essentially worthless without full attribution for the sources of the information. It's not just a matter of giving proper credit to those sources, but permitting users to easily click through for more details and especially to determine the veracity of the AI answers, both in total and in their individual parts.
An AI answer might be accurate in most respects, causing users to assume it's accurate in all respects, yet be wrong about even one critical aspect, an error that could result in injury or death for someone assuming 100% accuracy.
Disastrous. Not hyperbole. Will the firms take responsibility for damages done by errors in their AI answers?
@lauren do you like the way #Bingchat cites sources for its answers?
To me it very often exposes just how flimsy the sources can be. But it also offers rabbit holes to learn more. Anyway at least this level of citation seems totally reasonable to expect from any bot.
Predictably, #microsoft started injecting ads into #openai #gpt4 powered #bingchat conversations… and just as predictably, there is now a huge #malvertising problem in Bing Chat.
It’s actually worse than #malware poisoned advertisements showing up in search engine results for a couple of reasons.
One is that the “ad” links in #bingchat aren’t as clearly distinguished from regular search results as they are in a non-chat search engine format. (There’s a label, but it’s small and hard to see.)
Another is that Microsoft didn’t have ads in Bing Chat for the first six months and just sort of quietly snuck them in with these barely visible labels. People who have been using Bing Chat for a while may not realize that they’re there.
We could be using #AI tools to help with things like searching for #malware, developing #threatmodels, or tailoring #cybersecurity controls. A company like #microsoft could use it for some of these things (they have a widely used threat modeling methodology and associated tool). Instead they are using it to trick people with #bingchat and make #malvertising worse.
But also…the conversational and personal format of #bingchat, in contrast to the rather impersonal nature of traditional search results, lulls users and gets them to let their guard down.
I am not blaming the users here, I am totally blaming #Microsoft for 1) tricking people and 2) not checking what anyone puts on its ad delivery platform as long as they fork over money, like anyone else who runs an ad delivery platform.
Some of our staff are asking to use Bing Chat Enterprise, so we're checking it out before deploying it.
It can summarize information from the current web page you're viewing in Edge. I opened my personal /about page, then asked it what other projects I was working on.
Arson fires is NOT a project I am working on. This is going to be fun. 🤖
If I ask it to 'copyedit' a paragraph, it does a great job, making small tweaks like a spell checker 👍
But give it 3-4 paragraphs to rework, it changes the text so completely that my 'voice' is completely gone.
If I ask it to just compose something from a prompt, it's insipid silliness. I mean, it's reasonable, just so generic it offers no value. It's not just Bard; I've also tried #ChatGPT with the same effect.
@scottjenson I have found that the products which are specifically tuned to help you write are slightly better than the ones that are generic chat spaces; see the #BingChat compose mode in #MicrosoftEdge and also the Write tool from #DeepL.
"Bing Chat" bullshit delivered via Windows Update hung the Explorer load on a computer used for playout of music at a radio station. Just resulted in a black screen after the Windows splash. THANKS!
Modern computing is such a terrible experience.
Windows is increasingly morphing into an advert delivery platform.
This disaster resulted in a support call from a business at 3.30am last night as one of their business critical machines was just doing a black screen on boot.
It's unforgivable that #Microsoft are doing this and, in some setups, doing it in such a way that the computer fails to start up. I don't want Bing Chat; don't install it via Windows Update automatically! #BingChat #BingChatInstaller #MicrosoftCopilot
I find it really unsurprising #Dalle / #bingchat seems unable to draw a folding bicycle in its folded configuration. To be fair, I also can't draw them folded or unfolded.