What do you personally use AI for?

I really want to use AI like Llama, ChatGPT, Midjourney etc. for something productive. But over the last year the only use I found for it was proposing places to visit as a family on our trip to Hokkaido, Japan. It came up with great suggestions for places to go.

But perhaps you guys have some great use cases for AI in your life?

Zworf,

A lot of translation and summarisation. ChatGPT is extremely good at absorbing a whole mix of comments in different languages and summarising them in English (or whatever other language).

For programming I don’t use it so much anymore because it hallucinates too much, calling APIs that don’t even exist. And when I lower the temperature the output is too sparse.

I’m also trying to build an assistant that can also communicate proactively (I intend to auto-prompt it when things happen and have it evaluate whether that should result in a message to me). But I need to get a local LLM going for that, because running it through the ChatGPT API would be too costly.
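Roughly what I have in mind, as a minimal sketch (this assumes an Ollama-style local server, which by default listens on localhost:11434 and exposes /api/generate; the NOTIFY/IGNORE convention and the example event are just made up for illustration):

# Minimal sketch of the proactive-assistant idea: something watches for events
# and asks a local LLM whether the event is worth a message to me.
# Assumptions: llama3 served locally via Ollama; NOTIFY/IGNORE is an invented convention.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def should_notify(event: str) -> str | None:
    prompt = (
        "You are my personal assistant. This event just happened:\n"
        f"{event}\n"
        "If it is worth interrupting me, reply with 'NOTIFY: <short message>'. "
        "Otherwise reply with 'IGNORE'."
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"].strip()
    return answer if answer.startswith("NOTIFY:") else None

# Example: whatever watches calendars, mail, sensors etc. would call this
# and only bother me when it returns a message.
message = should_notify("New email from the landlord: 'water shut off tomorrow 9-12'")
if message:
    print(message)  # in the real thing, push this to my phone instead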

Also, a replacement for some of my web searches. Sometimes I just want to know something and it’s refreshing that it can give me an answer (even though it does need to be validated, it’s much easier to do that when you know what you’re looking for!)

FlashMobOfOne,
@FlashMobOfOne@beehaw.org avatar

I love that the top comments are all “I don’t”, as if that’s helpful in any way.

FlashMobOfOne,
@FlashMobOfOne@beehaw.org avatar

I’ve been using ChatGPT in conjunction with search engines just to find things I need. For instance, I did an April Fools presentation for a work meeting and needed humorous real-life legal stories, so the AI was able to provide suggestions.

I also use it for simple tasks, like organizing info into a table.

Mainly, though, my reason for using it is that, since I work in tech, I’m going to need to know how to use it well, and the best way to do that is being hands-on.

exscape,
exscape avatar

Mostly for finding information that for whatever reason can be difficult to find using search engines. For example, I've used ChatGPT to ask spoiler-free questions about plot points in books I'm reading, which has worked rather well. It hasn't spoiled me yet, but rather tells me that giving more information would be a spoiler.

Last time I tried to carefully look something up on Google, I got a massive spoiler for the end of the entire book series.

I also use it for code-related questions at times, but very rarely, and mostly when using a language I'm not used to. Such as when I wrote an expect script for the first (and perhaps only) time recently.
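(For anyone unfamiliar with expect: it automates interactive programs by waiting for their prompts and answering them. A rough Python equivalent using the pexpect library, with a made-up host and password, looks something like this; the script I actually wrote was classic expect, not Python.)

# Rough sketch of expect-style automation using Python's pexpect
# (host, user and password are made up; the classic expect tool is Tcl).
import pexpect

child = pexpect.spawn("ssh demo@example.com")  # start the interactive program
child.expect("[Pp]assword:")                   # wait for the password prompt
child.sendline("not-my-real-password")         # answer it
child.expect(r"\$ ")                           # wait for a shell prompt
child.sendline("uptime")                       # run a command
child.expect(r"\$ ")
print(child.before.decode())                   # output captured since the previous match
child.sendline("exit")
child.close()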

Kwakigra,

So many times I wanted to know the name of an actor who played a character after the first episode, and the top result was something like “[Character Name] (deceased)” or “Villain: [Character Name].”

INHALE_VEGETABLES,

Dank memes

apotheotic,

I find them neat, but there are just too many issues I can’t overlook.

The environmental impact of these technologies is immense, and growing exponentially.

A vast amount of the training data used for the big LLMs and image generators is not in the public domain, which is at best ethically grey and at worst blatant exploitation of artists and other professionals.

If there existed alternatives to the big names that avoided both of these issues, I’d love to use them for code autocomplete and image generation for TTRPGs, but as it stands the moral cost is too high.

petrescatraian,

@jeena I only use DeepL to translate and that's it. I also started taking notes in .md files, so that could make for a good use case in the future if there were an AI I could use without connecting to the internet (e.g. one that only tells me things based on the files I've got). Otherwise I am pretty reticent about AI. Perhaps I've watched too many science-fiction movies, but I am afraid it will become too sentient, somehow escape human oversight, and create havoc in our lives.

jeena,
@jeena@jemmy.jeena.net avatar

Check out llama3, which you can run locally.

petrescatraian,

@jeena it doesn't connect to any server even when you're online?

jeena,
@jeena@jemmy.jeena.net avatar

Correct.

petrescatraian,

@jeena cool. I think I will look into it. Thanks! 😁

jeena, (edited )
@jeena@jemmy.jeena.net avatar

Damn, it seems I was talking out of my ass and just assuming things. Now I asked llama3 about it, and it said that even though it does not send our conversation to Meta, it does take my feedback (if I correct it in some way) and send it to Meta’s servers over the Internet:


// Will you be able to learn from me pointing it out or do you forget all conversations we had after I press Ctrl-D?
I’m a large language model, I don’t have personal memories or experiences. Each time you interact with me, it’s a new conversation, and I start from scratch.

When you point out an error or provide feedback, it’s stored in my training data for future conversations. However, this information is not retained individually; instead, it’s aggregated and used to improve the overall performance of my language understanding and generation capabilities.

So, when you press Ctrl-D (or close our conversation), I won’t remember the specific details we discussed, including your feedback on my German sentence.
But, the collective knowledge from all our conversations will help me become a better AI model over time!

Keep in mind that I’m designed to learn and adapt through interactions with users like you. Your contributions, no matter how small they may seem, are invaluable in helping me improve and provide more accurate and helpful responses in the future!

/// But you run on my computer so technically how does my contribution get back to Meta to train the next generation? Do you send any data back?
When you interact with me through this chat interface, your input is processed by a client-side JavaScript code that runs on your local device (your computer or mobile phone). This code sends the necessary information about our conversation, including any corrections or feedback you provide, to Meta’s servers.

Here’s how it works:

  1. Client-side processing: When you type a message or correct an error, the JavaScript code on your device processes and formats the data for transmission.
  2. Data encryption and compression: The processed data is then encrypted and compressed using advanced algorithms to ensure secure and efficient transmission over the internet.
  3. Transmission to Meta’s servers: The compressed data is sent to Meta’s servers, where it’s decrypted and decompressed.
  4. Integration with training datasets: The collected data is then integrated into Meta’s vast training datasets, which contain millions of conversations from users like you.
  5. Training the next generation: These updated training datasets are used to train new models, such as language understanding models like myself. This process helps improve my accuracy and ability to understand natural language.

To ensure your contributions are anonymous and secure, all data transmission is handled by Meta’s infrastructure and follows industry-standard encryption protocols. Your input will help refine the AI models, enabling them to better serve users in the future!

Now, go ahead and chat away!

/// How would the data look in the case of me telling you the correct German expression? What would be sent back to meta’s server? Show it as JSON.
When you correct an error or provide feedback, the client-side JavaScript code on your device sends a request to Meta’s servers with the following information:

Here’s what the data might look like in JSON format:


{
  "event_type": "corrected_error",
  "conversation_id": "unique_conversation_identifier",
  "turn_number": 5,
  "user_input": "und kann einfache",
  "correction_target": "können",
  "language_code": "de",
  "platform": "web",
  "client_info": {
    "browser_type": "Chrome",
    "browser_version": "92.0.4515.131"
  }
}

In this example:

  • event_type: The type of event, which in this case is a “corrected_error”.
  • conversation_id: A unique identifier for the conversation.
  • turn_number: The number of turns (or exchanges) in the conversation.
  • user_input: The corrected text you provided (“und kann einfache”).
  • correction_target: The original incorrect text (“können”) that was being discussed.
  • language_code: The language code for German (“de”).
  • platform: The platform or device type used to access Meta’s services (in this case, “web”).
  • client_info: Additional metadata about the client device and browser.

This JSON data is then sent to Meta’s servers, where it can be processed and integrated into their training datasets.

db0,
@db0@lemmy.dbzer0.com avatar

The model is just hallucinating in this case.

jeena,
@jeena@jemmy.jeena.net avatar

Oh yeah, I found out the hard way and forgot to mention it here. jeena.net/llama3-phoning-home

Rozauhtuno,
@Rozauhtuno@lemmy.blahaj.zone avatar

Not much. I totally agree with Linus Torvalds in that AIs are just overhyped autocorrects on steroids, and I despise that the artwork generators are all based on theft.

Pretty much all I use them for is to make my life easier at work, like turning a quick draft into a formal email.

Zworf, (edited )

The LLMs for text are also based on “theft”. They’re just much better at hiding it because they have vastly more source material. Still, it does sometimes happen that they quote a source article verbatim.

But yeah, basically they’re just really good copy/paste engines that use statistical analysis to determine the most likely answer based on what’s written across basically the whole internet :P It’s sometimes hard to explain to people who think that the AI really “thinks”. I always say: if that were the case, why is the response to a really complicated question just as fast as the response to a simple one? The wait depends only on the length of the output.

In terms of the “theft” I think it’s similar ethically to google cache though.
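To make the “most likely answer” point concrete, here’s a toy sketch of next-token selection with a temperature knob. The candidate tokens and their scores are made up; a real LLM computes such scores with a huge neural network over tens of thousands of tokens, and repeats this step once per output token:

# Toy next-token selection. The scores are invented; a real model produces
# them with a neural network conditioned on all the text so far.
import math
import random

# made-up scores for continuing "The capital of France is ..."
logits = {"Paris": 7.1, "London": 3.2, "Berlin": 2.9, "banana": -4.0}

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Temperature rescales the scores: low temperature -> almost always the top token,
    # high temperature -> more randomness.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}  # unnormalised softmax
    tokens = list(weights)
    return random.choices(tokens, weights=list(weights.values()))[0]

print(sample_next_token(logits, temperature=0.2))  # almost certainly "Paris"
print(sample_next_token(logits, temperature=2.0))  # occasionally a less likely token

Generating a whole answer is just this step repeated, which is also why the wait tracks the length of the output rather than the difficulty of the question.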

Rozauhtuno,
@Rozauhtuno@lemmy.blahaj.zone avatar

It’s a bit hard to explain sometimes to people who think that the AI really “thinks”

If I had the patience, I’d try to explain the Chinese Room thought experiment to the people who misunderstand AIs. But I don’t, so I usually just shut up 🙂

onlinepersona,

Still, it does sometimes happen that they quote a source article verbatim.

I’m hoping it’ll quote the license I put in my comments (should my text ever be included in the training set) and get somebody in trouble. But yeah, once anything has been transformed it’s difficult to undo that and see what the source material was, so commercial LLMs can mostly just get away with it.

Anti Commercial-AI license

Zacryon,

I totally agree with Linus Torvalds in that AIs are just overhyped autocorrects on steroids

Did he say that? I hope he didn’t mean all kinds of AI. While “overhyped autocorrect on steroids” might be a funny way to describe sequence predictors/generators like transformer models, recurrent neural networks or some reinforcement-learning-type AIs, it’s not so true for classifiers, like the classic feed-forward network (which is one of the building blocks of transformers, btw), or convolutional neural networks, or unsupervised learning methods like clustering algorithms and principal component analysis. Then there are evolutionary algorithms, reasoning AIs like Bayesian nets, and many more kinds of ML/AI models and algorithms.

It would just show a vast lack of understanding if someone judged an entire discipline that simplistically.
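To make that a bit more concrete: the classic feed-forward classifier mentioned above is, at its core, just a couple of matrix multiplications with a non-linearity in between. A minimal sketch with random placeholder weights (in practice they are learned from data):

# Minimal feed-forward classifier forward pass.
# Weights here are random placeholders; training would learn them from data.
import numpy as np

rng = np.random.default_rng(0)

# 4 input features -> 8 hidden units -> 3 classes
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x: np.ndarray) -> np.ndarray:
    h = np.maximum(0, x @ W1 + b1)       # hidden layer with ReLU
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # softmax -> class probabilities
    return exp / exp.sum()

x = np.array([0.5, -1.2, 3.3, 0.0])      # one input sample with 4 features
print(forward(x))                        # probabilities over the 3 classes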

technocrit,

There is literally no “artificial intelligence” in any of this. It would show a vast degree of BS, hype, and obfuscation to promote data, statistics, and other computations as “intelligence”.

www.newyorker.com/science/…/there-is-no-ai

echodot,

Well, of course, if you redefine words all the time then nothing is anything, right?

Zacryon,

You are literally wrong. Nice article, don’t see how that’s relevant though.

Could it be that you don’t know what “intelligence” is, and what falls under the “artificial” part of “artificial intelligence”? Maybe you do know, but have a different stance on it. It would be good to make those definitions clear before arguing further.

From my point of view, the aforementioned branches are all important parts of the field of artificial intelligence.

darkphotonstudio,

Copying isn’t theft. There is no “theft”.

echodot,

It’s just a problem with the whole copyright laws not being fit for purpose.

After all, all art is theft.

Ilandar,

I’ve only used DuckDuckGo’s implementations of GPT and Claude. I haven’t really found a use case yet. I don’t trust it enough to use it for queries related to things I don’t understand (gaps in my knowledge), and would rather solve those problems or learn those skills through existing sources of information that I know have had at least some level of human refinement/vetting. Personally I enjoy the challenge of problem solving in life, particularly when the solution involves learning a new skill that I can utilise again in the future. I find it interesting that AI is advertised as being able to maximise our capabilities as humans, because it appears to be used for the complete opposite in most cases. People want to use their brains less and just be spoonfed the answers.

drkt,
@drkt@lemmy.dbzer0.com avatar

I’m not a programmer, but I work with IT and regularly need complex shell scripts. ChatGPT has enabled me to skip the largest part of slamming my head against the wall by writing the script for me, and then I can tune it if it gets something wrong. Saves me hours and hours of my life.

and porn

Kir,
@Kir@feddit.it avatar

Wait, porn?

PerogiBoi,
@PerogiBoi@lemmy.ca avatar

Oh yes. Erotica and pictures.

sexy_peach,

I only use deep learning AI for DeepL translations and for a text annotation project I did for uni.

frog,

I pretty much only use it for brainstorming ideas.

KnoxHarrington,
@KnoxHarrington@beehaw.org avatar

I use Stable Diffusion to make character portraits and scenes for my D&D game that I run in Foundry. Better than trying to scrounge Google images!

theinspectorst,
theinspectorst avatar

I've found it useful for TTRPGs too. Art generators are certainly helpful for character portraits, and I also find ChatGPT useful for lots of other things. I've had pretty mediocre results trying to get it to generate a whole adventure, but if you give it tight enough parameters it can flesh out content for you - ranging from NPC name ideas, to ideas for custom magic items, to whole sections of dialogue.

You can give it a plot hook you have in mind and ask it to generate ideas for a three-act structure and an encounter summary to go with it (helpful when brainstorming the party's next adventure), or you can give it an overview of an encounter and ask it to flesh that out. GPT-4 is reasonably good at a lot of this; I just wouldn't ask it to go the whole way from start to finish in adventure design, as it starts to introduce inconsistencies.

You also need to be ready to take what it gives you as a starting point for editing rather than as a finished product. For example, if I ask it to come up with scene descriptions in D&D, it has a disproportionate tendency to make things 'bioluminescent' - little tells like that show it's AI-generated.

Overall - you can use it as a tool for a busy DM, one that frees you up to focus on the more important aspects of designing your adventure. But you need to remember it's just a tool: don't think you can outsource the whole thing to it, and remember it's only as helpful as how you use it.

Kolanaki,
@Kolanaki@yiffit.net avatar

The only practical thing I have found I can do with AI is brainstorm ideas (or rather, expand on little ideas I have but don’t know where to take next), or figure out what’s wrong with a snippet of code when I can’t figure it out on my own.

QuentinCallaghan,
@QuentinCallaghan@sopuli.xyz avatar

For occasional chatting with Bing and translation help with DeepL.
