vluz
@vluz@kbin.social

What're some of the dumbest things you've done to yourself in Linux?

I’m working on some materials for a class wherein I’ll be teaching some young, wide-eyed Windows nerds about Linux, and we’re including a section we’re calling “foot guns”. Basically it’s ways you might shoot yourself in the foot while meddling with your newfound Linux powers....

vluz,

Messing around with the system python/pip and newly installed versions till all was broken, and only then looking at the documentation.
This was way back in the '00s and I'm still ashamed of how fast and how completely I messed it up.

vluz,

Just figured out there are 10 places called Lisbon dotted around the US, according to the search.

vluz,

I don't just agree; here (Portugal) people queue up to sign up. Saw it at a mall and at a public transport hub, among other places.

The Majestic Birth of Graphical User Interfaces – Xerox Alto and the Alto Trek game (blisscast.wordpress.com)

Can you imagine a time before the Graphical User Interface, when you could only operate a computer with abstract-looking text instead of using simple menus, and it was unheard of to use the oh-so-common mouse? A time when computers were harder to learn, and even harder to master? Well then, join us on our splendid trip where...

On the search for the ̶b̶e̶s̶t̶ decent presentation making software (lemmy.ml)

In the post-COVID world where so much is done remotely I’m utterly amazed at the absence of a decent app for making slides. I recently went through a long and honestly very disappointing journey to find the one and only app that fits my needs. And… yeah… there is none. Here are the requirements I have, and I can...

vluz,

I got cancelled too and chose Hetzner instead. I will not do business with a company that can't get its filters working decently.

vluz,

Plus it’s pajeetware so there’s another reason not to use it.

WTF?! I'll just block you and we go our separate ways.

vluz,

Lovely! I'll go read the code as soon as I have some coffee.

vluz,

"We know remarkably little about how AI systems work"

Every single time I see this argument used, I stop reading.

vluz,

That is much better. It is a very interesting problem, as you put it.

vluz,

I do SDXL generation in 4GB, at the extreme expense of speed, by using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.

SDXL takes around 3 to 4 minutes per generation, including the refiner, but it works within the constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4GB and a 1060 with 3GB of VRAM.

I have an implementation running on the 3GB card, inside a Podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.

I'm working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I've looked into this news; SSD-1B might be a good candidate for my dumb experiments.

vluz,

WTF?! Why is there an opinion attached to the article?

vluz,

Oh my Gwyn, this comment section is just amazing.

vluz,

Goddammit! Don't tell that one, I use it to impress random people at parties.

What’s a game you consider cozy that others might not? (kbin.cafe)

I think a lot of people agree with me on Minecraft being cozy, but some might not because of how prevalent combat is outside of Peaceful Mode. I personally find Minecraft cozy even off of Peaceful Mode, but this also might have to do with the fact that I usually play it with friends indoors, and I’ve done so on a rainy day...

vluz,

Not joking, although I understand it seems very silly at face value.
Dark Souls 3 PvP specifically SL60+6 at gank town (after pontiff).
It used to be my go-to wind down after a work day.
It made me smile and actually relaxed me enough to go to bed and sleep, especially after a hard day.

X/Twitter has updated its Terms of Service to let it use Posts for AI training (stackdiary.com)

Here’s the kicker: based on these AI-assigned definitions in the updated terms, your access to certain content might be limited, or even cut off. You might not see certain tweets or hashtags. You might find it harder to get your own content seen by a broader audience. The idea isn’t entirely new; we’ve heard stories of...

[A.I Art] [OC maybe?] I used ai to create multiple images of linux and pacman (monyet.cc)

Hello dear lemmy users, I am a fellow linux enthusiast who found out about a particular website which made this image. I hope you like it, lemmy fellows :) I would post this on other platforms like reddit/hackernews with the body being you should use lemmy...

vluz, (edited )

While designing a similar classifier, I've considered the idea of giving it the whole thread as "context" of sorts.
Not just the parent comment, but the whole thread up to the original post.

I've abandoned the idea.
A comment must stand on its own, and it would put limits on the results, the way I was planning to do it.
I might be very wrong; your insight into this would be very helpful.

My original idea was to go recursively through the thread and test each comment individually.
Then I would influence the actual comment's results with the combined results of its parents.
No context during inference, just one comment at a time.

For example, consider the thread OP->C1->C2->C3.
My current model takes milliseconds per test, with few resources used.
It would be OK up to very large threads, but it would need a limit to keep answer times down.
I want to determine if C3 is toxic in the context of C2, C1, and the OP.
Test C3, test C2, test C1, test OP. Save the results.
My current model gives answers in several fields ("toxic", "severe toxic", "obscene", "threat", "insult", and "identity hate").
The idea was to then combine the results of each into a final result for C3.

How to combine? I haven't figured that out yet, but it would be results manipulation rather than inference with context, etc.
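As a minimal sketch of the idea in Python: score each comment independently, then let ancestor scores amplify the target's result with a weight that decays the further up the thread they sit. The decay factor and the capped addition are illustrative assumptions, not a settled method.

```python
# Sketch: combine per-comment toxicity scores along a thread.
# Each comment is scored on its own; ancestors then nudge the target's
# scores upward, with influence decaying up the thread (decay is arbitrary).

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def combine_scores(scores, decay=0.5):
    """scores: list of per-label dicts, ordered OP -> ... -> target comment.
    Returns adjusted scores for the last (target) comment."""
    *ancestors, target = scores
    combined = dict(target)
    weight = decay
    for parent in reversed(ancestors):  # nearest parent weighs the most
        for label in LABELS:
            # amplify the target's score by the parent's, scaled and capped at 1.0
            combined[label] = min(1.0, combined[label] + weight * parent[label])
        weight *= decay
    return combined
```

A comment with benign ancestors keeps roughly its own score, while one deep in an already-toxic exchange gets pushed up; how to let ancestors dampen a score as well is left open.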

Edit: Is there any way you can point me at examples that are difficult to classify? It would be a nice real-world test for my stuff.
The current iteration of the model is very new and has not been tested in the wild.

vluz,

Oof, pop-culture references are hard and I had not considered that at all.
Thanks for the examples, I'll have a think on how to deal with those.

My only insight is one you already had.
Test at least the comment before, and then use the output to dampen or amplify the final result.
Sorry for being no help at all.

--

My project is very basic but I'll post it here for any insight you might get out of it.
I teach Python in a variety of settings and this is part of a class.

The data used is from Kaggle: https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/
The original data came from Wikipedia toxic comments dataset.
There is code there from several users too, very helpful for getting some insight into the problem.

The data is dirty and needs cleanup, so I've done that and posted the result on HF here:
https://huggingface.co/datasets/vluz/Tox
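The cleanup steps aren't detailed above, so here is an illustrative pandas sketch of the sort of normalization such comment data usually needs; the actual steps behind the HF dataset may differ. Column names follow the Kaggle CSV.

```python
import pandas as pd

# Illustrative cleanup for Jigsaw-style toxic-comment data (not necessarily
# the exact steps used for the vluz/Tox dataset).
def clean_comments(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # collapse runs of whitespace/newlines and strip the ends
    df["comment_text"] = (
        df["comment_text"]
        .astype(str)
        .str.replace(r"\s+", " ", regex=True)
        .str.strip()
    )
    # drop now-empty rows and exact duplicate comments
    df = df[df["comment_text"].str.len() > 0]
    df = df.drop_duplicates(subset="comment_text").reset_index(drop=True)
    return df
```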

The model is a very basic TensorFlow implementation, intended for teaching TF basics.
https://github.com/vluz/ToxTest
Some of the helper scripts are very wonky and need fixing before I present this in class.
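For a rough idea of what such a basic multi-label TF classifier looks like, here is a generic sketch (not the actual ToxTest architecture): six independent sigmoid outputs, one per Jigsaw label.

```python
import tensorflow as tf

# Generic sketch of a small multi-label text classifier: token embeddings
# are averaged over the sequence, then mapped through a dense layer to six
# independent sigmoid outputs. Not the actual ToxTest architecture.
def build_model(vocab_size=20000, embed_dim=32, n_labels=6):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_labels, activation="sigmoid"),
    ])

model = build_model()
# binary cross-entropy per label is the usual multi-label choice
model.compile(optimizer="adam", loss="binary_crossentropy")
# one dummy batch of token ids, just to show the output shape
preds = model(tf.zeros((2, 200), dtype=tf.int32))
```

Each output is a per-label probability, so one comment can be, say, both "obscene" and "insult" at once.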

Here are my weights after 30 epochs:
https://huggingface.co/vluz/toxmodel30

And here is it running on a HF space:
https://huggingface.co/spaces/vluz/Tox

How to set up Podman with NVIDIA GPU acceleration and macvlan networking on Gentoo (gist.github.com)

Getting GPU acceleration working is a common task for those of us running Plex or Jellyfin. There is not much documentation for getting the NVIDIA container stack to work with Podman, even less on Gentoo, plus there have been a lot of changes to NVIDIA’s container toolkit lately....

vluz,

Absolutely stellar write up. Thank you!

I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090ti for the sake of example.
- How many containers can share one physical card, taking into account that total VRAM will not be exceeded?
- What does one virtual GPU look like in the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general?

vluz,

That's wonderful to know! Thank you again.
I'll follow your instructions, this implementation is exactly what I was looking for.
