HoloPengin

@HoloPengin@lemmy.world


HoloPengin,

Either that, or they use specific tools that they can't or won't replace and which don't work on Linux. Usually it's creative or engineering software. There are often good, Linux-compatible, open-source alternatives, but they're not the same as the industry-standard tools they need to know how to use and stay 100% compatible with. Windows or macOS is your only safe bet there.

If you're a mere hobbyist who's interested in learning new tools, it's an entirely different answer. You can try out the Windows versions of the alternative software first, then try switching to Linux down the line when you see the greener grass.

HoloPengin,

Supposedly it's actually pretty decent if you just turn off all of the quest markers and whatnot in the settings. That turns it into more of an immersive, story-driven exploration game instead of an Ubisoft-style clear-the-map checklist game.

HoloPengin,

Throw some silicone joystick protector rings on your sticks if you haven't. They make the joysticks almost completely silent even when I slam them against the shell, and as long as they're seated right and clean they'll still slide smoothly against it. Just make sure to run through the calibration script afterward so you still have full joystick range.

HoloPengin,

It’s just regulation. No sweepstakes allowed without some “skill” involved

HoloPengin, (edited )

Unless their production costs are vastly cheaper for the old model, I give it maybe six months before they replace the 256 GB LCD SKU with an OLED version. They probably know they wouldn't be able to keep up with releasing the entire lineup at once, and they want to get just a bit more use out of the existing LCD production line and supply chain (using up already-purchased components and running out contracts) before they shutter it.

HoloPengin,

A vibrating buttplug. It also self replicates at the press of a button.

HoloPengin,

This. Do I want an OLED deck? Yes. Do I need one? Absolutely not. I like my deck enough and I can wait for Steam Deck 2.

HoloPengin,

Yep. They've said that basically no internal components are cross-compatible between the original and OLED Steam Decks. Everything's been redesigned internally.

HoloPengin,

The sausage is the operator? Not the bun?

HoloPengin,

Another is the rotisserie chicken which they put in the back of the store.

HoloPengin,

Dangan (弾丸) literally just means bullet, and Danganronpa doesn’t use the kanji in its logo. The stripe with the English/romaji subtitle is the same color as the big text in the Danganronpa logo but it’s inverted here.

It’s fine.

HoloPengin, (edited )

As a side note, since it wasn't too clear from your writing: the weights are only tweaked a tiny, tiny bit by each training image. Unless the trainer sees the same image a shitload of times (the Mona Lisa, that one stock photo used to show off phone cases, etc.), the image can't be recreated by the AI at all. Elements of the image that are shared with lots of other images (shading style, poses, Mario's general character design, etc.) could be, but you're never getting that one original image, or even any particular identifiable element from it, out of the AI. The AI learns concepts and how they interact because the amount of influence it takes from each individual image and its caption is so incredibly tiny, yet it trains on hundreds of millions of images and captions. The goal of AI image generation is to create a vast variety of images directed by prompts; generating lots of images that directly resemble anything in the training set is undesirable, and in the field it's called overfitting.

Anyways, the end result is that AI isn’t photo-bashing, it’s more like concept-bashing. And lots of methods exist now to better control the outputs, from ControlNet, to fine-tuning on a smaller set of images, to Dalle-3 which can follow complex natural language prompts better than older methods.

Regardless, lots of people find training generative AI on a mass of otherwise copyrighted data (images, fan fiction, news articles, ebooks, what have you) without prior consent just really icky.

HoloPengin, (edited )

Heads up, this is a long fucking comment. I don't care if you love or hate AI art, what it represents, or how it's trained. I'm here to inform, to refine your understanding of the tools (and how exactly they might fit into the current legal landscape), and nothing more. I make no judgments about whether you should or shouldn't like AI art or generative AI in general. You may disagree with some of the legal standpoints too, but please be aware of how the tools actually work, because grossly oversimplifying them creates serious confusion and frustration when discussing this.

Just know that, because these tools are open source and publicly available to use offline, Pandora's box has been opened.

copying concepts is also copyright infringement

Except it really isn’t in many cases, and even in the cases where it could be, there can be rather important exceptions. How this all applies to AI tools/companies themselves is honestly still up for debate.

Copyright protects actual works (aka “specific expression”), not mere ideas.

The concept of a descending-blocks puzzle game isn't copyrighted, but the very specific mechanics of Tetris are. The concept of a cartoon mouse isn't copyrighted, but Mickey Mouse's visual design is. The concept of a brown-haired girl with wolf ears/tail and red eyes is not copyrighted, but the exact depiction of Holo from Spice and Wolf is (though that's more complicated due to weaker trademark and stronger copyright laws in Japan). A particular chord progression is not copyrightable (or at least it shouldn't be), but a song or performance created with it is.

A mere concept is not copyrightable. Once the concept is specific enough and there are copyrighted visual depictions of it, then you start to run into trademark-law territory and start to gain a copyright case. I really feel like these cases are kind of exceptions, though, at least for core models like Stable Diffusion itself, because there's just so much existing art (both official and, even more so, copyright/trademark-infringing fan art) of characters like Mickey Mouse anyway.

The thing the AI does is distill concepts, and the interactions between concepts, shared across many input images, and it can do so in a generalized way that lets concepts never before seen together be mixed easily. You aren't getting transformations of specific images out of the AI, or even small pieces of each trained image; you're getting transformations of learned concepts shared across many, many works. This is why the shredding analogy just doesn't work. The AI generally doesn't, and is not designed to, mimic individual training images. A single image changes the weights of the AI by such a minuscule amount, and those exact same weights are also changed by many other images the AI trains on. Generative AI is distinctly different from tracing, from distributing information precise enough to pirate content, or from transforming copyrighted works to make them less detectable.

To drive the point home, I’d like to expand on how the AI and its training is actually implemented, because I think that might clear some things up for anyone reading. I feel like the actual way in which the AI training uses images matters.

A diffusion model, which is what current AI art uses, is a giant neural network that we want to guess the noise pattern in an image. To train it on an image, we add some random amount of noise to the whole image (it could be a small amount, like film grain, or enough to turn the image into complete noise; it's random each time), then pass that image and its caption through the AI to get the noise pattern the AI guesses is in the image. Now we take the difference between the noise pattern it guessed and the noise pattern we actually added to the training image to calculate the error. Finally, we tweak the AI's weights based on that error. Of note, we don't tweak the AI to perfectly guess the noise pattern or reduce the error to zero; we barely tweak it to guess ever so slightly better (like 0.001% better). Because the AI is never supposed to see the same image many times, it has to learn to interpret the captions (and thus concepts) provided alongside each image to direct its noise guesses. The AI still ends up being really bad at guessing high or completely random noise anyway, which is yet another reason why it generally can't reproduce trained images from nothing.
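The training step above can be sketched in a few lines. This is a toy illustration only: the "network" is a single linear layer and every name here is made up, nothing from any real diffusion codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the denoising network: a single linear layer mapping a
# noisy "image" (flattened) plus the noise level to a guessed noise pattern.
dim = 16
weights = rng.normal(scale=0.1, size=(dim + 1, dim))

def predict_noise(noisy_image, noise_level):
    inp = np.concatenate([noisy_image, [noise_level]])  # tell it how noisy
    return inp @ weights

def training_step(image, lr=1e-3):
    """Add a random amount of noise, guess it, nudge the weights slightly."""
    global weights
    noise_level = rng.uniform(0.0, 1.0)   # random amount each time
    noise = rng.normal(size=image.shape)
    noisy = np.sqrt(1 - noise_level) * image + np.sqrt(noise_level) * noise
    guess = predict_noise(noisy, noise_level)
    error = guess - noise                 # guessed vs. actually-added noise
    # Gradient of the mean-squared error w.r.t. the linear weights; the tiny
    # learning rate is the "ever so slightly better" tweak, not a full fit.
    inp = np.concatenate([noisy, [noise_level]])
    weights -= lr * np.outer(inp, error) / dim
    return float(np.mean(error ** 2))

image = rng.normal(size=dim)              # one "training image"
losses = [training_step(image) for _ in range(200)]
```

The key detail is in `training_step`: each call moves the weights only a tiny step toward a better noise guess, so no single image leaves more than a minuscule imprint.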

Now let's talk about generation (aka "inference"). So we have an AI that's decent at guessing noise patterns in existing images as long as we provide captions. This works even for images it didn't train on. That's great for denoising and upscaling existing images, but how do we get it to generate new, unique images? By asking it to denoise random noise and giving it a caption! It's still really shitty at this, though: the image just looks like blobby splotches of color with no form (otherwise it probably wouldn't work at denoising existing images anyway). We have a hack, though: add some random noise back into the generated image and send it through the AI again. Every time we do this, the image gets sharper and more refined, and looks more and more like the caption we provided. After doing this 10-20 times, we end up with a completely original image that isn't identifiable in the training set but looks conceptually similar to existing images that share similar concepts. The AI hasn't learned to copy images while training; it has actually learned visual concepts.

Concepts are generally not copyrighted. Some very specific depictions it learns are technically copyrighted, e.g. Mickey Mouse's character design, but the problem with that claim is that there are fair-use exceptions, legitimate use cases, which can often cover someone who uses the AI in this capacity (parody, education, not-for-profit, etc.). Whether providing a tool that can straight up let anyone create infringing depictions of common characters or designs is legal is up for debate, but when you use generative AI it's up to you to know the legality of publishing the content you create with it, just like with handmade art. And besides, if you ask an AI model or another artist to draw Mickey Mouse for you, you know what you're asking for; it's not a surprise, and many artists would be happy to oblige so long as their work doesn't get construed as official Disney company art.

(I guess that's sort of a point of contention about this whole topic though, isn't it? If artists can get takedowns on their Mickey Mouse art, why wouldn't an AI model get takedowns too for trivially being able to create it?)
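The denoise/re-noise refinement loop can be sketched in the same toy spirit. Here the trained network is faked with a fixed target pattern standing in for "whatever the caption asks for"; only the loop structure mirrors real samplers, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16
target = np.linspace(-1.0, 1.0, dim)  # stand-in for what the caption asks for

def guess_noise(image):
    # Placeholder for the trained network: it "sees" as noise whatever
    # pulls the image away from the concept the caption describes.
    return image - target

def generate(steps=20, noise_scale=0.3):
    """Start from pure noise; partially denoise, re-add noise, repeat."""
    image = rng.normal(size=dim)                   # start from random noise
    for step in range(steps):
        image = image - 0.5 * guess_noise(image)   # partial denoising step
        if step < steps - 1:                       # the "hack": re-noise
            image += noise_scale * rng.normal(size=dim)
    return image

result = generate()
```

After the 10-20 rounds the loop converges toward the learned concept, not toward any stored training image, which is the point the paragraph above is making.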

Anyways, if you want this sort of training or model release to be a copyright violation, as many do, I'm unconvinced current copyright/IP laws could handle it gracefully, because even if the precise methods by which AIs and humans learn and execute are different, the end result is basically the same. We have to draw new, more specific lines on what is and isn't allowed, and decide how AI tools should be regulated while taking care not to harm real artists, and few will agree on where the lines should be drawn.

Also though, Stable Diffusion and its many, many descendants are already released publicly and open source (same with Llama for text generation), and they've been disseminated to so many people that you can no longer stop them from existing. That fact doesn't give StabilityAI a pass, nor do other AI companies who keep their models private get a pass, but it's still worth remembering that Pandora's box has already been opened.

HoloPengin,

If you hold down the Steam button, do the controls work? And are you launching the apps from the Steam UI or from the start menu?

HoloPengin,

Off topic

HoloPengin,

Doesn’t proton-ge have a specific build for LoL that makes this shit basically trivial?

HoloPengin,

What works is when you have an isekai that absolutely doesn't shortcut it, but actually ties the initial rejection of the call into how they felt about themselves in their old life.

DEF CON 31 - An Audacious Plan to Halt the Internet's Enshittification - Cory Doctorow (www.youtube.com)

The enshittification of the internet follows a predictable trajectory: first, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. It doesn’t have to be this way....

HoloPengin,

I don't know if I'd call it monopolized, exactly. It's not like we can't get alternative email accounts from other providers, from corporate to encrypted to a private server, etc.

Google absolutely has the most say in what's considered correct about the protocol/security because they're the de facto standard for individual user accounts, but literally nothing is stopping you from running your own server.

[News] Updates to the linux kernel 6.6 suggest a hardware refresh/variant of the Steam Deck is in development (www.phoronix.com)

The Linux kernel has been updated for some new hardware that still includes the AMD Van Gogh APU, which is currently only used in the Steam Deck. Popular speculation is that Valve will release an updated Steam Deck, one that still uses the same APU (so same performance) but has other changes to the hardware. Possibly different...

HoloPengin, (edited )

That's not wild speculation, just normal speculation. It's also possible that the refreshed Sephiroth chip that's been found recently could be used in both a Deck refresh and Deckard.

Valve does tend to reuse hardware between different products when it makes sense anyway. The Watchman dongle for SteamVR controller data was just a Steam Controller dongle; you can actually flash the firmware between the two.
