7heo

@7heo@lemmy.ml

This profile is from a federated server and may be incomplete. Browse more on the original instance.

7heo, (edited)

This is the way. And I might add: Unix desktop. Let’s not start bikeshedding between FOSS Unix distributions for dogmatic reasons (I’m sure you didn’t mean to specifically single out “Linux” here, but I wish we would stop opposing “Linux” to other Unixes like BSD, Illumos, etc).

The point is voting with your data for software that defends your interests and respects your rights.

Edit: Dang, I didn’t expect to get so much flak for “Unix as opposed to Unix-like”. I absolutely meant “Unix-like”, but my point is that it shouldn’t matter. Most software is trying to be compatible these days, and Linux isn’t (in spite of all that marketing material) an OS; it is a kernel. So, semantics for semantics, can it even be compared to something it is not? I merely tried to be inclusive.

7heo,

It turns out that “Women Who Code Closing - Women Who Code” actually isn’t about Women who code a software called “Closing”, nor Women who code in general.

In fact, what they meant to write was:

The End of an Era: “Women Who Code” Closing – Women Who Code.

I know I’m gonna get downvoted for this, but punctuation matters, and sadly, it has to be said. So here I go.

7heo, (edited)

Maybe they mean it in the sense of “forgery”. You know, as in “let people imagine what it is like to have friendships, by letting them make forgeries of their lives, but with friends in it” 🤪

7heo,

With a 52% mortality rate, this might well be the last such opportunity. One way or another. 😬

7heo,

Is it just me, or is everyone here commenting on a half article, the other half being behind a paywall? 😬

7heo,

I think we can all agree on that… But without the entire article, one can only parametrise their answer… I was hoping someone with a full version could do an HTML dump. 😅

Or at the very least a markdown dump in here.

7heo,

Damn, now I want part 2!! 😶

(Thanks for posting!! 🙏)

7heo,

I personally don’t think we’re getting anything. I always saw the “it takes time” point as a way to duck the issue until the community would forget about it entirely.

So, all in all, if you really want answers, maybe wait long enough to give LMG more than enough time to investigate, so much so that using the “this wasn’t a reasonable amount of time” defence strategy would be impossible, and then try to get the community to care again. And ask for answers. I personally do not see this happening, ever, but I hope I’m wrong.

7heo,

Her lawyers

That’s assuming she can hire any. Her case is far from clear cut, I’m not convinced anyone would take her case for a proportion of a potential settlement.

She probably has no recourse.

7heo, (edited)

Yeah, it is one of the least bad uses for it.

But then again, using literal terawatt-hours of energy to save on the most easily recyclable material known to man (cardboard)… maybe that’s just me, maybe I’m too jaded, but it sounds like a pretty bad overall outcome.

It isn’t a bad deal for Amazon, tho, which is likely to save on costs that way, since energy is still orders of magnitude cheaper than it should be[^1], and cardboard is getting pricier.

[^1]: if we were to account for the available supply, the demand, and the future (think sooner rather than later) need for a transition towards new energy sources… some that simply do not have the same potential.

7heo, (edited)

I think you’re overstating the compute power […]

I don’t actually think so. A100 GPUs in server chassis have a 400 or 500W TDP depending on the configuration, and even if I’m assuming 400, with 4 per watercooled 1U chassis, a 47U rack with those would consume about 100kW with power supply efficiency and whatnot.

Running one such rack for a day alone would be about 2.4 MWh.

Now, I’m not assuming Amazon would own hundreds of those racks at every DC, but they probably would use at least a couple of such racks to train their model (time is money, right?). And training for a week with just two of those would be roughly 34 MWh, and I can only extrapolate from there.

So, extrapolating across Amazon’s fleet and repeated training runs, I don’t think that going to TWh is such an overstatement.
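The arithmetic above can be sketched out in a few lines. This is a rough back-of-the-envelope calculation; the GPU count, TDP and efficiency figures are the assumptions from this comment, not measured values:

```python
# Back-of-the-envelope sketch of the rack arithmetic above.
# All numbers are this comment's assumptions, not measured values.
gpus_per_chassis = 4       # watercooled 1U chassis with 4 A100s
tdp_watts = 400            # assumed A100 TDP in this configuration
chassis_per_rack = 47      # one chassis per U in a 47U rack
psu_efficiency = 0.75      # rough loss figure, to land near 100 kW

rack_watts = gpus_per_chassis * tdp_watts * chassis_per_rack / psu_efficiency

day_mwh = rack_watts * 24 / 1e6                     # one rack, one day
week_two_racks_mwh = rack_watts * 2 * 24 * 7 / 1e6  # two racks, one week

print(f"rack draw: {rack_watts / 1e3:.0f} kW")               # ~100 kW
print(f"one rack, one day: {day_mwh:.1f} MWh")               # ~2.4 MWh
print(f"two racks, one week: {week_two_racks_mwh:.1f} MWh")  # ~34 MWh
```

Under these assumptions, two racks for a week lands in the tens of MWh; larger totals come from scaling out to many racks, many sites, and repeated training runs.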

[…] and understating the amount of cardboard Amazon uses

That, very possibly.

I have seldom used Amazon, maybe 5 times tops, and I can only remember two times. Those two times, I ordered a smartphone and a bunch of electronics supplies, and I don’t remember the packaging being excessive. But I know from plenty of memes that they regularly overdo it. That, coupled with the insane amount of shit people order online… And yes, I believe you are right on that one.

Even so, as long as it is cardboard, or paper, and not plastic and glue, it isn’t a big ecological issue.

However, that makes no difference to Amazon financially: cost is cost, and they only care about that.

But let’s not pretend they are doing a good thing, then. It is a cost-effective measure for them that ends up worsening the situation for everyone else, because the tradeoff is good economically, and terrible ecologically.

If they wanted to do a good thing, they could use machine learning to optimise the combining of deliveries in the same area, to save on petrol, and by extension, pollution from their vehicles, but that would actually worsen the customer experience, and end up costing them more than it would save them, so that’s never gonna happen.

7heo,

Do bullets kill soldiers?

Infantry soldiers in the open, possibly. Soldiers in an APC? No.

Same applies to companies. A single sufficiently bad review of a small, one-person company can take it out entirely. A single review of a big corporation? Not even one from a big shot like MKBHD.

This headline is dumb.

7heo,

The thing is, devops is pretty complex and pretty diverse. You’ve got at least 6 different solutions among the popular ones.

Last time I checked the list of available provisioning software alone, I counted 22.

Sure, some, like cdist, are pretty niche. Still, when you apply to a company, even tho it is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (an S3 work-alike based on MinIO, “cloud LAN”, etc), and you are likely going to use Terraform for host provisioning, the most relevant information to check is which software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the “lesser ones”?

And the thing is, even across successive versions of compatible stacks, the DSL evolved, and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

And that’s not even throwing Docker (or rkt, appc) into the mix. Then you have k8s, podman, Helm, etc.

The entire ecosystem has considerable overlap too.

So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub gists, etc. So much so that tools built on exactly that kind of data emerged… And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

And on the other hand, you have devops: an ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full-time job on its own. There’s no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.

7heo,

But use the Windows version with the Proton layer. The Linux version is horribly coded.

7heo,

I would not call that a “privacy proxy”; that label is very disingenuous. It is a normal proxy, which replaces the technical metadata of your connection so that automated tracking is harder. But it will not replace or remove any of your input, and you can easily be tracked that way too.

7heo,

Plus, that way, you have a trail of invites. If something goes wrong, you can prune entire branches and mitigate most abuse.

7heo,

Yeah, I find the puzzle sliding JavaScript captchas the best as a user. Cognitively better than “training neural networks to recognise protestors”, and still fast enough that it doesn’t feel like a forced ad. Reliability might however vary a lot between implementations.

7heo,

I’d argue that what is holding the Linux GUI back is the amount of options, combined with the lack of proper interoperability testing (not for the lack of trying, but between the amount of options and the amount of versions, it is absolutely unfeasible), and the lack of strong design choice on the side of distributions: everyone wants to have and support everything under the sun, even if it means having 4 or 5 different flavours or editions of a particular distribution.

Don’t get me wrong, I salute the intention and the initiative, but concretely, this almost always (and I put “almost” to be safe, I’ve never seen a counter example) means a clunky, unpolished experience in most cases.

I usually describe it as:

If GUIs were doors:

  • Mac OS would be selling literally only one kind of door, that is super slick, brushed metal, glass and white, fancy, with a black glass and brushed metal handle, has a great feel to it, good heft, great handling, satisfying sound and feedback, etc, but then you need to buy everything else from them (including your lights, flooring, etc) or it just won’t open. Of course they sell everything at a premium.
  • Windows would be your standard wooden office door with the standard metal handle and the standard automatic door closer; but anyone can open it even when locked, it needs to be changed every other year, if you “customise” it (i.e. adapt it in any way) it will wear out 10x faster, and any adjustment you make (handle spring tension, automatic closer strength and kickback, hinge adjustment, etc) will be reset at night randomly every other week. The door will get new “features” (like microphones, a search prompt, an assistant, etc) randomly, and you can use any kind of furniture you want, but during the “night resets” (aka “upgrades”), all the furniture in the office will be reset to “Microsoft furniture”, and you will need to exchange it all back the next morning. And for various unpredictable reasons, once in a while, when going through the door, it will close unexpectedly and violently, slamming you in the face with full force.
  • Linux and FOSS in general is a collection of community-made, IKEA-inspired doors. You can mix and match anything. Any kind of door, any kind of hinge, any kind of handle. Want a door that opens sideways? Go for it. Want a door that slides up? Do it. Want a butterfly door? Sure. A proximity sensor as a handle? Totally. A carbon-fibre and ceramic door? Absolutely. All at once? Why not. In the end, no door is exactly the same, even across the same building, and you often need a few minutes to figure out how new doors work in new buildings. And of course, lots of doors are ill-designed, with completely unnecessary features and conflicting options, like both a sideways and a butterfly hinge. Still works, but has caveats. But hey, if it breaks, or doesn’t fit, you can change it any time, get parts anywhere, and there is an absolutely insane amount of community-made documentation on most of it (except the internals: some of it is hard to understand, some of it is absolutely obscure, and most of it is documented by people who made it, exclusively for people who made it).

IMHO what we would need is for distributions to “adopt” a given GUI (or DE), and stick to that. Do not even carry the packages for something else. If it is needed, another distribution will be made. That would simplify things a lot, and would greatly relieve the stress on maintainers.

And it would make for a much more approachable user experience.

7heo, (edited)

I believe you’re missing the actual causality chain here.

While it is actually proven that vendors will artificially degrade your experience to “motivate” you to buy new devices, in the never-ending pursuit of monetary gain, there is no such incentive here: you aren’t paying for new drivers.

And while others suggest biases, I do believe you are witnessing an effect that is at least partially real, if not totally, but not for the reasons you believe:

Most programs that leverage GPUs end up being GPU bottlenecked. Meaning that one can almost always improve the program’s performance by using a better GPU.

But then, why does a new driver not improve performance, and rather, simply “bring a degraded performance back to previous levels”?

Well, that has to do with auto-updates, and the way drivers are distributed.

While, in a world where one would have to manually update everything, a new driver would almost certainly mean better performance for a given program, most programs in our world update automatically (and sometimes even silently). And the developers are usually on top of things wrt drivers, because they follow driver updates closely, get early versions, etc.

Meaning that when a driver is updated, your apps usually are, too, in a way that leverages the new driver for more processing, rather than faster processing. But unlike your automatically updated apps, your drivers are updated manually.

And the consequence of such updates, when you are too slow to update your drivers, is a degraded experience.

Not because anyone artificially throttled your device’s performance, but because you lag too much behind expected updates.

What distro should I use on my potato?

I have an HP Stream 11 that I want to use for word processing and some light web browsing - I’m a writer and it’s a lightweight laptop to bring to the library or coffee shop to write on. Right now it’s got Windows and it’s unusable due to lack of hard drive space for updates. Someone had luck with Xubuntu, but it’s...

7heo, (edited)

Note: this comment is long, because it is important and the idea that “systemd is always better, no matter the situation” is absolutely dangerous for the entire FOSS ecosystem: both diversity and rationality are essential.

Systemd can get more efficient than running hundreds of poorly integrated scripts

In theory yes. In practice, systemd is a huge monolithic single-point-of-failure system, with several bottlenecks and reinventing-the-wheel galore. And openrc is a far cry from “hundreds of poorly integrated scripts”.

I think it is crucial we stop having dogmatic “arguments” with argumentum ad populum or arguments of authority, or we will end up recreating a Microsoft-like environment in free software.

Let’s stop trying to shoehorn popular solutions into ill-suited use cases, just because they are used elsewhere with different limitations.

Systemd might make sense for most people on desktop targets (CPUs with several cores, and several GB of RAM), because of convenience and comfort (which systemd excels at, let’s be honest), but as we approach “embedded” targets, simpler and smaller is always better.

And no matter how much optimisation you cram into the bigger software, it will just not perform like the simpler software, especially with limited resources.

Now, I take OpenRC as an example here because it is AFAIR the default in Devuan, which also supports runit, sinit, s6 and shepherd.

And using s6, you just can’t say “systemd is flat out better in all cases”, that would be simply stupid.
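For concreteness, this is roughly what a supervised service looks like under OpenRC (a minimal sketch; the daemon name and paths are hypothetical, purely for illustration):

```sh
#!/sbin/openrc-run
# Hypothetical OpenRC service script; "mydaemon" and its paths are made up.
description="Example daemon managed by OpenRC"

command="/usr/sbin/mydaemon"
command_args="--config /etc/mydaemon.conf"
command_background="yes"
pidfile="/run/mydaemon.pid"

depend() {
	# Start after networking is up and the firewall is configured.
	need net
	after firewall
}
```

It stays a small, readable shell script, which is part of why these init systems remain attractive on constrained targets.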

7heo, (edited)

And Docker initially used Ubuntu. They explicitly and specifically switched to Alpine in 2016 to minimise image size and overhead.

7heo,

IMHO the issue is twofold:

  1. Makefiles were never supposed to do more than determine which build tools to call (and how) for a given target. Meaning that in very many cases, makefiles are abused to do way too much. I’d argue that you should try to keep your make targets only one line long. Anything bigger and you’re likely doing it wrong (and ought to move it into a shell script that gets called from the makefile).
  2. It is really challenging to write portable makefiles. There’s BSD make and GNU make, and then there are different tools on different systems. Different dependencies. Different libs. Etc. Not easy.
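As a sketch of point 1, here is a hypothetical Makefile where every target stays one line long and delegates the real work to scripts (the script paths are made up for illustration):

```make
# Hypothetical layout: each target is a single delegation line;
# the actual build logic lives in version-controlled shell scripts.
.POSIX:

all: build

build:
	./scripts/build.sh

test: build
	./scripts/run-tests.sh

clean:
	./scripts/clean.sh

.PHONY: all build test clean
```

(.PHONY is not strictly POSIX, but both GNU and BSD make support it; the one-line-per-target rule is what keeps the makefile portable, since all the system-specific logic moves into the scripts.)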