
stardreamer

@stardreamer@lemmy.blahaj.zone


stardreamer,

Harder to write compilers for RISC? I would argue that CISC is much harder to design a compiler for.

That being said, there’s a lack of standardized vector/streaming instructions in out-of-the-box RISC-V that may hurt performance, but in terms of compiler design it’s much easier to write a functional compiler for RISC-V than for the nightmare that is x86.

stardreamer,

My issue with them is that they make their lower-tier plans too enticing. I’ve wanted to upgrade to Pro for all the fancy gizmos, but the basic mail plan is just too good a deal to give up.

stardreamer,

Here you dropped this:


#define ifnt(x) if (!(x))
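
A minimal sketch of it in use (greet() is just a made-up example; the macro behaves like a plain if, so it composes with braces and else):

#include <stdio.h>

#define ifnt(x) if (!(x))

static void greet(const char *name) {
    ifnt(name) {            /* expands to: if (!(name)) */
        puts("no name given");
        return;
    }
    printf("hello, %s\n", name);
}

int main(void) {
    greet(NULL);
    greet("world");
    return 0;
}
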
stardreamer,

An API is an official interface for connecting to a service, designed to make it easier for one application to interact with another. It is usually kept stable and provides only the information needed to serve the requesting application.

A scraper is an application that extracts data from a human-readable source (e.g. a website) to obtain data from another application. Since website designs can change frequently, scrapers can break at any time and need to be updated alongside the original application.

Reddit clients interact with an API to serve requests, but Newpipe scrapes the YouTube webpage itself. So if YouTube changes its UI tomorrow, Newpipe could very easily break. No one wants to design their app around a fragile base while building a bunch of stuff on top of it. It’s just way too much work for very little payoff.

It’s like I can enter my house through the door or the chimney. I would always take the door since it’s designed for human entry. I could technically use the chimney if there’s no door. But if someone lights up the fireplace I’d be toast.
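
To make the distinction concrete, here is a toy sketch in C. The example.com URLs, field names, and fetch() function are all made up, and fetch() fakes the HTTP layer (real code would use something like libcurl) so the example is self-contained:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Fake HTTP GET returning canned responses, just for illustration. */
static const char *fetch(const char *url) {
    if (strstr(url, "api.example.com"))
        return "{\"id\":\"abc\",\"view_count\":12345}";
    return "<html><span class=\"view-count\">12345</span></html>";
}

/* API client: the endpoint and the "view_count" field are documented
 * and kept stable, so this keeps working across site redesigns. */
static long views_via_api(void) {
    const char *json = fetch("https://api.example.com/v1/videos/abc");
    const char *field = strstr(json, "\"view_count\":");
    return field ? strtol(field + strlen("\"view_count\":"), NULL, 10) : -1;
}

/* Scraper: digs the same number out of the human-readable page.
 * One UI redesign (say, a renamed CSS class) and this returns -1. */
static long views_via_scraping(void) {
    const char *html = fetch("https://www.example.com/watch?v=abc");
    const char *span = strstr(html, "<span class=\"view-count\">");
    return span ? strtol(span + strlen("<span class=\"view-count\">"), NULL, 10) : -1;
}

int main(void) {
    printf("api: %ld, scraper: %ld\n", views_via_api(), views_via_scraping());
    return 0;
}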

stardreamer, (edited )

Nothing but effort. Nobody wants to constantly babysit a project just because someone else may change their code at a moment’s notice. Why would you comb through someone else’s HTML and obfuscated JavaScript to figure out how to grab some dynamically rendered data when there’s a well-documented, publicly available API?

Also, NewPipe breaks all the time. APIs are generally stable and can last years, if not decades, without changing at all, while NewPipe’s parsing breaks every few weeks to months and requires programmer intervention. Just check the project’s issue tracker and you’ll see it’s constantly being fixed to match YouTube changes.

stardreamer, (edited )

It doesn’t have to be turn-based. FFXI and FFXII are also great. I feel the bigger issue is that making a story-heavy game while everyone else is also making story-heavy games means it’s no longer unique.

I wouldn’t mind going back to ATB, but I don’t think that would win back an audience except for nostalgia points.

Maybe more FF:T though? Kinda miss that.

stardreamer,

“Have you considered there is something more to life than being very very very very, ridiculously good looking?”

“Like murder?”

Is there an artist so horrible that, no matter how hard you try, you cannot separate their art from them?

Similar to the recent question about artists you can successfully separate from their art: are there any artists who did something so horrible, so despicable, that it instantly invalidated all art they had any part in?

stardreamer,

Terry Goodkind.

Can’t separate the work from the author since both are pretty bad.

It takes a special kind of person to require a pinned “please don’t celebrate deaths” reminder on Reddit when they die…

stardreamer,

Having a good, dedicated e-reader is a hill I would die on. I want a big e-ink screen with physical buttons, light weight, and a multi-week battery. Reading for 8 hours on my phone makes my eyes go twitchy. And TBH it’s been a pain finding something that supports all that and has a reasonably open ecosystem.

When reading for pleasure, I’m not gonna settle for a “good enough” experience. Otherwise I’m going back to paper books.

stardreamer,

Land’s cursed. Almost as if America was built on top of an ancient Native American burial ground or something.

stardreamer, (edited )

The argument is that processing data physically near where it is stored (known as NDP, near-data processing, as opposed to traditional architecture designs where data is stored off-chip) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

Personally, I’d say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes an MS-in-CS level of understanding of the architecture to write a program in the P4 language (which doesn’t allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it’s worthless if most programmers on the job market can’t work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it’s just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it. This way, your standard 9-to-5 programmer can still work like they used to and leave the fancy performance optimization to a few experts.
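
A sketch of what that might look like inside such a library; dma_available() and dma_copy() here are hypothetical stand-ins for a vendor offload API, not a real interface:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical vendor API for an on-DIMM data-mover engine,
 * stubbed out so the sketch compiles on its own. */
static bool dma_available(void) { return false; }
static void dma_copy(void *dst, const void *src, size_t n) { memcpy(dst, src, n); }

#define DMA_MIN_BYTES (64 * 1024) /* below this, offload setup cost dominates */

/* Drop-in replacement: callers use it like memcpy and never know
 * whether the CPU or the accelerator moved the bytes. */
void *smart_memcpy(void *dst, const void *src, size_t n) {
    if (n >= DMA_MIN_BYTES && dma_available()) {
        dma_copy(dst, src, n); /* offload; the CPU core is free meanwhile */
        return dst;
    }
    return memcpy(dst, src, n); /* plain CPU fallback */
}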

stardreamer,

So let me get this straight: you want other people to work, for free, on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?

stardreamer,

And that’s fine. Plenty of authors are great at writing the journey and terrible at writing endings. And from what we’ve gotten so far, at least he now knows what not to do when writing an ending.

stardreamer,

At this rate the only party they will have left will be their own farewell party.

stardreamer,

+1 for FairEmail. Never have I seen an app so functional yet so ugly at the same time.

stardreamer,

No, the 2037 problem is fixing the Y2K38 problem in 2037.

Before that, there’s no problem :)
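
For anyone who hasn’t met it: the Y2K38 problem is a signed 32-bit time_t running out of seconds on 2038-01-19. A minimal demo of the wraparound:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t t = INT32_MAX;          /* 2038-01-19 03:14:07 UTC as a 32-bit time_t */
    printf("%d\n", t);              /* 2147483647 */
    t = (int32_t)((uint32_t)t + 1); /* one second later; wraps on two's-complement machines */
    printf("%d\n", t);              /* -2147483648, i.e. back to 1901 */
    return 0;
}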

stardreamer,

Having worked in IT, I can say target disk mode is a lifesaver when you have to recover data from a laptop with a broken screen, keyboard, or ribbon cable and don’t want to take apart something held together by glue.

stardreamer,

A more recent example:

“Nobody needs more than 4 cores for personal use!”

stardreamer, (edited )

You can do it in democracies. Taiwan has two high-speed rail systems.

Are they expensive to maintain? Absolutely. In fact they bankrupted 2+ companies until the government decided to step in and foot part of the bill. But then again, if the government isn’t willing to pay for basic infrastructure, what are taxes for?

(Also as a tangent, the Taiwan high speed rail bentos are to die for. I had it 5+ years back and I still remember it. Super cheap meal in a disposable bamboo lunch box. Usually there are 1-2 choices per day. I had chicken thighs, pickled veggies, steamed pumpkin, and half a marinated tea egg. The bottom half of the lunch box was filled with rice. 10/10 would eat at a busy train station during rush hour again)

stardreamer,
  1. Attempt to plug in the USB A device
  2. If you succeed, end the procedure.
  3. Otherwise, destroy the reality you currently reside in. All remaining universes are the ones where you plugged in the device on the first try.

That wasn’t so hard, was it?

stardreamer,

The year is 5123. We have meticulously deciphered texts from the early 21st century, providing us with a wealth of knowledge. Yet one question still eludes us to this day:

Who the heck is Magic 8. Ball?

stardreamer, (edited )

Funny how a game about fearing the unknown is being hated on by a group that fears the (relatively) unknown.

stardreamer,

I’m just going to put this information here: the use case for 46Gbps Wi-Fi is going to be extremely niche. There is nearly no legitimate scenario where you can achieve that speed on your phone.

The problem here is that:

  1. The majority of internet traffic is TCP
  2. TCP protocol processing is atomic, i.e. a single connection’s speed is bottlenecked by a single CPU core
  3. The bottleneck is the receiver (i.e. downloader)
  4. TCP is too complex for efficient receiver-side hardware offloads (i.e. you can’t work around this by adding more special hardware)

What does this mean?

Your connection speed on a Wi-Fi 7 device WILL be bottlenecked by your single-core CPU speed, even if you are doing absolutely nothing except transmitting data. This assumes a single TCP connection (e.g. downloading a file from a website), but that covers the majority of use cases unless you are running a server (in this case, on your phone).

I haven’t checked what CPU the Pixel 8 uses, but my Pixel 7 has a Cortex-A78. I don’t have raw data handy for the 3GHz A78, but I do have data from a 2GHz A53 connected to a 100Gbps Ethernet NIC, which achieves around 8-9Gbps. The A78 generally outperforms the A53 by about 1.5x (at least, those are the characteristics of the Nvidia BlueField DPUs). So we can assume 12-14Gbps max for a single connection with Wi-Fi 7 running on a state-of-the-art ARM CPU.

That is still nowhere near 46Gbps. It’s like mounting a Vulcan Minigun on a bicycle.

To use the full Wi-Fi bandwidth, you would need multiple connections running on different cores, and that’s not even counting the switches/servers connected to the Wi-Fi AP. Unless you are running something like a Redis server on your phone, I see no reason for Wi-Fi 7 until the rest of the hardware is upgraded significantly.
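
For illustration, a Linux-specific sketch of that workaround: one TCP connection per core-pinned thread, the way download managers split a file into byte ranges. connect_and_download() is a hypothetical stub standing in for the real per-connection work:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define NCONN 4

/* Hypothetical worker: in a real downloader each thread would open
 * its own TCP connection and fetch one byte range of the file. */
static void *connect_and_download(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_t tid[NCONN];
    for (int i = 0; i < NCONN; i++) {
        pthread_create(&tid[i], NULL, connect_and_download, NULL);

        /* Pin each connection to its own core so the per-connection
         * TCP processing bottleneck applies per core, not to the
         * whole transfer. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(i, &set);
        pthread_setaffinity_np(tid[i], sizeof set, &set);
    }
    for (int i = 0; i < NCONN; i++)
        pthread_join(tid[i], NULL);
    return 0;
}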

stardreamer, (edited )

ELI5, or ELIAFYCSS (explain like I’m a first-year CS student): modern x86 CPUs have lots of instructions optimized for specific functionality. One group of these is “vector instructions”, where a single instruction is optimized for running the same operation (e.g. matrix multiply-add) on lots of data at once (e.g. 32 or 512 rows). These instructions were added gradually over time, so there are multiple “sets” of vector instructions: MMX, AVX, AVX-2, AVX-512, AMX…
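
For a concrete taste of what this looks like from C, here is a minimal AVX example using compiler intrinsics (compile with gcc -mavx); one instruction performs eight additions at once:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);    /* load 8 floats into one register */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb); /* one instruction, 8 additions */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%g ", c[i]);           /* prints 9 eight times */
    printf("\n");
    return 0;
}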

While the names all sound different, these vector instruction sets work similarly: they store internal state in hidden registers that the programmer cannot access. So to the user (application programmer or compiler designer) it looks like a simple function that does what you need without having to micromanage registers. Neat, right?

Well, the problem is that somewhere along the line someone found a bug: when using instructions from the AVX-2/AVX-512 sets, combining them with an incorrect ordering of branch instructions (aka Jcc, basically the if/else of assembly) lets you see what’s inside these hidden registers, including state left behind by different programs. Oops. So Charlie’s “Up, Up, Down, Down, Left, Right, Left, Right, B, B, A, A” using AVX/Jcc lets him see what Alice’s “encrypt this zip file with this password” program is doing. Uh oh.

So that sounds bad. But let’s take a step back: how badly does this affect existing consumer devices (i.e. non-Xeon, non-Epyc CPUs)?

Well, good news: AVX-512 wasn’t available on most Intel/AMD consumer CPUs until recently (13th gen/Zen 4, and Zen 4 isn’t affected). So 1) your CPU most likely doesn’t support it, and 2) even if it does, most pre-compiled programs won’t use it, since they would crash on everyone else’s computers that lack AVX-512. AVX-512 is a non-issue unless you’re running finite element analysis programs (e.g. LS-DYNA) for fun.

AVX-2 has a similar problem: although released in 2013, some low-end CPUs (e.g. Intel Atom) didn’t get it for a long time (until this year, I think?), so most programs aren’t compiled with AVX-2 enabled. This means that whatever game you are running now, you probably won’t see a performance drop after patching, since your computer/program was never using the optimized vector instructions in the first place.

So the effect on consumer devices is minimal. But what do you need to do to ensure that your PC is secure?

Three different ideas off the top of my head:

  1. BIOS update. The CPU has some low-level firmware code called microcode, which is included in the BIOS. The new patched version adds additional checks to ensure no data is leaked.
  2. Update the microcode package in Linux. The microcode can also be loaded from the OS. An up-to-date version of the intel-microcode package achieves the same as (1).
  3. Re-compile everything without AVX-2/AVX-512. If you’re running something like Gentoo, you can simply tell GCC not to use AVX-2/AVX-512 regardless of whether your CPU supports it (a quick way to check what your CPU actually supports is sketched below). As mentioned earlier, the performance loss is probably fine unless you’re doing some serious math (FEA/AI/etc.) on your machine.
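
That check, using the __builtin_cpu_supports builtin (GCC/Clang only):

#include <stdio.h>

int main(void) {
    __builtin_cpu_init(); /* GCC/Clang builtin; reads CPUID */
    printf("AVX-2:   %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
    printf("AVX-512: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    return 0;
}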