ZickZack

@ZickZack@kbin.social
ZickZack,

Should have been done a long time ago. Even adding and removing gravel traps where they currently have the blue concrete should be within the realm of possibility for an F1 GP if they want both F1 and MotoGP (consider that places like Baku literally pave over their historic cobblestones and then un-pave them after the GP).

ZickZack,

For example, if you had an 8-bit integer represented by a bunch of qbits in a superposition of states, it would have every possible value from 0-256 and could be computed with as though it were every possible value at once until it is observed, the probability wave collapses, and a finite value emerges. Is this not the case?

Not really, or at least it's not a good way of thinking about it. Imagine it more like rigging coin tosses: you don't have every single configuration at the same time, but rather a joint probability distribution over all bits, which gets manipulated to produce certain useful distributions.
To get something out, you then make a measurement that returns the correct result with a certain probability (i.e. it's a probabilistic Turing machine rather than a nondeterministic one).
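
To make the "rigged coin toss" picture a bit more concrete, here's a purely illustrative sketch in plain Python/NumPy (no quantum hardware or quantum library involved): a 2-qbit register is just 4 complex amplitudes, and a measurement hands you exactly one bit pattern, sampled with probability given by the squared magnitude of its amplitude.

```python
import numpy as np

# A 2-qbit register: 4 complex amplitudes, one per bit pattern 00, 01, 10, 11.
# This state is "rigged" so that 00 and 11 come up far more often than 01 and 10.
amplitudes = np.array([0.7, 0.1, 0.1, 0.7], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)  # valid states are normalised

# A measurement does NOT give you all values at once: it returns ONE bit
# pattern, sampled with probability |amplitude|^2.
probabilities = np.abs(amplitudes) ** 2
samples = np.random.choice(["00", "01", "10", "11"], size=1000, p=probabilities)

# Only by repeating the whole experiment many times do you see the rigged distribution.
print({pattern: int((samples == pattern).sum()) for pattern in ["00", "01", "10", "11"]})
```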

This can be very useful since sampling from a distribution can sometimes be much nicer than actually solving a problem (e.g. you replace a solver with a simulator of the output).
In traditional computing this can also be done, but there you run into the fundamental problem of sampling from very complex probability distributions, which involves approximating usually intractable integrals.

However, there are also massive limitations to the type of things a quantum computer can model in this way, since quantum theory is inherently linear, so nonlinear dynamics don't map onto it naturally (i.e. no climate modelling, regardless of how often people claim they want to do it).
There's also the question of how many things exist where it is more efficient to build such a distribution and sample from it, rather than having a direct solver.
If you look at the classic quantum algorithms (e.g. https://en.wikipedia.org/wiki/Quantum_algorithm), you can see that there aren't really that many algorithms out there where it makes sense to use quantum computing (this is of course not an exhaustive list, but it gives a pretty good overview). Pretty much all of them are asymptotically only barely faster than, or the same speed as, classical ones, and most of them rely on the fact that the problem you are looking at is a black-box one.
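
To put a rough number on "barely faster": Grover's search is the canonical provable-but-modest speedup. A back-of-the-envelope comparison of query counts (constants simplified, and this says nothing about how expensive each quantum query becomes once error correction is factored in):

```python
import math

# Unstructured search over N items: a classical machine needs ~N/2 oracle
# queries on average, Grover's algorithm needs ~(pi/4) * sqrt(N) queries.
for n in [1_000, 1_000_000, 1_000_000_000]:
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"N={n:>13,}  classical ~{classical:>13,.0f} queries  Grover ~{grover:>8,.0f} queries")
```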

Remember that one of the largest useful problems ever solved on a quantum computer up until now was factoring the number 21, with a specialised version of Shor's algorithm that only works for that number (since the full Shor's algorithm would need many orders of magnitude more qbits than exist on the entire planet).

There's also the problem of logical vs physical qbits: in computer science we like to work with "perfect" qbits that are mathematically ideal, i.e. completely noise-free. However, physical qbits are really fragile and couple to pretty much anything and everything, which adds a lot of noise into the system. This problem also gets worse the larger you scale your system.

The latter is a fundamental problem: the whole point of quantum computers is that you can combine random states to "virtually" build a complex distribution before you sample from it. This can be much faster, since the virtual model can capture dependencies that are intractable to work with on a classical system, but that dependency monster also means that any noise in the system is going to negatively affect everything else as you scale up to more qbits.
That's why people expect real quantum computers to need many orders of magnitude more physical qbits than you would theoretically need.
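
As a rough illustration of that overhead (ballpark surface-code-style assumptions, not numbers from any specific machine): to push the logical error rate down you increase the code distance, and the number of physical qbits per logical qbit grows with the square of that distance.

```python
# Very rough surface-code-style estimate (illustrative assumptions only):
# a logical qbit at code distance d uses on the order of 2*d^2 physical qbits,
# and its logical error rate shrinks roughly like (p_phys / p_threshold)^((d+1)/2).
p_phys = 1e-3       # assumed physical error rate per operation
p_threshold = 1e-2  # assumed error-correction threshold

for d in [3, 11, 25]:
    physical_per_logical = 2 * d**2
    p_logical = (p_phys / p_threshold) ** ((d + 1) // 2)
    print(f"distance {d:>2}: ~{physical_per_logical:>4} physical qbits per logical qbit, "
          f"logical error rate ~{p_logical:.0e}")
```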

It also means that you cannot trivially scale up a physical quantum algorithm: a physical Grover search on a list with 10 entries might look very different from one on a list with 11 entries.
This makes quantum computing a nonstarter for many problems where you cannot afford the time it takes to engineer a custom solution.
And even worse: you cannot even test whether your fancy new algorithm works in a simulator, since the stuff you are trying to simulate is specifically the intractable quantum noise (something which, ironically, a quantum computer is excellent at simulating).

In general you should be really careful when looking at quantum computing articles, since it's very easy to build some weird distribution that is basically impossible for a normal computer to work with, but that doesn't mean it's something practical. E.g. just starting the quantum computer, "booping" one bit, then waiting for 3ns will give you a quantum noise distribution that is intractable to simulate on a classical computer (the same is true if you don't do anything with the computer: there are literal research teams of top scientists whose job boils down to "what are quantum computers computing if we don't give them instructions").

Meanwhile, the progress of classical or e.g. hybrid analog computing is much faster than that of quantum computing, which means that the only people really deeply invested in quantum computing are the ones that cannot afford to miss out, just in case there is in fact something there:

  • finance
  • defence
  • security
  • ....
ZickZack,

I can just go to the search tab and look for the magazine (e.g. search for "retro gaming") and find the ones from other instances.
I think a fair number of people forget to switch the search to magazines before looking (or are actually subscribed to magazines on other instances but don't notice it).

ZickZack,

That depends on the size of the instance: keep in mind that, for the most part, kbin is just a list of text files. 2GB of RAM sounds like a lot less than it is, since people are used to desktops that have all sorts of additional stuff running on the side, which pushes up the overall system consumption.

ZickZack,

They chose to do this. Daedalic has historically been a point-and-click developer, but they wanted to diversify, especially since their previous title "The Pillars of the Earth" flopped. They first tried their hand at RTS with "A Year of Rain", which is simply not that good, and then looked into Gollum.
You also can't really make the argument that the project was rushed out the door, considering the game was supposed to release in 2021 (two years ago).

They tried something they had no experience in, not through coercion but because they wanted to, and produced a game of shockingly low quality. Since this wasn't their first flop, but just the latest in a long series of flops (though it was the most expensive and high-profile one), the studio closed.

ZickZack,

I see no indication that this was a top-down decision forced by management (just from having talked to some of the developers at Gamescom a couple of years ago).
The concept really wasn't horrible; it just looks that way now that we've seen the product, but a stealth game themed around Gollum is not a dumb idea.
There's lots of stuff you could do, like e.g. using the ring for temporary invisibility, but at the cost of losing some resource (e.g. sanity) that you then need to recover.

The problem with this game is that the idea being bad doesn't even really factor into its quality, since the actual bare-bones graphics and fundamental gameplay are so broken that the lack of original ideas isn't really a factor.

If this were just a no-frills Thief clone with a Gollum skin, nobody would bat an eye. The problem is that even this low bar of "some stealth game + Gollum" is not reached.

In fact, we have a very direct comparison to a different "Gollum-like stealth game produced by an indie developer" that was a smash hit: "Styx: Master of Shadows" is a climbing-based stealth game featuring a small, green, goblin-like protagonist that has to deal with a powerful but risky-to-use substance.

Will it ever be possible for kbin to federate with PeerTube?

I know this is a silly question, especially as I'm currently focused on local-only feeds, but there's just one part of the Fediverse I wish would get more love: PeerTube. While there aren't a lot of people actively looking for alternative video hosts outside the big guys (Twitch, YouTube, Dailymotion, etc.), it would be nice to...

ZickZack,

PeerTube is inherently very scalable at relatively little cost due to an artifact of all social media platforms: most of the traffic is driven by a tiny fraction of videos/magazines/etc...

For services like YouTube, you can use this as a way to quickly cache data close to the place it's going to be streamed: e.g. Netflix works with ISPs to install small servers at their locations to lessen the burden on their (and the ISPs') systems.
But with centralised systems you can only push this so far since ultimately everything is still concentrated at one central location.

Hypothetically, if you could stop this super-linear scaling in the number of users (you have to pay per user, plus the overhead generated from managing them at scale), you could easily compete against the likes of YouTube, simply because at sufficient scale all the other effects get amortized away.

PeerTube does exactly this by serving the videos as webtorrents: essentially, for every "chunk" of a video you download, you also host that chunk for other people to download. That means that the PeerTube instance itself theoretically only has to serve every unique video once (or less than once, since the chunks stay in the network for a while). You rid yourself of the curse of linear scaling with the number of users and only scale sub-linearly with the number of unique videos (how sub-linear depends on the lifetime of your individual torrents, i.e. how long a single video chunk stays available for others).
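
A toy model of that scaling (completely made-up numbers, just to show the shape of the curve): if viewers serve most chunks to each other, the instance's outbound traffic stops tracking the audience size and, in the ideal case, collapses to "serve each unique video once".

```python
# Toy bandwidth model (illustrative assumptions, not real PeerTube figures).
video_size_gb = 1.0   # one popular video
peer_hit_rate = 0.8   # assumed share of chunks viewers fetch from other viewers

for viewers in [10, 1_000, 100_000]:
    total_traffic = viewers * video_size_gb
    central_origin = total_traffic                    # a central host serves every byte itself
    p2p_origin = (1 - peer_hit_rate) * total_traffic  # the instance only fills swarm misses
    print(f"{viewers:>7} viewers: central host {central_origin:>9,.0f} GB, "
          f"P2P instance {p2p_origin:>7,.0f} GB "
          f"(ideal swarm: {video_size_gb:.0f} GB, independent of audience size)")
```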

The costs that remain for every PeerTube instance are essentially the file-hosting costs (and encoding the video, but that also only scales with the number of videos and could be pushed onto the uploader using WASM video encoders).
Storage itself isn't cheap, but also not ungodly expensive (especially since you can amortize the costs over a long time as your platform grows, with storage prices in a continual massive decline).

Platforms like Netflix and YouTube cannot do this because

  1. Netflix is a paid service and people don't want to do the hosting job for Netflix after having already paid for the service
  2. YouTube has to serve ads, which is incompatible with the "users host the content" method

In general torrenting is a highly reliable and well tested method that scales fantastically well to large data needs (it quite literally becomes more efficient the more people use it)

ZickZack,

Just as a quick check: are you sure you are in your "subscribed" view?
KBIN by default uses an "all" view, which you can change at the top right next to your username (the "table" menu).

ZickZack,

And don't forget that even after that you still have to watch baked-in "This video is sponsored by <insert shady company here>" ads, since the actual revenue that gets passed to creators from YouTube is so low that, to keep the ship afloat, they have to look for additional revenue streams.

Lemmy.ml is blocking all requests from /kbin Instances (kbin.social)

I discovered yesterday evening that Lemmy.ml is blocking all inbound ActivityPub requests from /kbin instances. Specifically, a 403 'access denied' is returned when the user agent contains "kbinBot" anywhere in the string. This has been causing a cascade of failures with federation for many server owners, flooding the message...

ZickZack,

Go to the relevant domain's front page (e.g. https://kbin.social/d/kbin.social for kbin.social).
The URL scheme is "https://kbin.social/d/DOMAINHERE" assuming you are currently on kbin.social.
On the right in the sidebar you can see "Domain" and below that options to subscribe or to block.
Really it's the same thing as magazines, just that you generally don't visit the domain itself.

ZickZack,

While the inability to source is a huge problem, you also have to keep in mind that complaining about AI has other objectives beyond the obvious "AI bad".

  • it's marketing: "Our thing is so powerful it could irreparably change someone's life" is still advertising, even if that irreparable change is bad. Saying "AI so powerful it's dangerous" just sounds less advertise-y than "AI so powerful you cannot not invest in it", despite both leading to similar conclusions (you can look back at the "fearvertising" done during the original AI boom: same paint, different color)
  • it's begging for regulatory barriers to be put into place: everyone with a couple of million can build an LLM from scratch. That might sound like a lot, but it's only getting cheaper, and it doesn't need highly intricate systems to replicate. Specifically, the ability to finetune a large model with few datapoints allows even open-source non-profits like OpenAssistant to compete against the likes of Google and OpenAI: Google has made that very explicit in their leaked "We have no moat" memo. This is why you see people like Sam Altman talking to Congress about the dangers of AI: he has no serious competitive advantage and hopes that with sufficient fear-mongering he can get the government to give him one.

Complaining about AI is as much about the AI as it is about the economic incentives behind AI.

ZickZack,

What you are alluding to is called "DIDs" = "Decentralized identifiers" (see https://en.wikipedia.org/wiki/Decentralized_identifier).
The idea of most of these methods is that you identify yourself using a private key, while a public key is spread throughout the network.
If you want to log into a server on that network, the server would "challenge" your identity by encrypting something (e.g. a random number) using the public key, which you, the holder of the private key, can then decrypt and send back to prove you are who you say you are.
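
A minimal sketch of that challenge-response flow, using plain RSA via the Python cryptography package (real DID methods define their own key types, documents and message formats, so this is only meant to illustrate the idea):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The identity holder generates a keypair; the public key is what gets
# spread throughout the network (in a real DID, inside the DID document).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: pick a random challenge and encrypt it with the public key.
challenge = os.urandom(32)
ciphertext = public_key.encrypt(challenge, oaep)

# Holder side: only the private key can recover the challenge.
response = private_key.decrypt(ciphertext, oaep)

# Server side: a matching response proves control of the private key,
# i.e. that you are who you say you are.
assert response == challenge
```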

This method is already standardized by the W3C, but has only been for less than a year. You also have to keep in mind that all federated social network systems (such as Lemmy and kbin) are still in early development.

ZickZack,

You can use KeePassXC and "self-host" your passwords on any cloud storage you want (it's just a file after all), but if you are using 1Pass at the moment, I don't see an opt-in anonymized telemetry system as a reason to switch.

ZickZack,

I really like Patreon since it gives creators some independence from the whims of platforms and advertising companies.
It also allows certain content that doesn't (currently) work on e.g. YouTube to exist: e.g. (very) long-form videos or highly produced documentaries that may take half a year to plan and shoot just cannot exist within YouTube due to the limited per-click revenue.

That doesn't mean this system is perfect: e.g. I would like to have an option to put some money into a monthly pot which gets distributed based on my viewing habits and current interests. E.g. Twitch has "bits", which can be bought in bulk and distributed freely as donations.
Having a monthly system of "tokens" according to which a monthly donation gets divided (i.e. if a person got 25% of my tokens, they get 25% of the pot) would be nice (this does have the potential issue of hurting long-form content, but I could still donate the normal way).
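
A sketch of how such a (purely hypothetical) token pot could be split, assuming tokens are just per-creator counters accumulated from viewing:

```python
# Hypothetical "token pot": each creator's share of the monthly donation
# equals their share of the tokens I handed out that month.
monthly_pot = 20.00
tokens = {"creator_a": 25, "creator_b": 50, "creator_c": 25}  # made-up viewing habits

total_tokens = sum(tokens.values())
payout = {name: round(monthly_pot * count / total_tokens, 2)
          for name, count in tokens.items()}
print(payout)  # {'creator_a': 5.0, 'creator_b': 10.0, 'creator_c': 5.0}
```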

ZickZack,

I have a Glorious Model O, which is decent, though I would get a G502 if I had to buy another one.
