skullgiver

@skullgiver@popplesburger.hilciferous.nl

Giver of skulls


[Question] Control and the DirectX 12 on the Steam Deck (lemmy.world)

Just to be clear, the game runs great with DX11 on my Steam Deck OLED. The only downside is that I found out you can actually enable HDR by following this guide, and it works GREAT… until it doesn’t. The guide itself works with no issues, but to enable the HDR option I have to run the game with DX12, and dx12 ( tested...

skullgiver, (edited )

TL;DR: Shader recompilation sucks, but it’s inevitable; it’s how you can play DirectX games on Linux.

Shaders describe how to render stuff. The game feeds the GPU some shader code and a list of points; shaders get used to turn those points into triangles, and those triangles get turned into filled shapes that are rendered to the screen. The triangles themselves are quite easy to translate back and forth (I’m pretty sure all you need to do is invert the Z-axis because Microsoft chose a different default coordinate system), but that’s not as easy with the shader code.

Linux, Windows, and various consoles can use Vulkan to do graphics; it’s basically an open DirectX alternative (it’s more complicated than that, but let’s just pretend it is). Vulkan uses a completely different type of GPU code that gets translated to about the same code that ends up being executed, but the intermediate representation that the game interacts with is different.

This is where shader recompilation comes in: the DirectX shaders get intercepted when the game tries to run them, much like how requests to open and read files are intercepted and turned into Linux operations, and the code gets ripped apart, interpreted, and rebuilt into shaders that can be used by the Vulkan graphics code. The game doesn’t know this is happening because the recompilation makes it seem like the game is running on real DirectX.

Because this reinterpretation takes a bunch of processing power and halts the game if it doesn’t happen during a loading screen (which is the case in most modern games), the result of this recompilation gets saved to disk, assuming there’s space. The next time the game tries to load shaders that were already processed, they’re loaded from the SSD rather than doing the hard work again, basically making the translation instantaneous.
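The caching logic is essentially content-addressed: hash the original shader bytecode, and only run the expensive translation on a cache miss. Here's a minimal sketch in Python; the class, names, and the fake “translate” step are invented for illustration (real DXVK translates DXBC/DXIL bytecode to SPIR-V):

```python
import hashlib
import os
import tempfile

class ShaderCache:
    """Translate-once shader cache: expensive work happens only on a miss."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _key(self, dx_bytecode: bytes) -> str:
        # Identify a shader by the hash of its original bytecode.
        return hashlib.sha256(dx_bytecode).hexdigest()

    def translate(self, dx_bytecode: bytes) -> bytes:
        # Stand-in for the expensive DXBC -> SPIR-V recompilation step.
        return b"SPIRV:" + dx_bytecode[::-1]

    def get(self, dx_bytecode: bytes) -> bytes:
        path = os.path.join(self.cache_dir, self._key(dx_bytecode))
        if os.path.exists(path):
            # Cache hit: load the previously translated shader from disk.
            with open(path, "rb") as f:
                return f.read()
        # Cache miss: do the slow translation once, then persist the result.
        translated = self.translate(dx_bytecode)
        with open(path, "wb") as f:
            f.write(translated)
        return translated
```

The first `get()` for a shader pays the translation cost; every later call (including after a game restart) is just a disk read, which is why a warmed-up cache makes the stutter go away.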

Fun fact: early versions of Elden Ring shipped with a bug that caused the shaders to get loaded over and over again in some cases, causing frame drops and slowdowns on Windows, but Linux players were barely affected because that stuff is basically the norm on DXVK and the runtime was already optimized for it.

This shader recompilation step happens in all kinds of GPU-code-to-GPU-code translation software (DXVK on Linux, but also console emulators like Dolphin, and more). Steam fixes most of this problem by doing the recompilation on their side, storing the results, and having the Deck download the precompiled shader caches with games. That’s why there’s an extra download step for Steam games on Linux, and why games take up more space than on Windows. However, you can’t use Steam’s DX11 shader caches with DX12, and I believe Steam only precompiles the DX11 caches.

A theoretical fix would be to download the shader caches from someone else who already went through the entire game with your patches applied. Unfortunately, sharing shader caches is not something I’ve seen outside of the Dolphin community. In some circumstances, you can tell the computer to generate all of these caches in bulk outside of the game so you can play smoothly, but I’m not sure if this is available in PC games.

I’m not sure what the patch changes exactly, so it’s hard to say what the exact problem is. Normally, recompiled shaders are cached on the file system so that they’ll be loaded instantly the next time they’re used, and the frame drop should only occur when you’re loading large new areas which requires loading new shader code. If you’re seeing slowdowns every time you go through previous areas, it could be that the modifications cause problems with the shader cache, in which case the shader recompilation needs to take place every time. If you only get slowdowns for new areas/enemies/objects/animations, then the system is probably Working As Intended and I don’t think there are any workarounds.

The only alternative to the shader recompilation system would be Microsoft porting DirectX to Linux, but that’s never ever going to happen.

skullgiver, (edited )

[This comment has been deleted by an automated system]

skullgiver,

Lemmy may be developed partially by EU funding, but that doesn’t mean they’re necessarily following EU laws.

For what it’s worth, complying with the GDPR and other privacy laws is on the corporate instance owners, not on the Lemmy devs. It’s up to the instance admins to make sure things like data encryption and deletion requests are set up correctly.

That said, there are exemptions for personal use. Things become a little muddy when you get to the “personal server but donations” territory, but companies and people have very different obligations when it comes to privacy.

Lemmy needs better tooling for privacy compliance, though. As an instance admin, the only way to generate full takeouts is to go through the database manually. The export button helps a lot, but doesn’t contain all user data.

“Maybe check if the website you signed up with is following the law” is a ridiculous take. They may be overestimating their privacy rights (especially when it comes to small servers run by individuals) but that’s not how the law works.

skullgiver,

The GDPR is an EU regulation that applies directly across 27 countries, so I guess you could call it “international law”?

With frameworks such as the EU–US Data Privacy Framework (the successor to the Safe Harbour Privacy Principles and the EU–US Privacy Shield), GDPR restrictions may also start affecting American businesses, so the “international law” moniker would actually make sense.

skullgiver,

Yeah, the Fediverse is terrible for privacy. By design, I should add.

I’m pretty sure running a Lemmy server (or Mastodon server) in Europe in blacklist federation mode is illegal, as you’re exchanging data with external processors without any kind of validation of privacy arrangements. No DPAs, no adequacy decisions taken into account, data shared all over the world.

Lemmy lacks proper delete functionality (you can edit a post to replace its contents with an empty string, though). In theory you could exercise your rights and demand that the administrator deletes all your PII and instructs any data processors the PII was shared with to do the same. If they do not or cannot comply, that should be grounds for a complaint with your local DPA.

I’m not aware of any international privacy law, but this is going to be A Thing now that Meta and Tumblr and Foursquare are joining the Fediverse. My guess is that they’ll consult at least one DPA (probably the Irish one; they’re usually located there for tax reasons) for guidelines. I wouldn’t be surprised if they severely restrict Fediverse activity within EU/EEA borders because of privacy laws.

Even more interesting will be what would happen if a user sued the instance admins of a European server that’s more than just a person. Several Fediverse instances are backed by organisations, which means they need to comply with the terms of the GDPR if they operate within Europe, and the way the open Fediverse operates just isn’t compatible.

This is one of the reasons I don’t see the Fediverse lasting long. Unless you add some kind of validation system to verify that you’re exchanging data within certain borders, the entire system as it stands simply cannot be run legally by anything bigger than private individuals.

However, it’s important to note that privacy law generally only applies to PII. Your works (blog posts, comments, etc.) are probably not covered by privacy laws. Your username probably is, though.

I think the fact there’s a privacy oriented community on Lemmy is pretty hilarious. Of course, privacy is irrelevant if you choose to share information willingly, but the entire protocol is a giant privacy violation.

As an added bonus: this applies to most other federated protocols as well (Bluesky, Matrix, XMPP, you name it) unless those servers are configured to only communicate with known-compliant servers.

skullgiver,

The user should not need to request all other instances to delete their data, their account is with a single server. It’s on the server admin to ensure that all exchanged data is taken care of appropriately.

If your European server shares data with an American server, that European server has A Problem. There’s a good chance lemmy.world federating with fedia.io is already a violation. The issue isn’t as black and white of course, but the entire situation is legally dubious, to say the least.

You’re right that the Fediverse isn’t like Facebook or Google where there’s one company in control. However, the downside of that is that there are millions of tiny instances, all with legal responsibilities. There are implications about privacy law, but also porn laws, propaganda laws, hate speech laws, child porn laws, and intellectual property laws.

We’re all just kind of betting on nobody ever taking any legal action here. One lawsuit can wipe out the Fediverse as we know it.

skullgiver, (edited )

[This comment has been deleted by an automated system]

skullgiver,

In theory an EU institution could fine a non-EU company, the same way the Chinese government can fine a European company. It’d be tough to do business with outstanding legal action.

There’s another way to take the “international law” definition: many countries (China, Russia, the EEA, probably more) have laws defining where user information must be stored. A Russian company can’t just store their user data in an American data centre. Most countries do have some kind of privacy law, and I’m sure ActivityPub violates more than just the GDPR.

It’d be silly to think you could enforce the GDPR against some guy running a server from his basement in Brazil, but for the larger instances, which take donations, things can become more problematic. Servers run by the Lemmy devs could be operating safely from the communist depths of Cuba, but if they get fined, I doubt those EU sponsor funds would keep flowing towards Lemmy development.

Also interesting to note: a LOT of big Fediverse instances operate from Europe. Mastodon.social, Lemmy.world, Lemmy.ml, Kbin.org, just to name a few. Based on the map on Fediverse.observer, most of the world’s Fediverse servers are either in Europe or in the USA (with twice as many in Europe as in the USA). When it comes to server count, Fediverse law may as well be about EU-USA relations, maybe with Japan as a third large host.

skullgiver,

Good for you!

skullgiver,

It’s not impossible at all. Some basic data processing agreements and federation whitelisting are all you need.
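As a sketch of the whitelisting part: older Lemmy versions let you switch from blocklist to allowlist federation in the config file (newer versions manage this from the admin settings UI instead, and the exact schema has changed between releases, so treat this as illustrative only; the instance names are placeholders):

```
{
  federation: {
    enabled: true
    # Only federate with servers you have data processing agreements with.
    allowed_instances: ["trusted-instance.example", "partner-instance.example"]
  }
}
```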

skullgiver,

It depends on your desktop environment. For GNOME, you can try the Jiggle extension. For me, Jiggle is listed as incompatible, unfortunately.

The built-in accessibility setting for finding your mouse is the Windows-style “press left ctrl to highlight the mouse”. KDE has the same feature, though they’re working on including a macOS-like jiggle feature in the upcoming Plasma 6 release.
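For reference, that GNOME accessibility toggle can also be flipped from a terminal; to my knowledge this is the standard gsettings key for the “locate pointer” feature:

```
# Highlight the pointer when the Ctrl key is pressed (GNOME)
gsettings set org.gnome.desktop.interface locate-pointer true
```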

skullgiver,

I have my suspicions that this is some kind of deadlock issue. After fucking up my config during an upgrade (too many database connections, too many workers), the new multithreaded federation thingy seemed to become quite unstable. From what I could tell, the threads that are supposed to deal with federation all broke/shut down, but didn’t get automatically restarted, leaving some messages and instances in limbo until the entire server got restarted.

After fixing my database connection issue and reducing the number of workers, I don’t think I’ve encountered many issues since. I occasionally click the little icon above a post to check if my response has reached the community server, and usually it has, pretty quickly.

This issue sounds awful to have to debug, best of luck to the Lemmy devs in finding a fix!

skullgiver, (edited )

[This comment has been deleted by an automated system]

skullgiver,

I run HASS on an amd64 virtual machine, so it’s possible there are differences between our devices. However, because both seem to be based on maintaining a set of Docker containers, I don’t think there’s much difference (other than the ARM-specific virtual devicetree directory not existing on my machine).

If you run an up-to-date version of Docker, you should not have access to /sys/firmware by default. That’s a decision the Docker folks made because that directory contains things like bootloader configuration/information and Windows license keys.

On the Linux OS itself, there shouldn’t be any such restriction. If you can’t access these files outside of Docker, there’s something wrong, probably with your kernel. You said ssh’ing into the machine works as a workaround, so I don’t think this is the case.

What seems more likely to me is that your current host OS comes with a recent version of Docker that shields the /sys/firmware directory from Docker containers by default. If the Docker version didn’t change, then I think what you’re seeing is what you should’ve been seeing all along.

The only way I can think of that Home Assistant could have changed this behaviour is that it could’ve changed the configuration of the default containers. As you can read in the GitHub issue I linked, there’s a way to tell Docker to basically disable security features (run in privileged mode and allow access to all of sysfs). It’s possible that Home Assistant used to configure Docker in this manner, but no longer does.

Running a full application in privileged mode is normally a hack to work around other problems (i.e. not exposing the proper device paths with proper access controls and just allowing the container to do whatever and probably break out of isolation), so it could be that they enabled these workarounds to work around some unrelated issue. If the unrelated issue was fixed, and the containers no longer needed to run privileged, they could’ve disabled the workaround and broken your access to sysfs in the process.

The small Home Assistant supervisor daemon that acts as a sort of “hypervisor” (which handles updates of the other containers) does need to run in privileged mode; it needs to control Docker, so of course Docker can’t be configured to stop it from doing that. It’s a rather small service, though. However, I have noticed that on some installs, the supervisor daemon seems to lose its privileged mode due to a bug. It’s possible that this bug also affected your seemingly privileged main container. If that’s the case, running the installation script again should fix the issue.

I googled all the terms I could think of that could affect your problem with “home assistant” but when it comes to devicetree access, only your Lemmy post seems to come up. I think your data collection setup may be rather unique among HASS users, so perhaps you really are the only one affected by this, or at least the only one who’s written a post about it.

In my tests, none of the normal (unprivileged) Docker containers I’m running on my servers could access /sys/firmware. I tested this under Ubuntu, Debian, Manjaro, and Arch hosts. Accessing various firmware related virtual files worked fine outside Docker, of course, but inside Docker, /sys/firmware is empty. I don’t have an Alpine install but I’d be surprised if that’d handle this directory any different.
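You can reproduce this quickly if you have Docker installed: /sys/firmware is on Docker’s default list of masked paths, so it shows up empty in an unprivileged container, while `--privileged` removes the masks (along with most other isolation):

```
ls /sys/firmware                                      # on the host: acpi/, dmi/ (or devicetree/ on ARM)
docker run --rm alpine ls /sys/firmware               # empty: masked by default
docker run --rm --privileged alpine ls /sys/firmware  # visible again, but isolation is gone
```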

Normally, you could work around the limitations here by just marking your home assistant container as privileged and ignoring the potential security implications, as you may have unknowingly been doing. I think that’s not exactly an unacceptable risk for a dedicated Raspberry Pi (though it would be bad to default to this configuration). Unfortunately, Home Assistant’s supervisor recreates containers for you during updates, so marking the containers as privileged can be more of a pain than you’d expect. You can try looking into ways to customise the Home Assistant Docker configuration to grant these permissions, perhaps there’s a config file I’m not aware of that you can use to make sure the supervisor recreates the containers with the appropriate configuration. As stupid as it may be, I would personally look towards alternative solutions, like your SSH workaround; perhaps your script can check for an empty /sys/firmware directory and apply the workaround from there?

tl;dr: it’s a kernel bug if you’re not running your data collecting script inside Docker, otherwise it could be a home assistant bug/update that caused the change, but as of a year or two ago you’re not supposed to be able to read these files from within a Docker container anyway.

For what it’s worth, I disagree with Docker’s blanket block of /sys/firmware and I hope the issue that’s open about this change will be resolved. You don’t want to leak Windows keys, but there should be an obvious way to expose the board info without disabling basic container security…

skullgiver,

Lemmy 0.19.0 and higher have an export button in the web interface settings that’ll back up most of the stuff on your account. If your instance isn’t on that version yet, you’ll have to wait for the admins to upgrade.

I’m not sure if likes are transferred, as those are propagated across the network, but saved posts and comments should be, I think. In theory you could write a tool to download all data from your Lemmy account for earlier versions of Lemmy by using the API, but that’ll be much slower to generate.

If you’re an EU citizen, you could also demand a digitally readable data export of all your personal data from your instance admins through the rights granted to you under the GDPR and have them deal with it, but I don’t think you’ll make any friends by doing that.

Latest Fedora Workstation volume slider doesn't do anything.

I can adjust the slider up and down for the volume in the tray, but it stays at 100%. I tried using Pulse Audio to adjust as well, but it has the same problem. It’s the family 17h/19h HD Audio Controller. When I use HDMI audio output all works as it should, it’s just the internal laptop speakers that will not adjust. Anyone...

skullgiver,

Pipewire comes with a bunch of wrappers so that PulseAudio and JACK applications still work. You can use pavucontrol to change the volume settings on Pipewire without any issues.
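You can check which server you’re actually talking to: on a PipeWire system, the PulseAudio compatibility layer identifies itself in pactl’s output (exact version string will vary):

```
pactl info | grep "Server Name"
# e.g.  Server Name: PulseAudio (on PipeWire 1.0.0)
```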

People who don’t know that they’re using Pipewire may not even notice and think they’re still on Pulse, that’s how good the Pipewire integration is.

I’ve run into issues like OP describes myself, but for me this was fixed with a reboot.

skullgiver,

The same laws also apply in the EU and Russia. It’s less relevant in America, because the American government can usually pressure companies into handing over data through other means.

I don’t trust the Chinese government one bit, but the server location thing is just “we want our laws to apply when you sell services to our citizens”.

skullgiver,

You can configure Linux to be flicker free through a few kernel parameters (mainly quiet and loglevel=3, but likely also something to set the resolution early during the boot process, or to maintain the resolution the computer started with).

The groundwork was laid years ago, distros just don’t enable it by default. GRUB will even put the vendor logo back for you after selecting a boot device!
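A hedged example of what that looks like in /etc/default/grub (exact options depend on your distro and GPU driver; regenerate the config with update-grub or grub-mkconfig afterwards):

```
GRUB_TIMEOUT=0                            # skip the menu (hold Shift during boot to get it back)
GRUB_CMDLINE_LINUX_DEFAULT="quiet loglevel=3"
GRUB_GFXPAYLOAD_LINUX=keep                # carry the firmware-set video mode into the kernel
```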

skullgiver, (edited )

This is what you get when you let Google Translate (or Lens) translate an image.

This is the original (except someone added a useless red line):

https://www.lavanguardia.com/files/content_image_intermediate_filter/uploads/2021/06/01/60b5ce3cd53e0.jpeg

skullgiver,

If you mean this link: that’s a high level description of the protocol, but it leaves out important details.

For example, Google uses MLS for group chats, but the document only mentions the Signal protocol. In other words, E2EE for group chats is broken even if you manage to implement the protocol exactly as they describe.

For example, they say the client “registers with the key server” and “uploads the public key parts”. What server is that? What protocol do we use? HTTPS POST? Do we use form/multipart? Do we encode the key in PEM, or do we submit the bytes directly?

Another example: “Key material, digest, and some metadata are encrypted using the Signal session”. What do they mean, “some”? What algorithm is used to generate the digest?

The document is a nice high-level overview, but worthless if you want to implement their protocol. It basically says “we use the Signal protocol and send the Signal messages over RCS, with our own key servers. Here’s how the Signal protocol works”. If, for example, Ubuntu Touch wanted to implement this in their messenger, they’d need to reverse engineer Google’s Messages app, guided by the description in the whitepaper.

skullgiver,

Carriers wanted RCS because WhatsApp and iMessage were taking away their ability to charge by the text more than anything. I think they all dropped their subscription model by now, but no doubt they’ll start charging you when you use the more advanced features of RCS (i.e. uploading 500MB files).

Google did come up with an alternative: Google Talk, then Hangouts, then Allo, then Google Chat, and I think I missed one or two. Unfortunately, Google employees only get promoted by launching new products, so every chat team wants to launch something new, and that’s why every year Google launches a new messenger.

Nobody I know uses SMS because carriers here still charged by the SMS five years ago. MMS has been turned off entirely by the largest carriers around. Why would I pay 5 cents per message, or €3 extra for unlimited text, when I could just use the data plan I already have for free? Messages cost kilobytes of data, fractions of a cent.

As for the evolution of SMS: SMS was free at first, then relatively cheap, because it used empty space in the GSM standards allocated for basic message passing. Someone noticed that there was some leftover capacity and thought they may as well use it for a small feature. Then once people started using it, this stuff stayed in.

MMS (the part where you send anything more than 140 characters to a single phone number) is using your data connection. Well, it’s actually using a separate data connection, but it’s all packet-switched, like RCS is. That’s why SMS works when you have a single bar of reception, but MMS struggles.

As of LTE/4G, everything is packet-switched networking. Phone calls (over VoLTE) are no longer directly routed audio links; they’re voice (or video, but nobody seems to implement that part) streams over a network connection. VoWiFi (WiFi calling) is basically a VPN tunnel with two audio streams inside it. A true 4G successor of SMS would be RCS, except you’d need a separate APN that normal applications couldn’t use, reaching an internal IP address that’s not accessible over the internet.

Like MMS, RCS is entirely optional for carriers to provide. That’s why Google runs their own servers. The majority of people need Google’s servers because their carrier doesn’t offer RCS services. Google basically took a component intended to be run by carriers only and said “fuck it, we’re a carrier now”.

There can be advantages to RCS if your carrier has a server you can use, but if you’re using Google Jibe, there’s not much, really. In theory Google could expose an RCS API that would automatically enable E2EE (if available) to other apps, like how you can access RCS received messages through the same API you can use to read MMS messages on Android phones with Google’s messages app, but Google hasn’t implemented that in core Android yet. Knowing them, they probably don’t want to be locked down in their encryption system or protocol by making it part of the standard distributed to every phone manufacturer, but it’s hurting the effort to go beyond SMS.

iMessage has one advantage, which is that it can fall back to SMS when it can’t reach its servers. In theory any app could implement that. I’m not sure if iMessage does anything to encrypt its fallback SMS messages (it could), or uses a chain of them to also send the necessary metadata to put the SMS messages into an iMessage conversation, but it’s always an option.

For Android users, Google Messages (and soon RCS in general) adds the ability to communicate with iMessage users without requiring iMessage users to install another app. In the silly bubble-shaming countries, this is a major advantage. Outside those, everyone probably already has WhatsApp, Line, or Viber, or in China WeChat, so you can just use that. On the other hand, if RCS is no different than all the others and comes preinstalled on your phone, why not use it?

With the European DMA quickly approaching, there’s a good chance we’ll also gain some kind of cross-messenger interoperability in the future. Google already uses the MLS standard for group messaging, which is part of an effort to standardise messaging protocols, and when MIMI gets finalised next year, perhaps large European gatekeeper apps like WhatsApp will become interoperable with RCS and other messaging apps. In a perfect world, you could just install the messaging app you prefer, or stick with the default one, and communicate with every other messenger app out there. Kind of like how iChat/MSN used to work, but as part of the standard.

skullgiver,

I don’t really see what server side stuff an app on the dashboard of a car would need. Their Google Maps API works all the way back to the one installed with Android 2, and failing that users can install a number of navigation apps. Same with their music app (they already killed Play Music anyway).

The only thing I can think of is that they’d be killing Google Assistant on old phones, but that wouldn’t mean the rest of Android Auto couldn’t still work.

skullgiver,

Apple already does a lot of this stuff. For example, it’ll do offline face recognition for your photos while your phone is charging overnight.

Plus, Apple is ahead of the curve when it comes to performance on this stuff. You don’t want to be running Stable Diffusion on your iPhone, but smaller AI models are perfectly fine. And unlike on Android, there are huge numbers of devices with ML accelerator chips that can run these models efficiently, allowing for power consumption optimisations by not having to provide a CPU fallback.

We’ll have to see how effective this will be in practice, but Apple generally doesn’t bring these types of features to their newer devices until they’re ready for daily use.

skullgiver, (edited )

You can boot TWRP and mess with the system image from there. If you fastboot boot your-recovery-image.img instead of fastboot flash… you should be able to load TWRP once without ever writing something to disk. From there, you may be able to prevent the re-flashing of the original recovery, and then perhaps flash TWRP permanently. You’ll have to look around and figure out what’s possible, but there’s still some hope for you.
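The tethered route looks like this (twrp.img stands in for whatever recovery image is built for your specific device):

```
adb reboot bootloader        # drop into fastboot mode
fastboot boot twrp.img       # boot the image once; nothing is written to flash
# only after verifying it works, optionally make it permanent (riskier):
# fastboot flash recovery twrp.img
```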

If your native OS is maintained longer than Lineage, it may be wise to stick with that. You can also look at alternatives to LineageOS. For example, my device only has partial Lineage support, with community builds on XDA as the only source for ROMs, but crDroid has maintained images available and PixelOS even has an Android 14 ROM I can try.

For your phone, crDroid’s last release was almost exactly a year ago, but Asus updated its own ROM a month ago. I would stick with Asus’ ROM in this case, and focus on sabotaging the recovery re-flashing.

You could also try to just… build LineageOS and see what happens. Once initial porting is done for an Android version, there’s a good chance very few device-specific features will break until the next major version comes out. If you have access to a Linux server with at least 16GB of RAM and about 50-100GB of free space (Windows may work through WSL, but no promises), you can build the entire ROM yourself.

skullgiver,

I run a piece of software inspired by Mastodon’s automatic post deletes. It looks like some Fediverse servers aren’t accepting the delete requests, so placeholders end up getting left all over the place.

I’m not sure if this is a bug or if this is something servers are doing deliberately. I plan on checking back in a while and filing bug reports if I can find out what’s causing the discrepancies.
