PseudoSpock,
@PseudoSpock@lemmy.dbzer0.com avatar

How can I hide a pinned post without blocking the poster? It bothers me having this at the top of my list all the time, like some reminder on my phone I can’t ack and make go away.

cyclohexane,

I’m sorry I don’t know of any way to do that :( does it appear even when you’re browsing your main feed??

PseudoSpock,
@PseudoSpock@lemmy.dbzer0.com avatar

No, just at the top of the Linux community. I sort on New by default, looking for anything new Linux related… it’s been slow news in there of late. I’ll check if Voyager supports a method of doing it. Another user suggested Sync client. I’m usually on my desktop browser, though.

Thanks for checking. :)

cyclohexane,

I just unpinned the post. I figured there may be others bothered by this, plus it’s been enough weeks at this point. Thanks for voicing this to me :)

PseudoSpock,
@PseudoSpock@lemmy.dbzer0.com avatar

Shoot, I’m sorry. Thank you for doing that for me (and us, if there happen to be others). I do feel bad you felt forced to do that, though. :( I should just accept it is how it is until Lemmy devs a way. I’m sorry.

d3Xt3r,

Most third-party Lemmy clients should support this. For instance, if you’re on Sync, you can just swipe to hide the post (assuming you’ve configured it that way).

JinxLuckless,

I’m a disabled gamer with lots of time on my hands. I’m considering dual booting Linux Mint (or something else equally easy to transition to) with Windows 10. My plan would be to entirely swap to Linux, but keep Windows for the few games that require it. However, I have some concerns.

Do I need to worry about certain niche programs I use not being Linux compatible, or do things like Wine make that irrelevant? I’m especially curious about 3rd party game/mod launchers, like GW2Launcher and XIVLauncher, or Overwolf/Curseforge.

What about Windows store apps-- is there any way to use them while in Linux? Sounds like a dumb question, but figured I’d ask just in case. This part isn’t a deal breaker either way.

Thanks in advance for any replies!

bloodfart,

Microsoft store apps don’t work in wine.

Guild Wars used to work on Linux; idk about 2, but it seems to.

What you might consider, since you have the time, is using Linux as your main OS and running Windows in a VM inside it with GPU passthrough.

The idea is that you boot Linux all the time and when you need windows you “turn on” the virtual machine running it which gets direct control over a video card connected to a monitor.

It’s like having two computers with two monitors right next to each other except with only one computer.

The big benefit is that you get damn near 100% compatibility with even games that have windows only anti-cheat because… you’re running windows. It’s also nice to not make a choice to “switch” because windows is always right there when you need it!

The cons are that it takes a little time and learning to set up and you need to make sure your hardware works with it and that you have enough of it to make such a setup work (both onboard and discrete video cards, two monitors or a kvm switch, etc.).

But for a certifried gamer it’s a good move.

d3Xt3r,

The big benefit is that you get damn near 100% compatibility with even games that have windows only anti-cheat because… you’re running windows.

This isn’t necessarily true - most anti-cheat programs detect VMs, and depending on the game, some may prevent you from launching the game (eg games using Vanguard), others may flag you and cause you to get kicked out of the game, or even get you banned (Battleye is pretty notorious for this, from what I hear).

Now there are some tricks you can use, such as editing the XML for your VM to mimic your host machine’s SMBIOS data / vendor strings etc, but it’s a bit of work and can be hit-or-miss.
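
Roughly, the kind of XML edit I mean looks like this (just a sketch - the entry values are made-up examples; you’d copy the real ones from your host’s dmidecode -t system output, editing the domain with virsh edit):

    <!-- fragment of the libvirt domain XML -->
    <os>
      ...
      <smbios mode='sysinfo'/>
    </os>
    <sysinfo type='smbios'>
      <system>
        <entry name='manufacturer'>ASUSTeK COMPUTER INC.</entry>
        <entry name='product'>ROG STRIX B550-F GAMING</entry>
        <entry name='serial'>123456789</entry>
      </system>
    </sysinfo>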

Of course, the best option would be to not support games which use invasive anti-cheat in the first place. :)

And if you’re on nVidia, it can be a bit of a pain to get it all going, since you need to patch your GPU’s vBIOS. You can see how much work is involved in setting it all up over here: gitlab.com/Mageas/single-gup-passthrough - so not for the faint-hearted. :)

cc: @JinxLuckless

bloodfart,

Good looking out! I don’t game and set mine up a long time ago so those newer systems are beyond my knowledge.

JinxLuckless,

Thanks so much for the info, both of you!

I do in fact have Nvidia… bummer! I’m not too worried about AntiCheats that don’t support Linux since that mostly seems to be for PvP-heavy games, which are not usually a thing I’m into. Ark & Rust were about the only games like that I liked, & I played on PvE servers. But I do think some MMOs use AntiCheats, right? Though for sure not GW2 or FFXIV, which are my current obsessions.

My current plan is, since support for Windows 10 is being dropped in October 2025, maybe I’ll upgrade to Windows 11 so I can keep getting security updates, and then dual-boot to Linux, but have Linux as the main. Like 90% of the time I’ll be in Linux Mint (or whichever one I pick), and then just swap over to Windows briefly if/when I need to.

The VM plan sounded really awesome, but I think the nVidia fix looks beyond my ability. I’m someone who can’t code & only know like 3 DOS commands, but can set up a Minecraft modpack (without changing any recipes) & upload the files to servers others run, or otherwise handle setting up mods for games in general. I’m saying all that to try and give some idea of my expertise or lack thereof. I’d consider myself a low-end power user, maybe? So given that, does this plan sound reasonable, re the dual booting & mostly swapping to Linux Mint (or whatever distro)?

d3Xt3r,

Yes, I mostly agree with your conclusions. MMOs do generally employ anti-cheat, so I wouldn’t attempt running them in a VM unless you want to take a risk. So dual-booting is an acceptable compromise.

The good news though is that gaming on Wine keeps improving every day. Of the games you’ve mentioned, only Rust isn’t compatible with Linux (due to EasyAntiCheat), but the others are gold rated - and GW2 is even platinum rated!

You can use ProtonDB to check game compatibility, and the user reports are usually helpful to see if people encountered any issues or had to employ any tweaks to get it going. But do keep an eye on this space, as Wine/Proton keeps improving constantly, so you never know - maybe some day even Rust might work!

Edit: Actually, reading the reviews for Rust, looks like you can actually get it to work if you connect to a server that doesn’t use anti-cheat!

JinxLuckless,

Oh wow, nice! I especially appreciate the ProtonDB link-- I’d known about Proton, but not ProtonDB. :) And that’s awesome, re GW2 especially!

I’m thinking I’m going to try Pop!_OS… I was reading reviews etc. of various gaming/newbie Linux stuff on It’s FOSS (itsfoss.com) and they’re a big fan, and Pop seems pretty readymade for gaming stuff. I plan to put it on a flash drive & test it before dual booting to it, and if I’m not a fan, Linux Mint it is! I tried that once before, years ago.

MojoMcJojo,

I want to turn a Microsoft Surface Go 2 into a Kali Linux machine. I would appreciate any guidance pulling this off. I want to use it for learning IT security stuff, partly for work but mostly for curiosity. Occasionally I run across malware and trojans, and I want to look under the hood to see how they work. I’m assuming Kali is the best tool for the job and that Lemmy is the place to go for tooling around with tools.

d3Xt3r, (edited )

Kali is a pentesting distro, it’s not designed for malware analysis. The distro you’d want to use for malware analysis is REMnux, but it’s mostly meant for static analysis. Static analysis is fine, but you may not be able to dig deep unless you’re familiar with decrypting code and using tools like Cutter, Ghidra, EDB etc. for debugging. Naturally you’d also need intimate low-level coding experience, familiarity with assembly language and/or Win32 APIs (or whatever APIs the malware is using) - so this isn’t an area a casual security researcher can just get into without that background. But you can at least do some beginner-level analysis, like examining the PE headers, running automated tools which employ signature-based detection, or analysing the strings and URLs embedded in the malware; stuff like that.
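
As an example, that kind of beginner-level static triage needs nothing more than standard command-line tools (work on a copy of the sample, ideally on an offline box):

    file sample.exe                                  # confirm it is a PE and which architecture
    sha256sum sample.exe                             # hash it, e.g. to look it up on VirusTotal
    strings -n 8 sample.exe | grep -Ei 'http|\.dll'  # embedded URLs / imported DLL names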

Dynamic analysis is far easier to get into and more “fun”, but the problem of course is that, with most malware being made for Windows, Linux is kinda irrelevant in this scenario. But you could still run Linux as a VM host and run the malware inside a Windows VM. The problem with running malware in VMs though is that these days any half-decent malware will be VM/context aware and may evade detection, so for accurate results you’d really want to run the malware on a real machine, and use tools like procmon, IDA, Wireshark etc for analysis. But again, decent malware may be able to evade tools like procmon, so it can get quite tricky depending on how clever your malware is. You’d normally employ a combination of both static and dynamic analysis.

Industry pros these days often use cloud-based analysis systems which can account for many such scenarios, such as Joe Sandbox, Any.Run, Cuckoo etc. These offer a mix of both VM and physical machine based analysis. You can use these services for free, but there are some limitations of course. If you’re doing this for furthering your career, then it’s worth getting a paid subscription to these services.

Coming back to Kali Linux - it’s not something you’d want to install permanently on a physical machine, as it’s meant to be an ephemeral thing - you spin it up, do your pentesting, and then wipe it. So most folks would use it inside a VM, or run Kali from a Live USB without installing it.

There are also alternatives to Kali, such as ParrotSec and BlackArch, but really from a pentesting toolbox point of view, there’s not much of a difference between them, and it doesn’t really matter (unless you’re a Linux nerd and like the flexibility Arch offers). Most industry folks use Kali mainly, so might as well just stick to it if you want to build up familiarity in terms of your career.

As for your Surface Go - you could install a normal daily-driver Linux distro on it if you really want to, and then run Kali under KVM - which is personally how I’d do it. Running Linux on Linux (KVM) is pretty convenient and has a very low performance overhead. You can also employ technologies like ballooning and KSM to save RAM, if your system is low on RAM.
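
For example, turning on KSM is just a sysfs toggle (a sketch; mainstream distro kernels ship with KSM built in):

    echo 1 | sudo tee /sys/kernel/mm/ksm/run     # start merging identical memory pages across VMs
    cat /sys/kernel/mm/ksm/pages_sharing         # how many pages are currently being shared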

MojoMcJojo,

Thank you for such an amazing response. You’ve given me so many great threads to pull on. I’m going to have a great time diving into all this. Sincere thank you.

venturi,

Is there any performance difference between KDE and GNOME?

d3Xt3r,

Nope.

jaybone,

Question about moving from Ubuntu to Debian - Package updates and security updates…

On Ubuntu, I seem to get notifications almost every week about new package updates. (Through the apt UI)

On Debian, I don’t see this.

I can run apt update and apt upgrade

On Ubuntu, I see this pull a bunch of package data from various package repo URLs.

On Debian, I only see this pulling package data from two or three repo URLs at debian.org

Mainly I am concerned about security updates and bug fixes. Do I need to manually add other repo sources to the apt config files? Or does debian update those repos regularly?

lemmyreader,

Are you using an Ubuntu LTS (Long Term Support) release or not?

  • I’m subscribed to Ubuntu and Debian announcements via email, and I see Linux kernel updates much more often for Ubuntu than for Debian. It makes me wonder whether the Debian kernel is slimmed down, and whether Ubuntu is focusing on enterprises with their kernel.

lemmyvore,

Debian favors stability over anything else so the packages are thoroughly tested before release and then only upgraded for security issues, until the next release 2 years later when everything gets an update.

Ubuntu favors releasing more recent package versions constantly throughout the period between major releases.

You can install packages from the backports repo on Debian if you occasionally need a more recent version of a package.
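
For example, on Debian 12 (bookworm) that looks something like this (a sketch):

    # /etc/apt/sources.list.d/backports.list
    deb http://deb.debian.org/debian bookworm-backports main

    # then pull just the package you need from backports:
    sudo apt update
    sudo apt install -t bookworm-backports <package>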

Another option is to install apps from Flatpak. The apps in Flatpak have their own separate dependency system and can be used on any distro.

If neither of these works for you and you find yourself constantly wishing packages were newer you may want to consider a different distro.

Some people switch their Debian from stable to testing to get updates similar to Ubuntu’s. You can try that, but be aware you may experience the occasional issue.

Be wary of adding external repos (ones that don’t point to debian.org) because they can mess with the package dependencies and prevent you from upgrading to the next version when it comes out. (This also applies to Ubuntu.)

bloodfart,

They’re updated regularly. Take a look at your /etc/apt/sources.list on both. Debian has everything coming from their servers, Ubuntu has a bunch more.
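
For reference, a stock Debian 12 sources list looks roughly like this (components trimmed to main for brevity) - note the dedicated security repo, so no extra sources are needed for security updates:

    # /etc/apt/sources.list
    deb http://deb.debian.org/debian bookworm main
    deb http://deb.debian.org/debian bookworm-updates main
    deb http://security.debian.org/debian-security bookworm-security main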

If you’re going Ubuntu -> Debian, be prepared to switch to testing, or enable some packages from testing, or even use an alternate install method for some software (yt-dlp).

wolf,

I wonder if you are asking two different questions:

  1. Why don’t you receive notifications about updated packages?
  2. Security and bug fixes

For 1, it depends which desktop environment you use: GNOME/KDE have these update notifications out of the box, for other DEs (Xfce, LXDE, etc.) you might need to enable them by installing synaptic or similar.

For 2, Debian stable does not ship bug fixes, but it does ship security fixes. I highly recommend subscribing to Debian’s security mailing list, especially for security fixes concerning browsers and other stuff.

Edit: I have enabled automatic updates and I still receive regular notifications via Gnome Software, at least once per week.
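
For anyone who wants the automatic updates bit on Debian, the usual route is unattended-upgrades (a sketch):

    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic security-update job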

Theharpyeagle,

What is the practical difference between Arch and Debian based systems? Like what can you actually do on one that you can’t on the other?

static_dragon,

You can “do” the same things in Debian as you can in Arch; the main difference is packaging philosophy. Debian packages are older and more stable, while in the Arch world you typically have the newest version of a package within a few weeks of its release (the caveat being that breakage is a bit more likely). Arch also has user repositories (the AUR) where the community can contribute unofficial packages.

possiblylinux127,

You can do pretty much the same things on either. The difference is that one is a rolling release with fresh, fairly untested packages, and the other is a fixed stable system with no major changes happening.

cyclohexane,

To summarize: the major difference is that Arch Linux gives you the latest versions of all programs and packages. You can update anytime, and you’ll get the latest versions every time for all programs

Debian follows a stable release model. Suppose you install Debian 12 (bookworm). The software versions there are locked, and they’re usually not the latest versions. For example, the Linux kernel there is version 6.1, whereas the latest is like 6.9 or something. Neovim is version 0.7, whereas the latest is 0.9. Those versions will remain this way unless you upgrade to, say, Debian 13 whenever it comes out. But if you do your regular system updates, it will only apply security updates (which do not change the behavior of a program).

You might wonder, why is the Debian approach good? Stability. Software updates = changes. Changes could mean your setup that was previously working suddenly isn’t, because the program changed behavior. Debian tries to avoid that by locking all versions and making sure they are fully compatible. It also ensures that, in doing this, you don’t miss out on security updates.

captain_aggravated,
@captain_aggravated@sh.itjust.works avatar

The practical difference is the package manager; Debian-based systems use dpkg/APT with the .deb package format, Arch uses Pacman with .pkg.tar.zst packages.
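
For a concrete feel of the day-to-day difference, the equivalent commands look roughly like this (vlc is just an example package):

    # Debian/Ubuntu (APT, .deb):
    sudo apt update && sudo apt upgrade   # refresh package lists, then upgrade everything
    sudo apt install vlc                  # install a package

    # Arch (Pacman, .pkg.tar.zst):
    sudo pacman -Syu                      # refresh package lists and upgrade everything
    sudo pacman -S vlc                    # install a package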

Debian-based distros use a stable release cycle, so there are version numbers. The ecosystem is maintained for each version for an extended period of time, so if you have a workflow that requires a specific era of software, you can stick with an older version of the OS to maintain compatibility. This does not necessarily mean the software remains unpatched; security and stability patches are applied, which tends to mean the system is stable. Arch-based distros use a rolling release - basically what Microsoft said they were going to do with Windows 10 being the “last” version of Windows and just keeping it updated. Upside: newest versions of packages all the time. Downside: newest versions of packages all the time. You get the latest features, and the latest bugs.

Debian-based distros don’t have a unified method of distributing software beyond the standard repositories. Ubuntu tried with PPAs, which kind of sucked. Arch has the Arch User Repository, or AUR.

Arch itself is designed to be an a la carte operating system. It starts out as a fairly minimal environment and the user installs the components they want, and only the components they want, though many Arch-based distros like Manjaro and EndeavourOS offer pre-configured images. Debian was one of the earliest distros shipped ready to go as a complete OS; I know of no system that offers the “here’s a shell and a package manager, install it yourself” experience on the Debian family tree.

But given an installed and configured Debian and Arch machine, what can one do that the other can’t? As in, can it run [application]? Very little.

Theharpyeagle,

Thank you for this comprehensive writeup! I’m a big Mint user and like not having to mess too much with the OS itself, but I’ve run into a few issues where the stable release of something doesn’t have newer features I want. I might try Arch out on a spare laptop.

lemmyvore,

I recommend trying Endeavour, Garuda or Manjaro (I’ve ordered them from minimum to maximum hand-holding). Plain Arch is a bit too rough for Linux beginners.

shaytan,
@shaytan@lemmy.dbzer0.com avatar

Is explicit sync a good enough solution to make Wayland gaming with Nvidia a reality (+ remove window flickering like some people claim it will)? It’s the last obstacle I find now trying to move my main PC to Linux, and I don’t really want to use X11.

PS: Lesson learned, next time I’ll get an AMD GPU.

lemmyvore,

If you really want to switch there isn’t really any reason to not use X.

If you really want to use Wayland I guess it will take a while longer. It’s not really 100% foolproof even if you get AMD. The vast majority of apps on Linux are designed for X and XWayland isn’t completely ready either.

humbletightband,

[interview question]

Assuming a user is root, can this user create a file that couldn’t be read or deleted by other roots? …by the same user?

cyclohexane,

What do you mean by other roots? Isn’t root only one?

ransomwarelettuce, (edited )

I think the POSIX standard only has one root user, however many users can have root privileges.

So it’s probably a trick question. However, any user, even without privileges, can create a file which others can’t read - maybe even root itself? I think so, not sure though.

mexicancartel,

There is only one root user. Other users may have root access though. Any user can create files only readable by them, but the root user can override all of that, change the password of any user, etc. A user with root access can do all sorts of things the root user does.

frostprophet,

I feel like I’m getting performance below what I’ve been getting on Windows for the same games when I boot into Linux. A top-of-the-head example is COD WWII: the gameplay and cutscenes stutter a lot, but it runs fine on Windows with the same hardware. I’ve checked that my graphics card is being used by Linux, but I just feel like I’m missing some settings that would optimise it.

I’m running Linux Mint with an NVIDIA GTX 1070. I know there are some issues with NVIDIA and Linux, but would that be the full reason?

d3Xt3r,

I’m running Linux Mint

I’d say that’s your main issue. Mint isn’t really optimised for gaming, as it uses an old and non-gaming optimised kernel, and most packages in general are pretty old. When it comes to Linux and gaming, the #1 rule is to try to get the latest kernel and graphics drivers. You could install a more recent and optimised kernel on Mint, but if you do that you risk breaking things, which may especially happen when you do your next OS upgrade. So I’d recommend switching to either a gaming-optimised distro such as Bazzite, or a distro which has the latest packages and is optimised for performance, such as CachyOS (although I wouldn’t recommend it if you’re still very new to Linux, since it’s based on Arch - if you’re new to Linux then Bazzite would be a better option).

The second issue is - which version of Proton are you using? If you’re using the official Proton, I’d recommend using Proton-GE instead, as it includes a lot of extra patches and tweaks not present in the official Proton, plus more up-to-date components like DXVK. You can install Proton-GE easily using ProtonUp-Qt. Once you’ve installed Proton-GE, go to the game’s properties in Steam and change the compatibility tool to Proton-GE.

frostprophet,

Thanks for the recommendations! I was already kind of considering switching to Fedora so Bazzite sounds good, although CachyOS sounds interesting too.

Kangy,
@Kangy@lemmy.dbzer0.com avatar

I’m also currently running Linux Mint but want to start gaming on Linux as well. Given what you’ve said it would seem that I need to consider distro hopping.

I have a “working” knowledge of Arch, I say working loosely as I have a home server running Manjaro and kinda maybe know what I’m doing with it and I’m comfortable following guides etc.

Which of the 2 distros you mentioned would you recommend? CachyOS looks great on the surface, but Bazzite definitely seems to cater to gaming and its website heavily leans that way.

d3Xt3r,

I think you’d be fine with either, but in the end it comes down to how “hands-off” you want to be, or how much customisability, flexibility and performance you’re after. Unlike Manjaro, Cachy is closer to Arch, which means things may on rare occasions break or require manual intervention (you’ll need to keep up with the Arch news). Bazzite on the other hand is the polar opposite, being an immutable distro - updates are atomic (they either work or they don’t, and if an update is no good, you can easily roll back to a previous version from GRUB); but this also means you lose some customisability and flexibility - you can’t run a custom kernel or mess with the display manager (logon screen) etc, and you’ll need to mostly stick to installing apps via Flatpak or Distrobox.
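
To illustrate the atomic-update side, the workflow is roughly this (a sketch, assuming Bazzite’s rpm-ostree tooling):

    rpm-ostree upgrade      # stage the new image; it is applied on the next boot
    rpm-ostree status       # list the deployments you can boot into
    rpm-ostree rollback     # make the previous deployment the default again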

Overall, if you’re after a console-like experience that just works™, then choose Bazzite. On the other hand, if you’re a hands-on type of person who likes to fine-tune things and is after the best possible performance, choose CachyOS.

Kangy,
@Kangy@lemmy.dbzer0.com avatar

Thanks for the detailed response! I think CachyOS is the way to go for me. I like to be more hands on and have more flexibility

cyclohexane,

I do not have a lot of knowledge on this, but I suspect that nvidia does not support the GTX1070 that well on Linux.

AMD supports Linux better. As for Nvidia, newer cards have a bit better support, but I bet there’s still some disparity.

wolf,

How do I enable DNS over HTTPS or DNS over TLS for all connections in NetworkManager in Debian 12?

It is easy to configure custom DNS servers for all connections via a new .conf file in /etc/NetworkManager/conf.d with a servers=8.8.8.8 entry in the [global-dns-domain-*] section.

How can I configure NetworkManager to use DNS over HTTPS or DNS over TLS via a conf file?
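
For context, the plain-DNS version of that config looks like this (the filename is arbitrary):

    # /etc/NetworkManager/conf.d/dns-servers.conf
    [global-dns-domain-*]
    servers=8.8.8.8

    # apply it:
    sudo systemctl reload NetworkManager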

d3Xt3r,

NetworkManager doesn’t support DoH, DoT or other recent protocols like DoQ and DoH3. You’ll need to set up a local DNS resolver / proxy which can handle those protocols. You could use dnsproxy for this. Once you set it up, you can just use “127.0.0.1” as your DNS server in NetworkManager.

Btw, if possible I’d recommend sticking to DoH3 (DNS-over-HTTP/3) or DoQ (DNS-over-QUIC) - they perform better than DoT and vanilla DoH, and are more reliable as well.
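
A minimal sketch of that setup (the flag names and upstream URL schemes are from dnsproxy’s README as I remember it, so double-check them):

    # listen on localhost and forward to a DoQ upstream:
    dnsproxy -l 127.0.0.1 -u quic://dns.adguard-dns.com
    # or a DoH3 upstream:
    dnsproxy -l 127.0.0.1 -u h3://dns.google/dns-query
    # then point the servers= entry in NetworkManager at 127.0.0.1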

wolf,

Thanks a lot for your answer! :-)

crazyCat,

I use Kali Linux for cybersecurity work and learning in a VM on my Windows computer. If I ever moved completely over to Linux, what should I do, can I use Kali as my complete desktop?

foremanguy92_,

No, never! Do not use Kali as your main OS. Choose a Debian, Fedora, RHEL (not really designed for this use case) or Arch-based system instead.

neidu2,

Short answer: yes

Longer answer: Kali is not intended to be a normal desktop OS. It will work, but it might be a bit limiting.

If you want a desktop Linux with a lot of the security stuff included, you might want to check out ParrotSec. I used that on my work laptop for a few years.

captain_aggravated,
@captain_aggravated@sh.itjust.works avatar

Kali Linux is a pretty specific tool, it’s not suited for use as a daily driver desktop OS.

It is my understanding that Kali is based on Debian with an Xfce desktop, so if you want a similar experience (same GUI, same package manager) in a daily-driver OS, you can start there.

lemmyreader,

Guess you mean replicate your existing install from the VM.

From there, install Kali Linux, and restore the relevant parts.

crazyCat,

Oh very cool, thank you. I meant more simply whether Kali is decent as a daily-driver desktop, rather than just as a specialized toolkit.

lemmyreader,

Kali Linux is based on Debian, so I guess you’ll be fine.

Presi300,
@Presi300@lemmy.world avatar

Short answer: no

Long answer: Kali, as a desktop, is just half-broken Debian with a theme and a bunch of bloatware preinstalled… Even if your host is Linux, you should still run Kali in a VM.

bloodfart,

Kali is a very bad choice as a desktop or daily driver. It’s intended to be used as a toolkit for security work and so it doesn’t prioritize the needs of normal desktop use in either package management, defaults or patch updates.

If you ever switched to Linux, pick a distribution you can live with and run kali in a vm like you’re doing now.

Think of it this way: you wouldn’t move into a shoot house, mechanic’s garage or escape room, would you?

baseless_discourse,

Do I get new puzzles every week if I live in an escape room?

crazyCat,

Ok, it just seems funny to need to use a Kali VM when I’d already be on Linux, but no big deal I guess.

bloodfart,

I used it as an installed desktop environment at a workbench in a non-security context for a year. It was a pain in the butt in like a million ways.

Even when I used the tools kali ships with regularly I either dual booted or ran it inside a vm.

If you wanna understand why every time someone asks about using kali as a daily driver even on their own forums, a bunch of people pop up and say it’s a bad idea, give it a shot sometime.

crazyCat,

Ha no worry, I believe all you guys now and wouldn’t do it, and would just use a VM. Thank you for the insight.

d3Xt3r, (edited )

You can just install the tools you want on your host OS. But if it’s like hundreds of tools then yeah makes more sense to run it inside a VM, just so it’s all nice and separate from your daily-driver. And you may think it’s funny but the performance of Linux-on-Linux is actually pretty good, and there isn’t much of a RAM/CPU overhead either. And if you’re really strapped for RAM, you could use KSM (kernel samepage merging) and ballooning.

Many Linux users use VMs (or containers) for separate workloads, and it’s a completely normal thing to do. For instance, on my homelab box, my host OS is my daily-driver, but all my lab stuff (Kubernetes, Ansible etc) runs under VMs. The performance is so good that you won’t even notice/care that it’s running in a VM. This is all thanks to the Linux/KVM/QEMU/libvirt stack - if it were something else like VMware or VBox, it’d be a lot clunkier and you could feel that it’s running in a VM, but that’s not the case with KVM.

crazyCat,

Awesome good to know, thank you for the info!

mandos,

What is the difference between Wayland and Xorg, and why would you choose one over the other?

baseless_discourse, (edited )

I use Xorg on my desktop, because Nvidia cards don’t have good support for my applications on Wayland. Specifically, typing in Electron apps will jitter. I will switch to Wayland once this problem is solved.

I use Wayland on my laptop because it is more secure and supports one-to-one gestures, which is crucial for the trackpad.

d3Xt3r,

This was answered previously in this thread: lemmy.ml/comment/10140174

sag, (edited )

Why does software on Linux depend on a particular version of a library? Why not just say it depends on that library, regardless of version? It becomes a pain in the ass when you’re using ancient software that requires an old version of a library your system only ships a newer version of, so you have to create symlinks for every library to match the old version.

I know that sometimes a newer version of a library is not compatible with the software, but still. What can we do as software developers to fix this problem? Or as end users?

nyan,

You can sometimes build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably “ABI stability”.
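
You can see that version coupling on any binary you already have (vim is just an example here):

    readelf -d /usr/bin/vim | grep NEEDED   # the exact sonames recorded at link time
    ldd /usr/bin/vim                        # what the dynamic linker resolves them to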

AMDIsOurLord,

That’s the same on ANY platform, but Windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it’s a little bit more transparent. (edit: unless you do the stupid shit and link statically, but then again, in the brave new world of Rust and Go, having 500 MB binaries for a 5 KB program is acceptable)

Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library’s behavior with an update, your app won’t work anymore unless you go and update your own code and find everything that’s broken.

So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time, or - something that happens sometimes with FOSS - the app goes somewhat unmaintained, or in some cases the app customizes the library so much that you just can’t update it anymore. So you pin a particular version of the library.

PlexSheep,

Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.

As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.

sag,

Hey, thanks! I have one more question: is it possible to ship all the required libraries with the software?

d3Xt3r,

In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you’re using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they’d be spitting out an ELF, and RPATH is part of the ELF spec.
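
A rough sketch of what that looks like with gcc (libfoo and the libs/ directory are made-up examples):

    # record an rpath pointing at a lib directory shipped next to the binary
    gcc main.c -o myapp -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'

    # or the same thing via the environment variable mentioned above
    LD_RUN_PATH='$ORIGIN/libs' gcc main.c -o myapp -L./libs -lfoo

    # verify what got embedded
    readelf -d myapp | grep -E 'RPATH|RUNPATH'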

BUT I agree with what @Nibodhika wrote - this is generally a bad idea. In addition to what they stated, a big issue could be the licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under GPL as well - which you may or may not want. And if you’re using several different libraries, then you’ll have to verify each of their licenses and ensure that you’re not violating or conflicting with any of them.

Another issue is that the libraries you ship with your program may not be optimal for the user’s device or use case. For instance, a user may prefer libraries compiled for their particular CPU’s microarchitecture for best performance, and by forcing your own libraries, you’d be denying them that. That’s why it’s best left to the distro/user.

In saying that, you could ship your app as a Flatpak - that way you don’t have to worry about the versions of libraries on the user’s system or causing conflicts.

sag,

Thanks for letting me know about the licensing issue.

Bienenvolk,

That is possible indeed! For more context, you can look up “static linking vs dynamic linking”

Tldr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary searches for libraries in your system’s library path and loads them dynamically at runtime.
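
A quick way to see both in action (assumes gcc and the static libc package are installed):

    # dynamic linking (the default): the binary only records that it needs libm.so
    gcc main.c -o app -lm
    ldd app            # lists the shared objects pulled in at runtime

    # static linking: the library code is copied into the binary itself
    gcc -static main.c -o app_static -lm
    ldd app_static     # prints "not a dynamic executable"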

sag,

Thanks

PlexSheep,

Absolutely! That’s called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.

jack, (edited )

Doesn’t that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient

PlexSheep,

It’s true that binaries get inflated as a result, but with today’s hard drives it’s not really a problem.

sag,

Yea, that’s why I am learning Rust, but I didn’t know it’s called static linking - I thought that’s just how Rust works LMAO. And thanks again.

PlexSheep,

No problem. Good luck with your rust journey, it’s imo the best programming language.

Nithanim,

AppImage might also be a way.

Nibodhika,

It is, that’s what Windows does. It’s also possible to compile programs to not need external libraries and instead embed all they need. But both of these are bad ideas.

Imagine you install Dolphin (the KDE file manager): it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader): it will require lots of the same libraries. Extend that to the hundreds of programs that are installed on your computer and you’ll have easily doubled the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries, then when a bug/security flaw was found in one of the libraries each program would need to ship an update, and if one didn’t you might be susceptible to bugs/attacks through that program.

sag,

Thank you so much for the explanation.

beeng,

To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.

If v0.5.0 has features A, B and C and one of them is then changed, then under semantic versioning (which most software follows these days) that’s a breaking change, and it would therefore get promoted to v1.0.0.

If a new feature D was added but A, B and C didn’t change, it would have been v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.

PlexSheep,

When having a breaking change pre 1.0.0, I’d expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.

eugenia,
@eugenia@lemmy.ml avatar

Because it’s not guaranteed that it’ll work. FOSS projects don’t run under strict managerial definitions where they have to maintain compatibility in all their APIs etc. They are developed freely. As such, you can’t really rely on full compatibility.

wolf,

IMHO the answer is social, not technical:

Backwards compatibility/legacy code is not fun, so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.

The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)

HATEFISH, (edited )

How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to stop my microphone from playing in my own headphones on every reboot. Surely I can delegate that to the system somehow?

wolf,

You got some good answers already, here is one more option: create a *.desktop file that runs sudo alsactl, and copy that *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)
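
Roughly like this (a sketch - “you” is a placeholder username, and the alsactl path may differ on your distro; check with which alsactl):

    # /etc/sudoers.d/alsactl  (edit with: sudo visudo -f /etc/sudoers.d/alsactl)
    you ALL=(root) NOPASSWD: /usr/sbin/alsactl restore

    # ~/.config/autostart/alsactl-restore.desktop
    [Desktop Entry]
    Type=Application
    Name=Restore ALSA state
    Exec=sudo /usr/sbin/alsactl restore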

IMHO the cleanest option is systemd.

Hiro8811,

Try pavucontrol, it has an option to lock settings, plus it’s a neat app to have around when you need to customise audio settings. You could also add your user to the group that has access to the mic.

cyclohexane,

Running something at start-up can be done multiple ways:

  • look into /etc/rc.d/rc.local
  • systemd (or whatever init system you use)
  • cron job (see the @reboot sketch below)
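
For the cron route, a minimal sketch (root’s crontab, so no sudo needed; the alsactl path may differ on your distro):

    # added via: sudo crontab -e
    @reboot /usr/sbin/alsactl restore
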
baseless_discourse, (edited )

If you run a systemd distro (which is most distros: Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.

The service file /etc/systemd/system/<your service>.service should look like:


    [Unit]
    Description=some description

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

then


    systemctl enable <your service>.service --now

you can check its status via


    systemctl status <your service>.service

you will need to change <your service> to your desired service name.

For details, read: linuxhandbook.com/create-systemd-services/

HATEFISH,

This one seemed perfect, but nothing lasts after the reboot for whatever reason. If I manually re-enable the service it’s all good, so I suspect there’s no issue with the below - I added the After=multi-user.target line after the first time it didn’t hold after a reboot.


    [Unit]
    Description=Runs alsactl restore to fix microphone loop into headphones
    After=multi-user.target

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

When I run a status check it shows it deactivates as soon as it runs


    Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
    Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.

baseless_discourse, (edited )

Does after=… solve the problem or cause the problem? Sorry, I cannot parse what you were trying to say.

HATEFISH,

It seems to have no effect either way. Originally I attempted it without, then when it didn’t hold after a reboot and some further reading, I added the After= line in an attempt to ensure the service isn’t trying to start before it should be possible.

I can manually enable the service with or without the After= line with the same results of it actually working. Just doesn’t hold after a reboot.

baseless_discourse, (edited )

That is interesting. BTW, I assume that command doesn’t run forever, i.e. it terminates relatively soon? That could be why the service shows as deactivated - not because it didn’t run. You can try adding ; echo “command terminated” at the end of ExecStart to see whether it terminated; you can also echo the exit code to debug.

If the program you use has a verbose mode, you can also try turning it on to see if there is any error. EDIT: indeed, alsactl restore --debug

There is also a possibility that this service runs before the device you need to restore is loaded, so it won’t have any effect.

On a related note, did you install the program via your package manager, and what distro are you running? Sometimes SELinux will block a program from running - but then the error message would say permission denied, instead of your message.

Tovervlag,

Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or do they still serve a purpose?

Plasma,
@Plasma@lemmy.ml avatar

They are TTYs - they’re like terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them if I somehow mess up my display configuration and need to access a terminal, but can’t launch my DE/WM.

ArcaneSlime,

If your system is borked sometimes you can boot into those and fix it. I’m not yet good enough to utilize that myself though, I’m still fairly new to linux too.

mexicancartel,

Useful if your GUI breaks or if you uninstall all your terminal emulators.

bloodfart,

Each one is a virtual terminal and you can use them just like any other terminal. They exist because the easiest way to put up some kind of interactive display is to just write text to a framebuffer, and that’s exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl-Alt-F<number>. You can add more or disable them altogether if you like.
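
If you want to poke at them, a couple of examples (the logind bit assumes a systemd distro):

    sudo chvt 3        # switch to VT 3 from a script/terminal, same as Ctrl-Alt-F3

    # the number of auto-spawned VTs is set in /etc/systemd/logind.conf:
    #   [Login]
    #   NAutoVTs=6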

Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.

I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.

I used to treat them like multiple desktops.

With libcaca I was even able to watch movies on it without x.

I still use them when x breaks, which did happen last year to my surprise. If your adapter supports a vesa mode that’s appropriate to your monitor then you can use one with very fresh looking fonts and have everything look clean. Set you a background image and you’re off to the races with ncurses programs.

d3Xt3r, (edited )

To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.

In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.

With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term “TTY” persisted, however, now referring to these electronic terminals.

When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.

This is also where the term “terminal” or “console” originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a “terminal emulator” - and now you know why they’re called an “emulator”).

With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don’t force a reset, instead try jumping into a different TTY. And if that fails, there’s REISUB.
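
(REISUB relies on the kernel’s magic SysRq keys; a quick sketch of checking/enabling them:)

    cat /proc/sys/kernel/sysrq     # current SysRq mask; many distros restrict it by default
    sudo sysctl kernel.sysrq=1     # enable all SysRq functions for the current boot
    # then, on a frozen system: hold Alt+SysRq and press R, E, I, S, U, B, pausing between keys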

Tovervlag,

thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)

Presi300,
@Presi300@lemmy.world avatar

Mostly for headless systems, servers and such. That, and debugging - if your desktop breaks/quits working for some reason, you need some way to run multiple things at once…
