HATEFISH, (edited )

How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to stop my microphone from playing in my own headphones on every reboot. Surely I can delegate that to the system somehow?

wolf,

You got some good answers already, here is one more option: Create a *.desktop file to run sudo alsactl, and copy the *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)

IMHO the cleanest option is systemd.
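
A minimal sketch of the *.desktop approach (file name, paths, and the sudoers rule are illustrative; adjust them for your system):

    # ~/.config/autostart/alsactl-restore.desktop
    [Desktop Entry]
    Type=Application
    Name=Restore ALSA state
    Exec=sudo alsactl restore

    # added via visudo: lets this one command run without a password prompt
    # (the alsactl path may differ; check `which alsactl`)
    yourusername ALL=(root) NOPASSWD: /usr/sbin/alsactl restore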

Hiro8811,

Try pavucontrol, it has an option to lock settings plus it’s a neat app to call when you need to customise settings. You could also add the user to the group that has access to the mic.

cyclohexane,

Running something at start-up can be done multiple ways:

  • look into /etc/rc.d/rc.local
  • systemd (or whatever init system you use)
  • cron job (see the @reboot sketch below)
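
For the cron route, a sketch of an @reboot entry in root’s crontab (added with sudo crontab -e; the alsactl path is an assumption, check `which alsactl`):

    # run once at every boot, as root
    @reboot /usr/sbin/alsactl restore
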
baseless_discourse, (edited )

If you run a systemd distro (which is most of them: Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.

The service file /etc/systemd/system/<your service>.service should look like


    [Unit]
    Description=some description

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

then


    systemctl enable <your service>.service --now

you can check its status via


    systemctl status <your service>.service

you will need to change <your service> to your desired service name.

For details, read: linuxhandbook.com/create-systemd-services/

HATEFISH,

This one seemed perfect but nothing lasts after the reboot for whatever reason. If I manually re-enable the service it’s all good, so I suspect there’s no issue with the below. I added the After=multi-user.target line after the first time it didn’t hold after reboot.


    [Unit]
    Description=Runs alsactl restore to fix microphone loop into headphones
    After=multi-user.target

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

When I run a status check it shows it deactivates as soon as it runs


    Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
    Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.

baseless_discourse, (edited )

Does after=… solve the problem or cause the problem? Sorry, I cannot parse what you were trying to say.

HATEFISH,

It seems to have no effect either way. Originally I attempted without it, then when it didn’t hold after a reboot and some further reading, I added the After= line in an attempt to ensure the service isn’t trying to initiate before it should be possible.

I can manually enable the service with or without the After= line with the same results of it actually working. Just doesn’t hold after a reboot.

baseless_discourse, (edited )

That is interesting. BTW, I assume that command doesn’t run forever, i.e. it terminates relatively soon? That could be why the service shows as deactivated, not because it didn’t run. You can try adding ; echo “command terminated” at the end of ExecStart to see if it terminated; you can also echo the exit code to debug.

If the program you use has a verbose mode, you can also try to turn it on to see if there is any error. EDIT: indeed, alsactl restore --debug

There is also a possibility that this service is run before the device you need to restore is loaded, so it won’t have any effect.
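
If the command is expected to run once and exit, a oneshot variant might look like this (just a sketch; Type=oneshot, RemainAfterExit=yes and the sound.target ordering are my assumptions, not a confirmed fix for the issue above):

    [Unit]
    Description=Restore ALSA state at boot
    # order after the sound subsystem instead of multi-user.target
    After=sound.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/sbin/alsactl restore

    [Install]
    WantedBy=multi-user.target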

On a related note, did you install the program via your package manager, and what distro are you running? Sometimes SELinux will block a program from running, but then the error message would say permission denied, instead of what you’re seeing.

Tovervlag,

Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose and what do people use them for today? Is it just legacy or does it still serve a purpose?

Plasma,
@Plasma@lemmy.ml avatar

They are TTYs, they’re like terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them for if I somehow mess up my display configuration and I need to access a terminal, but I can’t launch my DE/WM.

ArcaneSlime,

If your system is borked sometimes you can boot into those and fix it. I’m not yet good enough to utilize that myself though, I’m still fairly new to linux too.

mexicancartel,

Useful if your gui breaks or if you uninstall all your terminal emulators

bloodfart,

Each one is a virtual terminal and you can use them just like any other terminal. They exist because the easiest way to put some kind of an interactive display up is to just write text to a framebuffer, and that’s exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl-Alt-Fnumber. You can add more or disable them altogether if you like.

Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.

I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.

I used to treat them like multiple desktops.

With libcaca I was even able to watch movies on it without x.

I still use them when x breaks, which did happen last year to my surprise. If your adapter supports a vesa mode that’s appropriate to your monitor then you can use one with very fresh looking fonts and have everything look clean. Set you a background image and you’re off to the races with ncurses programs.

d3Xt3r, (edited )

To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.

In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.

With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term “TTY” persisted, however, now referring to these electronic terminals.

When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.

This is also where the term “terminal” or “console” originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a “terminal emulator” - and now you know why they’re called an “emulator”).

With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don’t force a reset, instead try jumping into a different TTY. And if that fails, there’s REISUB.

Tovervlag,

thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)

Presi300,
@Presi300@lemmy.world avatar

Mostly for headless systems, servers and such. That and debugging, if your desktop breaks/quits working for some reason, you need some way to run multiple things at once…

jaagruk,

What do you think about declarative system management? Do you use it?

baseless_discourse,

It is pretty great, but for now they are still mainly aimed at power users. I have used home-manager for a bit, but I feel some modules are not exactly well maintained, and using it is not exactly “maintenance-free”. BTW, they pollute your home dir like crazy, as if XDG had never existed.

I feel like Nix is aimed at ease of deployment, but not ease of maintenance, especially for desktop use. However, I love atomic distros; they are on the other end of the spectrum: you cannot replicate your setup exactly by copying a dir, but they are very easy to use, with sane defaults.

cyclohexane,

Not sure what that is. Please explain more.

jaagruk,

Like in Nix.

You write the whole system config in a file or a few (including GRUB, SSH, etc.), then rebuild the system and you have a system based on that config. There are projects for Arch like blendOS (the alpha release).
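
For a flavour of it, a minimal NixOS configuration could look something like this (a sketch using standard NixOS options; the hostname and packages are just examples):

    # /etc/nixos/configuration.nix
    { config, pkgs, ... }: {
      networking.hostName = "mybox";                  # hostname is declared here
      services.openssh.enable = true;                 # enables and starts sshd
      environment.systemPackages = with pkgs; [ git firefox ];
    }

Applying it with sudo nixos-rebuild switch rebuilds the running system to match the file.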

Nibodhika,

I like it as a concept, but it gets bothersome to maintain in the long run; sometimes you just want to install something, not write configs.

I think Gentoo has a nice middle ground, where you can install packages as a one-off without adding them to the world file, which makes it very neat to maintain both your regular packages and some random things you’re trying out before settling on adding them permanently.

That being said I’m currently looking into writing some ansible for kick-starting machines, so I’m very much moving in that direction. Why not use nix then? Few reasons:

  • Using Nix means I’m forced to use Nix, whereas with Ansible I can use whichever distro I want, more than one even.
  • I don’t want to have to define EVERYTHING, I want to be able to bootstrap systems quickly, but after the initialization I want to be able to mold each system to what I need without worrying about making it reproducible.
  • Nix uses a language that’s only usable in Nix, in short I would need to study and learn something that’s only usable on one specific distro.
elltee,

Nix has an ephemeral command to “install” packages to try out before installing permanently. nix-shell -p <package> will install the package, and drop you into an ephemeral shell to test it out. Exit the shell and it’s gone.

It’s also possible to install permanently straight from the CLI, but that ruins composability. To each his own.

My bigger problem with Nix is the lack of FHS and the hoops you have to jump through to get a non-standard app to work.

noughtnaut, (edited )
@noughtnaut@lemmy.world avatar

How the hell do I set up my NAS (Synology) and laptop so that I have certain shares mapped when I’m on my home network - AND NOT freeze up the entire machine when I’m not???

For years I’ve been un/commenting a couple of lines in my fstab but it’s just not okay to do it that way.

possiblylinux127,

You could simply use a graphical tool to mount it. Nautilus has it built in and I’m sure other tools have it as well.

billgamesh,

wiki.archlinux.org/title/Fstab#External_devices

Looks like this will do it: nofail and a systemd timeout.
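
A sketch of such an fstab entry for a CIFS share (hostname, share, mount point and credentials file are placeholders):

    # /etc/fstab: mount on first access, give up quickly if the NAS is unreachable
    //nas.local/share  /mnt/nas  cifs  credentials=/etc/nas-credentials,_netdev,nofail,noauto,x-systemd.automount,x-systemd.mount-timeout=5s  0  0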

noughtnaut,
@noughtnaut@lemmy.world avatar

Aha, interesting, thank you. So setting nofail and a timeout of, say, 5s should work… but what then when I try to access the share, will it attempt to remount it?

atzanteol,

Look up “automount”. You can tell linux to watch for access to a directory and mount it on demand.

ipkpjersi, (edited )

This is also what I’d like to know, and I think the answer is no. I want to have NFS not wait indefinitely to reconnect, but when I reconnect and try going to the NFS share, have it auto-reconnect.

edit: This seemed to work for me, without waiting indefinitely, and with automatic reconnecting, as a command (since I don’t think bg is an fstab option, only a mount command option): sudo mount -o soft,timeo=10,bg serveripaddress:/server/path /client/path/

pmk,

laptop så that

Sneaky swedes :)

bloodfart,

User login script could do it. Have it compare the wireless ssid and mount the share if it matches. If you set the entry in fstab to noauto it’ll leave it alone till something says to mount it.
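
A rough sketch of such a script, assuming NetworkManager and a noauto entry in fstab (SSID and mount point are placeholders):

    #!/bin/sh
    # mount the NAS share only when connected to the home wifi
    if nmcli -t -f ACTIVE,SSID dev wifi | grep -q '^yes:HomeWifi$'; then
        mount /mnt/nas
    fi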

cosmicrookie,
@cosmicrookie@lemmy.world avatar

In the terminal, why can’t I paste a command that I have copied to the clipboard, with the regular Ctrl+V shortcut? I have to actually use the mouse and right click to then select paste.

(Using Mint cinnamon)

wewbull,

…because that would make Ctrl+C Cut/Copy and that would be really bad. It would kill whatever was running.

So, it becomes Ctrl+Shift+C and paste got moved in the same way for consistency.

maxxxxpower,

I use Ctrl+C to copy far more often than to break a process or something. I demand that Ctrl+Shift+C be reconfigured! 😀

been_jamming,

Does Ctrl+Shift+V work?

Allero,

Due to some old school terminal things. Add shift to shortcut combinations, such as Ctrl+Shift+V to paste.

cyclohexane,

The terminal world had Ctrl+C and Ctrl+(many other characters) already reserved for other things before they ever became standard for copy/paste. For this reason, Ctrl+Shift+(C for copy, V for paste) are used.

Thymos,

What usually also works on Linux is selecting text with the mouse and pasting it by pressing the middle mouse button (or scroll wheel). You’d still need the mouse, but it’s at least a little quicker ☺️

MonkeMischief,

This is a really great question and it was something that tripped me up back in the day!

The answers above cover it already, so upvoting hoping it’s more visible. Thanks for asking it. :)

possiblylinux127,

You need to use Ctrl+shift+v

baseless_discourse, (edited )

In most terminals (GNOME Terminal, Blackbox, Tilix etc.) you can actually override this behavior by changing the keyboard shortcut. Blackbox even has a simple toggle that enables Ctrl+C/V copy and paste.

Gnome console is the only terminal I know that doesn’t allow you to change this.

cosmicrookie,
@cosmicrookie@lemmy.world avatar

Interesting! I’ll check that out with Mint!

downhomechunk,
@downhomechunk@midwest.social avatar

Shift + insert

r0ertel,

Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt running program). Many have offered the Ctrl+Shift+C, but back in my day, we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has 2 clipboard buffers and Shift+Insert works against the primary.

As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone “$(wl-paste)” will clone whatever repo you copied to your clipboard. I use this one all the time

Trainguyrom,

so git clone “$(wl-paste)” will clone whatever repo you copied to your clipboard. I use this one all the time

That’s a lot of confidence in not accidentally grabbing a leading/trailing space and grabbing unformatted text. I never trust that I’ve copied clean text and almost exclusively Ctrl+Shift+V to paste without formatting

r0ertel,

You’re probably correct. I guess I only do this with right click, copy of URLs.

Plasma,
@Plasma@lemmy.ml avatar

Ctrl+Shift+V is what you should do. Ctrl+V is used by shells for, I believe, inserting the next character literally without any evaluation. I don’t remember the specifics though, but yes, Ctrl+Shift+V to paste.

ArcaneSlime,

Try ctrl+shift+v, iirc in the terminal ctrl+v is used as some other shortcut (and probably has been since before it was standard for “paste” I’d bet).

Also linux uses two clipboards iirc, the ctrl+c/v and the right click+copy/paste are two distinct clipboards.

Pesopes,

Ctrl+V is already a shortcut for something (I don’t even know what) but to paste just add shift so Ctrl+Shift+V.

(Also a beginner btw)

captain_aggravated,
@captain_aggravated@sh.itjust.works avatar

In Terminal land, Ctrl+C has meant Cancel longer than it’s meant copy. Shift + Insert does what you think Ctrl+V will do.

Also, there’s a separate thing that exists in most window managers called the Primary buffer, which is a separate thing from the clipboard. Try this: Highlight some text in one window, then open a text editor and middle click in it. Ta da! Reminder: This has absolutely nothing to do with the clipboard, if you have Ctrl+X or Ctrl+C’d something, this won’t overwrite that.

Nyanix,
@Nyanix@lemmy.ca avatar

While I don’t have the answer as to why, it usually works if you just add a shift, ie. SHIFT+CTRL+V Many terminals also allow you to change the shortcut to copy and paste, so you can adjust for comfort’s sake.

jack, (edited )

Why are debian-based systems still so popular for desktop usage? The lack of package updates creates a lot of unnecessary issues which were already fixed by the devs.

Newer (not bleeding edge) packages have verifiably fewer issues, e.g. when comparing the packages of a Debian and a Fedora distro.

That’s why I don’t recommend Mint

bloodfart,

Because people have the opposite experience and outlook from what you wrote.

I’m one of those people.

I’m surprised no one brought up the xz thing.

Debian was specifically targeted by a complex and nuanced multi-pronged attack involving social engineering and very good obfuscation. It was defeated because stable (12 stable, mind you, not even 11, which is still in lots of use) moves so slowly that the attack was found in unstable first.

jack,

If that is a good tradeoff for you, old/broken packages but more trusted, then that’s okay. Btw, the xz backdoor was found so quickly it didn’t even ship to most distros in use, except for Debian Sid and Arch I think

bloodfart,

I see it as a fantastic trade off. There are some packages I use that need to be more up to date than stable repos and I either install them from different repos or in a different way.

And arch never even had the whole backdoor because they built from source and didn’t include the poison pill binary component from the attacker.

cyclohexane,

This is not a good argument imo. It was a miracle that xz vulnerability was found so fast, and should not be assumed as standard. The developer had been contributing to the codebase for 2 years, and their code already landed in debian stable iirc. There’s still no certainty that that code had no vulnerabilities. Some vulnerabilities in the past were caught decades after their introduction.

possiblylinux127,

It’s not a miracle, it is just probability. When you have enough eyes on something you are bound to catch bugs and problems.

Debian holds back because its primary goal is to be stable, reliable and consistent. It has been around longer than pretty much everything else and it can run for decades without issue. I read an article about a university that still had the original Debian install from the ’90s. It was on newer hardware but they just copied over the files.

cyclohexane,

Lots of eyes is not enough. As I mentioned earlier, there are many popular programs found on most machines, and some actually user facing (unlike xz) where vulnerabilities were caught months, years, and sometimes decades later. xz is an exception, not a rule.

bloodfart,

I was running 12 stable on a machine that had been updated and upgraded in between the time when the backdoor was introduced and when it was discovered. At no point in time did either dpkg query or the self report show that system had the affected 5.6.0(?) version.

Stable had versions of xz that contained commits from the attacker and has been walked back to before those were made out of an abundance of caution.

There’s a lot of eyes on that software now and I haven’t seen anyone report that versions between the attacker gaining commit rights and the attacked version were compromised yet, as you said though: that doesn’t mean it isn’t and vulnerabilities have existed for many years without being discovered.

As to whether it’s a good argument, vulnerabilities have a short lifespan generally. Just hanging back and waiting a little while for something to crop up is usually enough to avoid them. If you don’t believe me, check the nist database.

I’m gonna sound like a goober here, but the easiest way to not trip is to slow down and look where you’re going.

AMDIsOurLord,

Debian systems are verified to work properly without subtle config breakages. You can run Debian practically unattended for a decade and it’ll chug along. For people who prefer their device to actually work, and not just be a maintenance princess, it’s ideal.

jack,

Okay, I get that it’s annoying when updates break custom configs. But I assume most newbs don’t want to make custom dotfiles anyways. For those people, having the newest features would be more beneficial, right?

Linux Mint is advertised to people who generally aren’t willing to customize their system

possiblylinux127,

I customized Mint. It is a great system that I run in a VM.

AMDIsOurLord,

having a stable base helps. Also, config breakage can happen without user intervention. See Gentoo or Arch’s NOTICE updates

Nibodhika,

Breakage can happen without user intervention in other distros too; there are some safeguards around it, but it happens. Also, new users are much more likely than an experienced person who knows what they’re doing to edit their configs because a random guy on the Internet did it, and a lot more likely not to realize that this can break the system during an upgrade.

jdnewmil,

Noob question?

You do seem confused though… Debian is both a distribution and a packaging system… the Debian Stable distribution takes a very conservative approach to updating packages, while Debian Sid (unstable) is more up-to-date while being more likely to break. While individual packages may be more stable when fully-updated, other packages that depend on them generally lag and “break” as they need updating to be able to adapt to underlying changes.

But the whole reason debian-based distros exist is because some people think they can strike a better balance between newness and stability. But it turns out that there is no optimal balance that satisfies everyone.

Mint is a fine distro… but if you don’t like it, that is fine for you too. The only objection I have to your objection is that you seem to be throwing the baby out with the bathwater… the debian packaging system is very robust and is not intrinsically unlikely to be updated.

jack, (edited )

Noob question?

Should I have made a new post instead?

You do seem confused though… Debian is both a distribution and a packaging system…

Yes, Debian is a popular distro depending on Debian packages. My concern is about the update policy of the distro

But the whole reason debian-based distros exist is because some people think they can strike a better balance between newness and stability.

Debian is pure stability, not the balance between stability and newness. If you mean debian-BASED in particular, trying to introduce more newness with custom repos, I don’t think that is a good strategy to get balance. The custom additional repos quickly become too outdated as well. Also, the custom repos can’t account for the outdatedness of every single Debian package.

you seem to be throwing the baby out with the bathwater… the debian packaging system is very robust and is not intrinsically unlikely to be updated.

Yes, I don’t understand/approve the philosophy around the update policy of Debian. It doesn’t make sense to me for desktop usage. The technology of the package system however is great and apt is very fast

KISSmyOSFeddit,

Debian is a balance between stability and newness.
If you want to see what pure stability looks like, try Slackware.

possiblylinux127,

I’m not sure what planet you are on but Debian is more stable and secure than anything I have ever tested. Maybe Debian gets a bad rap because of Ubuntu.

cyclohexane,

I disagree. Stable, yes. But stable as in unchanging (including bug-for-bug compatibility), which imo is not what most users want. It is what server admins want though. Most newbie desktop users don’t realize this about debian based systems, and is one of the sources of trouble they experience.

Debian tries to be secure by back porting security fixes, but they just cannot feasibly do this for all software, and last I checked, there were unaddressed vulnerabilities in debian’s version of software that they had not yet backported (and they had been known for a while). I’m happy to look up the source for you if you’re interested.

possiblylinux127,

Show me a source. I run Debian everywhere including production critical systems. I’ve never had an issue

Maybe start here: www.debian.org/security/

cyclohexane,

Here’s an example:

reddit.com/…/debian_chromium_package_has_many_sec…

Being able to run a distribution on multiple machines does not mean it is free of vulnerabilities. You’d only know if you’re checking CVEs for each package you use.

possiblylinux127,

A reddit post from 3 years ago is not valid evidence

cyclohexane,

Why is that? It shows proof of the exact thing I said. If you don’t like that it’s on Reddit, I can copy paste it here.

If you want more examples, I’m happy to provide them. Here is another example:

security-tracker.debian.org/tracker/…/linux

jack, (edited )

As if Debian has changed fundamentally since then…

possiblylinux127,

It has

cyclohexane,

Are you able to demonstrate with supporting evidence?

wolf,

Debian is for sure not more secure than most other distributions/operating systems. (Might be true for what you tested).

Not even mentioning the famous Debian weak SSH key fuck up (oops), Debian is notoriously understaffed to take care of backports of security patches for everything which is not the kernel/web server/Python etc. (and even there I would not be too sure), and don’t get me started on starting services/opening ports on an apt install etc.

cyclohexane,

Unlike other commenters, I agree with you. Debian based systems are less suitable for desktop use, and imo is one of the reasons newcomers have frequent issues.

When installing common applications, newcomers tend to follow the Windows way of downloading an installer or a standalone executable from the Internet. They often do not stick with the package manager. This can cause breakage, as debian might expect you to have certain versions of programs that are different from what the installer from the Internet expects. A rolling release distro is more likely to have versions that Internet installers expect.

To answer your question, I believe debian based distros are popular for desktop because they were already popular for server use before the Linux desktop was significant.

Nibodhika,

That’s a bad example; “Debian is bad because people use it wrong and it breaks” is not a really strong argument, the same can be said about every other distro.

I believe Debian based distros are popular because Ubuntu used to be very beginner friendly back in the early 2000s, while other distros not so much. Then a lot of us started with it, and many never switched or switched and came back.

cyclohexane,

Debian is not bad. It is just not suitable for newcomers using it for desktop. I think my arguments hold this stance.

wolf, (edited )

Debian desktop user here, and I would happily switch to RHEL on the desktop.

I fully agree, outdated packages can be very annoying (running a netbook with disabled WIFI sleep mode right now, and no, backported kernel/firmware don’t solve my problem.)

For some years, I used Fedora (and I still love the community and have high respect for it).

Fedora simply does not work for me:

  • Updated packages can/did break compatibility for stuff I need to get stuff done. Fine if Linux is your hobby, not acceptable if you need to deliver something
  • In the industry, many times not the last recent packages of development environments are used (if you are lucky, you are only a few months or years behind), so having the most recent packages in Fedora helps me exactly zero
  • With Debians 2 years release cycle (and more years of support), I can upgrade to the next version when it is appropriate for me (= 1-2 days when there is a slow week and the worst bugs have been found already)
  • My setup/desktop is heavily customized and fully automated via IaC, no motivation to tweak this stuff constantly (rolling) or every 6-12 months (Fedora)
  • From time to time I have to use software packages from 3rd parties; with Fedora, I might be one update away from breaking these software packages because of version incompatibilities (yes, I might pin a version of something to use a 3rd party software, but this might break Fedora updates (direct and transitive dependencies))
  • I once had a cheap netbook for travel with an infamous chip set bug concerning sleep modes, which would be triggered by some kernels. You can imagine how it is to run Fedora, when you get often Kernel updates and the bug will be triggered or not after double digit numbers of minutes of work.

Of course, I could now start playing around with containerizing everything I need for work somehow and run something like Silverblue, perhaps I might do it someday, but then I would again need to update my IaC every 6-12months, would have to take care of overlays AND containers etc…

When people go ‘rolling’ or ‘Fedora’, they simply choose a different set of problems. I am happy we have choice and I can choose the trouble I have to live with.

On a more positive note: This also shows how far Linux has come along, I always play around with the latest/BETA Fedora Gnome/KDE images in a VM, and seriously don’t feel I am missing anything in Debian stable.

LoreleiSankTheShip,
@LoreleiSankTheShip@lemmy.ml avatar

As someone not working in IT and not very knowledgeable on the subject, I’ve had way less issues with Manjaro than with Mint, despite reading everywhere that Mint “just works”. Especially with printers.

Nibodhika,

Yeah, Manjaro just works, until it doesn’t. Don’t get me wrong, I love Manjaro, used it for years, but if it breaks it’s a pain in the ass to fix, and also hard to get help because the Arch community will just reply with “Not Arch, not my problem” even if it’s a generic error, and the Manjaro community is not as prominent.

I could also mention them letting their SSL certificate expire, which doesn’t inspire a lot of trust, but they haven’t done that in a while.

unionagainstdhmo,
@unionagainstdhmo@aussie.zone avatar

This is where I see atomic distros like Silverblue becoming the new way to get reliable systems, and up to date packages. Because the base system is standardised there can be a lot more QA as there is a lot less entropy in the installed system. Plus free rollbacks if something goes wrong. You don’t get that by default on Debian.

Distrobox can be used to install other programs (including GUI apps), I currently run Steam in a distrobox container on Silverblue and vscode with all of my development stuff in another one. And of course use flatpaks from FlatHub where I can, these are more stable than distro packages imo (when official) as the developers are developing for a single target with defined library versions. Not whatever ancient version Debian has or the latest which appeared on Arch very soon after release.
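
For anyone curious, spinning up such a container is roughly this (the image name is just an example):

    # create a Fedora-based container and hop into it
    distrobox create --name dev --image fedora:40
    distrobox enter dev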

I’ve tried Debian a couple of times but it’s just too out of date. I like new stuff and when developing stuff I need new stuff and it’s not good enough to just install the development/unsupported versions of Debian. It’s probably great for servers, but I think atomic distros will be taking over that space as well, eventually.

jack, (edited )

You should definitely check out Bazzite, it’s based on Fedora Atomic and has Steam on the base image. Image and Flatpak updates are applied automatically in the background, no need to wait for the update on next boot. Media codecs and necessary drivers are installed by default.

The Bazzite image is also built directly from the upstream Fedora Atomic image, just with quality-of-life changes added and optimizations for gaming.

unionagainstdhmo,
@unionagainstdhmo@aussie.zone avatar

It looks pretty good, I’ve been planning on installing it on another computer for use as a media centre. Probably wouldn’t use it as my main image as I’m not a huge fan of their customised GNOME experience (I quite like vanilla GNOME with maybe a system tray extension). But I must admit watching some of the videos by the creator of Bazzite and ublue got me interested in this atomic desktop thing again

Trainguyrom,

Distrobox can be used to install other programs (including GUI apps)

I need to play around with that sometime. Is it a chroot or a privileged container or is it a sandboxed container with limited access? How’s hardware acceleration in those?

unionagainstdhmo,
@unionagainstdhmo@aussie.zone avatar

It’s just a podman/docker container. I’m pretty sure it is unprivileged (you don’t need root). I’ve tried it on both NVIDIA (RTX 3050 Mobile) and AMD (Radeon RX Vega 56) and setting up the distrobox through BoxBuddy (a nice GUI app that makes management easy) I didn’t need to do anything to get the graphics drivers working. I only mentioned BoxBuddy because I haven’t set one up from the command line so I don’t know if it does any initial set up. I haven’t noticed any performance issues (yet).

ICastFist,
@ICastFist@programming.dev avatar

Why does it feel that Linux infighting is the main reason why it never takes off? It’s always “distro X sucks”, “installing from Y is stupid”, “any system running Z should burn”

acockworkorange,

Have you ever seen any other software centered forum? It’s not different. That’s not the reason.

msch,

It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.

ICastFist,
@ICastFist@programming.dev avatar

just not so much on the Desktop

Unix already had a significant presence in server computers during the late 80s, migrating to Linux wasn’t a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars

AnUnusualRelic,
@AnUnusualRelic@lemmy.world avatar

Convincing companies to switch to no name free software coming from Sun or Digital certainly was a big jump.

MonkeMischief,

the price of zero is a lot more attractive when the alternative option costs several thousand dollars

Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure “because market share”. (This is what I was told when I was interviewing for a library IT position)

They might have gotten “lucky” with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand when those resources could go to, I dunno… paying teachers, maybe?

Trainguyrom,

Licensing is weird especially in schools. It may very well be practically free for them to license. Or for very small numbers of computers they might be able to come out ahead by only needing to hire tech staff that are competent with Windows compared to the cost of staff competent with Linux. Put another way, in my IT degree program every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (with a couple actively avoiding Linux for being different)

cyclohexane,

Doesn’t feel like that to me. I’ll need to see evidence that that is the main reason. It could be but I just don’t see it.

ICastFist,
@ICastFist@programming.dev avatar

I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK2 to GTK3 upgrade messed up Gnome (not unlike the python 2 to 3 upgrade), some hardcore people still want to fight against systemd. Maybe it’s just “the loud detractors”, dunno

cyclohexane,

Why would one be discouraged by the fact that people have options and opinions on them? That’s the part I’m not buying. I don’t disagree that people do in fact disagree and argue. I don’t know if I’d call it fighting. People being unreasonably aggressive about it are rare.

I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.

billgamesh,

I’m using wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue in my opinion.

MonkeMischief,

I like the fact that I can exercise my difficulty with usage commitment by installing both and switching between them :D.

Wayland is so buttery smooth it feels like I just upgraded my computer for free…but I still get some window Z-fighting and screen recording problems and other weirdness.

I’m glad X11 is still there to fall back on, even if it really feels janky from an experience point of view now.

billgamesh,

For me, it’s building software from source on musl. Just one more variable to contend with

captain_aggravated,
@captain_aggravated@sh.itjust.works avatar

Being I’m on Mint Cinnamon and using an Nvidia card, I’ve never even tried to run Wayland on this machine. Seems to work okay on the little Lenovo I put Fedora GNOME on. X11 is still working remarkably well for me, and I’m looking forward to the new features in Wayland once the last few kinks are worked out with it.

IzzyJ,

I can only use x11 myself. The drivers for Wayland on nvidia aren’t ready for prime time yet, my browser flickers and some games don’t render properly. I’m frankly surprised the KDE folks shipped it out

atzanteol,

Only dweebs on social media fight over distros. Nobody who matters.

AnUnusualRelic,
@AnUnusualRelic@lemmy.world avatar

There’s no infighting. It just feels that way because you picked an inferior distribution.

bloodfart,

Because you don’t have an in person user group and only interact online where the same person calling all mandrake users fetal alcohol syndrome babies doesn’t turn around and help those exact people figure out their smb.conf or trade sopranos episodes with them at the lan party.

ipkpjersi,

Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. I think the customization and choices scares off a lot of beginners, I think the main reason is lack of compatibility with Windows software out of the box. People generally want to use software they are used to.

johannesvanderwhales,

Linux generally has a higher (perceived?) technical barrier to entry so people who opt to go that route often have strong opinions on exactly what they want from it. Not to mention that technical discussions in general are often centered around deciding what the “right” way to do a thing is. That said, regardless of how the opinions are stated, options aren’t a bad thing.

wolf,

This.

It is a ‘built-in’ social problem: only people who care enough to switch to Linux do it, and these people are pre-selected to have strong opinions.

Exactly the same can be observed in all kinds of alternative projects; for example, alternative housing projects usually die because of infighting, since everyone has their own definition of how it should work.

stammi,

Thank you for this nice thread! My question: what is Wayland all about? Why would I want to use it and not any of the older alternatives?

possiblylinux127,

Wayland vs Xorg is mostly equivalent to a beat up 80’s car compared to a brand new car.

NoisyFlake,

Because there is only one alternative (Xorg/X11), and it’s pretty outdated and not really maintained anymore.

For now it’s probably still fine, but in a couple of years everything will probably use Wayland.

AMDIsOurLord,

Because the older alternatives are hacky, laggy, buggy, and quite fundamentally insecure. X.Org’s whole architecture is a mess, you practically have to go around the damn thing to work it (GLX). It should’ve been killed in 2005 when desktop compositing was starting to grow, but the FOSS community has a way of not updating standards fast enough.

Hell, that’s kinda the reason OpenGL died a slow death; GL3, had it been released properly, would’ve changed everything.

nyan,

Wayland has better support for some newer in-demand features, like multiple monitors, very high resolutions, and scaling. It’s also carrying less technical debt around, and has more people actively working on it. However, it still has issues with nvidia video cards, and there are still a few pieces of uncommon software that won’t work with it.

The only alternative is X. Its main advantage over Wayland is network transparency (essentially it can be its own remote client/server system), which is important for some use cases. And it has no particular issues with nvidia. However, it’s essentially in maintenance mode—bugs are patched, but no new features are being added—and the code is old and crufty.

If you want the network transparency, have an nvidia card (for now), or want to use one of the rare pieces of software that doesn’t work with Wayland/XWayland, use X. Otherwise, use whatever your distro provides, which is Wayland for most of the large newbie-friendly distros.

d3Xt3r,

The network transparency thing is no longer a limitation with Wayland btw, thanks to PipeWire and Waypipe.

atzanteol,

It’s… complicated. Wayland is the heir apparent to Xorg. Xorg is a fork of an older XFree86 which is based on the X11 standard.

X11 goes back… a long time. It’s been both a blessing and a liability at times. The architecture dates back to a time of multi-user systems and thin clients. It also pre-dates GPUs. Xorg has been updating and modernizing it for decades but there’s only so much you can do while maintaining backward compatibility. So the question arose: fix X or create something new? Most of the devs opted for the later, to start from scratch with a replacement.

I think they bit off a bit more than they could chew, and they seemed to think they could push around the likes of nvidia. So it’s been a bumpy road and will likely continue to be a bit bumpy for a bit. But eventually things will move over.

d3Xt3r, (edited )

In addition to the other replies, one of the main draws of Wayland is that it’s much less susceptible to screen-tearing / jerky movements that you might sometimes experience on X11 - like when you’re dragging around windows or doing something graphics/video heavy. Wayland just feels much smoother and responsive overall. Other draws include support for modern monitor/GPU features like variable refresh rates, HDR, mixed DPI scaling and so on. And there’s plenty of stuff still in the works along those lines.

Security is another major draw. Under X11, any program can directly record what’s on your screen, capture your clipboard contents, monitor and simulate keyboard input/output - without your permission or knowledge. That’s considered a huge security risk in the modern climate. Wayland on the other hand employs something called “portals”, that act as a middleman and allow the user to explicitly permit applications access to these things. Which has also been a sore point for many users and developers, because the old way of doing these things no longer works, and this broke a lot of apps and workflows. But many apps have since been updated, and many newer apps have been written to work in this new environment. So there’s a bit of growing pains in this area.

In terms of major incompatibilities with Wayland - XFCE is still a work-in-progress but nearly there (should be ready maybe later this year), but some older DE/WMs may never get updated for Wayland (such as OpenBox and Fluxbox). Gnome and KDE work just fine though under Wayland. nVidia’s proprietary drivers are still glitchy/incomplete under Wayland (but AMD and Intel work fine). Wine/Proton’s Wayland support is a work-in-progress, but works fine under XWayland.

Speaking of which, “XWayland” is kinda like a compatibility layer which can run older applications written for X11. Basically it’s an X11 server that runs inside Wayland, so you can still run your older apps. But there are still certain limitations, like if you’ve got a keyboard macro tool running under XWayland, it’ll only work for other X11 apps and not the rest of your Wayland desktop. So ideally you’d want to use an app which has native Wayland support. And for some apps, you may need to pass on special flags to enable Wayland support (eg: Chrome/Chromium based browsers), otherwise it’ll run under XWayland. So before you make the switch to Wayland, you’ll need to be aware of these potential issues/limitations.

marathon,
@marathon@liberdon.com avatar
d3Xt3r,

That wasn’t the point I was trying to make though. :)

Chrome(ium) still doesn’t run natively under Wayland by default - you’ll need to manually pass specific flags to the executable to tell it to use Wayland. See: wiki.archlinux.org/title/chromium#Native_Wayland_…
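
For reference, the flags in question look roughly like this (a sketch; exact flag behaviour varies between Chromium versions):

    # ask Chromium to pick the Wayland backend automatically via Ozone
    chromium --ozone-platform-hint=auto
    # or force it explicitly
    chromium --enable-features=UseOzonePlatform --ozone-platform=wayland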

Firefox also needed manual flags, but not anymore - Wayland support is enabled by default since version 121, released around three months ago. But some distros had enabled Wayland for Firefox much before that, Fedora being one of them.

marathon,
@marathon@liberdon.com avatar

@d3Xt3r
No, FWIU with Chrome browsers you shouldn't need special switches anymore.

d3Xt3r,

Are you sure? I just tested it on Fedora 39, using Chrome v123 (Flatpak) and Chromium v123 (repo package), both of them were running under XWayland.

Linkerbaan,
@Linkerbaan@lemmy.world avatar

How do people not using Debian/Ubuntu follow along with tutorials when their package manager doesn’t have a package that’s in Apt?

nyan,

My first step is usually to figure out whether the package should exist as a separate entity under Gentoo (which, for instance, doesn’t have separate dev packages). Then I check the overlay masterlist to see if there’s an unofficial package (which there often is).

If there is no package, I can package it myself (since I’ve been working with the same distro for years and can handle the basic packaging cases), install from source, get the .deb and apply alien or deb2targz and proceed from there, or give the whole thing up as a bad job.

baseless_discourse, (edited )

I typically search the package name + fedora, it will probably tell me the alternative package that is in fedora.

Nowadays, I have moved to an atomic Fedora distro, so I severely limit the number of packages I install on my system for stability and security.

I think I only have two packages installed on my machine: fish, because it is the only popular shell that follows xdg dir; and a latex-like input method to use in slack.

bloodfart,

Back in my slackware days I’d just convert other distros packages to the tgz format or compile the package and its requirements.

If the dependencies were really complex I’d draw a picture to help me understand them better.

d3Xt3r,

As an Arch user (btw), that’s rarely an issue thanks to the AUR and its vast package pool :) But on the very rare occasion that it’s not there on the AUR but available as a deb, I can use a tool called Debtap to convert the .deb to Arch’s .tar.zst package.

For rpm-based distros like Fedora and OpenSUSE etc, there’s a similar tool called alien that can convert the .deb to .rpm.
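
Roughly, those conversion workflows look like this (package names are placeholders, and the exact output file names will differ):

    # Arch: convert a .deb with Debtap and install the result
    sudo debtap -u                     # update debtap's database first
    debtap some-package.deb
    sudo pacman -U some-package-*.pkg.tar.zst

    # rpm-based distros: convert with alien
    sudo alien --to-rpm some-package.deb
    sudo rpm -i some-package-*.rpm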

In both instances though, dependencies can be a pain; sometimes you may need to trawl through the dependencies and convert/install them before doing the main package.

Ideally though, you’d just compile from source. It used to be a daunting task in the old days, but with modern CPUs and build systems like meson, it’s not really a big deal these days. You’d just follow the build instructions on the package’s git repo, and usually it’s just a simple copy-paste job.

Finally, if these packages are just regular apps (and not system-level packages like themes etc), then there are multiple options these days such as using containers like Distrobox, or installing a Flatpak/Appimage version of that app, if available.

Linkerbaan,
@Linkerbaan@lemmy.world avatar

Thanks for the explanation btw. I’ve tried other distros but quickly ran into missing dependencies and drivers and instantly gave up on it and just went back to Ubuntu.

MonkeMischief,

The other reply here mentioned Arch and a “more user friendly” flavor called Manjaro, but Manjaro isn’t quite Arch.

I would suggest if you want to try the challenge of Arch without the terminal-only install, give EndeavourOS a look! Their community is exceptionally friendly and helpful, and you get a lot of the benefits of “pure Arch” with an installer and usability features. :)

Linkerbaan,
@Linkerbaan@lemmy.world avatar

I have had problems even just using Kubuntu and certain libraries not working because of the QT dependency version from KDE. Then I wasn’t able to update QT because KDE didn’t support the newer version. I quickly gave up on trying to solve dependency hell and went back to stock Ubuntu. Am now traumatized for life of using other distros.

MonkeMischief,

Aw that’s such a bummer! I don’t have a ton of experience here but currently I’m running OpenSUSE Tumbleweed, and something it does is say “This dependency can’t be satisfied, what do you want to do?”

You can usually “keep obsolete”, “remove the thing”, “ignore this and risk breaking it”.

If I keep hitting “keep obsolete” and it just goes in circles, I wait a few weeks and usually everything has been pushed by then. :)

I wonder if this mismatch could be from the 'buntu distros holding back certain packages differently from KDE’s schedule… I sadly don’t know enough to comment. 😅

Linkerbaan,
@Linkerbaan@lemmy.world avatar

Yes Ubuntu is definitely slower in the update cycle. But virtually everyone in the programming world seems to use it and wikis almost always get based on an Ubuntu LTS version.

As a random example take ROS2

https://lemmy.world/pictrs/image/172c3006-1d4c-4306-a6a3-86c53e639068.png

https://lemmy.world/pictrs/image/7027bf12-44af-4fa3-9ef7-a23f2568d7a4.png

Even when a new version such as 24.04 comes it usually takes almost six months for people to start migrating and libraries to become well supported on it.

Allero,

Never actually missed important dependencies in AUR.

So you might give Arch or derivatives a spin. Warning: Arch relies on you knowing what you’re doing. You can bork something by doing it the wrong way. Manjaro helps, but has its own issues - mainly, you better not use AUR unless you actually need it.

AnUnusualRelic, (edited )
@AnUnusualRelic@lemmy.world avatar

Running OpenSuSE Tumbleweed, I can’t say that’s ever been a real issue. I ran Ubuntu a long time (10 years, at least) ago, but too much stuff was broken and version updates were a chore. Maybe it’s gotten better since.

jadedwench,

I usually look at their GitHub or what have you to see if there are packages or instructions there. I have been able to solve most issues this way. Otherwise I see how much of a bitch it will be to compile from source. Depending what it is, I also check to see if there is a docker image instead.

hardaysknight,

I bought a cheap Intel i226-v nic to use 2.5gbe in Unraid and it tries to auto configure to 100mbps. I realize now that the Intel 2.5gbe nics have issues, so is there anything I could do to get it to play nice, or does anyone know of a solid low profile 2.5gbe nic I could use without breaking the bank?

bloodfart,
hardaysknight,

Thanks!

DmMacniel,

OP. Gotta say that this thread is a brilliant idea!

cyclohexane,

Thank you 😄

fuzzy_feeling,

inbox going brrr…

JustEnoughDucks,
@JustEnoughDucks@feddit.nl avatar

How do programs that measure available space like ‘lsblk’, ‘df’, ‘zfs list’ etc. see hardlinks and estimate disk space?

If I am trying to manage disk space, does the file system correctly display disk space (for example a zfs list)? Or does it think that I have duplicate files/directories because it can’t tell what is a hardlink?

Also, during move operations, zfs dataset migrations, etc… does the hardlinked file continue tracking where the original is? I know it is almost impossible at a system level to discern which is the original.

possiblylinux127,

I believe that zfs has its own disk usage utilities

d3Xt3r, (edited )

I’m not super familiar with ZFS so I can’t elaborate much on those bits, but hardlinks are just pointers to the same inode number (which is a filesystem’s internal identifier for every file). The concept of a hardlink is a file-level concept basically. Commands like lsblk, df etc work on a filesystem level - they don’t know or care about the individual files/links etc, instead, they work based off the metadata reported directly by the filesystem. So hardlinks or not, it makes no difference to them.

Now this is contrary to how tools like du, ncdu etc work - they work by traversing thru the directories and adding up the actual sizes of the files. du in particular is clever about it - if one or more hardlinks to a file exists in the same folder, then it’s smart enough to count it only once. Other file-level programs may or may not take this into account, so you’ll have to verify their behavior.
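
A quick way to see that behaviour for yourself (sketch):

    # create a file and a hardlink to it
    echo hello > a.txt
    ln a.txt b.txt
    stat -c '%i %h %n' a.txt b.txt   # same inode number, link count of 2
    du -sh .                         # du counts the shared content only once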

As for move operations, it depends largely on whether the move is within the same filesystem or across filesystems, and the tools or commands used to perform the move.

When a file or directory is moved within the same filesystem, it generally doesn’t affect hardlinks in a significant way. The inode remains the same, as do the data blocks on the disk. Only the directory entries pointing to the inode are updated. This means if you move a file that has hardlinks pointing to it within the same filesystem, all the links still point to the same inode, and hence, to the same content. The move operation does not affect the integrity or the accessibility of the hardlinks.

Moving files or directories across different filesystems (including external storage) behaves differently, because each filesystem has its own set of inodes.

  • The move operation in this scenario is effectively a copy followed by a delete. The file is copied to the target filesystem, which assigns it a new inode, and then the original file is deleted from the source filesystem.
  • If the file had hardlinks within the original filesystem, these links are not copied to the new filesystem. Instead, they remain as separate entities pointing to the now-deleted file’s original content (until it’s actually deleted). This means that after the move, the hardlinks in the original filesystem still point to the content that was there before the move, but there’s no link between these and the newly copied file in the new filesystem.

I believe hardlinks shouldn’t affect zfs migrations as well, since it should preserve the inode and object ID information, as per my understanding.

JustEnoughDucks,
@JustEnoughDucks@feddit.nl avatar

This really clears things up for me, thanks! I guess I am not so “new” (been using Linux for 8 years now), but every article I read on hardlinks just confused me. This is much more of a “layman’s” explanation for me!

AssortedBiscuits,
@AssortedBiscuits@hexbear.net avatar

Someone gifted me with some old iPad that’s more than 10 years old. What steps should I take to install Linux on the iPad?

llii,

I don’t think that’s possible.

eugenia,
@eugenia@lemmy.ml avatar

You can’t. Apple’s iPads and iPhones are e-waste from the moment they run out of security and OS updates. Apple doesn’t allow third party installations.

Andromxda,
@Andromxda@lemmy.dbzer0.com avatar

It’s technically possible, but there aren’t any proper instructions, I assume it’s pretty complicated and you could break the device while doing this. Also, I don’t think it would run particularly well, since there are no Linux drivers for Apple’s proprietary hardware (except for M1 because it was reverse-engineered, but iPads use A-Series chips)

starman,
@starman@programming.dev avatar

On Android, when an app needs something like camera or location or whatever, you have to give it permission. Why isn’t there something like this on Linux desktop? Or at least not by default when you install something through package manager.

possiblylinux127,

Flatpaks get permissions through XDG portals. The difference is there are usually no popups.

urfavlaura,

Android apps are sandboxed by default, while packages on Linux run with the user’s permissions.

There is already something like this with Flatpak since it also sandboxes every installed program and only grants requested permissions.
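If you’re curious, you can inspect and tweak those permissions from the command line - a quick sketch, with org.videolan.VLC standing in for whatever app ID you have installed:

```
# show the static permissions an installed Flatpak was granted
flatpak info --show-permissions org.videolan.VLC

# tighten them for this app only (per-user override)
flatpak override --user --nofilesystem=home org.videolan.VLC

# review or undo your overrides
flatpak override --user --show org.videolan.VLC
flatpak override --user --reset org.videolan.VLC
```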

DmMacniel,

Sandboxing wasn’t considered during the development of Linux. But recent developments incorporate this practice, and it can be found for example in Flatpaks.

baseless_discourse, (edited )

It is technically doable, but it would require a unified method for apps to call when they need the camera, and that method would show the prompt.

This would technically require developers to rewrite their apps on linux, which is not happening anytime soon.

Fortunately, PipeWire and xdg-desktop-portal are already doing this work: when you screen share on Zoom using PipeWire, a system prompt pops up asking which window or screen to share. Unlike on Windows, Zoom cannot see your active windows when using this method, only the one you choose to share.

Most application frameworks, including GTK and Electron, are actively supporting PipeWire and the portals, so the future is bright.

There is a lot of work going into improving the security and usability of the Linux sandbox, and it is already much better than Windows (maybe also better than macOS?). I am confident that in 5 years the Linux sandbox stack (Flatpak, portals, PipeWire) will be as secure and usable as on Android and iOS.

MonkeMischief,

I’d love to just skip to “Linux being secure and running on my smartphone instead of Android” but we know how much an uphill battle that is hahaha.

possiblylinux127,

It would probably end up being implemented through XDG portals.

baseless_discourse,

If I understand correctly, PipeWire is supposed to be the “portal” but for audio and video.

But I believe the camera portal is already there, using PipeWire. All they need to add is a popup to request access when the app needs it.

possiblylinux127,

XDG portals is the standard interface that applications (should) use to do things on your system. It is most commonly associated with flatpaks and Wayland.

You could have PipeWire as the backend, but the XDG portal implementation is usually provided by the desktop.
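If you want to poke at it yourself, the portal service lives on the session bus and can be introspected (assuming xdg-desktop-portal is installed and running):

```
# list the portal interfaces (Camera, ScreenCast, FileChooser, ...)
# exposed by your desktop's portal backend
busctl --user introspect org.freedesktop.portal.Desktop /org/freedesktop/portal/desktop
```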

baseless_discourse,

Thanks for correcting me!

eugenia,
@eugenia@lemmy.ml avatar

Because it requires a very specific framework to be built from the ground up, and FDO doesn’t specify these things. A lot of breakage would happen if we were to shoehorn such changes into Linux suddenly. Android has many layers of security that are fundamentally different from the Unix philosophy. That’s why Android, even though it’s based on Linux, isn’t really considered “a distro”.

sag, (edited )

Why does software on Linux depend on a particular version of a library? Why not just say it depends on that library, regardless of version? It becomes a pain when you’re running ancient software that requires an old version of a library, so you end up creating symlinks for every library to fake the old version.

I know that sometimes a newer version of a library is not compatible with the software, but still. And what can we do as software developers to fix this problem? Or as end users?

nyan,

You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably “ABI stability”.

AMDIsOurLord,

That’s the same on ANY platform, but windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it’s a little bit more transparent. (edit: unless you do the stupid shit and link statically, but again in the brave new world of Rust and Go having 500 Mb binaries for a 5 Kb program is acceptable)

Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library’s behavior with an update, your app won’t work with it anymore unless you go and update your own code and fix everything that broke.

So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time; something that happens sometimes with FOSS is that the app goes somewhat unmaintained, or in some cases the app customizes the library so much that you just can’t update it anymore. So you pin a particular version of the library.

PlexSheep,

Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.

As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.

sag,

Hey, thanks. I have one more question: is it possible to ship all required libraries with the software?

d3Xt3r,

In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you’re using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they’d be spitting out an ELF, and RPATH is part of the ELF spec.
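As a rough sketch of how that looks with gcc (libfoo, main.c and the paths here are made up for illustration):

```
# bundle the library next to the executable and embed an RPATH
mkdir -p dist/libs
cp libfoo.so dist/libs/
gcc main.c -o dist/myapp -Ldist/libs -lfoo -Wl,-rpath,'$ORIGIN/libs'

# or let the linker pick the path up from LD_RUN_PATH instead
# (only used when no -rpath is passed explicitly)
LD_RUN_PATH='$ORIGIN/libs' gcc main.c -o dist/myapp -Ldist/libs -lfoo

# confirm it was embedded
readelf -d dist/myapp | grep -iE 'rpath|runpath'
```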

BUT I agree with what @Nibodhika wrote - this is generally a bad idea. In addition to what they stated, a big issue could be the licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under GPL as well - which you may or may not want. And if you’re using several different libraries, then you’ll have to verify each of their licenses and ensure that you’re not violating or conflicting with any of them.

Another issue is that the libraries you ship with your program may not be optimal for the user’s device or use case. For instance, a user may prefer libraries compiled for their particular CPU’s microarchitecture for best performance, and by forcing your own libraries, you’d be denying them that. That’s why it’s best left to the distro/user.

In saying that, you could ship your app as a Flatpak - that way you don’t have to worry about the versions of libraries on the user’s system or causing conflicts.

sag,

Thanks for letting me know about the licensing issue.

Bienenvolk,

That is possible indeed! For more context, you can look up “static linking vs dynamic linking”

Tldr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary’s libraries are looked up in the system’s library search paths by the dynamic linker and loaded at runtime.
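A small sketch of the difference with gcc (hello.c is just a stand-in program, and the static build assumes the static libc package is installed):

```
# dynamic linking (the default): the binary loads shared libraries at runtime
gcc hello.c -o hello-dynamic
ldd hello-dynamic              # lists libc and friends

# static linking: the library code gets baked into the binary itself
gcc -static hello.c -o hello-static
ldd hello-static               # "not a dynamic executable"

# the size difference shows where the library code went
ls -lh hello-dynamic hello-static
```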

sag,

Thanks

PlexSheep,

Absolutely! That’s called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.

jack, (edited )

Doesn’t that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient

PlexSheep,

It’s true that the binaries get inflated as a result, but with today’s hard drives it’s not really a problem.

sag,

Yea, that’s why I am learning Rust, but I didn’t know it was called static linking - I thought it was just how Rust works LMAO. And thanks again.

PlexSheep,

No problem. Good luck with your rust journey, it’s imo the best programming language.

Nithanim,

AppImage might also be an option.

Nibodhika,

It is, that’s what Windows does. It’s also possible to compile programs so they don’t need external libraries and instead embed everything they need. But both of these are bad ideas.

Imagine you install Dolphin (the KDE file manager): it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader): it will require lots of the same libraries. Extend that to the hundreds of programs installed on your computer and you’ll easily double the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries, then whenever a bug or security flaw is found in one of the libraries, every program would need to ship an update, and if one didn’t you might be exposed to bugs/attacks through that program.
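You can see the sharing directly with ldd, assuming both apps are installed (the exact library names depend on your Qt/KDE version):

```
# both programs resolve to the same shared libraries on disk,
# which exist only once no matter how many apps use them
ldd "$(command -v dolphin)" | grep -iE 'qt|kf'
ldd "$(command -v okular)"  | grep -iE 'qt|kf'
```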

sag,

Thank you so much for the explanation.

beeng,

To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.

If v0.5.0 has features A, B and C and one of them then changed, under semantic versioning (which most software follows these days) that’s a breaking change, so it would get promoted to v1.0.0.

If A, B and C got a new feature D but A, B and C themselves didn’t change, it would have been v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.

PlexSheep,

When having a breaking change pre 1.0.0, I’d expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.

eugenia,
@eugenia@lemmy.ml avatar

Because it’s not guaranteed that it’ll work. FOSS projects don’t run under strict managerial definitions where they have to maintain compatibility in all their APIs etc. They are developed freely. As such, you can’t really rely on full compatibility.

wolf,

IMHO the answer is social, not technical:

Backwards compatibility/legacy code is not fun, so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.

The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)
