Brickardo,

Using Linux on a GTX 660 without proprietary drivers. I never managed to get it working; the desktop would always freeze. Never again.

KillingTimeItself,

idk how i would define difficult, but the thing i probably put the most time into figuring out thus far is LXC containers.

Or LXC, if you like not using redundant acronyms. Those containers are good shit, weird shit, but good shit nonetheless.
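For anyone curious, the classic lxc-* workflow looks roughly like this (the container name, distro, and release are placeholder examples, not anything from the thread):

# create a container from the download template
lxc-create -n mycontainer -t download -- -d debian -r bookworm -a amd64

lxc-start -n mycontainer     # boot it
lxc-attach -n mycontainer    # get a shell inside it
lxc-ls --fancy               # list containers and their state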

bitchkat,

Are you including back in the day when we had to use Windows device drivers via ndiswrapper?

I managed to remove a critical library once, but was able to extract it from an RPM on another machine and manually install it. That was good enough to get me to the point where I could yum reinstall.
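For anyone in the same boat, pulling a single library out of an RPM on another box goes roughly like this (the package and library names below are made up for illustration):

# unpack the RPM into the current directory without installing it
rpm2cpio libfoo-1.2-3.x86_64.rpm | cpio -idmv

# copy just the missing library onto the broken machine and refresh the linker cache
scp ./usr/lib64/libfoo.so.1 broken-host:/usr/lib64/
ssh broken-host ldconfig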

Pre-Linux we had an HP workstation where the disk drive died, and of course we had no backups. I managed to Frankenstein the disk by connecting the platters from the broken disk to the circuit board of a working one. This worked, and I was able to back up the disk and reload onto a new drive.

And then we bought an 8mm tape drive for backups and I had to port some drivers to HP-UX to get it to work. But we had awesome backups after that!

prime_number_314159,

I managed a CentOS system where someone accidentally deleted everything from /usr, so no lib64, and no bin. I didn’t have a way to get proper files at the time, so I hooked the drive up to my Arch system, made sure glibc matched, and copied yum and other tools from Arch.

Booted the system, reinstalled a whole lot of yum packages, and… the thing still worked.

That’s almost equivalent to a reinstall, though. As a broke college student, I had a laptop with a loose drive that would fall out very easily. I set it up to load a few crucial things into a ramdisk at boot, so that I could browse the web and take notes even if the drive was disconnected, and it would still load images and things. I could pull the cover off and push the drive back in place to save files, but doing that every time I had class got really tiring, so I wanted it to run a little like a live system.
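A rough sketch of that kind of setup, assuming a tmpfs mount and copying a browser profile and notes into it at boot (paths are illustrative):

# RAM-backed filesystem; contents disappear on shutdown
mount -t tmpfs -o size=512M tmpfs /mnt/ram

# stage the things that should keep working with the drive unplugged
cp -a ~/.mozilla /mnt/ram/mozilla
cp -a ~/notes /mnt/ram/notes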

bitchkat,

I have taken a drive with filesystem issues, mounted it on a different machine, and either backed up data I wanted to keep or copied over files to make the original machine runnable.

Jordan_U,

More than a decade ago a user came into #ubuntu-server on Freenode (now Libera.Chat) and said that they had accidentally run “rm -rf /* something*” in a root shell.

Note the errant space that made it a fatal mistake. I don’t remember how far it actually got in deleting files, but all of /bin/, /sbin/, and /usr/ were gone.

He had one active SSH connection, and couldn’t start another one.

It was a server that was “in production”, was thousands of miles away from him, and which had no possibility for IPMI / remote hands.

Everyone (but me) in the channel said that he was just SoL and should just give up.

I stayed up most of the night helping him. I like challenges and I like helping people.

This was in the sysv-init (maybe Upstart) days, so a decent number of shell scripts were running and using basic *nix commands.

We recovered the bash binary by running something along the lines of


bash_binary_contents="$(</proc/self/exe)"
printf "%s" "$bash_binary_contents" > /tmp/bash

(If you can access “lsof” then “sudo lsof | grep deleted” will show you any files that are open, but also “deleted”. You may be surprised at how many there are!)

But bash needed too many shared libraries to make that practical.

Somehow we were able to recover curl and chmod, after which I had him download busybox-static. From there we downloaded an Ubuntu LiveCD ISO, loop-mounted it, loop-mounted the squashfs image inside the ISO, and copied all of /bin/, /sbin/, /etc/, and so on from there onto his root FS.
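The loop-mount dance is roughly the following (the ISO name, mount points, and squashfs path are examples; the path inside the ISO varies by release):

# mount the downloaded ISO, then the compressed root filesystem inside it
mount -o loop ubuntu.iso /mnt/iso
mount -o loop /mnt/iso/casper/filesystem.squashfs /mnt/squash

# copy the missing system directories back onto the damaged root
cp -a /mnt/squash/bin /mnt/squash/sbin /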

Then we re-installed missing packages and fixed up /etc/ (a lot of important daemons, including the one that was production critical, kept their configuration files open, so we were able to use lsof to find the magic symlinks to them in /proc/$pid/fd/ and just cp them back into /etc/).
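That recovery of still-open files works roughly like this (the PID, fd number, and file name are illustrative):

# lsof shows which PID still has the deleted file open, and on which fd
sudo lsof | grep deleted

# say PID 1234 has the deleted config open on fd 7; /proc still
# points at the data, so it can simply be copied back out
cp /proc/1234/fd/7 /etc/some-daemon.conf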

We were able to restart openssh-server, log in again, and I don’t remember if we were brave enough to test rebooting.

But we fucking did it!

I am certainly getting a lot of details wrong from memory. It’s all somewhere at irclogs.ubuntu.com though. My nick was / is Jordan_U.

I tried to find it once, and failed.

bitchkat,

I just told this story to a friend, but I did the standard rm -rf * as root while in the / directory. And this was back in the day when we NFS-mounted every other machine and root privileges propagated through NFS. I think it was on the 2nd or 3rd machine when I thought, “this seems to be taking longer than I thought”.

Veneroso,

I used to main Gentoo.

Breaking the install was more of a guarantee.

I once removed most of X by trying to remove GNOME dependencies, and it led to an interesting couple of hours, but I did have a working system when I was done.

There were countless dependency bugs and broken systems, but at least I learned how to use the Gentoo Forums and also a lot about how Linux works.

I kind of want to give it another go.

AVincentInSpace,

I did a partial system upgrade: I installed nginx without upgrading the rest of my Arch system. One of the things it upgraded along the way was libssl.

Turns out systemd depends on that.

Turns out programs won’t start at all if one of their shared libraries is missing.

Turns out that if you put init=bash on the kernel command line, not even Ethernet connections work if systemd isn’t running.

I had to boot off archiso, chroot into my / partition, and run the system upgrade from there.
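For the record, that rescue path looks roughly like this (the device name is an example):

# from the Arch live ISO
mount /dev/sda2 /mnt     # the broken root partition
arch-chroot /mnt         # chroot with /proc, /sys and /dev set up for you
pacman -Syu              # finish the full system upgrade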

frezik,

A Gentoo upgrade with a list of over 100 packages and conflicts all over the place. Then doing it all again when the list grows back to the same size a few months later.

This is why I don’t use Gentoo anymore.

Veneroso,

I haven’t used Gentoo in years, maybe I should try to main it again.

It was a pain sometimes but man did I learn a lot from using it.

Adderbox76,

Some programs still relying on python2 when the operating system has long since upgraded to python3.

Not really an issue per se; I just had to switch those apps over to the Flatpak version, which brings along what it needs. (I’m looking at you, GIMP.)
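Switching to the Flatpak build is a one-liner, assuming Flathub is already configured as a remote:

flatpak install flathub org.gimp.GIMP   # the Flatpak bundles the runtime it needs
flatpak run org.gimp.GIMP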

Draconic_NEO,

Rescuing my home partition from ZFS. Actually, that doesn’t really count since I did have to reinstall (it was no longer booting), but recovering the home partition from ZFS onto the other ext4 drive was much harder than it should’ve been, and that’s why I would never recommend people use ZFS.

jenny_ball,

agree. zfs is a hairy beast with nice features

AVincentInSpace,

oh god zfs.

tell me, please, who thought it was a good idea for a filesystem to remember the last machine it was mounted from and refuse to let itself be mounted by a different operating system instance even if all the hardware is present?
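For context, ZFS stamps a pool with the hostid of the system that last imported it, so moving the disks usually means forcing the import on the new machine, roughly:

zpool import           # scan attached disks and list importable pools
zpool import -f tank   # force-import a pool last used elsewhere ("tank" is an example name)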

Zacryon,

Nvidia driver fucking X in the arse without lube.

Nolegjoe,

Installing a Hadoop cluster across 5 machines. I wouldn’t say I fixed it, but I made it so it wouldn’t collapse until long after I’d left that company.

swordgeek,

Not Linux, but Solaris, back in the day.

We had a system with a mirrored boot disk. One of the disks failed, and we were unable to boot from the other, because the boot device in OBP (~BIOS) pointed to a device-specific partition. When we manually booted from the live device, it was lacking the boot sector code and wouldn’t boot. When we booted from CD-ROM, the partitions wouldn’t mount because the virtual device mapping pointed to the dead drive.

This was a gas futures trading system, and a rebuild wasn’t an option. Restoring from backup would have lost four hours of trades, so that was an extreme last resort.

A coworker and I spent all night on the box. We had a whiteboard covered with every stage of the boot sequence broken down, and every redirection we needed to (a) boot and (b) repair the system. The issue started mid-afternoon, and we finally got it back up by around 6:30 am.

raspberriesareyummy,

Fast data transmission via TCP over a lossy link.

JargonWagon,

Nothing. I’ve fixed nothing.

Waffelson,

You’re still with us, right?

JargonWagon,

Still trying to use Linux Mint on my 2013ish MacBook Pro as a daily driver. Got the MacBook for free and it wouldn’t update anymore, so I installed Linux Mint and it’s been great for the most part. Still trying to access my NAS on it, though. Having to manually mount drives is a new experience for me, and it’s not coming to me intuitively. Reached out via IRC, got some help, but I’m still working on it.
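In case it helps, manually mounting an NFS share from a NAS goes roughly like this (the hostname, export path, and mount point are placeholders; an SMB share would use -t cifs with credentials instead):

sudo mkdir -p /mnt/nas
sudo mount -t nfs mynas.local:/volume1/share /mnt/nas

# or make it permanent via /etc/fstab:
# mynas.local:/volume1/share  /mnt/nas  nfs  defaults,_netdev  0  0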
