Pons_Aelius,

I spent a decade working in insolvency.

When we went into a business that had failed, the question was: "Are they idiots, criminals, or both?"

One highlight:

A boat sales / marine business goes bust. When we arrive with the paperwork and seize the place, there are about a dozen new boats on the lot worth several million. We change the locks on the gates.

Arrive the next day, the gates have been busted open and several million in boats are now missing. We look up the addresses of the owners (one of them lives on acreage) and drive to their property...from the road we can see the boats stashed there. Really smart guys.

So we call the police. Someone inside notices us there and decides to flee with one of the boats; it is huge, but they think they can get away.

We then have the slowest car chase in history as we calmly follow this guy towing a boat on a trailer down the road, on the phone with the cops telling them where to meet us.

grabyourmotherskeys,

Love this. Lol.

I used to maintain a system that was used to track loss prevention, basically for a security company that followed delivery trucks. It was wild to read database records about these guys openly selling their stolen goods (building supplies) while testing my code changes.

The database was sanitized of identifying info if anyone cares.

Hubi,

I used to work at a car dealership. One day I had to use a bay in a different building because my usual workplace was occupied. The other building had a lift that I hadn’t used before.

Anyways, I drove the car onto the lift, got out and placed the arms of the lift under the jacking points like I had done a thousand times before. I raised the lift a little and checked if the placement was still correct. It looked good, so I raised the car to a medium height. When I looked again, I realized that this lift had a central platform that was also raised and was set about 20 centimeters higher than the four arms that usually lift the car.

This 90,000 Euro SUV was basically balancing on a 180x50cm piece of metal right in the center. I managed to lower it down safely, but my pulse goes up just thinking about that day.

Dio9sys,

Sharing my story for posterity.

I used to work at a medical center for old folks with varying disabilities. It was a great job all things considered, just didn’t pay very well and the scheduling was a mess.

Anyway, one day I’m cleaning tables in the dining room when I hear on my walkie talkie that one of the new people needs help with a guy in the bathroom. Usually “they need help” means “something has gone awry, please unfuck the situation” and, since I was the supervisor on shift, my job frequently involved unfucking the situation.

I arrive outside the bathroom door and the new employee tells me that she walked into a situation that she wasn’t prepared for. I figured it was some poop, or the guy fell asleep on the toilet or something.

I walk in and the walls were all painted with poop. The sink was painted with poop. The floor was painted with poop. The paper towel dispenser had poop all over the front of it.

The poor guy had gone to the bathroom, got confused, and couldn’t remember what toilet paper was. He saw me and knew I was there to help, but he was nonverbal. His way of saying thank you was to gently rest his hand under your chin.

He did so, but his hand was also still covered in poop.

I’m used to poop. It’s a normal job hazard in that line of work. But something about having to clean myself and every surface in the room from caked poop while somebody else gave the poor guy a shower…that kind of story sticks with you. To this day I can’t look at finger paints without feeling a little queasy.

RandomStickman,

Your story makes up for the non-work related stories in this thread. It's both work related and shitty lol. I'm sorry you had to go through that.

deegeese,

I’m sorry, that sounds like a really shitty day.

tetris11,
@tetris11@lemmy.ml avatar

Normally I’m very much anti “let’s use robots to replace jobs”, but this is one case where I think it would be a win for everybody. The robot won’t care, the elderly person won’t feel like their dignity is lost, and it all gets taken care of behind closed doors.

My grandma started losing control of herself towards the end, and my mother did overtime in taking care of her and cleaning her. This sounds sweet, but it was a bad situation for everyone. My mother essentially started treating her own mother like a baby, often in front of us, and my grandmother (a proud and strong woman my entire life) essentially lost her sense of dignity and independence. I still remember her as the strong and proud woman she was, and I do my best to forget her last year.

We need robot caretakers.

Dio9sys,

The only problem is that robots don’t have the kind of sense of connection and humanity that human caretakers often have, on top of the general complexity of the task. I was always frustrated when family would visit and treat their aunt/cousin/etc like a baby when, no, they’re 80 years old and were raised on a farm. It’s really just a matter of needing appropriately trained caretaking staff who are also paid enough, and sadly the industry lacks both.

skullgiver, (edited )
@skullgiver@popplesburger.hilciferous.nl avatar

deleted_by_author

    Dio9sys,

    A coworker once did a bad pull request and knocked down all the Canadian servers. God bless

    radisson,

    Same idea: restored a backup to whatever database was automatically selected in the dropdown instead of to a new database (thanks, SQL Server 2000 Enterprise Manager!).

    Overwrote a client production database that the sysadmin couldn’t find any backup for, two hours before I went on a two week vacation.

    Fun times.

    GreyShuck,
    @GreyShuck@feddit.uk avatar

    An isolated shingle spit nature reserve. We’d lost mains power in a storm some while back and were running on a generator. Fuel deliveries were hard to arrange. We’d finally got one. We were pretty much running on fumes and another storm was coming in. We really needed this delivery.

    To collect the fuel, I had to take the Unimog along a dump track and across 5 miles of loose shingle - including one low causeway stretch through a lagoon that was prone to wash out during storms. We’d rebuilt it a LOT over the years. On the way up, there was plenty of water around there, but it was still solid.

    I get up to the top OK and get the tank full - 2000L of red diesel - but the wind is pretty strong by the time I’m done. Halfway back, I drop down off the seawall and reach the causeway section. The water is just about topping over. If I don’t go immediately, I won’t get through at all and we will be out of fuel for days - maybe weeks. So I put my foot down and get through that section, only to find that 200 meters on, another section has already washed out. Oh shit.

    I back up a little but sure enough the first section has also washed through now. I now have the vehicle and a full load of fuel marooned on a short section of causeway that is slowly washing out. Oh double shit. Probably more than double. Calling it in on the radio, everyone else agrees and starts preparing for a pollution incident.

    In the end I find the firmest spot that I can in that short stretch and leave the Moggie there. Picking my route and my moment carefully, I can get off that ‘island’ on foot - no hope with the truck - BUT, due to the layout of the lagoons, only onto the seaward ridge, where the waves are now crashing over into the lagoon with alarming force. I then spend one of the longest half-hours I can remember freezing cold and drenched, scrambling yard by yard along the back side of that ridge, flattening myself and hoping each time a big wave hits.

    The firm bit of causeway survived and there was no washed away Unimog or pollution in the end - and I didn’t drown either - but much more by luck than judgement.

    These days I am in a position where I am responsible for writing risk assessments and method statements for procedures like this. It was another world back then.

    Dio9sys,

    That is seriously some action movie shit

    xkforce,

    One of the students was pissed at the grade they got and apparently flooded one of our teaching labs with gas by turning on all the Bunsen burners.

    Dio9sys,

    Holy shit! That could have ended very, very badly

    Admetus,

    Probably didn’t blow because he flooded the air with gas; barely any oxygen left, I bet.

    One Bunsen burner though…

    xkforce,

    No, there was plenty of oxygen… at least enough that the lab coordinator working when it happened could get in there and shut all of them off.

    jarredpickles87,
    @jarredpickles87@lemmy.world avatar

    I know most of these stories are going to be IT or food service, so I’ll chime in with mine to change it up.

    TLDR: We caused some explosions on a transformer because someone didn’t read test results.

    I work for a power utility. One night, we were energizing a new transformer. It fed a single strip mall complex with a major grocery chain in it, which is why we did it at night; we couldn’t interrupt their service while they were open.

    Anyways, we go to energize, close the primary switches and one of the lightning arrestors blows up. And I mean blows up, like an M80 just went off. Lit up the sky bright as day for a couple moments at 1 in the morning. The protection opened the switches and everybody is panicking, making sure nobody was hurt.

    Well, after everybody settled down and the arrestor was replaced, they decide to throw it in again. Switches come closed, and explosion #2 happens. A second arrestor blows spectacularly. I tried to convince the one supervisor on site to go for a third time, because why not, but he didn’t want to do it again. Whatever.

    A few days go by and we find out what the issue was. This transformer was supposed to be a 115kV to 13.2kV unit. Come to find out there was an internal tap selection that was set for a 67kV primary, not 115kV. So the voltage was only being stepped down about half as much as needed: there was something like 28kV on the secondary instead of 13.2kV, and that was over the lightning arrestors’ ratings, hence why they were blowing up. So the transformer had to have its oil drained, and guys had to go inside it and physically rewire it to the correct ratio.

    We had a third party company do the acceptance testing on this transformer, and our engineering department just saw all the green checkmarks but didn’t pay attention to the values for the test results. Nobody expected to run into this because we don’t have any of this type of transformer in our system, but that’s certainly no excuse.

    Moral of the story: read your acceptance test results carefully.

    JCPhoenix,
    @JCPhoenix@beehaw.org avatar

    Several years ago, when I was more just the unofficial office geek, our email was acting up. Though we had Internet access as normal. At the time, email (Exchange) was hosted on-prem on our server. Anything server related, I’d contact our MSP to handle it. Which usually meant they’d simply reboot the server. Easy enough, but I was kinda afraid and hesitant to touch the server unless the MSP explicitly asked/told me to do something.

    I reported it to our MSP, expecting a quick response, but nothing. Not even acknowledgment of the issue. This was already going on for like an hour, so I decided to take matters into my own hands. I went to the server, turned on the monitor…and it was black. Well, shit. Couldn’t even do a proper shutdown. So I emailed again, waited a bit, and again no response.

    Well, if the server was being unresponsive, I figured a hard shutdown and reboot would be fine. I knew that’s what the MSP would (ask me to) do. What difference was them telling me to do it versus just me doing it on my own? I was going to fix email! I was going to be the hero! So I did it.

    Server booted up, but after getting past the BIOS and other checks…it went back to black screen again. No Windows login. That’s not so terrible, since that was the status quo. Except now, people were also saying Internet all of a sudden stopped working. Oh shit.

    Little did I know that the server was acting as our DNS server. So I essentially took down everything: email, Internet, even some server access (network drives, DBs). I was in a cold sweat now since we were pretty much dead in the water. I of course reached out AGAIN to the MSP, but AGAIN nothing. Wtf…

    So I told my co-workers and bosses, expecting to get in some trouble for making things worse. Surprisingly, no one cared. A couple people decided to go home and work. Some people took super long lunches or chitchatted. Our receptionist was playing games on her computer. Our CEO had his feet up on his desk and was scrolling Facebook on his phone. Another C-suite decided to call it an early day.

    Eventually, at basically the end of the day, the MSP reached out. They sent some remote commands to the server and it all started working again. Apparently, they were dealing with an actual catastrophe elsewhere: one of their clients’ offices had burned down, so they were focused on BCDR (business continuity and disaster recovery) over there all day.

    So yeah, I took down our server for half a day. And no one cared, except me.

    mkhopper, (edited )
    @mkhopper@lemmy.world avatar

    Strap in friends, because this one is a wild ride.

    I had stepped into the role of team lead of our IS dept with zero training on our HP mainframe system (early 90s).
    The previous team lead wasn’t very well liked and was basically punted out unceremoniously.
    While I was still getting up to speed, we had an upgrade on the schedule to have three new hard drives added to the system.

    These were SCSI drives back then and required a bunch of pre-wiring and configuration before they could be used. Our contact engineer came out the day before installation to do all that work in preparation of coming back the next morning to get the drives online and integrated into the system.

    Back at that time, drives came installed on little metal sleds that fit into the bays.
    The CE came back the next day, shut down the system, did the final installations and powered back up. … Nothing.
    Two of the drives would mount but one wouldn’t. Did some checking on wiring and tried again. Still nothing. Pull the drive sleds out and just reseat them in different positions on the bus. Now the one drive that originally didn’t mount did and the other two didn’t. What the hell… Check the configs again, reboot again and, success. Everything finally came up as planned.

    We had configured the new drives to be a part of the main system volume, so data began migrating to the new devices right away. Because there was so much trouble getting things working, the CE hung around just to make sure everything stayed up and running.

    About an hour later, the system came crashing down hard. The CE says, “Do you smell something burning?” Never a good phrase.
    We pull the new drives out and then completely apart. One drive, the first one that wouldn’t mount, had been installed on the sled a bit too low. Low enough for metal to metal contact, which shorted out the SCSI bus, bringing the system to its knees.

    Fixed that little problem, plug everything back in and … nothing. The drives all mounted fine, but access to the data was completely fucked.
    Whatever… Just scratch the drives and reload from backup, you say.

    That would work… if there were backups. Come to find out that the previous lead hadn’t been making backups for about six months and no one knew. I was still so green at the time that I wasn’t even aware how backups on this machine worked, let alone how to make any.

    So we have no working system, no good data and no backups. Time to hop a train to Mexico.

    We take the three new drives out of the system and reboot, crossing all fingers that we might get lucky. The OS actually booted, but that was it. The data was hopelessly gone.

    The CE then started working the phone, calling every next-level support contact he had. After a few hours of pulling drives, changing settings, whimpering, plugging in drives, asking various deities for favors, we couldn’t do any more.

    The final possibility was to plug everything back in and let the support team dial in via the emergency 2400 baud support modem.
    For the next 18 hours or so, HP support engineers used debug tools to access the data on the new drives and basically recreate it on the original drives.
    Once they finished, they asked to make a set of backup tapes. This backup took about 12 hours to run. (Three times longer than normal as I found out later.)
    Then we had to scratch the drives and do a reload. This was almost the scariest part because up until that time, there was still blind hope. Wiping the drives meant that we were about to lose everything.
    We scratched the drives, reloaded from the backup and then rebooted.

    Success! Absolute fucking success. The engineers had restored the data perfectly. We could even find the record that happened to be in mid-write when the system went down. Tears were shed and backs were slapped. We then declared the entire HP support team to be literal gods.

    40+ hours were spent in total fixing this problem and much beer was consumed afterwards.

    I spent another five years in that position and we never had another serious incident. And you can be damn sure we had a rock solid backup rotation.

    (Well, there actually was another problem involving a nightly backup and an inconveniently placed, and accidentally pressed, E-stop button, but that story isn’t nearly as exciting.)

    DudeDudenson,

    Imagine the difference trying to get that kind of support these days. Especially from HP

    ThatWeirdGuy1001,
    @ThatWeirdGuy1001@lemmy.world avatar

    Had to unload a pistol I’d found in a box of potatoes at Taco Bell.

    Witnessed a five man brawl at steak n shake

    Had a girl puke up her Oreo mint shake all over the bathroom also at steak n shake

    Food service is fuckin wild

    Followupquestion,

    What did the box of potatoes do to you to deserve that?

    captainjaneway,
    @captainjaneway@lemmy.world avatar

    TIL never go to Steak n Shake.

    ThatWeirdGuy1001, (edited )
    @ThatWeirdGuy1001@lemmy.world avatar

    Tbf it was third shift, after the bars closed, so it was a wild experience.

    As far as I’m aware they don’t even do third shift anymore; they stopped during the pandemic. At least the ones near me.

    beardedmoose,

    This is actually my own Oh Shit story.

    Early days of being a sysadmin, making changes on a major Linux server that we have in production. I was running routine commands, changing permissions on a few directories, and I made a typo: I typed “sudo chmod 777 /etc/” and, instead of typing the rest of the directory path, accidentally hit return.

    It only ran for a fraction of a second before I hit CTRL + C to stop it but by then the damage had been done. I spent hours mirroring and fixing permissions by hand using a duplicate physical server. As a precaution we moved all production services off this machine and it was a good thing too as when we rebooted the server a few weeks later, it never booted again.

    For those that don’t know, chmod is used to set access permissions on files and folders; the 777 stands for “Read + Write + Execute” for the owner, the group, and everyone else. The /etc directory contains many of the basic system configuration files for the entire operating system, and many of them are kept under very strict permissions for security reasons. Without the correct permissions in place, those systems will refuse to load the files, or the machine won’t boot at all.
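
    To make the octal notation concrete, here is a quick Python sketch (just an illustration, not one of the commands from that day) of what those three digits expand to:

    import stat

    # Illustration only. Each octal digit of a mode is a 3-bit read/write/execute
    # mask, in order: owner, group, everyone else. So 0o777 means rwx for all three.
    mode = 0o777
    assert mode == stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO

    # stat.filemode renders a mode the way `ls -l` would:
    print(stat.filemode(stat.S_IFDIR | 0o777))  # 'drwxrwxrwx' <- what /etc got
    print(stat.filemode(stat.S_IFDIR | 0o755))  # 'drwxr-xr-x' <- what /etc normally looks like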

    Dio9sys,

    That is literally a nightmare scenario for me, holy shit

    Djtecha,

    If you didn’t use the recursion flag, this wouldn’t be too bad.

    beardedmoose,

    In hindsight I should have just changed into the directory first and then run chmod without the full path. Or used a flag that asks you to confirm each change, or done a dry run. I’m a much smarter idiot nowadays.

    Hadriscus,

    Would that mean it doesn’t affect anything other than top-level files?

    Djtecha,

    Yeah, without the -R flag it only does the path you give it (and since folders are files in Unix…)

    EnderMB,

    (Not my story, but a coworker)

    This person started working for a large retailer. On their first week, they contributed a fix to a filtering system that inadvertently disabled the “adult” filter.

    The team got paged, an error review took place, and we went through the impact of the error - a ~10% increase in sales worldwide, and the best sales outside of the holiday period.

    On one hand, extremely damaging for the company. On the other, on his first week he delivered impact that exceeded the whole team’s goal for the year.

    That person is now in senior management at a huge tech company.

    cashews_best_nut,

    Twitch?

    Tar_alcaran,

    Happy ending story, but it’s still gross.

    I do workplace safety and hazardous material handling (instructions, plans, regulations, etc.) for all sorts of facilities, from dirty ground to lab waste.

    Hospitals have a number of types of dangerous waste, among them stuff that gets disinfected in bags in an autoclave (oven) and stuff that shouldn’t be in a bag, like needles, scalpel blades, etc.

    I was giving some on-site instructions, which included how to dispose of things. So I tell the people never to assume someone does everything right, because we’ve all thrown trash in the wrong bag at some point, and you don’t want to find out someone left a scalpel in the autoclave bag by jamming the bag into the container and pulling a needle out of your hand.

    My eye drifts slightly left, to one of my students currently assisting another worker in doing literally that: stuffing a second bag into the autoclave, then shouting “OW, fuck” and dripping blood on the ground.

    Now, nobody knows what’s in the bag. Some moron threw sharps in with the bio waste, who knows where it’s from. For all I know, they just caught zombie-ebola, and it’s my fault for talking slightly too slow.

    Thankfully, after some antibiotics and fervent prayer, everything turned out to be OK.

    NickwithaC,
    @NickwithaC@lemmy.world avatar

    How is that a happy ending story?

    Congratulations, you didn’t get coronAIDSyphilis.

    peter,
    @peter@feddit.uk avatar

    Well, that is a happy ending, isn’t it?

    cheesymoonshadow,
    @cheesymoonshadow@lemmings.world avatar

    I know I was waiting for the splooge part too.

    dan,
    @dan@upvote.au avatar

    I broke the home page of a big tech (FAANG) company.

    I added a call to an API created by another team. I did an initial test with 2% of production traffic + 50% of employee traffic, and it worked fine. After a day or two, I rolled out to 100% of users, and it broke the home page. It was broken for around 3 minutes until the deployment oncall found the killswitch I put in the code and turned it off. They noticed the issue quicker than I did.

    What I didn’t realise was that only some of the methods of this class had Memcache caching. The method I was calling did not. It turns out it was running a database query against a DB with a single shard and only 4 replicas, which wasn’t designed for production traffic. As soon as my code rolled out to 100% of users, the DBs immediately fell over from tens of thousands of simultaneous connections.

    Always use feature flags for risky work! It would have been broken for a lot longer if I didn’t add one and they had to re-deploy the site. The site was continuously pushed all day, but building and deploying could take 45+ mins.

    Vendetta9076,
    @Vendetta9076@sh.itjust.works avatar

    I work on a SOC team and we’re really trying to hammer the usage of feature flags into our devs.

    WhyAUsername_1,

    What are feature flags?

    dan,
    @dan@upvote.au avatar

    Feature flags are just checks that let you enable or disable code paths at runtime. For example, say you’re rewriting the profile page for your app. Instead of just replacing the old code with the new code, you’d do something like:

    
    if (featureIsEnabled('profile_v2')) {
      // new code
    } else {
      // old code
    }
    

    Then you’d have some UI to enable or disable the flag. If anything goes wrong with the new page after launch, flip the flag and it’ll switch back to the old version without having to modify the code or redeploy the site.

    Fancier gating systems let you do things like roll out to a subset of users (eg a percentage of all users, or to 50% of a particular country, 20% of people that use the site in English, etc) and also let you create a control group in order to compare metrics between users in the test group and users in the control group.
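
    For a rough idea of how that percentage rollout works, here’s a hypothetical Python sketch of the bucketing idea (a made-up is_enabled helper, not any real company’s gating system): hash the flag name plus the user ID so each user lands in a stable bucket from 0 to 99, then compare the bucket to the rollout percentage.

    import hashlib

    def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
        # Hypothetical helper for illustration; real gating systems are more involved.
        # Deterministic bucketing: the same user always lands in the same bucket,
        # so ramping 20% -> 50% keeps the original 20% enabled.
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest[:8], 16) % 100 < rollout_percent

    # e.g. roll the hypothetical 'profile_v2' flag out to 20% of users:
    if is_enabled("profile_v2", user_id="12345", rollout_percent=20):
        pass  # new code path
    else:
        pass  # old code path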

    Larger companies all have custom in-house systems for this, but I’m sure there’s some libraries that make it easy too.

    At my workplace, we don’t have any Git feature branches. Instead, all changes are merged directly to trunk/master, and new features are all gated using feature flags.

    WhyAUsername_1,

    Wow that’s so effing smart!

    Vendetta9076,
    @Vendetta9076@sh.itjust.works avatar

    Everything Dan said and more. They’re sometimes also called canaries, although that’s not quite the same thing. There’s been a ton of times where services have been down for hours instead of minutes because a dev never built in a feature flag.

    Hadriscus,

    Canaries, relating to mine work?

    Vendetta9076,
    @Vendetta9076@sh.itjust.works avatar

    That’s where the term derives from, yes.

    cashews_best_nut,

    What language? PHP, python?

    jjjalljs,

    Always use feature flags for risky work! It would have been broken for a lot longer if I didn’t add one and they had to re-deploy the site. The site was continuously pushed all day, but building and deploying could take 45+ mins

    This reminds me of the old saying: everyone has a test environment. Some people are lucky enough to have a separate production environment, too.

    shani66,

    Now that I think about it, my first job was fucking wild.

    My buddy was in a forklift taking some stock down and I was spotting, basically just hanging out and making sure no one got in the way. A few minutes after the normal time it’d take, he thinks something is wrong and calls me to take a look (from afar) to see how fucked we are. The answer was very: the pallet was barely holding together at all, but I couldn’t see a damn thing from my position. Before I could get back to spotting we heard a loud crack and the world went still (for much longer for him, I imagine), and not a second later we had hundreds of pounds of foul-smelling mulch everywhere.

    I had a lot more stories there too: babysitting an old man who looked on the verge of death with no management anywhere to be found, moving hundreds of pounds at a time by hand, dealing with the best conspiracy theorist ever.

    I’ve been bored everywhere else I’ve ever worked.

    Dio9sys,

    There’s something about physical labor jobs that results in everybody having one story about babysitting somebody who is actively dying.

    Hadriscus,

    What do you mean, the best conspiracy theorist??

    shani66,

    Turns out the world is flat. And under a dome. And Jesus is on top of the dome. And aliens have visited us in the dome. And so much more.

    Hadriscus,

    Wow! I knew the world was flat and under a dome, but I had no idea Jesus was literally on top. Thanks for completing my world view 😂 I’ve had crazy exchanges with some of these people but I have yet to meet any IRL. I guess they don’t hang out where I live.
