wild1145

@wild1145@mastodonapp.uk

Site Owner of mastodonapp.uk & universeodon.com

If you wish to support mastodonapp.uk, feel free to donate over on our Ko-Fi page: https://ko-fi.com/mastodonappuk

My wonderful boyfriend: https://mastodonapp.uk/@PrinceMumbles

Systems Engineer @ Amazon Web Services

Managing Director @ ATLAS Media Group Ltd.

Doing things with clouds

He / Him / His

Views are always my own.


wild1145, to random

On that note (And as I have to be up early potentially) I'm going to head to bed. Good night all.

wild1145, to random

We are back folks, I'm really sorry for the major outage we've had since around 3pm today. I'll write up a proper report on this and share it shortly.

We will need to perform some further maintenance tomorrow but for now I really do need some sleep!

wild1145, to macos

Since my latest update, iMessage has been trying to do predictive text. If it happens to annoy you as much as it annoys me, here is the guide to disable it, which saves you finding the exact keywords to google (As it took me a lot longer than I'd care to admit) - https://www.macrumors.com/how-to/how-to-disable-macos-inline-predictive-text/

wild1145, to random

One of the fall-outs of sorts from the maintenance this morning is that it's identified a fairly significant storage issue across our servers. Right now there isn't a huge amount I can realistically do other than invest a lot of money in more servers, and that's not entirely financially viable with the current state of donations from here and Universeodon (And it impacts both).

wild1145,

I'm looking to see if there are any cost-effective solutions that don't make the sites more likely to run slow or otherwise crash, but it's easier said than done.

Right now our database server host is running at 99.75% disk usage, and it's having some fairly significant operational impact behind the scenes (As well as costing me a lot of sleep), and while the container running the database has some headroom, it's not enough.
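For context, the kind of check behind those numbers is roughly the following. This is a minimal sketch only; the mount points and the 95% threshold are placeholders, not our actual server layout:

```python
# Minimal sketch of a disk-usage check; paths and threshold are illustrative only.
import shutil

MOUNTS = ["/", "/var/lib/postgresql"]  # placeholder mount points
THRESHOLD = 0.95                       # warn once usage crosses 95%

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction >= THRESHOLD:
        print(f"WARNING: {mount} is {used_fraction:.2%} full, "
              f"{usage.free / 1e9:.1f} GB free")
```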

wild1145,

Our other database host for Universeodon is sitting at a bit over 80% disk usage, and we have one of our more general purpose application servers running at 80% and another at over 90% as well.

This means I'm starting to have to look at whether the current scale we are running the sites at is sustainable (For both here and Universeodon) and whether reducing the number of servers we have is going to be the only real way forward so we can invest in a larger-capacity server for our DBs.

wild1145,

I'm fortunate enough to get a solid amount per month in donations, with a relatively small chunk being monthly subscriptions / payments (A lot of our monthly income is you kind folks giving ad-hoc contributions). However, we're just starting to outgrow our original architecture and design, and some of our infrastructure is locked into contractual obligations / purchase agreements which helped to save us a lot of money over the longer term.

wild1145,

MastodonApp.UK and Universeodon won't be going anywhere, I want to make that very clear. I just need to figure out if there is a cost-effective and scalable way to grow our storage capacity without causing more technical headaches or making things a lot worse. If I can't, I will look into options to scale back some of our servers and reduce our footprint so we can invest in new servers with the higher capacity we urgently need.

wild1145,

@thisismissem We managed to shave around 80GB off our statuses table today through a VACUUM FULL and ultimately exporting and re-importing the database to fix the encoding issue. One of the bigger issues is needing to re-export the DB, rebuild the database VM entirely and re-create it, as the disk has ballooned a lot larger than it actually needs to be, but at the same time we need 2.5x the size of the DB to be free on disk at any given time for effective / full backups of the DB.
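For anyone curious, the reclaim step amounts to something like the sketch below. The connection string is a placeholder, and VACUUM FULL takes an exclusive lock and rewrites the table, so it needs downtime plus enough free disk to hold a full copy of the table while it runs:

```python
# Sketch of the space-reclaim step: VACUUM FULL on the statuses table.
# The DSN is a placeholder, not the real connection details.
import psycopg2

conn = psycopg2.connect("dbname=mastodon_production user=mastodon")  # placeholder DSN
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    cur.execute("VACUUM (FULL, VERBOSE) statuses;")

conn.close()
```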

wild1145,

@thisismissem And with the DB currently around 150GB, we've hit that critical tipping point where we're starting to see issues with backups reliably working, especially if one tries to run while another is already in progress.

In terms of posts / content, I suspect a lot of the remote content we store is never actually seen, but that was originally a comfortable trade-off I was willing to make for better content visibility within the server, given fetching remote posts on demand isn't a thing.

wild1145,

@renchap @thisismissem Yes, currently it'll dump the database to disk, gzip it and upload it to a bucket. I'm just testing now, as I think I have freed up just enough space that the current VM in its current config might just be able to perform a single backup at a time (Which was an issue with my old setup, as it was running different backups over overlapping times).
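Purely as an illustration of that flow (the lock file, bucket name, database name and paths below are made up, not the real script), it's roughly:

```python
# Rough sketch of the backup flow: pg_dump -> gzip -> upload to an S3-compatible
# bucket, with a simple lock file standing in for "only one backup at a time".
# All names, paths and the bucket are placeholders.
import datetime
import subprocess
from pathlib import Path

import boto3

LOCK = Path("/tmp/db-backup.lock")   # hypothetical lock file
BUCKET = "example-db-backups"        # hypothetical bucket name
DB_NAME = "mastodon_production"      # placeholder database name

if LOCK.exists():
    raise SystemExit("Another backup appears to be running; aborting.")
LOCK.touch()

try:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_path = Path(f"/var/backups/{DB_NAME}-{stamp}.sql.gz")

    # pg_dump writes plain SQL to stdout; gzip compresses it straight to disk.
    with dump_path.open("wb") as out:
        dump = subprocess.Popen(["pg_dump", DB_NAME], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.stdout.close()
        if dump.wait() != 0:
            raise RuntimeError("pg_dump failed")

    # Ship the compressed dump to object storage, then drop the local copy.
    boto3.client("s3").upload_file(str(dump_path), BUCKET, dump_path.name)
    dump_path.unlink()
finally:
    LOCK.unlink(missing_ok=True)
```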

wild1145,

@renchap @thisismissem Actually I've just spotted that my number and wording were incorrect; I had meant to say we need more like 1.5x the DB size to be free on the disk. That's me confusing total space needed with free space.
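To put numbers on that correction (the mount point below is a placeholder; the ~150GB figure is the one mentioned above), the check is just:

```python
# Quick arithmetic behind the "1.5x the DB size free" rule of thumb:
# a ~150 GB database wants roughly 225 GB of free disk before a backup runs.
import shutil

DB_SIZE_GB = 150                     # approximate current DB size
HEADROOM_FACTOR = 1.5                # free space needed relative to DB size
DATA_MOUNT = "/var/lib/postgresql"   # placeholder mount point

free_gb = shutil.disk_usage(DATA_MOUNT).free / 1e9
needed_gb = DB_SIZE_GB * HEADROOM_FACTOR
print(f"Need ~{needed_gb:.0f} GB free, have {free_gb:.0f} GB: "
      f"{'ok' if free_gb >= needed_gb else 'not enough'}")
```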

wild1145,

@aroundthehills This is the toots and the metadata around them. The images / videos / other content are stored in an S3-like provider and served separately, with the DB (As far as I know) just knowing where to look for them. The 99.75% is unsustainable and I need to figure out quite how to get that down with minimal disruption to the site here, which as usual may be easier said than done!

wild1145,

@ReCyclist As it currently stands there are 16.4k active users across the instances. I haven't got a 100% accurate number on current / potential future costs (Though right now what comes in monthly is more or less covering month-to-month costs). The reality is that if every one of those active accounts gave £1 a year it would comfortably cover the server costs, but I know (And it's very understandable with how things work) that the majority don't contribute towards costs.

wild1145,

@ReCyclist Looking at the December data, we had 285 contributions of various amounts through our Ko-Fi, and we currently have 166 members on Patreon who are also contributing various amounts.

I may need to update my posts, because this wasn't intended to be an entirely financial issue right now. We can cover our costs (Mostly, and most months) without issue; my concern is how to technically grow our server capacity without breaking the bank, and how to keep that sustainable financially long term.

wild1145,

@emilweth @thisismissem @renchap I had forgotten about pgbackrest; I'll have to take another look, as that might actually do everything my current script tries to do (and probably does poorly), and more. I didn't remember barman, but I'll also give that a look and see if it at least stops the "extra" storage being an issue longer term.

wild1145,

@anotheramy Ahh that was my inability to write proper HTML last night. Thank you for letting me know, that should now be fixed! :)

wild1145,

@LiveByReason I've considered (a few times) doing something similar myself, as I've got a 1G broadband link. The biggest challenge for the major pain areas we currently have is latency, and that (combined with speed) is what made the solution we previously tried pretty bad.

It would however be a pretty interesting project. I've seen some similar types of technical solutions for things like archiving sites that are going away, but that's more scraping than distributed storage.

wild1145, to random

Sorry for the performance issues there; it looks like our content processing service was absolutely thrashing our database, as we had a bug in our configuration. That should now be resolved and the site should be running a lot smoother!

wild1145, to random

There seem to be some folks who missed the announcements; my apologies to you all. We have an ongoing challenge around how to notify people of maintenance here and across our services without doing huge e-mail blasts.

To try to mitigate this, I've been working on a new e-mail subscription service for sending out maintenance and incident notifications. If you would like to be notified when we plan maintenance or have unexpected issues going forward, please feel free to subscribe: https://listmonk.mastodonapp.uk/subscription/form

wild1145,

@peteorrall It will be, yes. I was originally intending to run a single mailing list for both sites, but it didn't quite work as well. Once I get a bit more free time I'll get a version set up for Universeodon :)

UrbanCityCowboy, to random

Is Mastodong just bonged for me or are other peeps having probs today?

wild1145,

@UrbanCityCowboy Are you still having issues now? There was maintenance scheduled from 9am and it didn't finish until around 3:20pm UK time.

wild1145, to random

Attention Community:

I will be taking the site offline on Jan 3rd between 09:00 and 13:00 UK Time to perform essential maintenance to our database server. This is one of a few pieces of database maintenance I need to get completed to make the site more maintainable and to make the upcoming Mastodon upgrade that bit easier.

During this time the site will be fully offline.

You will be able to view updates on https://status.atlas-media.co.uk or one of my alt accounts on the fedi.

wild1145,

I'll look to post updates from @wild1145 primarily with @wild1145 being my fallback in case anything here spills into Universeodon (Which I don't think it should)

Moocher, to random

Dear @bbcnewsfeed Please stop giving air time to that #unelected -seeking . Thank you.

wild1145,

@Moocher @bbcnewsfeed FWIW it's an automated bot, and it isn't run / maintained by the BBC. Unfortunately the BBC have still not really adopted the fediverse as a whole.
