mwguy,

You should be able to take the binlogs and upload them. Then in a restore situation you’d restore your last full db snapshot and replay your binlogs up until the point you lost the server.
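A minimal sketch of that flow, assuming self-hosted MySQL with binary logging enabled; the bucket name, file names, and dates are all hypothetical:

```shell
# Nightly full snapshot; --master-data=2 (--source-data=2 on newer MySQL)
# records the binlog position in the dump so replay knows where to start.
mysqldump --all-databases --single-transaction --master-data=2 \
  | gzip > "full-$(date +%F).sql.gz"
aws s3 cp "full-$(date +%F).sql.gz" s3://my-db-backups/full/

# On a schedule, rotate so the current binlog is closed, then ship them
mysql -e "FLUSH BINARY LOGS"
aws s3 sync /var/lib/mysql/ s3://my-db-backups/binlogs/ \
  --exclude "*" --include "binlog.*"

# Restore: load the last full dump, then replay binlogs up to the loss
gunzip -c full-2024-06-01.sql.gz | mysql
mysqlbinlog --stop-datetime="2024-06-02 03:15:00" \
  binlog.000123 binlog.000124 | mysql
```

The `--stop-datetime` cutoff is what gives you point-in-time recovery: anything after the server was lost (or a bad query was run) simply isn't replayed.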

hypnotic_nerd,

Thanks, it's very helpful 🙌🙌

RandomDevOpsDude,

I'm a bit late to the show, but I personally feel like you're heading down the wrong path. Unless you're trying to host completely locally but for some reason want your backups in the cloud rather than simply on a separate local server, you're mixing your design for seemingly no reason. If you're hosting locally, you should back up to a separate local instance.

If you are indeed cloud-based, you SHOULD NOT be hosting a DB separately. Since you specified S3, you're on AWS, so you should instead use RDS-managed MySQL and its built-in snapshot feature. ref
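For reference, the snapshot feature boils down to a couple of AWS CLI calls (instance and snapshot names here are hypothetical):

```shell
# Turn on automated daily snapshots, kept for 7 days
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --backup-retention-period 7

# Or take a manual snapshot whenever you want
aws rds create-db-snapshot \
  --db-instance-identifier mydb \
  --db-snapshot-identifier mydb-2024-06-01

# Restoring creates a NEW instance from the snapshot (you don't
# restore in place), which you then point your app at
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb-restored \
  --db-snapshot-identifier mydb-2024-06-01
```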

dbx12,

I know that we specifically don’t use the snapshot feature for a reason. I think it has to do with how snapshots are restored. But I would need to ask my colleague why exactly we’re not doing it.

We do full dumps and data-only dumps in regular intervals.
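The distinction between those two dump types is just a couple of mysqldump flags; a sketch, with the database name hypothetical:

```shell
# Full dump: schema + data, plus stored routines and triggers
mysqldump --single-transaction --routines --triggers mydb > full.sql

# Data-only dump: INSERTs without the CREATE TABLE statements
mysqldump --single-transaction --no-create-info mydb > data-only.sql

# The inverse also exists: schema only, no rows
mysqldump --no-data mydb > schema-only.sql
```

The data-only dump is handy when you want to reload rows into an existing schema (e.g. one managed by migrations) without dropping and recreating tables.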

ExperimentalGuy,

I bet you could make a Lambda function that periodically backs up your database. That's probably the route I'd go down because it's more cost-effective than other options. The only thing I'd be concerned about is configuring permissions for the Lambda function and the S3 bucket. Take this with a grain of salt, though; I've only recently started getting into cloud stuff.
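A rough sketch of what such a function could look like. Everything here is an assumption, not a known setup: the environment variable names, the bucket, and especially the idea that a `mysqldump` binary is bundled in a Lambda layer at `/opt/bin`. The function's IAM role would need `s3:PutObject` on the bucket, and the function would need network access to the DB (e.g. running in the same VPC):

```python
import os
import subprocess
from datetime import datetime, timezone

import boto3  # available in the AWS Lambda Python runtime

# Hypothetical configuration, passed in via Lambda environment variables
BUCKET = os.environ["BACKUP_BUCKET"]

s3 = boto3.client("s3")


def handler(event, context):
    """Dump the database and upload the result to S3."""
    key = f"backups/mydb-{datetime.now(timezone.utc):%Y-%m-%dT%H%M}.sql"

    # Assumes a mysqldump binary shipped in a Lambda layer at /opt/bin
    dump = subprocess.run(
        [
            "/opt/bin/mysqldump",
            "--single-transaction",
            "-h", os.environ["DB_HOST"],
            "-u", os.environ["DB_USER"],
            f"-p{os.environ['DB_PASSWORD']}",
            os.environ["DB_NAME"],
        ],
        capture_output=True,
        check=True,
        timeout=600,
    )

    s3.put_object(Bucket=BUCKET, Key=key, Body=dump.stdout)
    return {"uploaded": key}
```

Triggering it on a schedule would then just be an EventBridge rule pointing at the function.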
