vwbusguy,
@vwbusguy@mastodon.online avatar

Is GlusterFS still used anywhere? 10 years ago, it was a big deal.

johnl,
@johnl@mastodon.social avatar

@vwbusguy We've been using it for 13+ years and still do! Gluster lets us sacrifice a bit of consistency to get better availability, something you cannot do with Ceph. It only works for specific workloads, though (it's ideal for WORM stuff). It's also rather nice to just have the backing store be actual files on a filesystem!
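
That last point is quite literal: a Gluster brick is just a directory tree of ordinary files, with Gluster's own metadata kept in extended attributes such as trusted.gfid. A rough Python sketch of poking at a brick directly; the brick path is hypothetical, and reading trusted.* xattrs needs root:

    import os

    # Hypothetical brick path; real deployments lay bricks out however they like.
    BRICK = "/data/glusterfs/myvol/brick1"

    for root, _dirs, files in os.walk(BRICK):
        if "/.glusterfs" in root:
            continue  # skip Gluster's internal housekeeping tree
        for name in files:
            path = os.path.join(root, name)
            # Each entry is an ordinary file; Gluster keeps its metadata in xattrs.
            try:
                gfid = os.getxattr(path, "trusted.gfid")
            except OSError:
                gfid = b""
            print(path, os.path.getsize(path), gfid.hex())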

rfc2549,
@rfc2549@fosstodon.org avatar

@vwbusguy I've had to do some Gluster work recently and found that, unfortunately, it's been falling out of maintenance.

I mean things like the GlusterFS Ansible collection being taken out of the community package because of that, or the very outdated container images on both Docker Hub and Quay.io.

I feel the project gradually fell on awkward times after the Ceph acquisition.

I've seen OpenShift run exclusively on Gluster, and then eventually things changed.

¯\_(ツ)_/¯

vwbusguy,
@vwbusguy@mastodon.online avatar

@rfc2549 Yeah, Rook was a game changer for Ceph, even if it was fairly volatile at first. I've watched Rook I/O burn entire k8s clusters to the ground.

AndreasDavour,
@AndreasDavour@dice.camp avatar

@vwbusguy GlusterFS had some serious limitations because it was file based: brick sizes and file sizes had to align, otherwise you would get an error even if there was space left in the pool. I can imagine that didn't help.
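
A toy sketch of that limitation (all numbers made up): with file-level distribution, a whole file has to land on a single brick, so a write can fail with "no space" even though the pool as a whole still has plenty free:

    # Hypothetical free space per brick, in GiB.
    bricks = {"brick1": 100, "brick2": 100, "brick3": 100}

    def place(size_gib):
        total_free = sum(bricks.values())
        target = max(bricks, key=bricks.get)  # brick with the most free space
        if bricks[target] < size_gib:
            return (f"ENOSPC: no single brick can hold {size_gib} GiB, "
                    f"even though {total_free} GiB is free across the pool")
        bricks[target] -= size_gib
        return f"placed {size_gib} GiB on {target}"

    print(place(80))   # fits on one brick
    print(place(150))  # fails: 220 GiB free overall, but no single brick has 150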

vwbusguy,
@vwbusguy@mastodon.online avatar

@AndreasDavour What was your alternative? Did you not need multiple writable mounts of the same FS in your environment?

AndreasDavour,
@AndreasDavour@dice.camp avatar

@vwbusguy We used some manual sorting and queue control, and mostly used NFS instead.

vwbusguy,
@vwbusguy@mastodon.online avatar

@AndreasDavour Yeah, at my last shop, we migrated prod stuff from GlusterFS to NFS (eventually via DRBD) because GlusterFS misbehaving would set the world on fire. NFS may be old and have some significant limitations, but it's generally very reliable and ubiquitously supported. Once you get DRBD set up, it tends to "just work". NFS choking because a directory had tens of thousands of files being queried multiple times a second tended to only impact those queries, not everything else.

vwbusguy,
@vwbusguy@mastodon.online avatar

@AndreasDavour (In that case, it was shared PHP session files that the application was failing to clear, so stale session data kept accumulating endlessly until performance degraded.)
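
A periodic cleanup along these lines is the usual stopgap for that kind of session buildup; the directory path and cutoff here are hypothetical:

    import os
    import time

    SESSION_DIR = "/var/lib/php/sessions"  # hypothetical shared session mount
    MAX_AGE = 24 * 3600                    # drop sessions untouched for a day

    now = time.time()
    removed = 0
    with os.scandir(SESSION_DIR) as entries:
        for entry in entries:
            # PHP names its file-backed sessions sess_<id>.
            if entry.is_file() and entry.name.startswith("sess_"):
                if now - entry.stat().st_mtime > MAX_AGE:
                    os.unlink(entry.path)
                    removed += 1
    print(f"removed {removed} stale session files")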

6013dc0a775ea416a5b620159c7094edd7c7153f93aa83981ae67ea9d0581f3f,

We tried it, but performance was terrible with the storage class on Kubernetes (RKE2).

vwbusguy,
@vwbusguy@mastodon.online avatar

@6013dc0a775ea416a5b620159c7094@mostr.pub Was glusterfs running on the compute nodes or external to them? I would think performance should be very good if it's using local storage.

opsitive,

@vwbusguy 10 years ago, our GlusterFS cluster started causing issues on a weekly basis. We switched to MooseFS and have slept peacefully ever since.

carbontwelve,
@carbontwelve@notacult.social avatar

@vwbusguy I remember using it for a high-availability cluster; it seems the company behind it folded or something like that.

vwbusguy,
@vwbusguy@mastodon.online avatar

@carbontwelve Red Hat owns Gluster.

carbontwelve,
@carbontwelve@notacult.social avatar

@vwbusguy they do now, yes. As far as I am aware it’s being sunset?

vwbusguy,
@vwbusguy@mastodon.online avatar

@carbontwelve Yup. Red Hat seems to have standardized on Ceph, which is now owned by its parent, IBM.

vwbusguy,
@vwbusguy@mastodon.online avatar

@carbontwelve To be clear, I don't think this is a controversy (apart from IBM saying it would leave Red Hat alone and then taking Ceph and its team from Red Hat). GlusterFS usage has clearly dropped off significantly, and Ceph seems to have largely won over former GlusterFS users. I'm curious about why that is, though. It's not like Ceph is a simple undertaking, as far as distributed filesystems go.

carbontwelve,
@carbontwelve@notacult.social avatar

@vwbusguy Same, and I don’t really know the differences between GlusterFS and Ceph well enough to make a comparison, other than that they are similar solutions.

You’re right that a while back there was a lot of buzz around GlusterFS; it’s why I began using it when I deployed a high-availability cluster. Nowadays, however, I’d use something like S3, a mirrored database cluster, and a server scale-set or lambda equivalent.

This might be why it’s no longer talked about? Cloud ate its lunch?

vwbusguy,
@vwbusguy@mastodon.online avatar

@carbontwelve Ceph eventually found its way there with Rook. And I'm sure Ceph's eventual versatility wrt RGW (for S3), NFS, and getting CephFS support built into the Linux kernel all helps explain why it's more popular now, but I'm still curious why GlusterFS didn't keep growing and adapting in those ways. It's possible that Red Hat couldn't give both Ceph and Gluster its full attention. Considering they also abruptly dropped btrfs support at the time, that seems likely.
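
The RGW point is easy to demonstrate: RGW speaks the ordinary S3 API, so any stock S3 client works against it unchanged. A minimal boto3 sketch, with a hypothetical endpoint and placeholder credentials (7480 is RGW's default port):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.internal:7480",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo")
    s3.put_object(Bucket="demo", Key="hello.txt", Body=b"stored in Ceph via RGW")
    print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())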

vwbusguy,
@vwbusguy@mastodon.online avatar

@carbontwelve I also honestly don't grok why LINBIT's stuff isn't more popular. A lot of people are deploying Ceph where DRBD and/or Piraeus would be more practical and performant.

dwm,
@dwm@mastodon.social avatar

@vwbusguy @carbontwelve

Sadly, I can't agree. I ran DRBD + XFS + Corosync + Pacemaker for a while for a mission-critical, highly-available workload, and found it desperately unreliable. I once had XFS error out on me, reporting filesystem corruption; DRBD subsequently reported 1 GiB of inconsistency between what were supposed to be two bit-for-bit-identical replicas.

CephFS went stable shortly after; I migrated, and it's been absolutely rock solid—even with datacentre snap-to-black incidents—ever since.

vwbusguy,
@vwbusguy@mastodon.online avatar

@dwm @carbontwelve XFS was the Achilles' heel there. I've had to recover XFS over Ceph maybe a dozen times over the past year.

dwm,
@dwm@mastodon.social avatar

@vwbusguy @carbontwelve
DRBD's one job is to keep two separate devices identical across a network. It failed; I ended up with a GB of bit differences. Regardless of the filesystem in use on top of that, that's game over.

vwbusguy,
@vwbusguy@mastodon.online avatar

@dwm @carbontwelve I mean, it sounds like it did. If xfs eats itself, that corruption is going to get sync'd, too. That's certainly also been true for me with ceph rbd!

dwm,
@dwm@mastodon.social avatar

@vwbusguy @carbontwelve

I may not have clearly explained what I meant.

Running drbdadm verify on the live replicated block device showed a gigabyte of data was not correctly mirrored.

If it was the case that XFS had gotten into an inconsistent state and that inconsistency was replicated across devices, then sure — that would make sense, and would not be DRBD's fault.

But what I saw was two block devices that should have been bit-for-bit identical and weren't, and keeping them identical was DRBD's job.
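
For anyone following along, the check being described is DRBD's online verify: you start it with drbdadm, it compares the two replicas in the background, and any mismatching blocks it finds can then be resynced with a disconnect/reconnect cycle. A rough sketch wrapping the real commands in Python, with a hypothetical resource name:

    import subprocess

    RESOURCE = "r0"  # hypothetical DRBD resource name

    def drbd(*args):
        """Run a drbdadm subcommand and return its output."""
        cmd = ("drbdadm",) + args
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Start an online verify pass; it runs in the background and reports
    # out-of-sync blocks via the kernel log (and as an out-of-sync counter
    # in `drbdadm status` on DRBD 9, or /proc/drbd on older releases).
    drbd("verify", RESOURCE)

    # ...once the verify pass has finished, a disconnect/reconnect cycle
    # makes DRBD resynchronise whatever blocks the verify flagged.
    print(drbd("status", RESOURCE))
    drbd("disconnect", RESOURCE)
    drbd("connect", RESOURCE)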

gurubert,

@vwbusguy @carbontwelve GlusterFS is file-based replication on the client side, whereas Ceph is object-based replication on the cluster side. Much more flexible IMHO.

vwbusguy, (edited)
@vwbusguy@mastodon.online avatar

@gurubert @carbontwelve You're not wrong architecturally, though Ceph also has NFS and CephFS, and S3 via RGW, in addition to RBD - so yeah, definitely more flexible. I'm curious why GlusterFS never expanded its offerings the way Ceph did.
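
To make the architectural difference concrete: RBD, CephFS, and RGW are all layered on RADOS, and replication is a per-pool, cluster-side concern the client never sees. A minimal sketch with the Python rados bindings, assuming a reachable cluster, a keyring, and a hypothetical pool name:

    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # default config path
    cluster.connect()
    try:
        # "demo-pool" is hypothetical; its replication factor (or EC profile)
        # is a property of the pool, applied by the cluster, not by the client.
        ioctx = cluster.open_ioctx("demo-pool")
        try:
            ioctx.write_full("greeting", b"stored as a RADOS object")
            print(ioctx.read("greeting"))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()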
