Is RAID still needed?

A friend and I were discussing this the other day, and he said RAID is no longer needed. His reasoning was how big SSDs have gotten, and that apparently you can remap sectors within them if a problem occurs, which is why having an array is not needed.

I replied that arrays provide redundancy, which means less downtime when there are issues and a drive needs to be replaced. Depending on what you are doing, that is more valuable than just relying on the new hardware. Especially because, depending on the configuration, RAID's redundancy can rebuild lost data if needed.

What do you all think?

KeepFlying,

Maybe maybe MAYBE for a prosumer desktop situation it’s less necessary than it used to be. But it’s absolutely still needed; your friend is being reckless with their data.

Drives fail all the time, not just sectors.

thorbot,

This is a total load of bullshit; your friend is wrong.

Doombot1,

…absolutely, positively, super false. I work in a sector where we’re constantly dealing with huge-capacity enterprise SSDs - 15 and 30 terabytes at times. We always use RAID. It’s not even a question. Not only can you have controller malfunctions, but even with what’s known as “over-provisioning” on SSDs, you still need to watch out for total disk failures!

winnie,
@winnie@lemmy.ml avatar

you can replace sectors within them if a problem occurs

That won’t help you if the sector where your data is located dies!

xkforce,

Higher-end Samsung SSDs were dying a lot faster than they should. I don’t know what drugs your friend is on thinking they can’t fail, but they’d better have brought enough for the rest of the class.

LemmyHead,

I’d say “old” RAID could be dead if you have proper backups and can replace a defective drive fast in cases where uptime is crucial. But there’s also modern RAID like btrfs and zfs, which can also repair corrupted files, caused by bitrot for example. Old RAID can’t do that, and neither could hardware-based RAID when I used it years ago. Maybe that has changed, but I don’t see the point of hardware-based RAID in most cases anymore.
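
To make the bitrot-repair point concrete, here is a toy sketch of how a checksumming filesystem like zfs or btrfs can self-heal a mirror. All the names are illustrative, not any real filesystem API: the key idea is that a checksum stored apart from the data tells the filesystem which mirror copy is good, which plain RAID 1 cannot do.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Mirror:
    """Toy two-way mirror with an out-of-band checksum (illustrative only)."""
    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]
        self.expected = checksum(data)  # stored separately, like a zfs block pointer

    def corrupt(self, drive: int, offset: int, value: int):
        self.copies[drive][offset] = value  # simulate silent bitrot on one drive

    def read(self) -> bytes:
        # Verify each copy against the stored checksum; repair the bad
        # copy from the good one. A checksum-less RAID 1 could instead
        # silently return the corrupted copy.
        for i, copy in enumerate(self.copies):
            if checksum(bytes(copy)) == self.expected:
                if checksum(bytes(self.copies[1 - i])) != self.expected:
                    self.copies[1 - i] = bytearray(copy)  # self-heal
                return bytes(copy)
        raise IOError("both copies corrupt: unrecoverable")

m = Mirror(b"important data")
m.corrupt(0, 0, 0x00)                 # flip a byte on drive 0
assert m.read() == b"important data"  # the good copy wins...
assert m.copies[0] == bytearray(b"important data")  # ...and drive 0 is repaired
```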

mosiacmango, (edited )

Hardware RAID can 100% do all of the above tasks, and has always been able to do them. You need an actual RAID card, not some half-assed baked-in mobo RAID.

Hardware RAID was doing all of the above before software RAID was available to end users.

winnie,
@winnie@lemmy.ml avatar

How do you tell a real RAID card apart from a simple pass-through controller? I guess by looking at the price, which should be really high?

mosiacmango,

Most discrete RAID cards will do the job, but look for on-card caching and a battery backup as markers of “quality.”

winnie,
@winnie@lemmy.ml avatar

But AFAIK real RAID cards don’t perform checksumming; they rely on the drive to report bad sectors. If data on one drive got silently corrupted, a mirror would just return data from one drive or the other. That’s for mirroring, at least, if we aren’t talking about RAID 6, I think.

winnie,
@winnie@lemmy.ml avatar

AFAIK the only officially supported RAID modes in btrfs are RAID0 and RAID1.

RAID56 is officially considered unstable.

LemmyHead,

RAID56 is risky in more filesystems than just btrfs, though. But if you have a UPS as a backstop, you should be fine.

winnie,
@winnie@lemmy.ml avatar

What about dm-raid? Is it still risky? I guess so, because it sits below the filesystem on separate devices. So any software RAID 5 or 6 would be problematic?

winnie,
@winnie@lemmy.ml avatar

A UPS won’t protect you from a kernel panic, sadly.

LemmyHead,

True

BorgDrone,

I’d say “old” RAID could be dead if you have proper backups and can replace a defective drive fast in cases where uptime is crucial.

RAID and backups serve different purposes. Backups are to prevent data loss, RAID is to prevent downtime in case of hardware failure. They are not interchangeable.

LemmyHead,

Different purposes, true, but not exclusively. RAID only protects against drive failure specifically. If downtime is intolerable, then just using RAID isn’t the right solution; you should look into total redundancy of the hardware and more. RAID also comes with performance bottlenecks or improvements depending on the setup, which is another factor to take into account. So in the end it really depends on your requirements, and backups can actually serve as an alternative, depending on your setup and as long as they meet your RTO.

Glass0448,

SSDs still have component bottlenecks that can kill the whole drive, same as hard drives.

Also, 3-2-1 is far superior to RAID, but having RAID on top of that is nice.

  • Maintain three copies of your data: this includes the original data and at least two copies.
  • Use two different types of media for storage: store your data on two distinct forms of media to enhance redundancy.
  • Keep at least one copy off-site: to ensure data safety, have one backup copy stored in an off-site location, separate from your primary data and on-site backups.

Source: www.veeam.com/blog/321-backup-rule.html

dbilitated,
@dbilitated@aussie.zone avatar

3-2-1 is for backup; RAID is also for availability, e.g. your domain server not going down in case of a drive failure. Good point though.

Zagorath,
@Zagorath@aussie.zone avatar

People say RAID isn’t backup, but I’ve never understood that. Yes, it’s only one medium and it’s probably not off-site, but if you’ve got an off-site copy on a different medium, why doesn’t a single RAID 5 array count as the 2 copies of your data that add up to the 3 in 3-2-1 backup?

atimehoodie,

Media failure isn’t the only reason to back up. If you delete a file on a RAID array, it’s gone on all disks. If you need to recover that deleted file, you can’t recover from RAID. The same goes for formatting/damage of the file system, recovery from something wrong inside a database, etc.

brygphilomena,

RAID is resiliency, but not a backup. It doesn’t hold a previous date’s version, it doesn’t protect against accidental deletion, nor does it protect against changes to files.

taladar,

Many causes of data loss affect all RAID drives equally, from accidental deletion to power surges, fire, water damage, theft, …

IphtashuFitz,

Suppose you’re hit by a ransomware attack and all the data on your NAS gets encrypted. Your RAID “backup” is just as inaccessible as everything else. So it’s not a backup. A true backup would let you recover from the ransomware attack once you have identified and removed the malware that allowed the attack.

blurg,

What if the RAID 5 array gets encrypted with ransomware? How many backups are there then?

dbilitated,
@dbilitated@aussie.zone avatar

I do recall Google apparently stopped using RAID in some data centres, but that was because they had whole-machine redundancy.

RAID is probably redundant for some of the uses it used to have, like optimising read performance by striping across many drives (SSDs are fast), and honestly I suspect that SSDs are probably more reliable, as they don’t have a bunch of platters and bearings and screaming rotational speeds.

So if you needed it for a base level of reliability, an SSD on its own may have exceeded that.

I suspect there are still uses for drive redundancy in some high availability setups… although your friend might be right. If the likelihood of drive failure is lower than other parts in the machine and you need high redundancy for availability it might make more sense to replicate the whole machine rather than the drives.

It’s possible redundancy specifically for the drives was an artifact of unreliable drives back in the day 🤔 they might have a point! I think it’s likely still useful at times though.

I’d rather hotswap a drive than set up a new server, even if it’s a less likely scenario.

mindlight,

Yeah, and the Titanic was unsinkable.

If the controller in your SSD fries, it doesn’t matter how many unused gigabytes your SSD has got for relocating bad sectors. It is still fried. For you, that data is forever gone.

This is why you have redundancy. Full redundancy. You can go for RAID1, where one disk can die and you still have no data loss, or go bananas with RAID6, where two full disks can die and you’re still going strong.
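
To illustrate how this kind of redundancy rebuilds a dead drive, here is a toy single-parity (RAID 5-style) sketch. It is purely illustrative, not a real RAID implementation; RAID 6 adds a second, independently computed parity so two drives can fail at once.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized blocks together byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # a stripe spread over three data drives
parity = xor_blocks(data)            # XOR parity, stored on a fourth drive

# Drive 1 dies: rebuild its block from the survivors plus the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"

# Note: an accidental overwrite updates data *and* parity consistently,
# so parity cannot undo it. Redundancy is not a backup.
```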

PS: Spinning hard drives have had hidden sectors used for relocating bad sectors for ages. It’s nothing new. If you have too much time on your hands, Google “hard drive hidden sectors NSA”.

redcalcium, (edited )

Unlike with HDDs, I’ve never experienced a graceful disk failure on an SSD. Instead, they just randomly decide to die at the most inconvenient time. RAID 1 has saved my hide a couple of times now from those SSD failures.

bluGill,
bluGill avatar

RAID setups often come with snapshots, which can recover from your mistakes. Often they can even recover after malware encrypts your disk. You still need offline, offsite backups for the best protection, but RAID is still a useful part of keeping your data safe.
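
The snapshot idea can be sketched like this. It is a toy model in the spirit of zfs/btrfs snapshots, not a real API; real snapshots share blocks copy-on-write instead of duplicating everything, and an on-disk snapshot still dies with the pool, which is why offsite backups remain necessary.

```python
class Volume:
    """Toy volume with point-in-time snapshots (illustrative only)."""
    def __init__(self):
        self.files = {}
        self.snapshots = {}

    def write(self, name, data):
        self.files[name] = data

    def snapshot(self, label):
        self.snapshots[label] = dict(self.files)  # frozen point-in-time view

    def rollback(self, label):
        self.files = dict(self.snapshots[label])

vol = Volume()
vol.write("report.txt", b"quarterly numbers")
vol.snapshot("daily")

# Ransomware (or a fat-fingered user) trashes the live copy...
vol.write("report.txt", b"\xde\xad\xbe\xef")
del vol.files["report.txt"]

# ...but the snapshot still has the original.
vol.rollback("daily")
assert vol.files["report.txt"] == b"quarterly numbers"
```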

Dekkia,
@Dekkia@this.doesnotcut.it avatar

I don’t think the internal wear-leveling and over-provisioning of SSDs can or should replace RAID. Disregarding a dead sector without losing capacity is great, but it won’t help you when (for example) the controller dies.

Depending on the amount of data you’re storing SSDs also might be too expensive.

The only exception is maybe RAID 0 in a normal PC. There it’s probably better to just get one disk for each logical drive.

Revan343,

RAID0 has always been playing with fire

MsPenguinette,

Depends on the type of RAID. It’s probably not needed for just expanding storage, but a level that allows a drive to fail is absolutely still prudent.

lemmylommy,

This has nothing to do with SSDs or their size. Hard disks also have a little spare area (though not as big) and can mark and remap failing sectors.

RAID (1) is still (possibly) good for the only thing it ever was (possibly) good for: Keeping the system running long enough for you to put in a new harddisk if one fails.

Think of industrial systems where every minute of downtime can cost thousands of dollars. And even there the usefulness of RAID can be questioned: should you not in that case have a whole spare system, easy to swap in, because more than just storage can fail?

And what about the RAID controller itself? Does it not add complexity and another point of failure to the whole system?

And most importantly: will anyone actually get notified of a failing disk and replace it quickly? Or will the whole thing just prolong the inevitable?

Would you even trust a system that had one disk fail already to keep going in a critical place? Or would it not be safer to just replace the whole thing anyway after one failure?

redcalcium,

And what about the RAID controller itself? Does it not add complexity and another point of failure to the whole system?

This is why people prefer software RAID these days instead of hardware RAID.

Atemu,
@Atemu@lemmy.ml avatar

That does not address the point made. It doesn’t matter whether it’s a complex hardware or software component in the stack; they will both fail.

redcalcium,

Yes, I didn’t address the point made; I just wanted to mention that people are increasingly avoiding hardware RAID these days.

tobogganablaze,

He said it was due to how big SSDs have gotten and that apparently you can replace sectors within them if a problem occurs which is why having an array is not needed.

Buying SSDs with the same capacity as my 70 TB NAS (after RAID 6) would cost almost triple what my setup (including the NAS) cost.

So unless you shit money, SSDs are not an option for anything with a decent capacity.
