Silly Question

From: Ken (SHIELDSIT) 23 Dec 2015 03:58
To: ALL 11 of 19
I knew there was fucking goofy math involved with this.  I'll shamelessly steal this from Tom's Hardware and assume the people who chose this as the correct answer are smarter than I am, not hard to be...
 
Quote: 
It is more dangerous than a single drive. But as per the people before me, not by a huge amount. The probability of at least one drive failing is 1 − (1 − r)^n (thanks, Wikipedia), where r is the per-drive failure rate and n is the number of drives.

That is, if a drive has a 1 in 100 chance of failing:
1 drive: 1 - (0.99)^1 = 1% chance of failure
2 drives: 1 - (0.99)^2 ≈ 2%
3 drives: 1 - (0.99)^3 ≈ 3%

What if the failure rate were higher? I don't know the odds of a given disk failing, ask Google for those numbers. At a 5% per-drive rate: 1 drive: 5%, 2 drives: 9.75%, 3 drives: ~14.3%

My math could be wrong, but I blame Wikipedia. It seems right to me. As you can see, you really start gambling when you get to higher storage levels. If you are alright with risking losing everything (other than what is already backed up) I say go for it. Personally, I am all HDD, no RAID. Don't have the money to spend on SSDs, especially since I just don't need that performance yet.
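
For anyone who wants to sanity-check those numbers, here's a minimal Python sketch of the quoted formula (the function name is mine, and it assumes every drive fails independently with the same probability):

def raid0_failure_probability(r, n):
    # Probability that at least one of n drives fails, each failing
    # independently with probability r - i.e. the quoted 1 - (1 - r)^n.
    return 1 - (1 - r) ** n

# Reproduce the numbers in the quote:
for r in (0.01, 0.05):
    for n in (1, 2, 3):
        print(f"r = {r:.0%}, {n} drive(s): {raid0_failure_probability(r, n):.2%}")
# r = 1%: 1.00%, 1.99%, 2.97%; r = 5%: 5.00%, 9.75%, 14.26%

Plugging a 5% rate into three drives gives the ~14.3% figure mentioned above.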
 
From: Peter (BOUGHTONP) 23 Dec 2015 12:37
To: Ken (SHIELDSIT) 12 of 19
I do RAID-Z :<
EDITED: 23 Dec 2015 12:39 by BOUGHTONP
From: Ken (SHIELDSIT) 23 Dec 2015 14:51
To: Peter (BOUGHTONP) 13 of 19
So do I on a few systems, like my file servers, but you can do the math!
From: Peter (BOUGHTONP) 23 Dec 2015 16:20
To: Ken (SHIELDSIT) 14 of 19
You already did the maths for me?
From: Ken (SHIELDSIT) 23 Dec 2015 17:53
To: Peter (BOUGHTONP) 15 of 19
Do you agree with that formula?
From: Peter (BOUGHTONP) 24 Dec 2015 00:14
To: Ken (SHIELDSIT) 16 of 19
If disk failure is relevant, RAID-0 is the wrong answer.

If you're asking out of academic interest, you'd need to consider whether striping across disks increases or decreases the likelihood of failure due to the different read/write behaviour. (I don't know what the contributing factors to a disk failing are - i.e. whether it's purely random, actively increased by use, etc.)

That formula doesn't include any such parameters; it may well be an approximation of something that does. I can't see where on the linked Wikipedia page the poster sourced it from, so at the moment it's just a way to produce a number slightly less than directly multiplying the failure rate by the number of drives.
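
For what it's worth, the quoted formula is just the probability of at least one failure when every drive has the same rate; if you wanted per-drive rates (say, one drive being worked harder than the others) it generalises as below, though it still assumes independence, which is the bigger open question. A rough Python sketch with made-up per-drive numbers:

from math import prod

def any_failure_probability(rates):
    # P(at least one drive fails), assuming failures are independent
    # but allowing each drive its own probability.
    return 1 - prod(1 - r for r in rates)

print(any_failure_probability([0.05, 0.05, 0.05]))   # ~0.1426, the ~14.3% above
print(any_failure_probability([0.08, 0.03, 0.03]))   # ~0.1344 (made-up per-drive rates)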

From: CHYRON (DSMITHHFX) 24 Dec 2015 00:30
To: Peter (BOUGHTONP) 17 of 19
Disks can fail in several ways: bearings, actuators, surface, PCB. The failure rate increased dramatically after the flooding in Thailand destroyed many manufacturing facilities, leading to speculation that flood-damaged parts continued to be used. Outside of production servers, the main factors influencing failure rates appear to be make, model and even batch. Some guy running a cloud service periodically publishes failure stats on assorted makes and models.
From: ANT_THOMAS 24 Dec 2015 10:17
To: CHYRON (DSMITHHFX) 18 of 19
From: Dave!! 28 Dec 2015 12:31
To: Ken (SHIELDSIT) 19 of 19
Another important question these days is what the drives and the controller are capable of (most controllers share bandwidth between their SATA ports to some degree), and why using solid state drives isn't an option. For example, striping mechanical hard drives will almost always still give poorer performance than a single SSD, especially since modern SSDs can easily saturate a 6Gb/s SATA link. For ultimate performance, you'd connect a fast SSD to either a SATA Express port or a bespoke PCIe controller.
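
A back-of-the-envelope comparison makes the point; the per-drive sequential speeds below are assumptions I've plugged in, not measurements:

# Back-of-the-envelope sequential throughput, RAID-0 HDDs vs a single SATA SSD.
# The per-drive speeds are illustrative assumptions, not benchmarks.
SATA3_LINK_MBPS = 600   # 6 Gb/s SATA is roughly 600 MB/s usable after 8b/10b encoding
HDD_SEQ_MBPS = 180      # assumed sequential throughput of one mechanical drive
SSD_SEQ_MBPS = 550      # assumed sequential throughput of one SATA SSD

def raid0_sequential(n_drives, per_drive_mbps):
    # Ideal striped sequential throughput, ignoring controller overhead.
    return n_drives * per_drive_mbps

print(raid0_sequential(2, HDD_SEQ_MBPS))  # 360 MB/s - two striped HDDs
print(raid0_sequential(1, SSD_SEQ_MBPS))  # 550 MB/s - one SSD, close to the SATA 3 ceiling
# Random I/O is even more lopsided in the SSD's favour, and striping doesn't fix seek latency.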

I just can't help but think that RAID-0 is a lot more redundant these days than it used to be. Faster performance, but an increased risk of data loss and two drives required; or you could just slap a fast SSD in instead. Unless of course you need lots and lots of storage, and even then you can now get 1TB SSDs for less than £60...
EDITED: 28 Dec 2015 12:31 by DAVE!!