The Black Dwarf 16TB NAS

The Black Dwarf holds over 16TB of storage - with 12.7TB available - in a custom-made enclosure.

A veteran of the modding scene has unveiled his latest creation - a stunning-looking NAS dubbed the Black Dwarf, which holds an incredible 12.7TB of data.

Created by video editing and modding whiz Will Urbina and showcased over on his site - via Engadget - the Black Dwarf is constructed from aluminium, steel and a transparent Lexan lid, and is made entirely by hand, with nary a mass-produced part in sight.

The NAS - which is powered by a 1.66GHz Atom N270 processor to keep the power draw and heat as low as possible while still providing the grunt to keep data travelling at top speed - holds eight 2TB hard drives in a RAID 5 array, providing 12.7TB of usable space while allowing any single drive to fail without losing data. A separate 320GB drive serves as the boot drive, and a 30GB SSD caches certain data for rapid access.
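
As a sanity check on those headline figures, here's a quick back-of-the-envelope sketch (in Python, with the drive sizes assumed from the specs above) of where the 12.7TB comes from - one drive's worth of space goes to RAID 5 parity, and operating systems report capacity in binary units:

    # Usable capacity of eight 2TB drives in RAID 5 - illustrative sketch
    drives = 8
    drive_bytes = 2 * 10**12                   # a "2TB" drive, in decimal bytes
    usable_bytes = (drives - 1) * drive_bytes  # RAID 5 gives one drive's worth to parity
    print(usable_bytes / 2**40)                # reported in binary TiB: ~12.7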

While the processor is a little on the slow side, the overall performance is respectable: an 88MB per second write speed is dwarfed by the unit's incredible 266MB per second read speed.

Urbina has posted a pair of videos detailing his work, along with an impressive time-lapse video which compresses over 100 hours of work into just six minutes.

Are you impressed by the amount of storage Urbina has packed into the world's most stylish-looking shoebox, or are you aghast that he hasn't spread his risk by using drives from differing manufacturers - or at least batches - for his array? Share your thoughts over in the forums.

33 Comments

rickysio 10th May 2010, 10:49 Quote
Differing manufacturers? What?

Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.
Instagib 10th May 2010, 10:56 Quote
The point is that if all the drives have come from a bad batch of drives, he's bu**ered. Chances are slim, but after all that time and effort...
Baz 10th May 2010, 11:01 Quote
Quote:
Originally Posted by rickysio
Differing manufacturers? What?

Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.

I think what Gareth means is that by using drives from different manufacturers, you further minimise the risk of total array failure. Say that batch of WD 2TB Greens had a bug - wouldn't you be happy that half of your array was on entirely unrelated 2TB Seagates, or vice versa?
Picarro 10th May 2010, 11:31 Quote
Well, the chances of them failing at EXACTLY the same time seem far-fetched. If one of the drives in my server failed, I'd just replace it and rebuild the RAID.
Gareth Halfacree 10th May 2010, 11:34 Quote
Quote:
Originally Posted by rickysio
Differing manufacturers? What?

Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.

I cannot even begin to comprehend how you read that sentence. That was entirely my point - if there's a bad firmware or manufacturing error that affects the entire batch, the NAS is ruined. Splitting the drives across manufacturers would help prevent such an issue - it's SOP for RAID.
scrimple3D 10th May 2010, 11:40 Quote
What about cooling? There doesn't appear to be a lot of space between the drives for air to flow.
halcyondays 10th May 2010, 12:43 Quote
Although I agree it's good practice to select drives from multiple vendors/batches, RAID 5 can only withstand the loss of one drive before data loss occurs - so even having 50% of the drives from another manufacturer is going to lose all the data if two die together. Ideally the eight drives would be from eight vendors - now that would be a thing! And seeing as RAID isn't backup, he really needs to build a second one... Great project though.
Scootiep 10th May 2010, 13:19 Quote
Drive manufacturer argument aside, this is the kind of minimalist design that I LOVE! It's sad that no mainstream manufacturer can get it right. But bravo Will, very sleek, very clean, something I would be proud to stick next to a HTPC in my entertainment center.
DST 10th May 2010, 13:57 Quote
I thought it was a bad idea to mount HDDs at any angle other than 0 or 90 degrees.
Iorek 10th May 2010, 14:40 Quote
I'm impressed... that's a very nice build.

Out of interest, what's it going to be running? Is it a media PC (Windows?) or a file server (Linux?)
rickysio 10th May 2010, 16:06 Quote
Quote:
Originally Posted by Gareth Halfacree
Quote:
Originally Posted by rickysio
Differing manufacturers? What?

Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.

I cannot even begin to comprehend how you read that sentence. That was entirely my point - if there's a bad firmware or manufacturing error that affects the entire batch, the NAS is ruined. Splitting the drives across manufacturers would help prevent such an issue - it's SOP for RAID.

I read "or are you aghast that he hasn't spread his risk by using drives from differing manufacturers - or at least batches - for his array?" as "or are you aghast that he has increased the chances of drive failure by using drives from differing manufacturers - or at least batches - for his array?"

I was quite upset with a school related matter (My hatred of lessons may or may not have any relation with that), and quite a few people on YouTube happened to be commenting "Why is he using different drives?" so my frazzled mind was confused.. Apologies for that!
darkb 10th May 2010, 16:10 Quote
Hmm... I think RAID 5 might be pointless for this, though. I remember reading somewhere that the chance of an error on HDDs is about once in every 12TB, but if one drive fails and you have an error when rebuilding the array...
Gareth Halfacree 10th May 2010, 16:22 Quote
Quote:
Originally Posted by rickysio
I read "or are you aghast that he hasn't spread his risk by using drives from differing manufacturers - or at least batches - for his array?" as "or are you aghast that he has increased the chances of drive failure by using drives from differing manufacturers - or at least batches - for his array?"
Ah - that'd do it.
Quote:
Originally Posted by rickysio
I was quite upset with a school related matter (My hatred of lessons may or may not have any relation with that), and quite a few people on YouTube happened to be commenting "Why is he using different drives?" so my frazzled mind was confused.. Apologies for that!
Heh! Nay worries - was concerned that I'd been writing even more gibberish than usual, if such a thing is possible.
Tulatin 10th May 2010, 17:32 Quote
Quote:
Originally Posted by rickysio
Differing manufacturers? What?

Are you on crack? All of these data drives are WD Caviar Green 2.0TB drives! Even the 320GB drive is a WD Scorpio Blue. Only the SSD is from OCZ.

The reason you use drives from different manufacturers and different batches is to really reduce your risk of simultaneous failures. While this isn't traditionally an issue, it was REALLY highlighted by the Seagate BSY bug. We had a client who spent quite a bit of money when he built a system to make sure that drive failures wouldn't **** him over, and wham, lookit that.
TomH 10th May 2010, 18:43 Quote
Quote:
Originally Posted by darkb
Hmm... I think RAID 5 might be pointless for this, though. I remember reading somewhere that the chance of an error on HDDs is about once in every 12TB, but if one drive fails and you have an error when rebuilding the array...
Well, the sample URE rate (see the detailed drive specs) in that semi-famous ZDNet article was given as one error per 10^14 bits read, meaning that for every ~12TB of data read, statistically you would hit at least one Unrecoverable Read Error. The theory being that if you lose a disk and try to rebuild the array, you're all but certain to hit an error during the rebuild.

For URE rates of one in 10^15 (quite possible with enterprise-grade disks) the chances are lessened significantly - well, as long as the manufacturers' figures are correct... RAID 6 doesn't make it much better.

My personal preference would be to keep two smaller arrays and concatenate them (not stripe them, as RAID 50 does), so that a single disk failure has a better chance of rebuilding, one disk failure in each array is survivable, and if two fail in one array (very unlucky) then at least you haven't lost all of your data. Just the end of the file system, but that's still more recoverable than striped data.
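
To put rough numbers on the rebuild risk, a back-of-the-envelope sketch (assuming the one-error-per-10^14-bits consumer URE rate above, a uniform error distribution, and a rebuild that reads all seven surviving 2TB drives in full):

    import math

    ure_rate = 1e-14                 # expected unrecoverable errors per bit read
    bits_read = 7 * 2 * 10**12 * 8   # seven surviving 2TB drives, read in full
    p_clean = math.exp(-bits_read * ure_rate)  # Poisson approximation
    print(f"{p_clean:.0%}")          # ~33% - roughly one-in-three odds of a clean rebuild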
ch424 10th May 2010, 19:36 Quote
That looks pretty awesome :D


How did he manage 266MB/sec read on GigE though? Surely it's limited to 125MB/sec?
HourBeforeDawn 10th May 2010, 19:52 Quote
Quote:
Originally Posted by scrimple3D
What about cooling? There doesn't appear to be a lot of space between the drives for air to flow.

I was thinking the same thing - at least leave a quarter of an inch for some airflow. That would worry me.
Iorek 10th May 2010, 19:59 Quote
Quote:
Originally Posted by ch424
That looks pretty awesome :D


How did he manage 266MB/sec read on GigE though? Surely it's limited to 125MB/sec?

I would assume that's raw drive read speed rather than over-the-network throughput.
ch424 10th May 2010, 20:46 Quote
Quote:
Originally Posted by Iorek
I would assume that's raw drive read speed rather than over-the-network throughput.

But that's rubbish for eight drives! Three Samsung F1s in RAID 0 would be faster!
Iorek 10th May 2010, 20:48 Quote
Quote:
Originally Posted by ch424
But that's rubbish for eight drives! Three Samsung F1s in RAID 0 would be faster!

I assume the RAID 'grunt' is being done by the Atom CPU, though - I don't think that's hardware RAID.

Combined with RAID 0 having no redundancy, which that box does have (and thus the parity overheads that come with it).

Read/write performance - I don't know if that's raw performance or measured on top of a file system; if so, what file system and what OS - and what overheads do those add too?
dark_avenger 11th May 2010, 00:58 Quote
With that many drives I would have thought RAID 6 or RAID 50 would be a better choice?

Either way, it's a very nice build ;)
Cupboard 11th May 2010, 01:00 Quote
That's a nice looking thing, and the video is great.

The bits where the other cameras seem to slide around amuse me. And I think I counted an SLR and two video cameras in addition to the time-lapse one? One well-documented build!
Star*Dagger 11th May 2010, 02:51 Quote
That's a lot of Amy Reid videos!!!
stonedsurd 11th May 2010, 02:55 Quote
I would sell my soul for a workshop like that.
rickysio 11th May 2010, 08:45 Quote
Quote:
Originally Posted by Iorek
I assume the RAID 'grunt' is being done by the Atom CPU, though - I don't think that's hardware RAID.

Combined with RAID 0 having no redundancy, which that box does have (and thus the parity overheads that come with it).

Read/write performance - I don't know if that's raw performance or measured on top of a file system; if so, what file system and what OS - and what overheads do those add too?

He has a slot-in RAID card?

There should be something handling RAID on the card itself - there's a heatsink on the card.
Iorek 11th May 2010, 11:23 Quote
Quote:
Originally Posted by rickysio
He has a slot-in RAID card?

There should be something handling RAID on the card itself - there's a heatsink on the card.
I didn't look too much into the card; I just know most "affordable" ones used to rely on the general CPU to do some of the work, whereas true hardware ones would offload all the work to a dedicated processor.

I can't comment on that card, I've never tried it :)
Berius 11th May 2010, 17:18 Quote
Quote:
Originally Posted by stonedsurd
I would sell my soul for a workshop like that.

You can have my workshop in exchange for your soul... :D
Tulatin 11th May 2010, 17:18 Quote
Quote:
Originally Posted by rickysio
He has a slot-in RAID card?

There should be something handling RAID on the card itself - there's a heatsink on the card.

All the RAID card really does is bundle up the drives and address commands to them, but it doesn't really do much of the processing. You need a much more expensive card for that.
JustLeigh 11th May 2010, 23:52 Quote
Um, I'm a novice admittedly, but shouldn't there be some kinda cushioning between the drives to dampen down vibrations?
airchie 12th May 2010, 01:21 Quote
I think RAID 6 might have been better.
At least he could survive the loss of two disks, for a small loss in overall capacity and write performance (see the quick sums below).

I have to say I don't like the idea of using eight different-branded drives.
I'd rather have them all the same as well.
To me, adding drives from several manufacturers is like adding more drives to a RAID 0 array - just increasing the chances of having an issue.
I understand there's always the risk of a firmware issue taking out all the drives, but it's not really likely, is it?
Even the Seagate one was recoverable (albeit by sending your drives in for repair), and all the drives wouldn't die at exactly the same time.

But, more importantly, frikkin awesome skills (modding and video editing) and yeah, I'd love to have a workshop like that too. :)
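
For scale, a rough sketch of that RAID 5 versus RAID 6 capacity trade-off for eight 2TB drives (illustrative figures only):

    # Usable space with one vs two drives' worth of parity
    drive_bytes = 2 * 10**12
    raid5_tib = 7 * drive_bytes / 2**40   # ~12.7 TiB usable
    raid6_tib = 6 * drive_bytes / 2**40   # ~10.9 TiB usable
    print(f"RAID 5: {raid5_tib:.1f} TiB, RAID 6: {raid6_tib:.1f} TiB")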
Tulatin 12th May 2010, 05:51 Quote
The main reason for "other manufacturers" is so that you can make sure to get drives from different batches, as drives from similar batches can fail at similar times. It's a very bad thing having many drives fail at once.
rickysio 12th May 2010, 11:27 Quote
Quote:
Originally Posted by JustLeigh
Um, I'm a novice admittedly, but shouldn't there be some kinda cushioning between the drives to dampen down vibrations?

They're WD Caviar Green drives - probably some of the quietest, lowest-vibration drives on the market.
JustLeigh 12th May 2010, 18:26 Quote
Quote:
Originally Posted by rickysio
Quote:
Originally Posted by JustLeigh
Um, I'm a novice admittedly, but shouldn't there be some kinda cushioning between the drives to dampen down vibrations?

They're WD Caviar Green drives - probably some of the quietest, lowest-vibration drives on the market.

That explains that then. Thanks! :)