HighPoint RocketRAID 640 Review

Comments 1 to 10 of 10

Blackmoon181 4th January 2011, 10:48 Quote
Will any of the new Sandy Bridge motherboards support RAID for these SSDs, therefore making cards such as 'the Rocket' only suitable for those with previous-gen motherboards?
perplekks45 4th January 2011, 11:33 Quote
RAID = no TRIM... not really what I'd want.

While the 620 seems like a bargain, the 640 seems like a pretty useless product. Unless you're happy with using no more than two SSDs and never intend to reboot, that is.
hyperion 4th January 2011, 11:59 Quote
For the price of 2x C300 + a controller card, I think it would be better to opt for an OCZ RevoDrive or IBIS. So far SATA 6Gbps seems a little underwhelming, especially after the integrated controllers' review.
MrTeal 4th January 2011, 12:09 Quote
It would definitely be interesting to see what impact UEFI will have on all these SATA performance issues.
Evildead666 4th January 2011, 13:22 Quote
It would probably be good to test it with some 2.5" and 3.5" 7,200rpm drives.
Mechanical drives as a storage medium with an SSD boot drive is the way to go atm.

I would like to see 4x 500GB 2.5" drives tested, for example.
mrbens 4th January 2011, 18:06 Quote
Could you please review a card that has both Sata 6Gb/s and USB 3 such as this?
True SATA 6Gb/s Support We Have the Latest Revision!

True SATA 6Gb/s Support
Unique PCIe x4 Bridge Chip for Ultra Performance
Supporting next-generation Serial ATA (SATA) storage interface, this motherboard delivers up to 6.0Gbps data transfer rates. Additionally, get enhanced scalability, faster data retrieval, double the bandwidth of current bus systems.
2 x SATA 3 (6Gbps) Cables Included!

True USB 3.0 Support
Unique PCIe x4 Bridge Chip for Ultra Performance
Experience ultra-fast data transfers at 4.8Gbps with USB 3.0--the latest connectivity standard. Built to connect easily with next-generation components and peripherals, USB 3.0 transfers data 10X faster and is also backward compatible with USB 2.0 components.
do_it_anyway 4th January 2011, 18:41 Quote
I bought a rocket 620 for my C300 and have shoved it back in the drawer.

The card takes a while to initialise in the BIOS (I would guess 10 secs or so) and so ultimately ADDED to boot time compared to using a SATA2 connection.

While the SATA3 connection did improve my read rates for large files and I hit 350MB/s, the write rates and small-file rates were slower, and ultimately Windows "felt" smoother over a SATA2 connection.

So if anyone wants a Rocket 620, let me know. It's useless to me.
AndyD 5th January 2011, 03:31 Quote
"So if anyone wants a Rocket 620, let me know. It's useless to me"
Damn, mine arrives tomorrow to partner my Crucial C300 128GB.... there may be two controllers going free to good homes on the forum..........
PocketDemon 16th January 2011, 04:25 Quote
Ummmm... Mostly about the testing methods/conclusions here rather than the actual card...

Right, forgetting for the moment my previous comments about the lack of similarity to r.l. of writing 500GB of data to 'dirty' the SSDs in a short space of time, according to the way your testing method reads, the same 100GB is either written to (& then deleted from) a 1x, 2x or 3x 256GB drive setup five times.

Now, if I'm remembering correctly, the actual formatted capacity of the C300 is just under 240GB, & it has ~7% over-provisioning, which gives a total of just over 255GB... (I assume it's actually 256GB, as this is the closest divisible value for NAND). So whilst all of the single SSD will nominally be 'dirtied', there will be 12GB of the 2x & 268GB of the 3x setups that will be clean whether TRIM &/or GC did anything at all.

& this will mean that the 1st test carried out on a dirtied 2x R0 array will have an advantage over both the single SSD & the subsequent tests, & the dirtied 3x R0 array will have an advantage for all tests...

Whilst I still vehemently disagree with the validity of the test compared to r.l., the way around it would be to duplicate the 100GB, so that 200GB is written to the 2 SSDs & 300GB to the 3, before deleting & repeating 5x.
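The clean-vs-dirtied arithmetic above can be sketched quickly. A minimal sketch, assuming my reading of the test (256GB of NAND per C300 and 100GB written/deleted five times; neither figure is confirmed by the review):

```python
# Sketch of the clean-vs-dirtied NAND arithmetic in the post above.
# Assumed figures (my reading of the test, not confirmed by the review):
# each C300 carries 256GB of NAND (~240GB formatted + ~7% over-provisioning)
# and the test writes & deletes 100GB five times, 500GB in total.
NAND_PER_DRIVE_GB = 256
TOTAL_WRITTEN_GB = 5 * 100

def clean_nand_gb(drives: int) -> int:
    """NAND that is never touched during the test, per array size."""
    return max(0, drives * NAND_PER_DRIVE_GB - TOTAL_WRITTEN_GB)

for n in (1, 2, 3):
    print(f"{n}x array: {clean_nand_gb(n)}GB of NAND stays clean")
```

Under those assumptions the single drive is fully dirtied, while the 2x and 3x arrays keep 12GB and 268GB clean respectively, which is where the figures in the post come from.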

As a second point (again ignoring the actual review of the HighPoint card, & whilst I agree wholeheartedly about the C300 being less resilient in R0), stating -
While the RAID setup was easy and our arrays remained stable through-out testing, we’ll once again advise users to avoid using RAID with SSDs, especially the C300 family. Not only do you lose TRIM support, gradually degrading your SSD’s performance, but you’ll have, as in this case, the added headache of a 40-second extension to your boot times.

- in the conclusion does not stand up to a more general examination of SSDs.

Similarly, you state at the beginning -
"While some drives are less affected by heavy use, the C300 series of SSDs relies heavily on TRIM to maintain performance. As such, while we'll be testing the HighPoint 640 with RAID arrays, this will really only be to test its bandwidth and performance."

- which is all perfectly sensible; however, this doesn't add up to -
"In real world usage, we still recommend running SSDs in JBOD on a TRIM enabled-OS in AHCI mode."

- so restating advice that TRIM is recommended in real-world usage (based on a flawed test, as no one's going to be copying 100GB of data on & off their SSD 5 times for laughs), because of a limitation that more greatly impedes the performance of the C300 specifically, doesn't hold up across the board with SSDs.
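For anyone wanting to check the JBOD/AHCI/TRIM situation on their own machine, Windows 7 reports whether it is issuing TRIM commands via `fsutil behavior query DisableDeleteNotify`. A minimal sketch that interprets that output (the parsing helper and the sample strings are my own illustration, not part of any tool):

```python
# Hedged sketch: interpret the output of Windows 7's
#   fsutil behavior query DisableDeleteNotify
# DisableDeleteNotify = 0 means the OS IS sending TRIM commands;
# DisableDeleteNotify = 1 means TRIM is off (e.g. hidden behind a
# RAID controller, as discussed above).
def trim_enabled(fsutil_output: str) -> bool:
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line:
            # the line ends with the flag value, e.g. "DisableDeleteNotify = 0"
            return line.strip().endswith("0")
    raise ValueError("DisableDeleteNotify not found in fsutil output")

# Illustrative sample string, as printed on a TRIM-enabled system:
print(trim_enabled("DisableDeleteNotify = 0"))
```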

As said previously on the forum, the RevoDrive & IBIS (as examples) are alternative high-speed implementations of R0 (basically using the same controllers as the current 3Gb/s SFs, with a RAID controller built in) & they do not become shonky despite the lack of TRIM.

[NB - logging off into an S1 sleep state overnight once or twice a week will allow recovery from normal heavier r-e-w cycles 'if' needed (most users should never need to do this)... & it would obviously be required much more regularly 'if' someone erroneously went a bit mad with their copying & deleting, as you seem to think we all do, & so needed it.]

&, as a quick final criticism, it is the case that you can multiply the sequential speeds by ~2.5x & see notable improvements in most other r/w b/ms (remembering, of course, that 4K results mean next to nothing without a reasonable QD, since that's how OSes send almost all of their small r/ws), simply by putting 3 or more 3Gb/s SSDs onto a decent onboard controller (though remember the ~2.5x limit on max sequential speeds); esp the ICH9R/ICH10R controllers with the RST driver.
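As a rough back-of-envelope for that ~2.5x sequential ceiling (the per-drive figure is an assumption for a typical 3Gb/s drive, not a measured number from the review):

```python
# Toy model of R0 sequential scaling on an onboard controller.
# Assumptions (mine, for illustration only): ~265MB/s sequential per
# 3Gb/s SSD, and the ~2.5x aggregate ceiling mentioned in the post above.
PER_DRIVE_MBPS = 265.0
SEQ_SCALING_CAP = 2.5

def r0_sequential_mbps(n_drives: int) -> float:
    """Estimated aggregate sequential throughput of an n-drive R0 array."""
    return min(n_drives, SEQ_SCALING_CAP) * PER_DRIVE_MBPS

for n in (1, 2, 3, 4):
    print(f"{n} drives: ~{r0_sequential_mbps(n):.0f}MB/s")
```

The point of the cap: a 4th drive buys you nothing on sequentials, which is why 3 drives is the sweet spot in the argument above.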

Okay, there's not yet a 6Gb/s onboard RAID controller, but this does mean that there is an inherent financial cost to RAID in general (again, as discussed heavily on the forum) - not least as an R0 array using 3Gb/s SSDs can seriously outperform a single C300 &, unless there are some serious increases in small r/ws, any of the proper 6Gb/s SSDs that are likely to be out in the next months across the board...

...well, the C400 won't exactly be much of a challenge but, although still slower on sequentials than a 3x R0, the 6Gb/s Indilinx, Intel & SF SSDs 'may' change things (though my gut is that it won't be until the 2nd gen of proper ones).

Otherwise, it's good to see that I may have been wrong about my half-assumption that you were on HighPoint's payroll, & also that you're now back-pedalling somewhat about the 620 - at least admitting that it has limitations...

&, separately, 6Gb/s SSDs in R0 will be pretty fantastic despite the shonkiness of another HighPoint thing - though without spending money on something like one of the LSI 9260 cards (there have been b/ms of Indilinxes, SFs & C300s on them starting ~a year ago now, showing a tiny glimpse of the potential - obviously the latter 2 were later, as they didn't exist a year ago), it means waiting for PCIe 3.0 (as it provides far more PCIe lanes), which should coincide with an onboard 6Gb/s Intel RAID controller.
mobiuspizza 13th May 2011, 14:01 Quote
If you populate all 4 ports, the detection time should drop to a few seconds. This is a well-known problem with any HighPoint RAID card: you need to populate all channels.

I was hoping to see normal HDDs benchmarked in RAID 1 and RAID 10. Although the HDD is a dying breed, now that 2TB and 3TB HDDs are common, RAID 1 or RAID 10 suddenly seems essential.