1,535 of 1,689 people found the following review helpful
on August 20, 2012
Here is a quote from a review at another site:

I'm going to let the cat out of the bag right here and now. Everyone's home RAID is likely an accident waiting to happen. If you're using regular consumer drives in a large array, there are some very simple (and likely) scenarios that can cause it to completely fail. I'm guilty of operating under this same false hope - I have an 8-drive array of 3TB WD Caviar Greens in a RAID-5. For those uninitiated, RAID-5 is where one drive worth of capacity is volunteered for use as parity data, which is distributed amongst all drives in the array. This trick allows for no data loss in the case where a single drive fails. The RAID controller can simply figure out the missing data by running the extra parity through the same formula that created it. This is called redundancy, but I propose that it's not.
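
To make the parity idea concrete, here is a tiny Python sketch of how a missing block gets rebuilt when the parity is a simple XOR across the data blocks, which is essentially what RAID-5 does (the block contents are made-up example bytes):

    # RAID-5 style parity in miniature: the parity block is the XOR of the
    # data blocks, so any single missing block can be recomputed from the rest.
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across drives 1-3
    parity = xor_blocks(data)            # parity block stored on a fourth drive

    # Drive 2 "fails": rebuild its block from the survivors plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("rebuilt block:", rebuilt)

Lose two blocks in the same stripe, though, and no amount of XOR will bring the data back - which is exactly the failure mode described next.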

Since I'm also guilty here with my huge array of Caviar Greens, let me also say that every few weeks I have a batch job that reads *all* data from that array. Why on earth would I need to occasionally and repeatedly read 21TB of data from something that should already be super reliable? Here's the failure scenario for what might happen to me if I didn't:
* Array starts off operating as normal, but drive 3 has a bad sector that cropped up a few months back. This has gone unnoticed because the bad sector was part of a rarely accessed file.
* During operation, drive 1 encounters a new bad sector.
* Since drive 1 is a consumer drive it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
* The RAID controller exceeds its timeout threshold waiting on drive 1 and marks it offline.
* Array is now in degraded status with drive 1 marked as failed.
* User replaces drive 1. RAID controller initiates rebuild using parity data from the other drives.
* During rebuild, RAID controller encounters the bad sector on drive 3.
* Since drive 3 is a consumer drive it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
* The RAID controller exceeds its timeout threshold waiting on drive 3 and marks it offline.
* Rebuild fails.
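
That periodic read-everything batch job I mentioned is the defense against exactly this chain of events: it forces latent bad sectors into the open while the array still has its redundancy. A minimal Python sketch of such a pass (the mount point /mnt/array is just a placeholder for wherever your array lives):

    # Hypothetical scrub pass: read every file on the array so latent bad
    # sectors are found (and remapped) before a rebuild needs them.
    import os

    MOUNT = "/mnt/array"        # placeholder: your array's mount point
    CHUNK = 8 * 1024 * 1024     # read in 8 MB chunks

    errors = []
    for root, _dirs, files in os.walk(MOUNT):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    while f.read(CHUNK):
                        pass
            except OSError as exc:
                errors.append(path)
                print("READ ERROR:", path, exc)

    print("scrub finished, %d file(s) with errors" % len(errors))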

At this point the way forward varies from controller to controller, but the long and short of it is that the data is at extreme risk of loss. There are ways to get it all back (most likely without that one bad sector on drive 3), but none of them are particularly easy. Now you may be asking yourself how enterprises run huge RAIDs and don't see this sort of problem? The answer is Time Limited Error Recovery - where the hard drive assumes it is part of an array, assumes there is redundancy, and is not afraid to quickly tell the host controller that it just can't complete the current I/O request.

Here's how that scenario would have played out if the drives implemented some form of TLER:
* Array starts off operating as normal, but drive 3 has developed a bad sector several weeks ago. This went unnoticed because the bad sector was part of a rarely accessed file.
* During operation, drive 1 encounters a new bad sector.
* Drive 1 makes a few read attempts and then reports a CRC error to the RAID controller.
* The RAID controller remaps the bad sector, relocating it elsewhere on the drive. The missing sector is rebuilt using parity data from the other drives in the array.
* Array continues normal operation, with the error added to its event log.

The above scenario is what would play out with an Areca RAID controller (I've verified this personally). Other controllers may behave differently. A controller unable to do a bad sector remap might have just marked drive 1 as bad, but the key is that the rebuild would be much less likely to fail as drive 3 would not drop completely offline once the controller ran into the additional bad sector. The moral of this story is that typical consumer grade drives have data error timeouts that are far longer than the drive offline timeout of typical RAID controllers, and without some form of TLER, two bad sectors (totaling 1024 bytes) is all that's required to put multiple terabytes of data in grave danger.

The Solution:
The solution should be simple - just get some drives with TLER. The problem is that until now those were prohibitively expensive. Enterprise drives have all sorts of added features like accelerometers and pressure sensors to compensate for sliding in and out of a server rack while operating, as well as dealing with rapid pressure changes that take place when the server room door opens and the forced air circulation takes a quick detour. Those features just aren't needed in that home NAS sitting on your bookshelf. What *is* needed is a WD Caviar Green that has TLER, and Western Digital delivers that in their new Red drives.

End quote, and back to this reviewer.
I've got 5 of these in a Synology DiskStation 5-Bay (Diskless) Network Attached Storage (DS1512+). It is really a sweet setup.

The Synology software has a S.M.A.R.T. test that can do surface scans to detect bad sectors. I have their Quick Test check every disk daily and the Extended Test set to automatically run on each of the 5 disks every weekend. (The Extended Test takes about 5 hours per disk so I separate the tests by 12 hours.)
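
If you're not on a Synology, the same routine can be set up on any Linux box with smartmontools installed. A rough Python sketch - the /dev/sda through /dev/sde device names are placeholders for your own drives, and it needs to run as root (put the "short" run in a daily cron job and the "long" run in a weekly one):

    # Kick off a SMART self-test on each drive; the test runs in the
    # background on the drive itself and results show up in "smartctl -a".
    import subprocess
    import sys

    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
    TEST = sys.argv[1] if len(sys.argv) > 1 else "short"   # "short" or "long"

    for drive in DRIVES:
        subprocess.run(["smartctl", "-t", TEST, drive], check=False)
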
174 of 191 people found the following review helpful
on November 29, 2012
If you're looking at this review, you're probably in the market for some honkin' big drives to stuff into a server or a NAS box. These Western Digital "Red" series drives are probably a total waste of money if you're planning to put them into a regular PC. If you're not doing a RAID array of some kind, save your money and buy the Green or Black series drives instead. If you are looking to set up a RAID array of some sort, these are a bargain. They aren't the fastest drives, but they are rated to run 24x7 serving up data! Their 3-year warranty is above the current industry standard for consumer hard drives.

For my home-made FreeNAS (google it!) NAS/Server, I bought 5 WD Red drives from Adorama (purchased through Amazon) and 1 drive directly from Amazon.

The one drive from Amazon came very well packaged: double boxed in what looks like a WD cardboard box with a shock-absorbing cradle. Honestly, Amazon has been stellar about packaging boxes for shipment.

The 5 hard drives from Adorama came in a big box which 'clunked' when it was tilted. Opening the box revealed some big plastic pillow air strips and 5 loose smaller boxes. Inside each of the smaller boxes were a few pillows and a factory-bagged hard drive. There were not enough pillows in each box to securely cushion the hard drives against rattling around, so there's a high likelihood of damage in shipment. BAD SHIPPERS! NO DONUT!

Anyway, getting on to the performance of the drives... I'm running 6 drives in a ZFS RaidZ2 array. They are all controlled by an IBM M1015 PCIe x8 SATA 3 controller which has been flashed into an HBA, presenting the drives as JBOD to the ZFS OS. That's a lotta acronyms! The array is quite fast... more than fast enough to saturate a gigabit network. I currently have about 5TB of data stored on the 10TB array.

On to the bad stuff...
One of the drives (I haven't checked the serial number to see which shipper it came from) is starting to give signs of premature failure after about 70 hours of operation. During a scrub of the data pool, drive DA5 is experiencing unreadable sectors. Luckily ZFS is able to calculate the correct values for the corrupted data, and is busily recreating the data onto another part of the drive. ZFS rocks for data reliability! If the drive does turn out to be bad, I have a WD Green 3TB drive that I can put into the array as a hot swap temporarily until the failed drive can be replaced. *UPDATE* The ZFS scrub just finished, and it repaired 1.53MB of data, with no data loss. Did I mention that ZFS rocks?
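
For anyone new to ZFS, starting and checking one of these scrubs is a one-liner each. A minimal Python wrapper, with "tank" standing in for whatever you named your own pool:

    # Start a ZFS scrub and print the pool status; the scrub itself runs in
    # the background, and "zpool status" shows progress and repaired errors.
    import subprocess

    POOL = "tank"   # placeholder pool name

    subprocess.run(["zpool", "scrub", POOL], check=True)
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True, check=True)
    print(status.stdout)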

Warning/Advice about Data Storage:
Note 1: If you're going to be using these drives, or any data storage device for that matter, take into account that these are fragile, delicate devices which can be easily damaged in shipment, or just plain up and fail when you least expect it. You really need to use some sort of redundant array of drives so that if one drive fails, your data doesn't vanish. In my case, the final configuration is going to be 6 drives in a RaidZ2 (dual parity striping), which means that my data stays intact and accessible even if 2 drives fail simultaneously. Also, there is going to be a 3TB Green drive as a hot spare that can take over for any failed drive in the array. With the hot spare, the array can survive the loss of 3 drives without losing data (as long as the failures don't all happen at the same time).
Note 2: Always, always, always have a backup. In my case, I have two external 3TB USB3.0 drives which will be used only for backup purposes. Every so often, I'll back up the critical data onto the drives and stash them in my locker at work. If you don't have TrueCrypt, google it and see why your removable backup drives should be using it. If someone steals the drives, they only get the drives and not my data.

I'm giving 5 stars for the drives that work... 1 star for the failing drive... that averages out to about 4 stars! I'll update this review once I have details on how the drives do in a week or so. Currently it ain't looking too good for drive DA5!
66 of 71 people found the following review helpful
on August 3, 2014
On Windows it comes out to 5.45TB. I transferred a little over 2TB to it and the average write speed was 110MB/s. Right now it's just sitting on top of my computer case, but after that long transfer I used a temp gun: the surface of the case was 87F and the drive was 101F.


I just purchased a second drive and did some testing on it while blank and uploaded the results under customer images to the right.
195 of 227 people found the following review helpful
on January 6, 2014
So WD apparently has no ability to perform the most basic configuration management at their factories.

Once again the "load cycle count" issue has returned to their line of drives. If you don't know what this is you can Google, but basically the drive repeatedly parks the drive heads, thousands of times a day, because of an improper firmware setting on the drive. The drives are only rated to 600,000 load cycles and with them ticking of once every few seconds the drive will exceed its rating in less than a year.

This can be corrected by the user, but it is a pain and requires certain hardware/software to do, probably beyond most users. Exchanging the drive may or may not help because you have no idea whether the next drive will have the same erroneous setting.

This has happened many times in the past few years; check the reviews for this and other WD drives. WD acknowledges this is the incorrect setting, as do many NAS vendors that recommend these drives. And yet, WD can't seem to stop setting the IDLE3 parameter wrong every few months. This is a sign that they simply are unable to perform basic configuration management in their production facilities. If you can't manage that, you don't belong in the HD business.

For reference, Google WDIDLE3 for how to fix this problem - but you had better have a machine you can boot to a DOS image in order to use it. Alternatively, return your WD drive and buy the equivalent Seagate drive instead. They have a nice NAS-ready drive equivalent to the WD Red, and they know how to set IDLE3 correctly.
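
If you want to check whether a drive you already own is affected, the parking count is reported as SMART attribute 193 (Load_Cycle_Count). A small Python sketch using smartmontools, with /dev/sda as a placeholder device name (run it as root, then run it again a few hours later and compare):

    # Print the raw Load_Cycle_Count so you can watch how fast it grows.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        if "Load_Cycle_Count" in line:
            # the last column of the attribute table is the raw count
            print("load cycles so far:", line.split()[-1])
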
161 of 191 people found the following review helpful
on August 18, 2012
After about six months of searching for the perfect drive, I finally settled on two of these Western Digital Red 2TB WD20EFRX hard drives. I was ready to purchase HGST enterprise drives, the former Hitachi, but WD came out with these drives just in time. I wanted to get the 3TB WD30EFRX version for my Synology DS212 NAS, but the price difference didn't make much sense, and 2TB drives are more than enough for a few years of my home office use. I am very happy that these drives' MTBF is rated at 1,000,000 hours, they use less power, and they are cheaper than other enterprise drives.

Upon receiving them, I immediately installed them in my NAS. It took about 15 minutes to install DSM 4 and begin the inspection process. I chose neither RAID 1, JBOD, nor SHR; instead I took some online advice and created two separate volumes, one on each disk, to have two independent file systems. In this case, you don't have to worry about rebuilding disk arrays if a drive fails, and you always have a backup present. I was planning on using the Folder Sync feature to sync all folders from Disk 1 to Disk 2 every other hour, but I found out this feature only works between two independent Synology DiskStations; however, you can use the automated backup feature to back up data from Disk 1 onto Disk 2, which produces about the same result as Folder Sync and gives you a few more options for backing up system and application files as well.

Synology volume creation took about 7 hours for each drive with the automatic bad sector reallocation feature enabled. I later tested each drive with the S.M.A.R.T. extended test--each took about 4 hours--and I am happy to report that I did not have any bad sectors on either of the drives. That is, the "Reallocated Sector Count" reads zero in the S.M.A.R.T. report.

The drives are surprisingly quiet. I had an enterprise RE2 500GB in my NAS, and it was thunderstorm loud compared to these. The temperature is also very reasonable. When the drive is resting it is about 31C/88F, and under heavy usage it rises to about 35C/95F. Although these drives spin at only 5400 rpm, I don't see any difference in file transfer speed. The only downside that I could sense was the startup time from sleep. I feel that, compared to my old WD RE2 drive, it takes a good 2 to 5 seconds more for the NAS to come out of sleep each time. Not a deal breaker, but something to consider when you invest in these drives.

I think WD has done a good job with these drives, and they are currently the best on the market for home or home office use. That being said, I still think WD RE4 drives are the best enterprise drives and the ultimate in performance; however, if you are looking for a good set of drives for your NAS, and power consumption and noise are important to you, these WD Red drives will work just fine. Compared to desktop drives, these come with a few enterprise features that come in handy and will save you some time and money down the road.
52 of 61 people found the following review helpful
on November 5, 2013
The load cycle count issue has returned. It seems to be a problem with Netgear ReadyNAS, Synology, and QNAP NAS units.
Red drives up to 3TB are OK with these devices, but the 4TB accumulates load cycles very rapidly, potentially shortening the useful life of the drive.
10 of 10 people found the following review helpful
on August 3, 2015
I am sorry, but WD's warranty is a joke. Bought two brand new HDDs; one failed a few months after being in my RAID 1 NAS. Returned it under warranty (had to pay for shipping) - got back a RECERTIFIED HDD. Which, of course, failed after 1 month and lost ALL my data. Returned it again (had to pay for shipping) - again got a RECERTIFIED HDD, which was simply DOA and would not start at all. So their "5 year warranty" is simply feeding you faulty recertified HDDs until you run out of money for shipping them dead ones. Great idea, WD. Sorry, I cannot trust you with my data anymore. Going to your competitors.
30 of 36 people found the following review helpful
on September 6, 2014
Some of the 2014-2015 drives have the IntelliPark setting set incorrectly, causing roughly 110,000x more wear and tear than the 2013 ones.

I purchased four of these drives. Two in October 2013, two in January 2014.

The 2014 drives have parked their heads 442,710 times.
The 2013 ones have parked their heads 4 times.

(All of the drives are model WDC WD30EFRX-68EUZN0 running Firmware 80.00A80.)

I tried to register the drives to make a warranty claim and all of the serial numbers were blacklisted as being OEM. They were all purchased from this product page.

If you reach out to WD they will RMA the drives and allow you to register them through customer service manually. They sent me refurbished drives.
22 of 26 people found the following review helpful
on July 10, 2013
I purchased 4 of these drives along with a Synology 412+. The first drive went for a day before failing. I was surprised to say the least, but Amazon had a new one shipped overnight at no cost, which was awesome (great return/exchange policy here). Just got home today, a week after putting the NAS together, to find another failed drive (so far that's a 50% failure rate). With all the positive reviews on this drive, it's quite the bummer that I'm experiencing these issues. I'm crossing my fingers that no other drives die.
23 of 28 people found the following review helpful
on November 22, 2013
I ordered 6 of these drives. One arrived DOA.

Beware of the "Load Cycle Issue" with the 4TB version of these drives. The drives are rated at 300,000 lifetime load cycles, and the drives I used in a Netgear ReadyNAS 314 were racking up hundreds of load cycles a day until I disabled the "Idle 3 Timer". The drives would probably have died quickly at this rate.

I used two of the drives in a Synology DS213+, and it appears the Synology firmware disables the Idle3 timer automatically. Those drives had very few load cycles and had the timers already disabled when I checked them after setting up the Synology.

It is currently unclear if disabling this timer affects your warranty. I used the idle3ctl utility in Linux to disable the timers. On a ReadyNAS OS 6 unit, you can install this utility by typing "apt-get install idle3ctl" at a root prompt. Read the documentation and understand what you are doing before using this utility. You only have to run the utility once on each drive to disable the timer. The setting is persistent and survives power cycles.
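
For reference, here is roughly what that one-time pass looks like when scripted in Python. The -g (get) and -d (disable) switches come from idle3-tools, the device names are only examples, it needs root, and the drives generally need a power cycle before the new setting takes effect:

    # Show the current Idle3 timer on each WD Red, then disable it.
    # Double-check your device names before running this.
    import subprocess

    DRIVES = ["/dev/sda", "/dev/sdb"]   # example device names only

    for drive in DRIVES:
        subprocess.run(["idle3ctl", "-g", drive], check=False)   # print current setting
        subprocess.run(["idle3ctl", "-d", drive], check=False)   # disable the timer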