From: Linux U. #3. <lin...@gm...> - 2010-04-21 19:39:00
Thanks for the replies! And sorry that it took me so long to get back here. I just couldn't manage to get some spare time…

OVERVIEW:
=========
DISK  ACTIVE?  POH   MFG/MODEL
----  -------  ----  ------------------------------------------
 1    NO       3752  Seagate Momentus 5400.6 ST9500325AS
 1    YES       150  Toshiba Hornet M250 500GB MJA2500BH
 2    YES      3856  Hitachi Travelstar 5K500.B HTS545050B9A300
 3    YES      1630  Western Digital Scorpio Blue WD5000BEVT
 4    YES      3858  SAMSUNG SpinPoint M7 HM500JI
--------------------------------------------------------------

Thanks for your answers and for your help so far.

So, Franc Zabkar wrote:

> Some attributes are best viewed in hexadecimal format. To this end
> there is a "hex48" (48-bit hex) switch in smartctl, eg ...
>
> $ smartctl -v 5,hex48 -v 196,hex48 -v 203,hex48 -v
> 200,Write_Error_Rate -v 240,Transfer_Error_Rate -A /dev/ice

Thanks, I didn't know that. I'll try this hex48 view for unusually high raw numbers next time.

> You can also change the names of the attributes, as in the previous
> example.

That… makes no sense to me. Why should I change the names?

> I believe there may also be a switch to display smartctl results in
> "Fujitsu format".
>
> Here are three attributes that look better in hex:
>
> Reallocated_Sector_Ct   = 9019431321600 = 0x083400000000
> Reallocated_Event_Count = 826015744     = 0x0000313c0000
> Run_Out_Cancel          = 3728044065286 = 0x036400be0206
>
> AIUI, the Reallocated_Sector_Ct consists of three 16-bit words as follows:
>
> (#spare sectors remaining) (#reallocated sectors) (#reallocation events)
>
> So, AISI, the drive has 2100 (=0x0834) spares with no reallocations.

Thanks, that is good to hear.

> I believe the lower 16 bits of the Reallocated_Event_Count store the
> number of reallocation events.
>
> I have no idea what the Run_Out_Cancel data mean, except to say that
> the raw value appears to consist of three 16-bit words.
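By the way, splitting such a 48-bit raw value into the three 16-bit words is easy with plain shell arithmetic. A quick sketch (the spares/reallocated/events layout is Franc's interpretation, not anything documented by Seagate):

```shell
# Split a 48-bit SMART raw value into three 16-bit words
# (word layout per Franc's interpretation: spares / reallocated / events).
raw=0x083400000000   # Reallocated_Sector_Ct raw value from above
printf 'spares=%d reallocated=%d events=%d\n' \
    $(( (raw >> 32) & 0xFFFF )) \
    $(( (raw >> 16) & 0xFFFF )) \
    $((  raw        & 0xFFFF ))
# prints: spares=2100 reallocated=0 events=0
```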
> As for the Raw_Read_Error_Rate, Seek_Error_Rate, and Write_Error_Rate
> (attribute 200), I understand the raw numbers to be a sector count,
> not an error count.

Which would make these values "technical" and not of interest for a normal user like me. Thanks.

> Fujitsu (and Seagate) drives compute the error rates for each block
> of sectors accessed. Seagate appears to count up to 250 million
> sectors before the number rolls over to zero, whereas Fujitsu appears
> to count up to a much smaller number, probably 0x3FFFF (= 262,143).
>
> In short, I don't see anything that would worry me.

Good to hear. It's a new drive, so it was expected to be "good"… only the raw values were so cryptic to me.

> The Raw_Read_Error_Rate for the Hitachi HTS545050B9A300 is 0x00010001
> (= 65537), which tallies with loss of one point from the normalised
> value of 99, ie the "read error rate" appears to be 1.
>
> The Power_On_Hours figure of 3710 has resulted in a loss of 8 points.
> So, according to SMART, the drive's rated life is between ...
>
> 3710 / 9 * 100 / 365 / 24 = 4.7 years
>
> ... and ...
>
> 3710 / 8 * 100 / 365 / 24 = 5.3 years
>
> The SAMSUNG HM500JI hasn't lost a point after 3712 hours, so it
> appears that there is a bug in that attribute.
>
> The WDC WD5000BEVT has lost 2 points after 1485 hours, so its rated
> life is between ...
>
> 1485 / 3 * 100 / 365 / 24 = 5.65 years
>
> ... and ...
>
> 1485 / 2 * 100 / 365 / 24 = 8.5 years

Now that is strange. The drives were bought new in November 2009, and I installed them together (more or less… the 2-hour difference may be because I tested one drive in an external enclosure first, but I didn't test the other drives).

The drives are running 24/7: they don't go into energy-saving mode, and they never stop spinning. The reason is that my NAS is in the cellar, where the temperature is between 5 °C in the winter and 15 °C in the summer. Right now the temperature is 14 °C.
If I let the NAS shut the drives down, they'd probably cool to ~5 °C in the winter, and that could shorten their lifetime drastically if it happened all the time. The other reason is that the NAS is seeding some (free) torrents, so the drives are accessed permanently. Spin-down wouldn't make sense in such a scenario.

So, coming back to the POH values: running 24/7 for around 5½ months should be no more than around 3960 hours (quick calculation: 5.5 months times 30 days times 24 hours). Since I had the NAS stopped for some days now and then, the value of 3710 is realistic. It seems to be correct. Thus, the WD drive reports a wrong value for POH.

> Your Load_Cycle_Count figures are extremely worrying.
>
> Drive      LCC      POH   LCC frequency (secs)
> --------------------------------------------
> Hitachi    903893   3710  14.8s
> WDC        263874   1485  20.3s
> SAMSUNG   3480080   3712   3.9s  <--- is this realistic ???
>
> The rated number of load cycles for the Hitachi appear to be between ...
>
> 903893 / 91 x 100 = 993,289
>
> ... and ...
>
> 903893 / 90 x 100 = 1,004,325
>
> Assuming the maximum normalised value for the WD is 200, then its
> rated number of load cycles appear to be between ...
>
> 263874 / 88 x 200 = 599,713
>
> ... and ...
>
> 263874 / 87 x 200 = 606,606
>
> The Samsung has already exceeded its expected lifetime.

What are load cycles, and why is the value of the Samsung drive so high? As I said before, it makes no sense to me, since the three remaining originally installed drives have been running for exactly the same amount of *everything*. Their values for POH, LCC, etc. should be exactly the same.

Tim Small wrote:

> Franc Zabkar wrote:
> > Your Load_Cycle_Count figures are extremely worrying.
>
> FYI:
>
> https://ata.wiki.kernel.org/index.php/Known_issues#Drives_which_perform_frequent_head_unloads_under_Linux
>
> Try fiddling with the APM values, and see if they stop increasing
> (this doesn't work for WD; you'll need to use their DOS tool).
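For what it's worth, Franc's "LCC frequency" column is easy to reproduce: seconds per load cycle is just power-on hours converted to seconds, divided by the load cycle count. Using the Hitachi's numbers from the table above:

```shell
# seconds per load cycle = POH * 3600 / LCC
# (Hitachi: 3710 power-on hours, 903893 load cycles)
awk 'BEGIN { printf "%.1f s per load cycle\n", 3710 * 3600 / 903893 }'
# prints: 14.8 s per load cycle
```

which matches the 14.8s in the table.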
Maciej Żenczykowski wrote:

> I have a system with two 2.5" Seagate Momentus 500GB drives, and the
> BIOS appears to default to hdparm -B 128.
> Since I have these drives in a raid configuration, and this setting
> (and indeed any setting < 254) results in these drives being very slow
> (it appears every time raid sends a request to the other drive, the
> previous drive 'unloads/parks/stops' or does something equivalently
> stupid), for me the solution is to run "hdparm -B 254" as early as
> possible during boot (as early as possible, because the drives are
> _ridiculously_ slow otherwise and thus the boot takes ...ages...).

Okay, I could do that. How do I find out what the default (or currently set) values are?

BTW, the boot process of my DiskStation DS409slim is not slow at all. Everything seems to run as expected, and I haven't noticed any performance issues yet. This is my NAS:
http://www.synology.com/enu/products/DS409slim/index.php

Now, I removed the Seagate Momentus because the DiskStation reported it as failed. I've installed it into my laptop and booted into Parted Magic 4.8 (released 2009-12-28), and this is what it reports:

###
### REMOVED DISK 1 - BEGIN ###
### (removed due to being reported as broken by the DiskStation)
###

root@PartedMagic:~# smartctl -v 1,hex48 -v 7,hex48 -v 195,hex48 -a /dev/sda
smartctl 5.39 2009-12-09 r2995 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Momentus 5400.6 series
Device Model:     ST9500325AS
Serial Number:    5VE4HJES
Firmware Version: 0001SDM1
User Capacity:    500,107,862,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Wed Apr 21 21:04:48 2010 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 (   0) seconds.
Offline data collection
capabilities:                    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 142) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x103b) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail  Always       -      0x0000042b22b0
  3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -      0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -      22
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -      0
  7 Seek_Error_Rate         0x000f   073   060   030    Pre-fail  Always       -      0x0000015cc6dc
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -      3752
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -      0
 12 Power_Cycle_Count       0x0032   100   037   020    Old_age   Always       -      22
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -      0
187 Reported_Uncorrect      0x0032   099   099   000    Old_age   Always       -      1
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -      0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -      0
190 Airflow_Temperature_Cel 0x0022   055   055   045    Old_age   Always       -      45 (Lifetime Min/Max 26/45)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -      0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -      0
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -      706769
194 Temperature_Celsius     0x0022   045   045   000    Old_age   Always       -      45 (0 22 0 0)
195 Hardware_ECC_Recovered  0x001a   053   051   000    Old_age   Always       -      0x0000042b22b0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -      0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -      0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -      0
254 Free_Fall_Sensor        0x0032   100   100   000    Old_age   Always       -      0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%       3751       -
# 2  Short offline       Completed without error       00%       1570       -
# 3  Short offline       Completed without error       00%          0       -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

###
### REMOVED DISK 1 - END ###
###

My question now is: how do I see that this disk is broken? The short offline test completed without errors, and I cannot really interpret the values I'm getting. Is it the Raw_Read_Error_Rate V/114 W/099 T/006? It is 15 points above the WORST, and this is bad, right? Also, the Seek_Error_Rate V/073 W/060 T/030 is 13 points above WORST.

Franc Zabkar wrote:

> I would examine the SMART reports a day later, and compare the POH
> counts to see if they have advanced by the same amount.
>
> If the WDC lags behind, then it may be that the POH count is not
> incremented during power saving mode. Or could there be a bug in the
> firmware that causes the real time clock to advance by only 2 ticks
> in every 5 ???
>
> ie 3712 x (2/5) = 1484.8

I've attached two gzipped text files taken about 47-48 hours apart. Maybe this can clarify the issue, but I'm almost certain that the WD drive just "clicks" less often than the other drives.

If you care, I'd appreciate a little more help with this. My first priority is to make the drives last a long time. This load/unload issue worries me, but I don't know exactly what to do about it. Also, I don't understand the POH counter and why you calculated it to be around 4½ years, whereas it should be less than 6 months.

Anyway, thanks for your help.

Andreas
aka Linux User #330250
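PS: While re-reading the thread, I think I answered part of my own question: as I understand it, SMART only considers an attribute "failed" when its normalised VALUE drops to or below THRESH (so 114 against a threshold of 006 is fine, and higher is better; the distance above WORST is not the problem). A small filter over the smartctl output would flag any attribute that has actually crossed its threshold (a sketch; /dev/sda is just an example device):

```shell
# Flag SMART attributes whose normalised VALUE has dropped to or
# below the failure THRESH (column 4 vs column 6 of `smartctl -A`).
# A threshold of 000 means "never fails", so those rows are skipped.
smartctl -A /dev/sda | awk '$1 ~ /^[0-9]+$/ && ($6+0) > 0 && ($4+0) <= ($6+0) \
    { print "FAILING:", $2, "VALUE", $4, "<= THRESH", $6 }'
```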