Thread: [Mon-devel] Request to include monitor script with distribution
From: Todd L. <tl...@iv...> - 2005-12-08 17:51:31
Attachments:
netappraidstat.monitor
Could the following monitor be examined and tested for inclusion in the monitor repository? I'm using it in production at the moment and it works for me <tm>. With some feedback from one kind soul, I've made a few adjustments from what I posted on the Mon users list a couple of days ago (longer volume name space). The following comment should answer most questions:

# Borrowed heavily from framework of netappfree.monitor.

The only real structural alteration I made is that I added 'use strict'. It's just a preference on my part. :-)

--
Regards... Todd
when you shoot yourself in the foot, just because you are so neurally
broken that the signal takes years to register in your brain, it does
not mean that your foot does not have a hole in it.      --Randy Bush
Linux kernel 2.6.12-12mdksmp   2 users,  load average: 0.22, 0.25, 0.16
From: Jim T. <tr...@ar...> - 2005-12-08 20:37:45
On Thu, 8 Dec 2005, Todd Lyons wrote:
> Could the following monitor be examined and tested for inclusion in the
> monitor repository?

sure, we can add it there.
From: Ed R. <er...@pa...> - 2005-12-08 20:53:18
On Thu, Dec 08, 2005 at 03:37:33PM -0500, Jim Trocki wrote:
> On Thu, 8 Dec 2005, Todd Lyons wrote:
>
> > Could the following monitor be examined and tested for inclusion in the
> > monitor repository?
>
> sure, we can add it there.

I will be testing it out in the next couple of days on the NetApps in my shop. Do you have any documentation for it, in particular the format of the config file?
From: Todd L. <tl...@iv...> - 2005-12-09 00:00:35
Ed Ravin wanted us to know:
> > > Could the following monitor be examined and tested for inclusion in the
> > > monitor repository?
> > sure, we can add it there.
> I will be testing it out in the next couple of days on the NetApps in
> my shop. Do you have any documentation for it, in particular the
> format of the config file?

It's pretty flexible.

1) It looks for netappraidstat.cf in one of two places: /etc/mon/netappraidstat.cf or /usr/lib/mon/etc/netappraidstat.cf. The config file format is one hostname per line.

2) It's designed so that if you already have a netappfree.cf file, you can just symlink netappraidstat.cf to it and it will work (it ignores everything else on the line after the hostname).

If more places need to be searched for the config file, it would be trivial to add (but I lose my cool little oneliner and turn it into if {} elsif {} ... not as pretty, but ok :-)

--
Regards... Todd
I've visited conferences where the wireless LAN was deemed "secure" by the
organisation because they had outlawed sniffers.          --Neils Bakker
Linux kernel 2.6.12-12mdksmp   2 users,  load average: 0.10, 0.09, 0.08
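[Editor's note: the two-location search and one-hostname-per-line parse Todd describes can be sketched in Perl as below. The "cool little oneliner" is the grep; everything other than the two paths named in the message is illustrative, not the attached script's actual code.]

```perl
#!/usr/bin/perl -w
# Sketch: find the first readable config file, then take only the first
# whitespace-separated token (the hostname) from each non-comment line,
# so a symlink to netappfree.cf works unchanged.
use strict;

my ($cf) = grep { -r $_ }
    ("/etc/mon/netappraidstat.cf", "/usr/lib/mon/etc/netappraidstat.cf");
die "no config file found\n" unless defined $cf;

my @hosts;
open my $fh, '<', $cf or die "cannot open $cf: $!\n";
while (<$fh>) {
    next if /^\s*(#|$)/;          # skip comments and blank lines
    my ($host) = split ' ', $_;   # ignore the rest of the line
    push @hosts, $host;
}
close $fh;
```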
From: Ed R. <er...@pa...> - 2005-12-09 05:02:32
On Thu, Dec 08, 2005 at 04:00:15PM -0800, Todd Lyons wrote:
> 1) It looks for netappraidstat.cf in one of two places:
>    /etc/mon/netappraidstat.cf or /usr/lib/mon/etc/netappraidstat.cf
>    The configfile format is one hostname per line.
> 2) It's designed so that if you already have a netappfree.cf file, you
>    can just symlink netappraidstat.cf to it and it will work (it ignores
>    everything else on the line after the hostname.

Doesn't work for me - "volIndex" is not in my copy of the NetApp MIB:

    -- Version 1.5, May 2000

I have the raidVTable and the deprecated raidTable in that MIB, but nothing in the "vol" group. We're using ONTAP 6.1.2R3 (yes, I know).

I can hack it to use the raidVTable MIB, and it gives some useful info, but I don't think I'm able to properly test it.
From: Todd L. <tl...@iv...> - 2005-12-09 15:26:13
Ed Ravin wanted us to know:
> Doesn't work for me - "volIndex" is not in my copy of the NetApp MIB:
>     -- Version 1.5, May 2000
> I have the raidVTable and the deprecated raidTable in that MIB, but nothing
> in the "vol" group. We're using ONTAP 6.1.2R3 (yes, I know).

:-)

> I can hack it to use the raidVTable MIB, and it gives some useful info,
> but I don't think I'm able to properly test it.

Our ONTAP is 6.5. It looks like the script will have to do some version checking and compensating before it can be accepted for general release. I'll grab the schema from NOW for the various ONTAP versions and dig through them to see 1) which ones support volIndex (or the volTable in general) and 2) if anything else could be used in its stead.

Thanks for testing this for me. My hardware selection is very limited :-)

--
Regards... Todd
I've visited conferences where the wireless LAN was deemed "secure" by the
organisation because they had outlawed sniffers.          --Neils Bakker
Linux kernel 2.6.12-12mdksmp   2 users,  load average: 0.01, 0.01, 0.00
From: Todd L. <tl...@iv...> - 2005-12-15 21:32:14
Attachments:
netappraidstat.monitor
Ed Ravin wanted us to know:
> Doesn't work for me - "volIndex" is not in my copy of the NetApp MIB:
>     -- Version 1.5, May 2000
> I have the raidVTable and the deprecated raidTable in that MIB, but nothing
> in the "vol" group. We're using ONTAP 6.1.2R3 (yes, I know).

Looking in the various NetApp MIBs, I see the following:

1) volTable exists for ONTAP 6.4 through 7.0, which is what this script was written for.
2) plexTable exists for ONTAP 6.2 through 7.0, which seems to give similarly useful information.
3) raidVTable exists for ONTAP 6.0 through 7.0; the output could be mostly generated from that as well.

...a couple hours of coding and testing...

For ONTAP 6.2 and 6.3, I can generate the appropriate values, with the slight alteration that the Volume Status shows the rebuild percent. It will show 0% in normal operation.

For ONTAP 6.0 and 6.1, I can generate the appropriate values from the raidVTable with the same Volume Status limitation, but it prints a line out for every drive. Someone with a real 6.0 or 6.1 ONTAP system (<cough> Ed <cough>) ought to be able to figure out if the fields that I'm using will be duplicated across each drive. If yes, then you can keep only one line by doing a 'next' or 'last', or by setting a flag to get out of the foreach loop. I think there will need to be some trickery to get out of both the inner while and the outer foreach loops, though.

The new script is attached. There are some new command-line options available. --forceold forces it to use the old raidVTable MIB. --forceplex forces it to use the plexTable MIB. Using neither option allows it to auto-detect which version of ONTAP it's connecting to and use the appropriate table (volTable, plexTable, then raidVTable, in that order of preference). There is also a --debug option which prints the same info as if a problem had been detected.
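[Editor's note: the auto-detection order Todd describes (volTable for 6.4-7.0, plexTable for 6.2-6.3, raidVTable for 6.0-6.1) could be sketched as below. The function name and the version-string parsing are assumptions for illustration, not the attached script's actual code.]

```perl
# Sketch: choose the MIB table to walk from the ONTAP version,
# preferring volTable, then plexTable, then raidVTable.
use strict;

sub pick_table {
    my ($ontap) = @_;                 # e.g. "6.5" or "6.1.2R3"
    my ($maj, $min) = $ontap =~ /^(\d+)\.(\d+)/
        or return undef;              # unparseable version: give up
    my $v = $maj + $min / 10;         # assumes single-digit minors
    return 'volTable'   if $v >= 6.4; # ONTAP 6.4 - 7.0
    return 'plexTable'  if $v >= 6.2; # ONTAP 6.2 - 6.3
    return 'raidVTable' if $v >= 6.0; # ONTAP 6.0 - 6.1
    return undef;                     # below 6.0: unsupported
}
```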
The force* command-line options are intended more for troubleshooting than normal operation, but they are available in both modes (normal and list). If need be, some documentation can be added in the comment section at the top of the script to explain them.

Here's the output of the latest incarnation of the script for the various ways of calling it:

admin51 mon.d # ./netappraidstat.monitor netapp1 netapp2
admin51 mon.d # ./netappraidstat.monitor --forceplex netapp1 netapp2
admin51 mon.d # ./netappraidstat.monitor --forceold netapp1 netapp2
admin51 mon.d # ./netappraidstat.monitor netapp1 netapp2 --debug
netapp1 netapp2
netapp1 is online, status: 'raid4'
netapp2 is online, status: 'raid4'
admin51 mon.d # ./netappraidstat.monitor netapp1 netapp2 --debug --forceplex
netapp1 netapp2
netapp1 is active, status: 'Rebuilding: 0%'
netapp2 is active, status: 'Rebuilding: 0%'
admin51 mon.d # ./netappraidstat.monitor --list netapp1 netapp2
filer     ONTAP   Volume Name       Vol State  Vol Status
---------------------------------------------------------------------------
netapp1   6.5     vol0              online     raid4
netapp2   6.5     vol0              online     raid4
admin51 mon.d # ./netappraidstat.monitor --list --forceplex netapp1 netapp2
filer     ONTAP   Volume Name       Vol State  Vol Status
---------------------------------------------------------------------------
netapp1   6.5     vol0              active     Rebuilding: 0%
netapp2   6.5     vol0              active     Rebuilding: 0%
admin51 mon.d # ./netappraidstat.monitor --list --forceold netapp1 netapp2
filer     ONTAP   Volume Name       Vol State  Vol Status
---------------------------------------------------------------------------
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp1   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%
netapp2   6.5     /vol0/plex0       active     Rebuilding: 0%

> I can hack it to use the raidVTable MIB, and it gives some useful info,
> but I don't think I'm able to properly test it.

For reference, here's the volTable output, useful for ONTAP 6.4, 6.5 and 7.0:

[todd@tlyons ~]$ /usr/bin/snmpwalk -c webBuilder -v 1 -m /home/todd/netapp/mib_6.5/netapp.mib.txt netapp1 volTable
NETWORK-APPLIANCE-MIB::volIndex.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::volName.1 = STRING: "vol0"
NETWORK-APPLIANCE-MIB::volFSID.1 = STRING: "3120846801"
NETWORK-APPLIANCE-MIB::volOwningHost.1 = INTEGER: local(1)
NETWORK-APPLIANCE-MIB::volState.1 = STRING: "online"
NETWORK-APPLIANCE-MIB::volStatus.1 = STRING: "raid4"
NETWORK-APPLIANCE-MIB::volOptions.1 = STRING: "root, diskroot, nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, raidtype=raid4, raidsize=8, nvfail=off, snapmirrored=off, resyncsnaptime=60, create_ucode=on, convert_ucode=off, maxdirsize=10470, fs_size_fixed=off, create_reserved=off"
NETWORK-APPLIANCE-MIB::volUUID.1 = STRING: "censored"

And here's the plexTable output, useful for ONTAP 6.2 and 6.3:

[todd@tlyons ~/netapp]$ /usr/bin/snmpwalk -c webBuilder -v 1 -m /home/todd/netapp/mib_6.5/netapp.mib.txt netapp1 plexTable
NETWORK-APPLIANCE-MIB::plexIndex.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::plexName.1 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::plexVolName.1 = STRING: "vol0"
NETWORK-APPLIANCE-MIB::plexStatus.1 = INTEGER: online(3)
NETWORK-APPLIANCE-MIB::plexPercentResyncing.1 = INTEGER: 0

For the name, I didn't know if I should use plexName or plexVolName, so I used plexVolName. Feel free to advise me one way or the other on that.
And here's the raidVTable output, useful for ONTAP 6.0 and 6.1:

[todd@tlyons ~/netapp]$ /usr/bin/snmpwalk -c webBuilder -v 1 -m /home/todd/netapp/mib_6.5/netapp.mib.txt netapp0 raidVTable
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.2 = INTEGER: 2
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.3 = INTEGER: 3
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.4 = INTEGER: 4
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.5 = INTEGER: 5
NETWORK-APPLIANCE-MIB::raidVIndex.1.1.6 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.1 = STRING: "data disk 7.4"
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.2 = STRING: "data disk 7.1"
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.3 = STRING: "parity disk 7.0"
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.4 = STRING: "data disk 7.2"
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.5 = STRING: "data disk 7.5"
NETWORK-APPLIANCE-MIB::raidVDiskName.1.1.6 = STRING: "dparity disk 7.6"
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.1 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.2 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.3 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.4 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.5 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVStatus.1.1.6 = INTEGER: active(1)
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.1 = INTEGER: 393217
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.2 = INTEGER: 327681
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.3 = INTEGER: 262145
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.4 = INTEGER: 196609
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.5 = INTEGER: 65537
NETWORK-APPLIANCE-MIB::raidVDiskId.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.1 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.2 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.3 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.4 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.5 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiAdapter.1.1.6 = STRING: "7"
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.1 = INTEGER: 4
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.3 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.4 = INTEGER: 2
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.5 = INTEGER: 5
NETWORK-APPLIANCE-MIB::raidVScsiId.1.1.6 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.1 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.2 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.3 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.4 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.5 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedMb.1.1.6 = INTEGER: 16979
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.1 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.2 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.3 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.4 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.5 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVUsedBlocks.1.1.6 = INTEGER: 34774016
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.1 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.2 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.3 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.4 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.5 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalMb.1.1.6 = INTEGER: 17366
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.1 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.2 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.3 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.4 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.5 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVTotalBlocks.1.1.6 = INTEGER: 35566480
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.1 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.2 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.3 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.4 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.5 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVCompletionPerCent.1.1.6 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVVol.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVVol.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVVol.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVVol.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVVol.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVVol.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroup.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.1 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.2 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.3 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.4 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.5 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVDiskNumber.1.1.6 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVGroupNumber.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.1 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.2 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.3 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.4 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.5 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVDiskPort.1.1.6 = INTEGER: portA(1)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.1 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.2 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.3 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.4 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.5 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskName.1.1.6 = ""
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.1 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.2 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.3 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.4 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.5 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVSecondaryDiskPort.1.1.6 = INTEGER: portNone(4)
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.1 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.2 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.3 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.4 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.5 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVShelf.1.1.6 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVBay.1.1.1 = INTEGER: 4
NETWORK-APPLIANCE-MIB::raidVBay.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVBay.1.1.3 = INTEGER: 0
NETWORK-APPLIANCE-MIB::raidVBay.1.1.4 = INTEGER: 2
NETWORK-APPLIANCE-MIB::raidVBay.1.1.5 = INTEGER: 5
NETWORK-APPLIANCE-MIB::raidVBay.1.1.6 = INTEGER: 6
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlex.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexGroup.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.1 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.2 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.3 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.4 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.5 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexNumber.1.1.6 = INTEGER: 1
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.1 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.2 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.3 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.4 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.5 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVPlexName.1.1.6 = STRING: "/vol0/plex0"
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.1 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.2 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.3 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.4 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.5 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVSectorSize.1.1.6 = INTEGER: 512
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.1 = STRING: "LKE378300000101639R9"
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.2 = STRING: "LKJ68162000010162L6L"
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.3 = STRING: "LKJ79075000010200C1S"
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.4 = STRING: "LKJ780870000101937T6"
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.5 = STRING: "LK53810700002923H8E4"
NETWORK-APPLIANCE-MIB::raidVEntry.26.1.1.6 = STRING: "LKJ793720000101937R7"
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.1 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.2 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.3 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.4 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.5 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.27.1.1.6 = STRING: "SEAGATE "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.1 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.2 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.3 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.4 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.5 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.28.1.1.6 = STRING: "ST118202FC "
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.1 = STRING: "NA27"
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.2 = STRING: "NA27"
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.3 = STRING: "NA27"
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.4 = STRING: "NA27"
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.5 = STRING: "NA27"
NETWORK-APPLIANCE-MIB::raidVEntry.29.1.1.6 = STRING: "NA27"

--
Regards... Todd
We should not be building surveillance technology into standards. Law
enforcement was not supposed to be easy. Where it is easy, it's called
a police state.                          -- Jeff Schiller on NANOG
Linux kernel 2.6.12-12mdksmp   2 users,  load average: 0.00, 0.07, 0.06
From: Ed R. <er...@pa...> - 2005-12-16 00:09:36
On Thu, Dec 15, 2005 at 01:31:58PM -0800, Todd Lyons wrote:
> Ed Ravin wanted us to know:
>
> > Doesn't work for me - "volIndex" is not in my copy of the NetApp MIB:
> >     -- Version 1.5, May 2000
> > I have the raidVTable and the deprecated raidTable in that MIB, but nothing
> > in the "vol" group. We're using ONTAP 6.1.2R3 (yes, I know).
...
> ...a couple hours of coding and testing...
>
> For ONTAP 6.2 and 6.3, I can generate the appropriate values with the
> slight alteration of the Volume Status showing the Rebuild percent. It
> will show 0% in normal operation.
>
> For ONTAP 6.0 and 6.1, I can generate the appropriate values from the
> raidVTable with the same Volume Status limitation, but it prints a line
> out for every drive. Someone with a real 6.0 or 6.1 ONTAP system
> (<cough> Ed <cough>) ought to be able to figure out if the fields that
> I'm using will be duplicated across each drive.

Todd, thanks for all the coding! I'm in the middle of a heavy operation at work, so I won't be able to get to this until late next week; I will report back then.

On a related note, is anyone monitoring their NetApp fans and other environmental info via SNMP?

-- Ed
From: Todd L. <tl...@iv...> - 2005-12-16 17:30:37
Ed Ravin wanted us to know:
> > > Doesn't work for me - "volIndex" is not in my copy of the NetApp MIB:
> > >     -- Version 1.5, May 2000
> > > I have the raidVTable and the deprecated raidTable in that MIB, but nothing
> > > in the "vol" group. We're using ONTAP 6.1.2R3 (yes, I know).
> > ...a couple hours of coding and testing...
> > For ONTAP 6.2 and 6.3, I can generate the appropriate values with the
> > slight alteration of the Volume Status showing the Rebuild percent. It
> > will show 0% in normal operation.

Is that a reasonable thing to do? I could add some logic that prints "no problems" or similar if the value is "Rebuilding: 0%". I'm unsure whether this output changing as the rebuild percentage increments up will cause an alert for each change. I'm not a mon guru (I just started using it last week), so I'm not real solid yet on the alert and upalert functionality. I'm doing testing right now to see if I can have multiple alerts and upalerts per service. It seems like it should be able to do it, but I'm only getting the email an alert generates, not the pages. It could be other things though, so I have to work through that first.

> > For ONTAP 6.0 and 6.1, I can generate the appropriate values from the
> > raidVTable with the same Volume Status limitation, but it prints a line
> > out for every drive. Someone with a real 6.0 or 6.1 ONTAP system
> > (<cough> Ed <cough>) ought to be able to figure out if the fields that
> > I'm using will be duplicated across each drive.
> Todd, thanks for all the coding! I'm in the middle of a heavy operation
> at work, won't be able to get to this until late next week, will report
> back then.

/me stamps foot... Just kidding. Yes, I have a vested interest in making this work as well.

> On a related note, is anyone monitoring their NetApp fans and other
> environmental info via SNMP?

I'm not, but I'd be willing to bet that another script could be modeled after this one to do the same checks and output. I'm also wondering if it would just be better to have a single netapp script where you can specify which functions you want monitored. Just a thought...

--
Regards... Todd
we're off on the usual strange tangents. next will be whether it is
ethical to walk in your neighbor's open house if they're running
ipv6:-).                                               --Randy Bush
Linux kernel 2.6.12-12mdksmp   2 users,  load average: 0.18, 0.09, 0.17
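[Editor's note: the multiple-alerts-per-service setup Todd is testing is expressed in mon.cf with repeated alert lines under one period. A hedged sketch; the watch group name, service name, and alert destinations below are placeholders, not anything from this thread:]

```
watch netapps
    service raidstat
        interval 5m
        monitor netappraidstat.monitor
        period wd {Sun-Sat}
            alert mail.alert admin@example.com
            alert qpage.alert oncall-pager
            upalert mail.alert admin@example.com
```

mail.alert ships with the mon distribution; the pager line stands in for whatever pager alert script is installed locally.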
From: Ed R. <er...@pa...> - 2005-12-18 06:10:08
Attachments:
netappraidstat.fixes.diff
On Thu, Dec 15, 2005 at 01:31:58PM -0800, Todd Lyons wrote:
> For ONTAP 6.0 and 6.1, I can generate the appropriate values from the
> raidVTable with the same Volume Status limitation, but it prints a line
> out for every drive. Someone with a real 6.0 or 6.1 ONTAP system
> (<cough> Ed <cough>) ought to be able to figure out if the fields that
> I'm using will be duplicated across each drive.

That's if he can get the script working. Which he could, but it needed a bit of hacking (I don't see how the "list" option worked at all in the version you sent). See the attached diffs - I fixed a couple of things that turned up with "-w", typos in the names for the old MIB, some comments, and a small sample of coding-style things. Oh, and I added an environment variable for the community; every Mon script that uses communities should support that (to keep the community name from turning up in the mon.cgi details).

Also, though it's not in the patch, I moved the duplicated array declarations to the top next to the other globals, and it worked fine. I'm using Perl 5.6.1; I don't know what you have.

> If yes, then you only
> keep one line by doing a 'next' or 'last' or set a flag to get out of
> the foreach loop. I think that there will need to be some trickery to
> get out of both the inner while and the outer foreach loops though.

You can provide a label argument to the 'last' statement that points to where you want to go.

> The new script is attached. [...]

I don't like "Rebuilding: 0%" as a status output - I first thought that the filer was rebuilding the RAID and it's so slow it hasn't even gotten to 1% yet. You should only add the "Rebuilding" tag if the status shows that the filer is reconstructing the volume.
Here's the output of --list on my old NetApp:

$ ./netappraidstat.monitor --list --config /etc/mon/netappfree.cf toaster
filer     ONTAP    Volume Name        Vol State  Vol Status
---------------------------------------------------------------------------
toaster   6.1.2R3  parity disk 8.30   active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.31     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.23     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.21     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.22     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.24     active     Rebuilding: 0%
toaster   6.1.2R3  parity disk 8.26   active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.25     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.27     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.28     active     Rebuilding: 0%
toaster   6.1.2R3  data disk 8.29     active     Rebuilding: 0%
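[Editor's note: Ed's labeled-loop tip, applied to the one-line-per-filer problem Todd described, looks like the sketch below. The helper subs are hypothetical stand-ins for the script's real SNMP-walking code.]

```perl
# A label on the outer loop lets 'next' (or 'last') escape both the
# inner while and the outer foreach in one statement, so only one
# status line is kept per host.
use strict;

HOST: foreach my $host (@hosts) {
    while (my $row = next_row($host)) {      # next_row() is hypothetical
        print_status($host, $row);           # as is print_status()
        next HOST;   # first row printed; skip straight to the next host
    }
}
```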
From: Ed R. <er...@pa...> - 2005-12-18 16:57:08
One of our filers noticed that I was testing the RAID status and helpfully failed a disk and began rebuilding. I now get these statuses:

$ ./netappraidstat.monitor --config /etc/mon/netappfree.cf toaster
toaster
toaster is reconstruct, status: 'Rebuilding: 82%'

And the --list option shows this:

filer     ONTAP    Volume Name        Vol State  Vol Status
---------------------------------------------------------------------------
trantor   6.1.2R3  parity disk 8.30   active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.31     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.23     reconstru  Rebuilding: 82%
trantor   6.1.2R3  data disk 8.21     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.22     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.24     active     Rebuilding: 0%
trantor   6.1.2R3  parity disk 8.26   active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.25     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.27     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.28     active     Rebuilding: 0%
trantor   6.1.2R3  data disk 8.29     active     Rebuilding: 0%

BTW, the filer is configured as two volumes, each with their own parity disk, so it looks like the code needs to learn a bit more. Here's the MIB dump that shows the different volumes:

enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.1 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.2 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.3 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.4 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.5 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.6 = 1
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.1 = 2
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.2 = 2
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.3 = 2
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.4 = 2
enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.5 = 2

-----------------

[output below from my previous note, when the filer was in normal status]

> Here's the output of --list on my old NetApp:
> $ ./netappraidstat.monitor --list --config /etc/mon/netappfree.cf toaster
> filer     ONTAP    Volume Name        Vol State  Vol Status
> ---------------------------------------------------------------------------
> toaster   6.1.2R3  parity disk 8.30   active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.31     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.23     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.21     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.22     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.24     active     Rebuilding: 0%
> toaster   6.1.2R3  parity disk 8.26   active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.25     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.27     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.28     active     Rebuilding: 0%
> toaster   6.1.2R3  data disk 8.29     active     Rebuilding: 0%
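[Editor's note: the raidVGroup dump above suggests one way to make the code "learn a bit more": bucket the per-disk rows by the group component of the SNMP instance before reporting. A sketch, assuming the instance decomposes as plex.group.disk; that index order is a reading of the dump, not documented fact.]

```perl
# Bucket per-disk status rows by RAID group so each group can be judged
# separately. @rows holds [instance, status] pairs such as
# ["1.2.3", "active"], as would come from walking raidVStatus.
use strict;

my %by_group;
foreach my $row (@rows) {
    my (undef, $group, undef) = split /\./, $row->[0];
    push @{ $by_group{$group} }, $row->[1];
}
foreach my $group (sort { $a <=> $b } keys %by_group) {
    my @bad = grep { $_ !~ /^active/ } @{ $by_group{$group} };
    print "raid group $group: ", (@bad ? "PROBLEM (@bad)" : "ok"), "\n";
}
```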
From: Ed R. <er...@pa...> - 2005-12-18 17:38:47
On Sun, Dec 18, 2005 at 11:56:58AM -0500, Ed Ravin wrote:
> One of our filers noticed that I was testing the RAID status and
> helpfully failed a disk and began rebuilding. I now get these
> statuses:
>
> $ ./netappraidstat.monitor --config /etc/mon/netappfree.cf toaster

I hurriedly put netappraidstat.monitor into production and noticed that it didn't support this filer:

    NetApp Release 5.2.3P1: Wed Jan 12 11:15:32 PST 2000

And the filer helpfully had an empty sysDescr string, further confusing things. The problem is that it didn't support raidVVol - this filer is so old, it only has one volume, thus no MIB entry for it. No big deal; that filer should be in a museum and we're decommissioning it soon.

However, I did notice this fallout from adding -w:

$ ./netappraidstat.monitor --forceold nonexistent
Use of uninitialized value in concatenation (.) or string at ./netappraidstat.monitor line 96.
nonexistent

91  if (!defined($s = new SNMP::Session (DestHost => $host,
92      Timeout => $TIMEOUT, Community => $COMM,
93      Retries => $RETRIES, Version => $SNMPVERSION))) {
94      $RET = ($RET == 1) ? 1 : 2;
95      $HOSTS{$host} ++;
96      push (@ERRS, "could not create session to $host: " . $SNMP::Session::ErrorStr);
97      next;
98  }

Looking at the code of SNMP.pm, it looks like ErrorStr is not defined if new() fails; other mon scripts I've looked at don't even try to get ErrorStr at this stage.
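[Editor's note: given Ed's observation that $SNMP::Session::ErrorStr may be unset when new() fails, one way to silence the -w warning is to test it before concatenating. A sketch reworking the excerpt above; $host, $COMM, and the other globals are the script's existing variables, and the fallback message is illustrative:]

```perl
# Fall back to a generic message when SNMP::Session::new() fails
# without populating $SNMP::Session::ErrorStr (Perl 5.6-era code,
# so no defined-or operator).
if (!defined($s = new SNMP::Session (DestHost => $host,
    Timeout => $TIMEOUT, Community => $COMM,
    Retries => $RETRIES, Version => $SNMPVERSION))) {
    my $err = defined($SNMP::Session::ErrorStr)
        ? $SNMP::Session::ErrorStr
        : "session creation failed";
    $RET = ($RET == 1) ? 1 : 2;
    $HOSTS{$host}++;
    push (@ERRS, "could not create session to $host: $err");
    next;
}
```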
From: Todd L. <tl...@iv...> - 2005-12-21 17:58:43
Attachments:
netappraidstat.monitor
|
On Sun, Dec 18, 2005 at 01:10:05AM -0500, Ed Ravin wrote:
> the version you sent). See attached diffs - I fixed a couple of things
> that turned up with "-w", typos in the names for the old MIB, some
> comments, and a small sample of coding style things. Oh, and added an
> env var for the community; every Mon script that uses communities
> should support that (to keep the community name from turning up in the
> mon.cgi details).

Incorporated all your fixes.

> Also, though it's not in the patch, I moved the duplicated array
> declarations to the top next to the other globals, and it worked fine.
> I'm using Perl 5.6.1, don't know what you have.

<sigh> Yes, I wasn't paying attention and had put the array declaration
after the list() function call. Doesn't take a genius to figure out why
*THAT* didn't work.

> You can provide a label argument to the 'last' statement that points
> to where you want to go.

Still on the TODO list.

> I don't like "Rebuilding: 0%" as a status output - I first thought
> that the filer was rebuilding the RAID and it's so slow it hasn't even
> gotten to 1% yet. You should only add the "Rebuilding" tag if the
> status shows that the filer is reconstructing the volume.

Done. It now shows "normal" when the value is zero.

On Sun, Dec 18, 2005 at 11:56:58AM -0500, Ed Ravin wrote:
> filer      ONTAP    Volume Name         Vol State  Vol Status
> ---------------------------------------------------------------------------
> trantor    6.1.2R3  parity disk 8.30    active     Rebuilding: 0%
> trantor    6.1.2R3  data disk 8.31      active     Rebuilding: 0%
> trantor    6.1.2R3  data disk 8.23      reconstru  Rebuilding: 82%
<snip>
> BTW, the filer is configured as two volumes, each with their own
> parity disk, so it looks like the code needs to learn a bit more.
> Here's the MIB dump that shows the different volumes:
>
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.1 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.2 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.3 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.4 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.5 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.1.6 = 1
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.1 = 2
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.2 = 2
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.3 = 2
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.4 = 2
> enterprises.netapp.netapp1.raid.raidVTable.raidVEntry.raidVGroup.1.2.5 = 2

I think that was the reason why I originally used raidVPlexName instead
of raidVDiskName, but both pieces of information are uniquely useful in
their own right. A bit more processing could make it print a single line
for a non-critical array, and only the lines for the
reconstructing/failed/offline drives when an array is critical. That
would mean that I need to query both and do a bit of processing first.

On Sun, Dec 18, 2005 at 12:38:41PM -0500, Ed Ravin wrote:
> I hurriedly put netappraidstat.monitor into production and noticed
> that it didn't support this filer:
>
>   NetApp Release 5.2.3P1: Wed Jan 12 11:15:32 PST 2000

Yes, it stops checking below 6.0. If the raidVTable exists in that MIB,
it could be added to the regex that checks the version numbers. But
reading on...

> And the filer helpfully had an empty sysDescr string, further
> confusing things. The problem is that it didn't support raidVVol -
> this filer is

If we can't get the ONTAP version, then we can't use it.
So I'd say we should not try to support a version that old (though
arguably that old a version is the one most in need of monitoring).

> so old, it only has one volume, thus no MIB entry for it. No big deal,
> that filer should be in a museum and we're decommissioning it soon.
> However, I did notice this fallout from adding -w:
>
>   $ ./netappraidstat.monitor --forceold nonexistent
>   Use of uninitialized value in concatenation (.) or string at
>   ./netappraidstat.monitor line 96.
>
>   96  push (@ERRS, "could not create session to $host: " . $SNMP::Session::ErrorStr);

Fixed.

Ed, thanks for the testing and suggestions. This version has your fixes
in it. When you feel it is suitable for production, let me know and
we'll submit it to Jim for inclusion in CVS -- it's not currently in
HEAD as far as I can see, and I can think of no reason why it would
have been put in the 1.0 branch. Please refer to the attached script
(not a patch; it's the full script).

--
Regards...      Todd
OS X: We've been fighting the "It's a mac" syndrome with upper
management for years now. Lately we've taken to just referring to new
mac installations as "Unix" installations when presenting proposals
and updates. For some reason, they have no problem with that.  -- /.
Linux kernel 2.6.12-12mdksmp   3 users,  load average: 0.13, 0.09, 0.09
|
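[Editor's note: a minimal sketch of the per-volume summary Todd describes above — one line for a healthy volume, and only the problem disks when a volume has reconstructing/failed/offline drives. The data layout, state names, and helper name are assumptions for illustration, not the monitor's real structures.]

```perl
#!/usr/bin/perl
# Editor's sketch: condense per-disk status into per-volume lines.
use strict;
use warnings;

sub volume_lines {
    my ($vols) = @_;
    my @lines;
    for my $vol (sort keys %$vols) {
        # Anything not "active" counts as a problem disk here.
        my @bad = grep { $_->{state} ne 'active' } @{ $vols->{$vol} };
        if (@bad) {
            push @lines, sprintf("volume %s: %s %s",
                                 $vol, $_->{disk}, $_->{state})
                for @bad;
        }
        else {
            push @lines, sprintf "volume %s: normal (%d disks)",
                $vol, scalar @{ $vols->{$vol} };
        }
    }
    return @lines;
}

my %vols = (
    1 => [ { disk => '8.30', state => 'active' },
           { disk => '8.31', state => 'active' } ],
    2 => [ { disk => '8.23', state => 'reconstructing' },
           { disk => '8.24', state => 'active' } ],
);
print "$_\n" for volume_lines(\%vols);
# prints:
# volume 1: normal (2 disks)
# volume 2: 8.23 reconstructing
```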