From: SourceForge.net <no...@so...> - 2013-02-06 21:56:17
Bugs item #3577918, was opened at 2012-10-17 15:19
Message generated for change (Comment added) made by hemanthreddy
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=532251&aid=3577918&group_id=71730

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: HP ProLiant plugin
Group: 3.0.x
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Rick Lane (rvlane)
Assigned to: Hemantha Beecherla (hemanthreddy)
Summary: Excessive hpitree run time on DL380p Gen8

Initial Comment:
HPI clients (hpitop and hpitree, as well as our own HPI client) are taking excessive time to collect all resource/sensor information from HP ProLiant DL380p Gen8 servers (iLO4). An 8-server DL380 G6 system took under 50 seconds, but an 8-server DL380p Gen8 system takes about 13 minutes, and a 12-server Gen8 system takes over 17 minutes! This time is how long the saHpiDiscover() API takes until it returns. Attached are the dated times that hpitree took on (a) a 3-server Gen8 system, and (b) on a system with

----------------------------------------------------------------------

Comment By: Hemantha Beecherla (hemanthreddy)
Date: 2013-02-06 13:56

Message:
Rick, at last we got 12 Gen8 rack-mount servers borrowed from another team and tested the scenario. We observed that openhpid takes excessive time (16 min 30 sec) for discovery, as mentioned in the description. We have started investigating the root cause of the issue.

----------------------------------------------------------------------

Comment By: Rick Lane (rvlane)
Date: 2012-11-14 06:54

Message:
Two servers running in Active/Standby mode. Both servers run openhpid (the OAM function); however, only the Active server runs our HPI client, which collects HPI information from the platform.
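[Editor's note: for readers unfamiliar with the API named in the report, the call being timed is the standard SAF HPI discovery sequence. The sketch below is not the reporter's client, just a minimal illustration of that call pattern, assuming the OpenHPI development headers (SaHpi.h) and library are installed.]

```c
/* Minimal sketch of timing the SAF HPI discovery sequence.
 * Assumes OpenHPI headers/library are installed; build with e.g.
 *   gcc time_discover.c -lopenhpi -o time_discover
 * and run against a local openhpid. Illustrative only. */
#include <stdio.h>
#include <time.h>
#include <SaHpi.h>

int main(void)
{
    SaHpiSessionIdT sid;
    SaErrorT rv;
    time_t start, end;

    rv = saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &sid, NULL);
    if (rv != SA_OK) {
        fprintf(stderr, "saHpiSessionOpen failed: %d\n", rv);
        return 1;
    }

    start = time(NULL);
    rv = saHpiDiscover(sid);  /* the call reported to take ~17 min on 12 Gen8 servers */
    end = time(NULL);

    printf("saHpiDiscover returned %d after %ld seconds\n",
           rv, (long)(end - start));
    saHpiSessionClose(sid);
    return 0;
}
```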
----------------------------------------------------------------------

Comment By: Hemantha Beecherla (hemanthreddy)
Date: 2012-11-13 14:39

Message:
Hi Rick, could you please let me know what the Active/Standby OAM servers are?

----------------------------------------------------------------------

Comment By: Rick Lane (rvlane)
Date: 2012-10-25 15:28

Message:
Attached is messages.gen8long, showing the messages generated during the hpitree run that was started at 10/25 15:10:10.

----------------------------------------------------------------------

Comment By: Hemantha Beecherla (hemanthreddy)
Date: 2012-10-25 09:26

Message:
Rick, if you have /var/log/messages, please share it with me; this plugin does not provide much debug information, unlike OA_SOAP. Let me try to reproduce this with the setup mentioned in your reply. Cached information is not available for the ilo2_ribcl plugin (HP ProLiant plugin); only OA_SOAP has this caching technique.

Regards,
Hemantha Reddy

----------------------------------------------------------------------

Comment By: Rick Lane (rvlane)
Date: 2012-10-25 07:34

Message:
Actually, why does this take any time after openhpid collects all resource/sensor information from all iLOs? Isn't the point of openhpid to have cached information for all servers so that discovery is quick?

----------------------------------------------------------------------

Comment By: Rick Lane (rvlane)
Date: 2012-10-24 07:23

Message:
We have a rack of 12 DL380p Gen8 servers with a duplex internal network: even servers on one switch and odd servers on the other. This is the same way we wired previous G5/G6 servers. openhpid runs on two servers, which are the Active/Standby OAM servers. Our client process runs on the Active OAM server. It took about 17 minutes to discover the 12 servers, which is why this came up. It seemed to take 3-4 minutes at most with 8 servers in the past, so this was very noticeable. Are there any traces I can turn on to see what is happening during this time?
----------------------------------------------------------------------

Comment By: Hemantha Beecherla (hemanthreddy)
Date: 2012-10-19 07:55

Message:
Hi Rick, I have tried to recreate the issue with three different configurations:

1. Configured openhpi.conf with 7 duplicate handlers for dl380g8-1-ilo.hpi.telco with different entity paths (duration: 5 minutes to discover the 7 duplicate Gen8 servers).
2. Configured openhpi.conf with 4 different Gen8 servers (duration: 3 minutes 43 sec to discover the 4 servers).
3. Configured openhpi.conf with 3 DL380 G5 and 1 DL380 G6 servers (duration: 3 minutes 30 sec for hpitree -a).

Comparing scenarios 2 and 3, the 4 Gen8 servers take only 13 seconds longer than the 3 G5 and 1 G6 servers. Could you please share your setup details, such as the system configuration of the machine where the OpenHPI daemon is running?

Thanks,
Hemantha Reddy

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=532251&aid=3577918&group_id=71730
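[Editor's note: the duplicate-handler experiment described in scenario 1 amounts to repeating a handler block like the one below in /etc/openhpi/openhpi.conf, once per server, varying the hostname and entity_root. The hostname, credentials, and entity path shown are placeholders, not the tester's actual values.]

```
handler libilo2_ribcl {
    entity_root = "{SYSTEM_CHASSIS,1}"
    ilo2_ribcl_hostname = "dl380g8-1-ilo.hpi.telco"
    ilo2_ribcl_portstr = "443"
    ilo2_ribcl_username = "admin"
    ilo2_ribcl_password = "secret"
}
```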