#1769 Excessive hpitree run time on DL380p Gen8

Milestone: Future
Status: closed-works-for-me
Subsystem: OpenHPI Daemon
Priority: 5
Updated: 2015-06-01
Created: 2012-10-17
Creator: Rick Lane
Private: No

The HPI clients (hpitop and hpitree, as well as our own HPI client) are taking excessive time to collect all resource/sensor information from HP ProLiant DL380p Gen8 servers (iLO4). Compared with an 8-server DL380 G6 system, which took < 50 seconds, an 8-server DL380p Gen8 system takes about 13 minutes and a 12-server Gen8 system takes over 17 minutes!
This time is how long the saHpiDiscover() API takes before it returns.
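For reference, the discovery window reported above can be measured by timing the client run end to end. A minimal sketch (the `timed_run` helper is hypothetical; `hpitree -a` is assumed to be on PATH):

```python
import subprocess
import time

def timed_run(cmd):
    """Run a command and return (elapsed_seconds, exit_code)."""
    start = time.monotonic()
    rc = subprocess.call(cmd)
    return time.monotonic() - start, rc

# Hypothetical usage against the client discussed in this report:
#   elapsed, rc = timed_run(["hpitree", "-a"])
#   print(f"discovery window: {elapsed:.1f} s")
```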

Attached are the dated times that hpitree took on (a) a 3-server Gen8 system, and (b) a system with

Discussion

  • Rick Lane

    Rick Lane - 2012-10-17

    hpitree run time on 3-Gen8 server almost 5 minutes

     
  • Rick Lane

    Rick Lane - 2012-10-17

    hpitree run time on 10-server (7G8,3G6) 14 minutes

     
  • dr_mohan

    dr_mohan - 2012-10-18
    • assigned_to: dr_mohan --> hemanthreddy
     
  • Hemantha Beecherla

    Hi Rick,
    I have tried to recreate the issue in three different scenarios:

    1. Configured the openhpi.conf file with 7 duplicate handlers for dl380g8-1-ilo.hpi.telco with different entity paths. (Duration: 5 minutes to discover the 7 duplicate Gen8 servers.)

    2. Configured the openhpi.conf file with 4 different DL380p Gen8 servers. (Duration: 3 minutes 43 seconds to discover the 4 different Gen8 servers.)

    3. Configured the openhpi.conf file with 3 DL380 G5 and 1 DL380 G6 servers. (Duration: 3 minutes 30 seconds for hpitree -a.)

    Comparing scenarios 2 and 3, the 4 Gen8 servers took only 13 seconds longer than the 3 G5 and 1 G6 servers.
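For context, each monitored server corresponds to one handler stanza in openhpi.conf. A sketch of a single ilo2_ribcl handler entry, using the hostname from scenario 1; the credentials are placeholders, and the exact parameter names should be verified against the openhpi.conf template shipped with your OpenHPI version:

```
handler libilo2_ribcl {
    entity_root = "{RACK_MOUNTED_SERVER,1}"
    ilo2_ribcl_hostname = "dl380g8-1-ilo.hpi.telco"
    ilo2_ribcl_portstr = "443"
    ilo2_ribcl_username = "admin"
    ilo2_ribcl_password = "password"
}
```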

    Could you please share your setup details, such as the system configuration where the OpenHPI daemon is running?

    Thanks,
    Hemantha Reddy

     
  • Rick Lane

    Rick Lane - 2012-10-24

    We have a rack of 12 DL380p Gen8 servers with duplex internal network with even servers on one switch and odd servers on another switch. This is the same way we have wired previous G5/G6 servers. openhpid runs on two servers which are Active/Standby OAM servers. Our client process runs on the Active OAM server.

    It took about 17 minutes to discover the 12 servers, which is why this came up. It seemed to take 3-4 minutes max with 8 servers in the past, so this was very noticeable.

    Are there any traces I can turn on to see what is happening during this time?

     
  • Rick Lane

    Rick Lane - 2012-10-25

    Actually, why does this take any time after openhpid collects all resource/sensor information from all iLOs? Isn't the point of openhpid to have cached information for all servers so that discovery is quick?

     
  • Hemantha Beecherla

    Rick,
    If you have /var/log/messages, please share it with me.
    This plugin does not provide much debug information, unlike OA_SOAP.
    Let me try to reproduce this with the setup described in your reply.

    Cached information is not available for the ilo2_ribcl plugin (the HP ProLiant plugin); unlike OA_SOAP, it does not use this caching technique.

    Regards,
    Hemantha Reddy

     
  • Rick Lane

    Rick Lane - 2012-10-25

    Attached is messages.gen8long showing the messages generated during the hpitree which was started at 10/25 15:10:10

     
  • Rick Lane

    Rick Lane - 2012-10-25

    messages during hpitree

     
  • Hemantha Beecherla

    Hi Rick,

    Could you please let me know what the Active/Standby OAM servers are?

     
  • Rick Lane

    Rick Lane - 2012-11-14

    Two servers running in Active/Standby mode. Both servers run openhpid (the OAM function); however, only the Active server has our HPI client running, collecting HPI information from the platform.

     
  • Hemantha Beecherla

    Rick,
    We finally got 12 Gen8 rackmount servers borrowed from another team and tested the scenario.
    We observed that openhpid is taking excessive time (16 min 30 sec) for discovery, as mentioned in the description.
    We have started investigating the root cause of the issue.

     
  • Hemantha Beecherla

    Rick,
    It looks like this issue reproduces only when an HPI client is run and killed immediately, multiple times; the discovery time adds up in the background, leading to excessive discovery time in the end.
    Please let me know whether this matches your scenario.
    The reason for this is that the ilo2_ribcl plugin does not have an RPT cache.
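To illustrate the point about the missing RPT cache, here is a conceptual sketch (plain Python, not OpenHPI code): without a cache, every discovery pays the full per-iLO fetch cost again, while a cached RPT serves repeat discoveries from memory.

```python
# Conceptual sketch (not OpenHPI source): why an RPT cache matters.
# Without a cache, every discovery re-fetches data from each iLO;
# with one, repeated discoveries are served from memory.

def fetch_from_ilo(host):
    """Stand-in for a slow RIBCL exchange with one iLO."""
    return {"host": host, "sensors": ["temp", "fan", "power"]}

class RptCache:
    def __init__(self):
        self._cache = {}

    def discover(self, hosts):
        resources = []
        for host in hosts:
            if host not in self._cache:          # slow path: talk to the iLO
                self._cache[host] = fetch_from_ilo(host)
            resources.append(self._cache[host])  # fast path: cached RPT entry
        return resources
```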

     
  • Rick Lane

    Rick Lane - 2013-03-20

    No, this is not the scenario. It would take this long even after a fresh system restart, and because of this issue I allow 45 minutes for discovery to complete, so nothing would have restarted it.

     
  • Hemantha Beecherla

    Hello Rick,

    I have tried hard to recreate this scenario but have not been able to.

    Please share any additional information you have that could help reproduce the problem.

    Thanks,
    Hemantha Reddy

     
  • Hemantha Beecherla

    Hi Rick,

    Could you please let me know the steps to recreate this issue?

    Thanks,
    Hemantha Reddy

     
  • dr_mohan

    dr_mohan - 2013-10-21
    • 3.4.0: 3.0.x --> Future
     
  • Tariq Shureih

    Tariq Shureih - 2013-10-21

    **ATTENTION**
    This account is disabled and is no longer accessed by the recipient.
    Please remove it from your address book.

    Thanks

     
  • Hemantha Beecherla

    Hi Rick,

    I have re-tried to reproduce this issue but could not observe any problem; it takes no more than 3 minutes for all 7 Gen8 and 3 G6 machines to complete discovery when hpitree is executed.

    The only difference I can see between your setup and mine is the iLO firmware version of the Gen8 machines.
    Most of the Gen8 servers in my setup have iLO firmware version >= 2.03, whereas in your setup the Gen8 servers have firmware version 1.10 or 1.01.
    I suggest upgrading the iLO firmware on the Gen8 machines to >= 2.03 (or the latest) and checking whether the issue still exists. Please let us know.

    Thanks & regards,
    Hemantha Reddy

     
  • Hemantha Beecherla

    • status: open --> closed-works-for-me
    • Subsystem: --> OpenHPI Daemon
     
