From: Dirk W. <di...@re...> - 2001-10-17 16:50:13
Hi,

is there any way to deconfigure EMP support for a node that only has serial-console access? The resulting GUI in hoover is somewhat confusing, because all those nodes are listed as dead. The entries in vacm_configuration look like this:

    #Definition for node 'my_node'
    NODE:my_node
    MODMAP:
    VAR:EMP:SERIAL_PORT:NONE
    VAR:EMP:PASSWORD:NONE
    VAR:SERCON:BAUDRATE:9600
    VAR:SERCON:DEVICE_ADDRESS:xxx.xx.xxx.xx-port
    END_NODE

    #Definition for node 'another_node'
    NODE:another_node
    MODMAP:
    VAR:GLOBAL:IP_ADDRESS:xxx.xx.xx.xx
    VAR:EMP:SERIAL_PORT:NONE
    VAR:EMP:PASSWORD:NONE
    VAR:SERCON:BAUDRATE:38400
    VAR:SERCON:DEVICE_ADDRESS:/dev/ttyRxx
    VAR:SYSSTAT:AUTH_PASSWORD:XXXXXX
    VAR:RSH:AGENT:RSH
    VAR:RSH:USER:root
    END_NODE

All EMP-relevant entries were created as described in the documentation, with "ipc localhost EMP:CONFIGURATION:node_name:NONE:NONE". (See the p.s. below for the kind of stripped-down entry I'd hope could work instead.)

Another issue I worry about: I have only 4 machines that speak and are configured for EMP, but 147 emp.loose processes! lsof'ing those emp.loose processes, it appears that every process is talking only to my 4 EMP-capable nodes; I see only /dev/ttyX connections to those machines (and each process holds ~500 pipes, which are shared amongst the emp.loose processes). The pstree output looks like this (see the p.p.s. for how I tallied the counts):

    -nexxus-+-apcups.loose---apcups.loose---apcups.loose---48*[apcups.loose]
            |-baytech.loose---baytech.loose---baytech.loose---48*[baytech.loose]
            |-emp.loose---emp.loose---emp.loose---144*[emp.loose]
            |-icmp_echo.loose
            |-msc.loose---msc.loose---msc.loose---48*[msc.loose]
            |-rsh.loose---rsh.loose---rsh.loose---48*[rsh.loose]
            |-sbt2.loose---sbt2.loose---sbt2.loose---48*[sbt2.loose]
            |-sercon.loose---sercon.loose---sercon.loose---48*[sercon.loose]
            |-sys_stat.loose
            |-va1000.loose
            `-vasenet.loose---vasenet.loose---vasenet.loose---48*[vasenet.loose]

If there is somebody still listening here, I would be happy if somebody could tell me how to get rid of the confusing "dead" entries in hoover. I would also like to know how to restrict the number of "*.loose" processes; I am afraid that when I hook up the next 100-node cluster I will run out of resources on my management node.

thx,

~dirkw

-----------
Dirk Wetter
Renaissance Technologies/NY
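
p.s. Just to make the first question concrete: what I'd hope a serial-console-only entry could look like is the same node definition with the EMP lines dropped entirely rather than set to NONE. I haven't verified that nexxus accepts a node without any EMP variables at all, hence the question:

    #Definition for node 'my_node' -- hypothetical, EMP lines removed
    NODE:my_node
    MODMAP:
    VAR:SERCON:BAUDRATE:9600
    VAR:SERCON:DEVICE_ADDRESS:xxx.xx.xxx.xx-port
    END_NODE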
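
p.p.s. This is roughly how I tallied the per-module process counts above. Nothing VACM-specific in it: it expands the tree (pstree collapses identical children unless you pass -p) and counts the *.loose helpers per module; it assumes GNU grep for the -o option:

    # expand nexxus' process tree and count the .loose helpers per module
    pstree -p `pidof nexxus` | grep -o '[a-z_0-9]*\.loose' | sort | uniq -c | sort -rn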