|
From: Louis Z. <lou...@li...> - 2003-04-02 02:54:38
|
Dear all,
It seems we have some agreement on the architecture, so I am resending
the proposal for comments.
1. The administrator should know which protocol each managed node
supports and help OpenHPI choose the right plugin for remote
management, as in the following configuration file.
---------------------------------------------------------------------
#Following is an openhpi.conf example.
#The node topology is shown in the diagram below.
#This is the 'HPI standard RPC' config section.
#The RPC is used to manage a node even when
#the node doesn't support a remote management protocol.
#The call route is:
#[management app] <-> [libopenhpi.so] <-> [hpi_rpc.so]
# <--"RPC to remote node"-> [ohpid] <-> [hpi impl.]
#Note: how the HPI impl. does management is impl.-dependent.
#For example, if the remote HPI impl. is OpenHPI too, it
#should in turn call a HW plugin like 'lm_sensor.so' to do management.
plugin hpi_rpc.so
interface rpc,192.168.1.127 #managed node 1
interface rpc,192.168.2.45 #managed node 3
#This is IPMI config section
plugin ipmi_lan.so
# 'interface' is the entry point of IPMI.
# The following directive tells OpenHPI to control
# IPMI BMCs over a remote LAN.
interface lan,192.168.34.45 #managed node 2
plugin ipmi_smi.so
# The following directive tells OpenHPI to control the local BMC.
interface smi,/dev/ipmi0 #node0 can be managed too
#This is CIM config section
plugin cim.so
interface cim,192.168.34.46 #nodeX ;-)
...
----------------------------------------------------------------------
2. I prefer to use the native remote protocol. Only if there is no
remote protocol (as with lm_sensors) do we use the standard remote
protocol shown in the diagram below. I think it is too strong a
condition for OpenHPI to require every managed node to run 'ohpid' even
when the node supports remote management natively. In any case, the
diagram I propose does not restrict which protocol you use. For
example, node0 is a management node and node1 is an IPMI-enabled node.
I prefer:
application->libopenhpi.so->ipmi_lan.so (node0)--'ipmi_lan'--->(node1)
IPMI BMC
But you can also do management along the following path:
application->libopenhpi.so->hpi_rpc.so (node0)--'HPI RPC'--->(node1)
ohpid->libopenhpi.so->ipmi_smi.so->IPMI BMC
To sum up, I'd like to propose the following architecture:
+------------------------------------------------+
| |
| System Management UI (HPI Application) | Application using Open HPI
| |
|------------------------------------------------|
| |
| Open HPI Library(libopenhpi) |
| | Open HPI implementation
| |
| (Plugin Layer) |
+------------------------------------------------+
| HPI Remote | SNMP | CIM | IPMI | IPMI | Plugins
| | | |(ipmi_lan)|(ipmi_smi)|
+------------------------------------------------+
^ ^ ^
| | |
| +---------------+ |
v | +-----------+
+-----------------------+ | |
| hpid | | |
|-----------------------| | |
| libopenhpi.so | | |
|-----------------------| managed node1 |
| lm_sensor plugin | | |
+-----------------------+ | |
| | IPMI LAN protocol
+---+ |
| v
| +---------------------+
| | (LAN port) |
| | BMC | managed node2
v +---------------------+
+-----------------------+
| hpid |
|-----------------------| managed node3
| other HPI impl. lib |
+-----------------------+
* hpid (HPI daemon) can use any HPI-compatible implementation as well as Open HPI.
--
Yours truly,
Louis Zhuang
---------------
My words are my own...
Fault Injection Test Harness Project
BK tree: http://fault-injection.bkbits.net/linux-2.5
Home Page: http://sf.net/projects/fault-injection
Open HPI Project
Home Page: http://sf.net/projects/openhpi
|