OpenSSI has been known to support Infiniband (IP over IB); Stan Smith has used OpenSSI with Infiniband. However, our CVS code repository at SourceForge contains a new optimization for CFS over TCP/IP, an internal CFS read/write cache, but this optimization can be disabled for Infiniband. SHM is not affected.
OpenSSI provides a single IPC namespace. If I remember correctly, the single namespace means shared memory segments are accessible cluster-wide (see below). These SHM segments do migrate, but only during process migration of the owning process; load-balancing of SHM segments alone (without process migration) is not implemented.
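To make the "single IPC namespace" point concrete, here is a minimal sketch using plain System V shared memory; nothing in it is OpenSSI-specific, and the key 0x1234 and the 4 KiB size are made-up example values. My understanding from the above is that under OpenSSI a process on any other node could shmget() the same key and attach the same segment.

/* Minimal sketch (mine, not from the OpenSSI sources): plain System V
 * shared memory. Under OpenSSI's single IPC namespace, as described
 * above, a process on any node should be able to find this segment
 * with the same key. The key 0x1234 and the 4 KiB size are arbitrary
 * example values. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create (or look up) a 4 KiB segment under a well-known key. */
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0666);
    if (shmid < 0) {
        perror("shmget");
        return 1;
    }

    /* Attach it into this process's address space. */
    void *p = shmat(shmid, NULL, 0);
    if (p == (void *) -1) {
        perror("shmat");
        return 1;
    }

    strcpy(p, "hello from this node");
    printf("attached segment %d: %s\n", shmid, (char *) p);

    /* A reader on another node would call shmget(0x1234, 4096, 0666)
     * and shmat() the same id to see this data. */
    shmdt(p);
    return 0;
}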
In OpenSSI, procfs provides an extra column: node_num. All IPC entities in the cluster are visible.
$ cat /proc/sysvipc/shm
key shmid perms size cpid lpid nattch uid gid cuid cgid atime dtime ctime view node_num
0 1671168 1666 294912 200617 200620 10 0 0 0 0 1346607713 1346607607 1346607603 default 3
0 1572876 1666 294912 69603 69606 9 0 0 0 0 1346607713 1346587446 1346473273 default 1
$ cat /proc/sysvipc/sem
key semid perms nsems uid gid cuid cgid otime ctime view node_num
0 4947968 666 1 0 0 0 0 1346639170 1346607607 default 3
0 3997698 666 1 0 0 0 0 1346639050 1346587447 default 1
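If you want to script against that node_num column, here is a minimal sketch of mine (not part of OpenSSI) that reads /proc/sysvipc/shm and prints each shmid together with the node it lives on. It simply assumes the column layout shown above, with shmid as the second field and node_num as the last.

/* Minimal sketch: print shmid and node_num from /proc/sysvipc/shm.
 * Assumes the column layout shown above (shmid is the 2nd field,
 * node_num the last). Not part of OpenSSI itself. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/sysvipc/shm", "r");
    if (!f) {
        perror("fopen /proc/sysvipc/shm");
        return 1;
    }

    char line[512];
    /* The first line is the column header; skip it. */
    if (!fgets(line, sizeof line, f)) {
        fclose(f);
        return 1;
    }

    while (fgets(line, sizeof line, f)) {
        char shmid[64] = "?", node[64] = "?";
        int col = 0;
        /* Walk the whitespace-separated fields, remembering the 2nd
         * (shmid) and the last one seen (node_num). */
        for (char *tok = strtok(line, " \t\n"); tok; tok = strtok(NULL, " \t\n")) {
            col++;
            if (col == 2)
                snprintf(shmid, sizeof shmid, "%s", tok);
            snprintf(node, sizeof node, "%s", tok);
        }
        if (col > 0)
            printf("shmid %s -> node %s\n", shmid, node);
    }

    fclose(f);
    return 0;
}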
If we may be honest about SSI in general:
For my 8-node, $200-a-box cluster, having an SSI would be really GREAT, and I can't use big enough capital letters for it. For HPC on big supercomputers it's total nonsense to do things in a single-system-image fashion.
Yet it IS interesting to make one big supercomputer out of my $3200 set of 8 machines, which together draw around 1.4 kilowatts under full load; that's comparable to an 8-socket machine costing $200k. Now, the latest 8-socket machines are surely faster than these 64 cores, but remember the price I paid for them. Given the right software and decent interconnects, the cluster would be nearly as powerful as an 8-socket machine.
So you want the choice of which interconnect to use, and to make that decision yourself.
Let me assure you that built-in Ethernet cards are not useful for combining the nodes into a chess machine; the latency of the cheap Ethernet cards is too bad (sure, Solarflare at $1000 a card will work, but that's exactly my point).
So where the price of the network is already a problem, you certainly don't want to pay big bucks for the software. Software that makes one SSI out of it, that IS interesting, provided it is cheap. Anything that costs $xxx a port is nonsense to use; in fact even $xx a node is already nonsense, considering that the Infiniband software is free, and most (though not all) networks from the past also distributed their software for free.
A great low-latency interconnect was, for example, Quadrics. I still have an old network here. Its bandwidth is a joke compared to today's PCIe cards, but its latency still beats the fastest and most expensive TCP solution on the planet. The software was freely downloadable; I still have it here somewhere.
OpenSSI was free, which is what is good about it. Yet look at which distros it used to work on; the same goes for OpenMosix. They were never really compatible with HPC networks, which usually target different distros with different kernel requirements. Fixing that costs money. Open source software is great, yet usually it is paid people who maintain and carry it forward. Someone needs to pay that maintenance bill. I'm not, and as it appears, no one else is willing to either.
Yet a small SSI built from free software, the promise that OpenSSI held out, appears to be the pipe dream that never matured. It seems no one is prepared to fund people to build it. That's sad, but it's simply the reality of the market. Let's just move on instead of giving attention to the few who still try to make big bucks from something that's easy to program for and with, yet which was simply always too expensive.