Re: [SSI-users] ssi on podcast
From: Vincent D. <di...@xs...> - 2012-09-03 17:01:49
Hi Roger,

InfiniBand can indeed also function as a TCP connection (IP over IB), but then it runs at roughly 10 Gbit, while the cards I have are 2 x 40 Gbit. TCP is an impossibly slow protocol for an SSI: one-way ping-pong latencies of the fastest and most expensive TCP solution (Solarflare, at the moment) are around 3 microseconds, and InfiniBand is by default about 3x faster than that. As for bandwidth, see above; the difference there is huge as well.

To get InfiniBand working you first have to get the network itself running, and for that you need OFED. OFED simply requires a specific kernel, and the distros OpenSSI supports cannot load it, so AFAIK OpenSSI is not compatible with this.

What would be interesting is to use this technology at its fastest incarnation for memory migration and page fetches. Migrating pages of shared memory is an absolute necessity for my software, and it is also what IRIX did: one doesn't want to migrate everything at startup, but rather pages of, say, 2-4 kilobytes at a time. Doing that over TCP slows things down significantly (a rough timing sketch follows at the end of this message), and besides, one first needs to be able to load the InfiniBand drivers and OFED at all. The set of supported kernels always follows the kernel versions that RHEL and SLES use, so OpenSSI would have to be upgraded to one of those.

On Sep 3, 2012, at 5:08 AM, Roger Tsang wrote:

> FYI: http://wiki.openssi.org/go/Features
>
> OpenSSI has been known to support InfiniBand (IP over IB). Stan Smith has used OpenSSI with InfiniBand. However, our CVS code repository at SourceForge contains a new optimization for CFS over TCP/IP - an internal CFS read/write cache - but this optimization can be disabled for InfiniBand. SHM is not affected.
>
> OpenSSI provides a single IPC namespace. If I remember correctly, the single namespace means shared memory segments are accessible cluster-wide (see below). These SHM segments do migrate, but only during process migration of the owning process. Load-balancing of just SHM segments (without process migration) is not implemented.
>
> In OpenSSI, procfs provides an extra column, node_num. All IPC entities in the cluster are visible.
>
> $ cat /proc/sysvipc/shm
>        key      shmid perms   size   cpid   lpid nattch uid gid cuid cgid      atime      dtime      ctime    view node_num
>          0    1671168  1666 294912 200617 200620     10   0   0    0    0 1346607713 1346607607 1346607603 default        3
>          0    1572876  1666 294912  69603  69606      9   0   0    0    0 1346607713 1346587446 1346473273 default        1
>
> $ cat /proc/sysvipc/sem
>        key      semid perms nsems uid gid cuid cgid      otime      ctime    view node_num
>          0    4947968   666     1   0   0    0    0 1346639170 1346607607 default        3
>          0    3997698   666     1   0   0    0    0 1346639050 1346587447 default        1
>
> On Sun, Aug 26, 2012 at 5:46 PM, Vincent Diepeveen <di...@xs...> wrote:
>
> Brock,
>
> If we may be honest about SSI in general:
>
> For my 8-node, $200-a-box cluster, having an SSI would be really GREAT, and I can't use big enough capital letters for it.
>
> For HPC on big supercomputers it's total nonsense to do things in a semi-centralized manner.
>
> Yet it is interesting to make one big supercomputer out of my $3200 set of 8 machines, which together eat around 1.4 kilowatt under full load - comparable to an 8-socket machine costing $200k.
>
> Now, the latest 8-socket machines are certainly faster than the 64 cores I have here, but consider the price I paid for them.
>
> So given the right software and decent interconnects, it would be nearly as powerful as an 8-socket machine.
>
> So you want to make the choice of which interconnect to use yourself.
>
> Let me assure you that built-in Ethernet cards are not useful for combining into a chess machine. The latency of the cheap Ethernet cards is too bad (sure, Solarflare at $1000 a card will work, but that's exactly my point).
>
> So where the price of the network is already a problem, you certainly don't want to pay big bucks for the software as well.
>
> Software that makes one SSI out of it - that IS interesting, provided it is cheap. Anything that costs $xxx a port is nonsense to use. In fact even $xx a node is already nonsense, considering that InfiniBand software is free, and most (though not all) networks from the past also distributed their software for free.
>
> A great low-latency interconnect was, for example, Quadrics. I still have an old network here. Its bandwidth is a joke compared to today's PCIe cards, but its latency still beats the fastest and most expensive TCP solution on the planet. The software was a free download; I still have that here somewhere.
>
> OpenSSI was free, and that is what is good about it. Yet look at which distros it used to work on. The same goes for OpenMosix. They were never really compatible with HPC networks, which usually require different distros with different kernel requirements.
>
> Fixing that costs money. Open source software is great, yet it is usually paid people who maintain and carry it forward. Someone needs to pay that maintenance bill. I'm not, and as it appears, no one else is willing to either.
>
> So a small SSI built from free software - the promise that OpenSSI delivered - appears to be the pipe dream that never matured. It seems no one is prepared to fund people to build this.
>
> That's sad, but it's simply the reality of the market.
>
> Let's just move on instead of giving attention to the few who still try to make big bucks from something that is easy to program for and with, yet which was simply always too expensive.
>
> Price matters.
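To put rough numbers on the page-fetch argument above, here is a quick back-of-the-envelope sketch in Python. The latency and bandwidth figures are illustrative assumptions (roughly the 3 microsecond TCP and 40 Gbit InfiniBand numbers mentioned earlier), not measurements.

# Back-of-the-envelope: time to fetch one 4 KiB page over two interconnects.
# The latency/bandwidth numbers below are illustrative assumptions, not benchmarks.

PAGE = 4096  # bytes per remote page fetch

links = {
    # name: (one-way latency in seconds, usable bandwidth in bytes per second)
    "low-latency 10GbE TCP":   (3e-6, 10e9 / 8),
    "40 Gbit InfiniBand RDMA": (1e-6, 40e9 / 8),
}

for name, (latency, bandwidth) in links.items():
    # a remote page fetch costs a round trip plus the wire time of the page itself
    t = 2 * latency + PAGE / bandwidth
    print(f"{name:24s} ~{t * 1e6:4.1f} us per 4 KiB page")

With these assumptions a 4 KiB fetch is dominated by latency rather than bandwidth, which is why per-page migration over plain TCP hurts far more than a bulk copy would.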
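And a minimal sketch of reading the node_num column Roger quotes above. It assumes OpenSSI's extended /proc/sysvipc/shm layout (the standard columns plus the trailing "view" and "node_num" fields) and falls back gracefully on a stock kernel that lacks them.

# Group SysV shared-memory segments by the node they currently live on,
# assuming OpenSSI's extended /proc/sysvipc/shm layout shown above.
from collections import defaultdict

by_node = defaultdict(list)
with open("/proc/sysvipc/shm") as f:
    header = f.readline().split()             # first line holds the column names
    for line in f:
        row = dict(zip(header, line.split()))
        node = row.get("node_num", "local")   # stock kernels have no node_num column
        by_node[node].append((row["shmid"], row["size"], row["nattch"]))

for node in sorted(by_node):
    print(f"node {node}:")
    for shmid, size, nattch in by_node[node]:
        print(f"  shmid={shmid}  size={size} bytes  nattch={nattch}")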