From: Arlindo da S. <da...@al...> - 2011-10-13 02:36:49
On Wed, Oct 12, 2011 at 8:38 PM, Love, Mr. Gary, Contractor, Code 7542
<gar...@nr...> wrote:

> Arlindo,
>
> We have some solid state disks that function as local disks because the
> global file systems running gpfs are too slow. gpfs has been a big
> disappointment... So our challenge is how to accomplish the "shared
> memory" approach, which we also thought of using.

Do you mean standard "Unix shared memory" or "OpenMP"-style shared-memory
parallel programming? I have looked into Unix shared memory as an approach
for UDF data exchange but found it a bit too clumsy to program against.
(I take that back: it is much simpler if you only need to read from it.)
But, really, I'd try the ramdisk approach first because it is so simple.
I am assuming you are on a Linux platform, correct?

> Do you know if anyone else has this problem and who could benefit from
> the solution?

I used a similar approach recently. As you know, gxyat() is both an
off-line translator of grads metafiles into png, pdf, etc., and a
replacement for printim; I use exactly the same code for both. In the
on-line (printim replacement) case I have

   #define fread udx_mread

where udx_mread() is a function that works pretty much like fread() but,
instead of reading from a file, reads from the grads graphics (memory)
buffer.

Now, this makes me wonder: do you really want to cache the map/font
database, or the rendering "strokes" associated with it? For example, we
could have this functionality:

   ga-> enable gx_cache joseph
   ga-> draw map
   ga-> disable gx_cache

where "joseph" is the name of the gx cache. The next time you wanted that
map drawn, you could simply issue

   ga-> draw gx_cache joseph

and the map would be redrawn, possibly even with clipping, without reading
any file. This would be trivial to implement; all you would need to do is
make a copy of the graphics buffer between enable/disable gx_cache.

   Arlindo
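A minimal sketch of what a udx_mread()-style reader could look like (the
buffer bookkeeping and the udx_mopen() helper below are hypothetical,
invented for illustration; this is not the actual gxyat code):

   /* Drop-in fread() replacement that reads from an in-memory image of
      a file instead of the file itself.  With "#define fread udx_mread"
      in effect, the existing map/font reading code pulls bytes from the
      cached buffer without touching the filesystem. */
   #include <stddef.h>
   #include <stdio.h>
   #include <string.h>

   static const unsigned char *mbuf = NULL;  /* cached file image     */
   static size_t mbuf_len = 0;               /* bytes in the cache    */
   static size_t mbuf_pos = 0;               /* current read position */

   /* Point the reader at an in-memory copy of the file. */
   void udx_mopen(const void *buf, size_t len) {
       mbuf = (const unsigned char *) buf;
       mbuf_len = len;
       mbuf_pos = 0;
   }

   /* Same signature as fread(); the FILE* argument is ignored. */
   size_t udx_mread(void *ptr, size_t size, size_t nmemb, FILE *stream) {
       (void) stream;
       size_t want = size * nmemb;
       size_t have = mbuf_len - mbuf_pos;
       size_t take = want < have ? want : have;
       if (mbuf == NULL || take == 0) return 0;
       memcpy(ptr, mbuf + mbuf_pos, take);
       mbuf_pos += take;
       return size ? take / size : 0;   /* complete items read */
   }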
> Thanks for your input. I'll ask for more when we get into the project.
>
> Gary
>
> From: Arlindo da Silva [mailto:da...@al...]
> Sent: Wednesday, October 12, 2011 4:41 PM
> To: ope...@li...
> Cc: Brian Doty; Frost, Mr. Michael
> Subject: Re: [Opengrads-devel] Dat File re-reads
>
> On Wed, Oct 12, 2011 at 3:16 PM, Love, Mr. Gary, Contractor, Code 7542
> <gar...@nr...> wrote:
>
> Hi Brian, Jennifer,
>
> We use GrADS extensively in a production mode and have found, when
> profiling GrADS runs, that two-thirds of the data reads are of the
> coastline and font files. This may not sound like much, but at FNMOC,
> when running hundreds of charts for each of 32 multi-grid projects, the
> coastline and font reads amount to terabytes for each 12-hour set of
> runs.
>
> The re-reads occur for each and every display of a variable from the
> same file after it is opened. We are going to try to store these files
> in memory after the first read.
>
> My first question is whether anyone else has looked at this and has a
> solution. My next question: do you have any ideas or guidance to help
> us do this?
>
> I know that GrADS was designed to operate well on small platforms with
> little memory, so our goal is to implement a memory caching capability
> that would be controlled by a 'set' command and would be moderated by
> the amount of memory available.
>
> As Brian mentions, the map and font databases are relatively small. So
> it would be very feasible to set up a ramdisk to hold these files, which
> would probably give you a nice performance boost with virtually no
> programming involved. Depending on your system architecture, this
> ramdisk could effectively function as "shared memory" used by multiple
> cores. (Because grads is not thread safe, having an explicit memory
> buffer to hold this would require that each grads process duplicate the
> memory necessary to hold the map/font databases.) One hardware solution
> is to place grads and its data on a solid state disk; these are quite
> affordable these days.
>
> Just my 2 lazy cents.
>
> Arlindo
>
> --
> Arlindo da Silva
> da...@al...

--
Arlindo da Silva
da...@al...
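The ramdisk suggestion in the quoted message needs essentially no code:
mount a tmpfs, copy the font and map files onto it, and point the GADDIR
environment variable (the directory where GrADS looks for its fonts and
maps) at the mount point. The gx_cache idea proposed earlier in the
thread, by contrast, would need a small amount of bookkeeping inside
GrADS; a rough sketch follows (all names and structures here are
hypothetical, not taken from the GrADS source):

   /* Keep a named copy of the bytes the graphics buffer accumulated
      between "enable gx_cache NAME" and "disable gx_cache", and hand
      that copy back on "draw gx_cache NAME" so the renderer can replay
      it without re-reading the map or font files. */
   #include <stdlib.h>
   #include <string.h>

   typedef struct {
       char   name[64];   /* cache name, e.g. "joseph"            */
       void  *buf;        /* private copy of the graphics buffer  */
       size_t len;        /* number of bytes copied               */
   } GxCache;

   /* Called on "disable gx_cache": snapshot the current buffer. */
   int gxcache_store(GxCache *c, const char *name,
                     const void *gxbuf, size_t gxlen) {
       c->buf = malloc(gxlen);
       if (c->buf == NULL) return -1;
       memcpy(c->buf, gxbuf, gxlen);
       c->len = gxlen;
       strncpy(c->name, name, sizeof(c->name) - 1);
       c->name[sizeof(c->name) - 1] = '\0';
       return 0;
   }

   /* Called on "draw gx_cache NAME": return the saved bytes so they
      can be fed back through the rendering path. */
   const void *gxcache_replay(const GxCache *c, size_t *len) {
       *len = c->len;
       return c->buf;
   }

Whether the replayed strokes could still be clipped to the current view
would depend on how much of the drawing state is snapshotted along with
the buffer; that is the main design decision such a feature would face.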