Re: [swanmodel-users] Swan fails to merge output
From: Clement C. <cle...@uc...> - 2020-06-04 11:14:15
Dear João,

Thank you for your reply. The "highest file ref. number" value is already very high (99999999). How can I guess the number of files I'll be dealing with during my run, though? The number of output and input files obviously matters, but I cannot estimate the impact of the number of processors and the number of time steps in my run (which seems to matter). I am using the SWAN version within COAWST. I have the print files (one per processor), but they usually don't show any error message; sometimes SWAN generates Error files, but none are generated when SWAN fails to merge the output. I can sometimes find an error message in the standard output (log) of the run generated by COAWST, but it is not consistent.

Kind regards,
Clement

On Wed, 3 Jun 2020 at 23:51, João Albuquerque <ja...@au...> wrote:
> Dear Clement,
>
> When I went through the "Too many open files" message _in a completely
> different setup_, I changed the "highest file ref. number" in my swaninit
> file. SWAN creates this file at its first run, so check the value you have
> and increase it to a number that is compatible with the number of files
> you'll be dealing with during your runs.
>
> I see you are looking for error messages. Are you looking for them in
> SWAN's .prt and .erf files?
>
> Best,
> João
>
> On Thu, 4 Jun 2020 at 05:21, Clement Calvino <cle...@uc...> wrote:
>
>> Dear Users,
>>
>> I have recently tried to run long simulations of my stand-alone SWAN
>> model for a coastal application (4 months long with a 180 s time step).
>> Short runs terminate without issue, but for some reason, with longer
>> simulations SWAN fails to merge the output files properly. I am using
>> the SWAN version within COAWST, but running SWAN as a stand-alone
>> application.
>>
>> Very rarely I manage to get an error message about "Too many open
>> files". I have tried, without success, to increase the limit when
>> submitting my job (ulimit -n 999999).
>> At best I managed to get a merged output file, but the per-processor
>> files were not deleted; this happened when I asked for one single table
>> output. I have tried reducing the duration of the run to two months or
>> one month, reducing the number of processors, the number of output
>> files, and the number of boundary input files, but nothing solved the
>> issue...
>>
>> What annoys me is that I don't know which parameters I should focus on
>> to reduce the number of files opened during the run. Would anyone know
>> which subroutine could be responsible for opening massive numbers of
>> files?
>>
>> Thank you in advance,
>> Clement
>>
>> --
>> Clement Calvino
>> PhD Student
>> School of Mathematics and Statistics
>> University College Dublin
>> _______________________________________________
>> swanmodel-users mailing list
>> swa...@li...
>> https://lists.sourceforge.net/lists/listinfo/swanmodel-users
>
> --
> João Claudio Albuquerque
> PhD Candidate, School of Environment, The University of Auckland,
> Level 4, Room 449, Science Centre, 23 Symonds Street, Auckland, NZ.
> +64 021 02365705
> _______________________________________________
> swanmodel-users mailing list
> swa...@li...
> https://lists.sourceforge.net/lists/listinfo/swanmodel-users

--
Clement Calvino
PhD Student
School of Mathematics and Statistics
University College Dublin
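[Editor's aside] One likely reason the `ulimit -n 999999` attempt in the thread had no effect: a normal user can only raise the *soft* limit up to the *hard* limit, and a request above the hard limit is rejected; on a cluster the limit must also be raised inside the job script so it applies on the compute nodes, not just the login node. A minimal POSIX-shell sketch (no SWAN-specific assumptions) for checking and safely raising the limit:

```shell
# Inspect the current limits; the soft limit is what a process actually
# sees, and a normal user can only raise it up to the hard limit.
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"

# Instead of an arbitrary large number (which is rejected if it exceeds
# the hard limit), request exactly the hard limit.
hard=$(ulimit -Hn)
ulimit -Sn "$hard"
echo "soft limit now: $(ulimit -Sn)"
```

Note that `ulimit` only affects the current shell and its children, so in a batch job it must appear in the job script before the `mpirun`/`srun` line that launches SWAN.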
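[Editor's aside] To find out which part of a run is opening "massive numbers of files", it can help to watch the descriptor count of the running processes directly. A Linux-specific sketch (it relies on `/proc`; the process name `swan` is a guess and should be replaced with the actual executable name, e.g. the COAWST binary):

```shell
# Each entry under /proc/<pid>/fd is one open file descriptor, so
# counting the entries gives the number of files the process holds open.
# "swan" is a hypothetical process name; adjust to match your binary.
for pid in $(pgrep -f swan); do
    n=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    echo "PID $pid holds $n open file descriptors"
done

# To see *which* files stay open (and so which outputs accumulate):
#   ls -l /proc/<pid>/fd
```

Running this periodically during a long simulation (e.g. from a second terminal or a monitoring loop in the job script) shows whether the count grows with the number of time steps, output requests, or processors.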