Archive of messages by month (message counts in parentheses):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2004 | | | | | | | | | | | | (1) |
| 2005 | | | (2) | | | | | | | | | |
| 2006 | (2) | | | | | | | | | | | |
| 2007 | | | (1) | (1) | | | | | | | | |
| 2008 | | | (1) | | | | | | | | | |
| 2009 | | (4) | | | (10) | (2) | | | | | | |
| 2010 | | | | | | | | | (1) | | (2) | |
| 2011 | | | | | | | | | | | (2) | |
| 2012 | | (1) | | | | | | (1) | | | | |
| 2014 | | | | | | (1) | | | | | | |
| 2024 | | (1) | | | | | | | | | | |
|
From: Rambaks, A. <and...@rw...> - 2024-02-09 10:06:17
|
Dear PARAMESH community,
For a few years now I have been working with PARAMESH 4.1, primarily for serial execution and validation of my own code. Recently, I started playing around with it in parallel mode and found that it crashed for an unknown reason. Today I finally figured out the problem: the values in the array pe_source(:) of MPI_MORTON always remain -1.
The reason for this is twofold:
1. Subroutine mpi_amr_store_comm_info has some test code still embedded in it, which always assigns pe_source(:) = -1:
! TEST
to_be_received(2,:,:) = -1
to_be_sent(2,:,:) = -1
pe_source = -1
!!!
2. The main problem is that the source code responsible for assigning correct values to pe_source(:) is missing in version 4.1 but present in version 4.0. I used grep -rniw 'pe_source' to find the instances in the mpi_source/ and source/ folders. I haven't yet checked whether there are more discrepancies between the two versions.
Version 4.0
Version 4.1
mpi_source/mpi_unpack_tree_info.F90:79: i_pe = pe_source(ii)
mpi_source/mpi_amr_local_surr_blks.F90:350: if(pe_source(k1).eq.rem_pe) kk = k1
mpi_source/mpi_amr_local_surr_blks.F90:464: if(pe_source(k1).eq.rem_pe) kk = k1
mpi_source/mpi_mort_comm_for_surrblks.F90:175: pe_source = -1
mpi_source/mpi_mort_comm_for_surrblks.F90:467: pe_source(kstack) = i
mpi_source/mpi_mort_comm_for_surrblks.F90:556: pe_source = -1
mpi_source/mpi_mort_comm_for_surrblks.F90:569: pe_source(k) = isrc+1
mpi_source/mpi_mort_comm_for_surrblks.F90:636: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_mort_comm_for_surrblks.F90:724: i_pe = pe_source(i)
mpi_source/mpi_mort_comm_for_surrblks.F90:741: pe_source = -1
mpi_source/mpi_mort_comm_for_surrblks.F90:749: pe_source(kstack) = i
mpi_source/mpi_amr_refine_derefine.F90:655:! because it uses pe_source and r_mortonbnd which are reset in the
mpi_source/mpi_morton_bnd_restrict.F90:187: pe_source = -1
mpi_source/mpi_morton_bnd_restrict.F90:770: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_restrict.F90:891: pe_source = -1
mpi_source/mpi_morton_bnd_restrict.F90:904: pe_source(k) = isrc+1
mpi_source/mpi_morton_bnd_restrict.F90:981: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd_restrict.F90:1088: i_pe = pe_source(i)
mpi_source/mpi_morton_bnd_restrict.F90:1105: pe_source = -1
mpi_source/mpi_morton_bnd_restrict.F90:1113: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_restrict.F90:1329: ip = pe_source(jp)
mpi_source/mpi_morton_bnd.F90:220: pe_source = -1
mpi_source/mpi_morton_bnd.F90:824: pe_source(kstack) = i
mpi_source/mpi_morton_bnd.F90:943: pe_source = -1
mpi_source/mpi_morton_bnd.F90:956: pe_source(k) = isrc+1
mpi_source/mpi_morton_bnd.F90:1033: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd.F90:1139: i_pe = pe_source(i)
mpi_source/mpi_morton_bnd.F90:1156: pe_source = -1
mpi_source/mpi_morton_bnd.F90:1164: pe_source(kstack) = i
mpi_source/mpi_morton_bnd.F90:1378: ip = pe_source(jp)
mpi_source/mpi_amr_mirror_blks.F90:372: pe_source = -1
mpi_source/mpi_amr_mirror_blks.F90:422: pe_source(kstack) = i
mpi_source/mpi_amr_mirror_blks.F90:519: pe_source = -1
mpi_source/mpi_amr_mirror_blks.F90:532: pe_source(k) = isrc+1
mpi_source/mpi_amr_mirror_blks.F90:609: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_amr_store_comm_info.F90:67: pe_source_guard = pe_source
mpi_source/mpi_amr_store_comm_info.F90:117: pe_source = pe_source_guard
mpi_source/mpi_amr_store_comm_info.F90:218: pe_source_prol = pe_source
mpi_source/mpi_amr_store_comm_info.F90:269: pe_source = pe_source_prol
mpi_source/mpi_amr_store_comm_info.F90:374: pe_source_flux = pe_source
mpi_source/mpi_amr_store_comm_info.F90:426: pe_source = pe_source_flux
mpi_source/mpi_amr_store_comm_info.F90:499: pe_source_restrict(:) = pe_source(:)
mpi_source/mpi_amr_store_comm_info.F90:590: pe_source(:) = pe_source_restrict(:)
mpi_source/mpi_morton_bnd_prolong1.F90:163: pe_source = -1
mpi_source/mpi_morton_bnd_prolong1.F90:516: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_prolong1.F90:637: pe_source = -1
mpi_source/mpi_morton_bnd_prolong1.F90:650: pe_source(k) = isrc+1
mpi_source/mpi_morton_bnd_prolong1.F90:727: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd_prolong1.F90:833: i_pe = pe_source(i)
mpi_source/mpi_morton_bnd_prolong1.F90:850: pe_source = -1
mpi_source/mpi_morton_bnd_prolong1.F90:858: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_prolong1.F90:1074: ip = pe_source(jp)
mpi_source/mpi_lib.F90:550: allocate( pe_source(1:nprocs) )
mpi_source/mpi_lib.F90:589: if(allocated(pe_source)) deallocate( pe_source )
mpi_source/mpi_morton_bnd_fluxcon.F90:150: pe_source = -1
mpi_source/mpi_morton_bnd_fluxcon.F90:357: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_fluxcon.F90:444: pe_source = -1
mpi_source/mpi_morton_bnd_fluxcon.F90:456: pe_source(k) = isrc+1
mpi_source/mpi_morton_bnd_fluxcon.F90:547: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd_fluxcon.F90:861: pe_source = -1
mpi_source/mpi_morton_bnd_fluxcon.F90:913: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_fluxcon.F90:1002: pe_source = -1
mpi_source/mpi_morton_bnd_fluxcon.F90:1015: pe_source(k) = isrc+1
mpi_source/mpi_morton_bnd_fluxcon.F90:1090: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd_fluxcon.F90:1163: if(pe_source(k).eq.rem_pe) kk = k
mpi_source/mpi_morton_bnd_fluxcon.F90:1268: i_pe = pe_source(i)
mpi_source/mpi_morton_bnd_fluxcon.F90:1285: pe_source = -1
mpi_source/mpi_morton_bnd_fluxcon.F90:1293: pe_source(kstack) = i
mpi_source/mpi_morton_bnd_fluxcon.F90:1509: ip = pe_source(jp)
source/amr_mpi_find_blk_in_buffer.F90:104: no_of_comms = size(pe_source)
source/amr_mpi_find_blk_in_buffer.F90:110: if(rem_pe.eq.pe_source(jpe0)-1) then
source/amr_1blk_copy_soln.F90:271:! because it uses pe_source and r_mortonbnd which are reset in the
source/amr_mpi_find_blk_in_buffer.F90:136: no_of_comms = size(pe_source)
source/amr_mpi_find_blk_in_buffer.F90:142: If (rem_pe == pe_source(jpe0)-1) Then
mpi_source/mpi_unpack_tree_info.F90:79: i_pe = pe_source(ii)
mpi_source/mpi_amr_store_comm_info.F90:38: pe_source = -1
mpi_source/mpi_amr_store_comm_info.F90:78: pe_source_guard = pe_source
mpi_source/mpi_amr_store_comm_info.F90:129: pe_source = pe_source_guard
mpi_source/mpi_amr_store_comm_info.F90:202: pe_source = -1
mpi_source/mpi_amr_store_comm_info.F90:240: pe_source_prol = pe_source
mpi_source/mpi_amr_store_comm_info.F90:292: pe_source = pe_source_prol
mpi_source/mpi_amr_store_comm_info.F90:368: pe_source = -1
mpi_source/mpi_amr_store_comm_info.F90:406: pe_source_flux = pe_source
mpi_source/mpi_amr_store_comm_info.F90:459: pe_source = pe_source_flux
mpi_source/mpi_amr_store_comm_info.F90:534: pe_source = -1
mpi_source/mpi_amr_store_comm_info.F90:540: pe_source_restrict(:) = pe_source(:)
mpi_source/mpi_amr_store_comm_info.F90:632: pe_source(:) = pe_source_restrict(:)
mpi_source/mpi_lib.F90:547: allocate( pe_source(1:nprocs) )
mpi_source/mpi_lib.F90:584: if(allocated(pe_source)) deallocate( pe_source )
I could not find a post explaining the missing files in 4.1. I recommend sticking with version 4.0, which is also used by the FLASH group.
Kind regards
Andris Rambaks
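For readers who hit the same crash: below is a minimal sketch of the kind of bookkeeping the 4.0 routines listed above perform (e.g. mpi_morton_bnd.F90, where pe_source(k) = isrc+1 appears) and which the leftover test code overrides. It is not copied from PARAMESH; the array commatrix_recv, assumed here to hold the number of blocks expected from each processor, and the wrapper routine itself are illustrative.
! Hedged sketch only, not PARAMESH 4.0 source: record, 1-based, every
! processor id this rank expects block data from; entries left at -1
! mean "no source".
subroutine sketch_fill_pe_source(nprocs, commatrix_recv, pe_source)
  implicit none
  integer, intent(in)  :: nprocs
  integer, intent(in)  :: commatrix_recv(nprocs)  ! assumed per-processor receive counts
  integer, intent(out) :: pe_source(nprocs)
  integer :: isrc, k
  pe_source = -1
  k = 0
  do isrc = 0, nprocs-1
     if (commatrix_recv(isrc+1) > 0) then  ! this rank receives from processor isrc
        k = k + 1
        pe_source(k) = isrc + 1            ! store source processor id, 1-based
     end if
  end do
end subroutine sketch_fill_pe_source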
|
|
From: Lianhua Z. <zhu...@gm...> - 2014-06-04 04:45:21
|
Hi, I'm new to paramesh. I was going through the tutorial case (2D diffusion problem tutorial link <http://www.physics.drexel.edu/~olson/paramesh-doc/Users_manual/amr_tutorial.html>). I followed the instructions, but at Step 14, after calling amr_guardcell in the main program, I checked the guard cells' data of the finest block centered at (1,1). The cell data in the finest block centered at (1,1) is
1 1.0000 10.0000 10.0000 1.0000 1.0000 1.0000
2 1.0000 10.0000 10.0000 1.0000 1.0000 1.0000
3 1.0000 10.0000 10.0000 1.0000 1.0000 1.0000
4 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
5 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
6 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
We can see the left guard cells' data is wrong, and I don't know why. I have searched the source code of paramesh but can't find the implementation of the amr_guardcell subroutine. Best regards, Lianhua Zhu |
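For orientation, here is a minimal sketch of the guard cell fill that the tutorial's Step 14 revolves around; it is illustrative rather than a copy of the tutorial driver. The call amr_guardcell(mype, iopt, nlayers) and the module names follow the PARAMESH 4 documentation (the routine is implemented in mpi_amr_guardcell.F90, mentioned in the 2009 threads further down), while the wrapper subroutine is hypothetical.
! Hedged sketch, not the tutorial code: fill all guard cell layers of the
! primary solution arrays on this processor.
subroutine sketch_fill_guardcells(mype)
  use paramesh_dimensions, only : nguard
  use paramesh_interfaces, only : amr_guardcell
  implicit none
  integer, intent(in) :: mype   ! this processor's MPI rank
  integer :: iopt, nlayers
  iopt    = 1        ! iopt = 1 selects the unk/facevar solution arrays
  nlayers = nguard   ! request all guard cell layers
  call amr_guardcell(mype, iopt, nlayers)
end subroutine sketch_fill_guardcells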
|
From: Angel de V. <an...@ia...> - 2012-08-29 08:15:55
|
Hi, over the last few days I have been trying Paramesh. I'm using version 4.1, and despite it being 4 years old, I managed to more or less make it work with one of the newest Intel compilers, the latest OpenMPI and the latest HDF5. But I had two big issues: 1) with the latest MPICH2 version, the code works OK only if I run it on one processor, giving bogus results or aborting whenever I try to run it with more processors. 2) with OpenMPI everything looks fine and I can run the heat equation code in the Paramesh tutorial with any number of processors. The problem arises when writing the output file in the Chombo format. If I run it on only one processor the file is OK at all levels of refinement (I have four refinement levels). If I run it on two processors, the data in the file is OK for all levels of refinement, but the metadata for the last two levels is missing. If I run it on four processors the metadata for all but the coarsest level is missing (the data is still all there). I'm going to try different combinations of compilers/MPI/HDF5, but I was wondering if anyone reading this list has had similar issues or would know of a combination of compiler/MPI/HDF5 that works fine with Paramesh 4.1? Thanks, -- Ángel de Vicente http://www.iac.es/galeria/angelv/ |
|
From: Hunger, L. <Lar...@ui...> - 2012-02-28 13:14:30
|
Hi all, I'm using Flash 3.3, which uses Paramesh 4, and encountered an odd problem while doing so. Since Flash 3.3 uses Paramesh and I believe that the problem lies within the Paramesh part, I am writing to this list too. When I run the code on a number of processors that is 2^n it runs smoothly. But as soon as I choose a number of cores != 2^n the code crashes with the following error: Error in mpi_amr_local_surr_blks_lkup : remote block 1 0 not located on pe 8 while processing blk 1 8 Depending on how many cores I try to use, one or more of these errors appear. I already tried setting lrefine_min to a value that is >1; with this change the code stops at the grid initialisation with the line: "starting MORTON ORDERING". With 16/32/64 cores the code runs without problems. The error is located in the paramesh/paramesh4/Paramesh4.0/PM4_package/mpi_source subfolder in the file mpi_amr_local_surr_blks.F90. Unfortunately I'm not very good at programming Fortran. Perhaps someone can help me. Thanks for the help, Lars Hunger |
|
From: Klaus W. <kl...@fl...> - 2011-11-09 15:23:20
|
On Wed, 9 Nov 2011, Gross, Markus wrote: > Just looking at paramesh for cylindrical/spherical coordinates I noticed > that, in my test setup anyway, the guard cells are not filled correctly > at r=0. Here we have to take into account the special geometry, i.e. map > in reverse order the adjacent block. However, paramesh seems to happily > just copy. > > Has anybody else noticed that? Coded a workaround or knows which flags > to set to avoid this? Markus, Have you tried turning lsingular_line on (set to T in amr_runtime_parameters) ? That may only work for spherical coordinates along the axis, not sure whether it is supposed to work with cylindrical or polar coordinates. I have never tried it myself. Klaus |
|
From: Gross, M. <mar...@me...> - 2011-11-09 12:30:17
|
Hi, just looking at paramesh for cylindrical/spherical coordinates I noticed that, in my test setup anyway, the guard cells are not filled correctly at r=0. Here we have to take into account the special geometry, i.e. map the adjacent block in reverse order. However, paramesh seems to happily just copy. Has anybody else noticed that? Coded a workaround or knows which flags to set to avoid this? Trying to be a bit more illustrative: if the row at r=0 is filled with sequential numbers such as
G G G G G 1 2 3 4 5
then, assuming only one block with five gridpoints, the guardcells (G) should be filled such that
5 4 3 2 1 1 2 3 4 5
But I only get
1 2 3 4 5 1 2 3 4 5
Of course, it is a bit more tricky with leaf blocks and more blocks. Many thanks for your time! Regards, Markus |
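A minimal sketch of the kind of manual workaround being asked about, restricted to exactly the single-block picture above; it is not PARAMESH code. The array unk and the parameter nguard are the usual PARAMESH names, while the routine, the block index lb, and the assumption that the block's lower-x face lies on the axis are hypothetical; only cell-centered, sign-symmetric data is handled (antisymmetric quantities such as a radial velocity component would also need a sign flip).
! Hedged sketch, not PARAMESH source: after amr_guardcell, overwrite the
! lower-x guard cells of a block sitting on r = 0 by mirroring its first
! interior cells, so interior values 1 2 3 4 5 yield guard cells 5 4 3 2 1.
subroutine sketch_mirror_axis_guardcells(lb)
  use paramesh_dimensions, only : nguard
  use physicaldata, only : unk
  implicit none
  integer, intent(in) :: lb   ! local leaf block whose lower-x face is on the axis
  integer :: ig
  do ig = 1, nguard
     unk(:, nguard+1-ig, :, :, lb) = unk(:, nguard+ig, :, :, lb)
  end do
end subroutine sketch_mirror_axis_guardcells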
|
From: TAY wee-b. <zo...@gm...> - 2010-11-24 11:45:26
|
Hi, I am thinking of adding adaptive mesh refinement to my current fortran code. I am using an immersed boundary code with a staggered cartesian grid. I am using PETSc to solve my momentum equations and PETSc/hypre to solve my poisson equation. The code now runs in parallel. May I know if it is possible to integrate paramesh together with PETSc? Are there people who have managed to do that? Is there anything which I need to consider too? I am thinking that as long as I can create a matrix from paramesh and store it as a PETSc format matrix, everything should be fine when I try to solve Ax=b. Is it the same for the parallel case too? Thank you very much. -- Yours sincerely, TAY wee-beng |
|
From: Sathish V. <vs...@se...> - 2010-09-01 23:28:07
|
Hello, I am following the steps in the paramesh tutorial for 2-d diffusion solver. I am not getting the correct values for the guard cell layer at the end of step 14 of the tutorial. e.g., instead of: 1 10 10 10 1 1 1 in the first line, I get: 1 0.625 1.875 1.9375 0.8125 0.25 0.25 Because of this, the subsequent results are also different. Note that I get the correct values for the interior 4x4 region. I have double checked that I followed all the steps. Please help. Thanks Sathish Vadhiyar |
|
From: Klaus W. <kl...@fl...> - 2009-06-01 21:00:56
|
[I had written:] > > IF( [We are being call from amr_prolong not amr_guardcell] > > .AND. [Neighbor has nodetype 1] > > .AND. [Neighbor is NOT a newchild] > > ...) THEN > > [Use neighbor's face var values at the surface instead of > > values from interpolation] > > > Implementation of the third condition may not be easily possible, since > > the newchild flag for a neighbor may not be available locally if the > > neighbor is remote. > > > On Mon, 1 Jun 2009, tingxing dong wrote: > According to my current understandings about paramesh's > communication mechanism, tree info is also stored in send buffer > and communicated to the local. But I am not sure if it is right to > use newchild(remote_blk) directly, since I did use the third condition > [Neighbor is NOT a newchild] in my codes. If newchild flag is not available > locally, I'm afraid my codes was wrong. Tingxing Dong, It appears that the newchild flags of neighboring blocks do get cached locally, so this should not be a problem after all. I had not checked for this when I wrote the above, and made a false assumption that newchild of neighbors was not always available. > However, anyway,so far, it has not reported any error or > warnings.Besides, the result obtained seems to be acceptable ,although it is > far from being perfect. If there was a mistake in the logic, it would perhaps only lead to occasionally copying of data from a neighbor when they shouldn't have been copied, or occasionally NOT copying the slightly more accurate version from a neighbor. But the neighbor's data should be nearly the same anyway. So such mistakss could very easily be missed. But anyway, I am not claiming that there is anything wrong in your logic. [By the way if this exchange continues, I will limit my messages to Paramesh-Users, there is no need to send to both -Users and -Developers.] Klaus |
|
From: tingxing d. <tin...@gm...> - 2009-06-01 05:39:54
|
Dear Klaus: I am sorry I did not notice your email on 2009/5/27 until now. Oh,new version of gmail is rather complicated. As you mentioned > This appears to be a very reasonable approach for initializing face var > values on block boundaries. But I think in order to do this correctly in > amr_1blk_fc_prol_user, the IF you mention above would have to be quite > complex, somethink like: > IF( [We are being call from amr_prolong not amr_guardcell] > .AND. [Neighbor has nodetype 1] > .AND. [Neighbor is NOT a newchild] > ...) THEN > [Use neighbor's face var values at the surface instead of > values from interpolation] > (Btw, you may not want the second condition.) > Implementation of the third condition may not be easily possible, since > the newchild flag for a neighbor may not be available locally if the > neighbor is remote. > According to my current understandings about paramesh's communication mechanism, tree info is also stored in send buffer and communicated to the local. But I am not sure if it is right to use newchild(remote_blk) directly, since I did use the third condition [Neighbor is NOT a newchild] in my codes. If newchild flag is not available locally, I'm afraid my codes was wrong. However, anyway,so far, it has not reported any error or warnings.Besides, the result obtained seems to be acceptable ,although it is far from being perfect. So, could you and Kevin or any other developer please describe what happens in mpi_amr_comm_setup.F90? Any contribution would be highly appreciated. If anywhere in my letter is wrong,please correct it. Best wishes. |
|
From: Klaus W. <kl...@fl...> - 2009-05-27 15:09:33
|
On Wed, 27 May 2009, tingxing dong wrote:
> Dear Kevin and Klaus:
>
> Thanks for your suggestions and explanations.
>
> According to my understanding of the codes,force_consistency flag in
> paramesh indicates temporary storage of solution( storing in
> gt_facevarXX,etc) of face variables before prolongation.Although, it's aim
> is to eliminate round-off errors ,which is exactly the same with my
> objective ,it is still intended to help guardcell filings,because temporary
> copy gt_facevarXX are ultimately used in amr_1blk_fc_cp_remote.F90.
> But in my codes, get_remote_block serves as a complement of divergence-free
> reconstruction after amr_refine_derefine.
Dear Tingxing Dong,
The force_consistency flag DOES affect the face variables at the block
boundaries that you are interested in, if I understood you correctly.
To be clear, consider this 1D picture. This is a left side of a block,
with nguard=4.
The first line shows the cc numbering (used to index unk),
the second line shows the fc numbering (used to index facevarx).
cc: | 1 | 2 | 3 | 4 || 5 | ...
fc: 1 2 3 4 5 6 ...
Normally, guard cell filling fills cc#1 - cc#4 and fc#1 - fc#4.
But with force_consistency, guard cell filling has the side effect
of also (potentially) modifying fc#5. This is done in an equivalent
manner at the right side of this block's left neighbor, in a way so
that fc#5 in this block and fc#(nxb+5) in the left neighbor end up
having the same value.
The gt_facevarx are merely used temporarily to hold copies of the original
values at fc#5 and fc#(nxb+5) locations of blocks (assuming
no_permanent_guardcells is false).
Klaus
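In index terms, the side effect just described can be sketched as below; this is an illustration, not the PARAMESH implementation. The slot index idest and the module variable nguard are the usual PARAMESH names, while the routine and the argument nbr_face (standing for the left neighbour's stored fc#(nxb+5) plane, i.e. the gt_facevarx copy) are hypothetical. As noted elsewhere in this thread, the shared face ends up holding the average of the two blocks' original values.
! Hedged sketch, not PARAMESH source: with force_consistency on, the guard
! cell fill also adjusts this block's interior boundary face fc#5 (index
! 1+nguard, with nguard = 4 as in the picture above).  The left neighbour
! applies the mirror-image update to its fc#(nxb+5), so both blocks end up
! with the same, averaged value.
subroutine sketch_force_consistency(facevarx1, nbr_face, idest)
  use paramesh_dimensions, only : nguard
  implicit none
  real, intent(inout) :: facevarx1(:,:,:,:,:)  ! (var, x-face, y, z, slot)
  real, intent(in)    :: nbr_face(:,:,:)       ! neighbour's stored fc#(nxb+5) plane (var, y, z)
  integer, intent(in) :: idest
  facevarx1(:, 1+nguard, :, :, idest) = &
       0.5 * ( facevarx1(:, 1+nguard, :, :, idest) + nbr_face )
end subroutine sketch_force_consistency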
|
|
From: Klaus W. <kl...@fl...> - 2009-05-27 14:23:27
|
On Wed, 27 May 2009, Kevin Olson wrote: > All, > > Using the 'force_consistency' flag will ensure that face variables are > IDENTICAL at block boundaries between blocks of the same refinement level. > The user need not write any new routines. Kevin, In my understanding of Paramesh, the following is true; please correct if wrong. When 'force_consistency' is set, face variables will have identical values at block boundaries between leaf blocks of the same level, but only after guard cells have been filled at least once. Values are not equalized in this way in the permanent facevarx(y,z) arrays for newchild blocks immediately after amr_prolong returns. Also, while 'force_consistency' may ensure that face variables have identical values at block boundaries, it does not ensure that that value is the 'best'. To do that, for the definition of 'best' introduced by Tingxing Dong, it seems the user does need to write new code. Klaus |
|
From: Kevin O. <Kev...@dr...> - 2009-05-27 11:53:31
|
All, Using the 'force_consistency' flag will ensure that face variables are IDENTICAL at block boundaries between blocks of the same refinement level. The user need not write any new routines. Best. Kevin Olson On May 26, 2009, at 10:51 PM, Klaus Weide wrote: > On Sun, 24 May 2009, tingxing dong wrote: > >> But my purpose of calling get_remote_block.F90 (in fc_prol_user.F90) >> is not >> to fill guardcell.Instead,I use it for newly created children after >> amr_refine_derefine.(Considering fc_prol_user.F90 is used both in >> amr_guardcell and amr_prolong,a divergence is needed.but this's not >> difficult. Adding a IF would be OK) > > Dear Tingxing Dong, > > Thank you for clarifying the context. > >> My presumption is that for a new created child N, if its parent has a >> more >> refined neighbor(which is already exist before amr_refine_define but >> now is >> the same refined level with the child )in one direction, then the >> child's >> face values in their shared faces should copy from its neighbor >> instead of >> using interpolated value from its parents. Since old values usually >> are more >> precise,while interpolation would introduce errors more or less. > > This appears to be a very reasonable approach for initializing face var > values on block boundaries. But I think in order to do this correctly > in > amr_1blk_fc_prol_user, the IF you mention above would have to be quite > complex, somethink like: > IF( [We are being call from amr_prolong not amr_guardcell] > .AND. [Neighbor has nodetype 1] > .AND. [Neighbor is NOT a newchild] > ...) THEN > [Use neighbor's face var values at the surface instead of > values from interpolation] > > (Btw, you may not want the second condition.) > Implementation of the third condition may not be easily possible, > since > the newchild flag for a neighbor may not be available locally if the > neighbor is remote. > >> Actually, until now I have not find a mechanism in Paramesh to ensure >> that,so I call get_remote_block this API fuction myself to acchive it. > > Perhaps paramesh should provide a wys for doing this in a simpler way. > >> 2) Therefore in my codes, mpi_amr_get_remote_block is used to gain >> remote >> blocks' face values in the shared faces but not full request of >> interior >> cells. However, Paramesh does use the subrountine for full request in >> mpi_1blk_guardcell.F90, in which case,the >> '(dtype.ne.14.And.dtype.ne.14+ >> 27)' check sentence is necessary. >> >> 3) amr_1blk_fc_cp_remote does access subset of cells but it is >> devised for >> guardcell filling between same refine level blocks. Besides my >> objective is >> not for guard cells >> >> 4) 'vtype=14 or 14+27' check is necessary in most cases,since it is >> only >> used in amr_1blk_guardcell.F90 in Paramesh. But it does not apply to >> special >> cases,for instance,like mine.Thus I mean although right now this >> check is >> probably ok, but a substitute including all cases would make this >> check more >> perfect. > > You can of course have a variant of mpi_amr_get_remote_block (under a > different name) that does not have the dtype check. Then call the > variant, > rather than the original, from your fc_prolong_user. But your variant > routine > should > (1) not drop the dtype.ne.14... check entirely, but replace it with a > less > strict check, which would depend on the face. Basically, check that > the dtype > is such that all the needed face values from the neighbor are present. 
> (2)make sure that when copying face var values from an incomplete > (i.e., > dtype.ne.14) received block in the buffer, you are copying the values > from the correct locations. > > Actually it seems that the existing file > mpi_amr_get_remote_block_fvar.F90 > implements some of this logic. > > Hope this was helpful, > > Klaus > > ----------------------------------------------------------------------- > ------- > Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT > is a gathering of tech-side developers & brand creativity > professionals. Meet > the minds behind Google Creative Lab, Visual Complexity, Processing, & > iPhoneDevCamp as they present alongside digital heavyweights like > Barbarian > Group, R/GA, & Big Spaceship. http://p.sf.net/sfu/creativitycat-com > _______________________________________________ > Paramesh-developers mailing list > Par...@li... > https://lists.sourceforge.net/lists/listinfo/paramesh-developers > |
|
From: tingxing d. <tin...@gm...> - 2009-05-27 10:09:48
|
Dear Kevin and Klaus: Thanks for your suggestions and explanations. According to my understanding of the codes,force_consistency flag in paramesh indicates temporary storage of solution( storing in gt_facevarXX,etc) of face variables before prolongation.Although, it's aim is to eliminate round-off errors ,which is exactly the same with my objective ,it is still intended to help guardcell filings,because temporary copy gt_facevarXX are ultimately used in amr_1blk_fc_cp_remote.F90. But in my codes, get_remote_block serves as a complement of divergence-free reconstruction after amr_refine_derefine. My motivation of using get_remote_block is based on two papers-1) Divergence-Free Adaptive Mesh Refinement for Magnetohydrodynamics and, 2) “A novel approach of divergence-free reconstruction for adaptive mesh refinement", where in the section 2.1 a few lines read: "Note that if the coarse cell shares edges with cells already refined meshes,then the field component values on those refined meshed would be copied to the new finer mesh rather than using the interpolation". As I am from China mainland, Flash codes are inaccessible to me.Thus I am not quite clear about its details except several glimpses of its user-guide . But I believe divergence-free must has been addressed effectively. Hope it helps. 2009/5/27 Klaus Weide <kl...@fl...> > On Tue, 26 May 2009, Kevin Olson wrote: > > > Dear Tingxing, > > > > In order to ensure that the face variables are the same are the same at > the > > interfaces of different blocks, you should be using the > 'force_consistency' > > flag. You should not need to use the get_remote_block routine as you > > described. It probably will not work (Klaus gave a good description of > why it > > will not work). > > Kevin, > > Actually I am not sure it will not work, maybe it happens to be the case > that (for this specific use) all required face var values are present in > the receive-buffer after the communication for prolongation has been done? > > It seems that force_consistency doesn't exactly do what's desired here: > - It replaces local_value with 0.5 * ( local_value + remote_value), always > when guard cell filling > - Desired behavior: replace local_value with remote_value, *sometimes* > (i.e., only when filling in new children in amr_prolong) > > Also, in the past Dongwook Lee and I have observed some effects of using > force_consistency that made us a bit suspicious of this mode, including > slight differences in results depending on the number of processors. > (IIRC the differences were very small, and we never traced exactly were > they came from.) > > With force_consistency turned on, guard cell filling for face variables > acts in an unusual manner. This is the ony case I know of where execution > of amr_guardcell may change values in "interior" cells (namely, face > variables at the block boundaries) in addition to changing values in the > guard cell layers. Maybe this explains what we saw. Normally we (and the > FLASH code) don't expect any modification of "interior" values as a side > effect of guard cell filling. > > (Btw I am assuming no_permanent_guardcells is off.) > > Klaus > |
|
From: Klaus W. <kl...@fl...> - 2009-05-27 03:22:15
|
On Tue, 26 May 2009, Kevin Olson wrote: > Dear Tingxing, > > In order to ensure that the face variables are the same are the same at the > interfaces of different blocks, you should be using the 'force_consistency' > flag. You should not need to use the get_remote_block routine as you > described. It probably will not work (Klaus gave a good description of why it > will not work). Kevin, Actually I am not sure it will not work, maybe it happens to be the case that (for this specific use) all required face var values are present in the receive-buffer after the communication for prolongation has been done? It seems that force_consistency doesn't exactly do what's desired here: - It replaces local_value with 0.5 * ( local_value + remote_value), always when guard cell filling - Desired behavior: replace local_value with remote_value, *sometimes* (i.e., only when filling in new children in amr_prolong) Also, in the past Dongwook Lee and I have observed some effects of using force_consistency that made us a bit suspicious of this mode, including slight differences in results depending on the number of processors. (IIRC the differences were very small, and we never traced exactly were they came from.) With force_consistency turned on, guard cell filling for face variables acts in an unusual manner. This is the ony case I know of where execution of amr_guardcell may change values in "interior" cells (namely, face variables at the block boundaries) in addition to changing values in the guard cell layers. Maybe this explains what we saw. Normally we (and the FLASH code) don't expect any modification of "interior" values as a side effect of guard cell filling. (Btw I am assuming no_permanent_guardcells is off.) Klaus |
|
From: Klaus W. <kl...@fl...> - 2009-05-27 02:51:15
|
On Sun, 24 May 2009, tingxing dong wrote:
> But my purpose of calling get_remote_block.F90 (in fc_prol_user.F90) is not
> to fill guardcell.Instead,I use it for newly created children after
> amr_refine_derefine.(Considering fc_prol_user.F90 is used both in
> amr_guardcell and amr_prolong,a divergence is needed.but this's not
> difficult. Adding a IF would be OK)
Dear Tingxing Dong,
Thank you for clarifying the context.
> My presumption is that for a new created child N, if its parent has a more
> refined neighbor(which is already exist before amr_refine_define but now is
> the same refined level with the child )in one direction, then the child's
> face values in their shared faces should copy from its neighbor instead of
> using interpolated value from its parents. Since old values usually are more
> precise,while interpolation would introduce errors more or less.
This appears to be a very reasonable approach for initializing face var
values on block boundaries. But I think in order to do this correctly in
amr_1blk_fc_prol_user, the IF you mention above would have to be quite
complex, something like:
IF( [We are being called from amr_prolong not amr_guardcell]
.AND. [Neighbor has nodetype 1]
.AND. [Neighbor is NOT a newchild]
...) THEN
[Use neighbor's face var values at the surface instead of
values from interpolation]
(Btw, you may not want the second condition.)
Implementation of the third condition may not be easily possible, since
the newchild flag for a neighbor may not be available locally if the
neighbor is remote.
> Actually, until now I have not find a mechanism in Paramesh to ensure
> that,so I call get_remote_block this API fuction myself to acchive it.
Perhaps paramesh should provide a way for doing this in a simpler way.
> 2) Therefore in my codes, mpi_amr_get_remote_block is used to gain remote
> blocks' face values in the shared faces but not full request of interior
> cells. However, Paramesh does use the subrountine for full request in
> mpi_1blk_guardcell.F90, in which case,the '(dtype.ne.14.And.dtype.ne.14+
> 27)' check sentence is necessary.
>
> 3) amr_1blk_fc_cp_remote does access subset of cells but it is devised for
> guardcell filling between same refine level blocks. Besides my objective is
> not for guard cells
>
> 4) 'vtype=14 or 14+27' check is necessary in most cases,since it is only
> used in amr_1blk_guardcell.F90 in Paramesh. But it does not apply to special
> cases,for instance,like mine.Thus I mean although right now this check is
> probably ok, but a substitute including all cases would make this check more
> perfect.
You can of course have a variant of mpi_amr_get_remote_block (under a
different name) that does not have the dtype check. Then call the variant,
rather than the original, from your fc_prolong_user. But your variant routine
should
(1) not drop the dtype.ne.14... check entirely, but replace it with a less
strict check, which would depend on the face. Basically, check that the dtype
is such that all the needed face values from the neighbor are present.
(2)make sure that when copying face var values from an incomplete (i.e.,
dtype.ne.14) received block in the buffer, you are copying the values
from the correct locations.
Actually it seems that the existing file mpi_amr_get_remote_block_fvar.F90
implements some of this logic.
Hope this was helpful,
Klaus
|
|
From: Kevin O. <Kev...@dr...> - 2009-05-26 14:38:32
|
Dear Tingxing, In order to ensure that the face variables are the same are the same at the interfaces of different blocks, you should be using the 'force_consistency' flag. You should not need to use the get_remote_block routine as you described. It probably will not work (Klaus gave a good description of why it will not work). Best, Kevin Olson On May 24, 2009, at 9:23 PM, tingxing dong wrote: > Dear Klaus : > > Lots of thanks for your so detailed remarks which are totally beyond > my expectation. Here I will answer all items of your remarks in > sequence. > > 1)First,I am extremely sorry because perhaps in my first letter(the > problem is fixed)I did not describe my problem quite clearly. Exactly > as the paramesh and you point out,during guardcell filling the guard > cells of a coarse (leaf) block, in a direction where the block's > neighbor is more refined, are not filled by interpolation at all, but > by direct copying from the neighbor at the same level (i.e., a parent > block) in that direction. > > But my purpose of calling get_remote_block.F90 (in fc_prol_user.F90) > is not to fill guardcell.Instead,I use it for newly created children > after amr_refine_derefine.(Considering fc_prol_user.F90 is used both > in amr_guardcell and amr_prolong,a divergence is needed.but this's not > difficult. Adding a IF would be OK) > > My presumption is that for a new created child N, if its parent has a > more refined neighbor(which is already exist before amr_refine_define > but now is the same refined level with the child )in one direction, > then the child's face values in their shared faces should copy from > its neighbor instead of using interpolated value from its parents. > Since old values usually are more precise,while interpolation would > introduce errors more or less. > > Actually, until now I have not find a mechanism in Paramesh to ensure > that,so I call get_remote_block this API fuction myself to acchive it. > > 2) Therefore in my codes, mpi_amr_get_remote_block is used to gain > remote blocks' face values in the shared faces but not full request of > interior cells. However, Paramesh does use the subrountine for full > request in mpi_1blk_guardcell.F90, in which case,the > '(dtype.ne.14.And.dtype.ne.14+ 27)' check sentence is necessary. > > 3) amr_1blk_fc_cp_remote does access subset of cells but it is devised > for guardcell filling between same refine level blocks. Besides my > objective is not for guard cells > > 4) 'vtype=14 or 14+27' check is necessary in most cases,since it is > only used in amr_1blk_guardcell.F90 in Paramesh. But it does not apply > to special cases,for instance,like mine.Thus I mean although right now > this check is probably ok, but a substitute including all cases > would make this check more perfect. > > The reason why I develop my own fc_prolong_user is that approaches > to applying divergenceless prolongation provided by paramesh are not > so effective as my own method in my codes. I test my codes with > benchmark of Vortex using CT scheme which I find it is also adopted > in Flash3.0 several days ago.And the results show that if considering > my concern(,shared face value copy from the refine to the coarse's > child,just as mentioned above), errors of divB could be limited within > e-11 to e_13. Otherwise,errors would quickly reach to e-5 to e-6. In > my opinion,that is possibly because,as the grid refines,there would be > more blocks and therefore more coarse-refine neighboring pairs. > > Thanks again for your contributions. 
> > Best wishes > > Tingxing > Dong------------------------------------------------------------------- > ----------- > Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT > is a gathering of tech-side developers & brand creativity > professionals. Meet > the minds behind Google Creative Lab, Visual Complexity, Processing, & > iPhoneDevCamp asthey present alongside digital heavyweights like > Barbarian > Group, R/GA, & Big Spaceship. http://www.creativitycat.com > _______________________________________________ > Paramesh-developers mailing list > Par...@li... > https://lists.sourceforge.net/lists/listinfo/paramesh-developers |
|
From: Klaus W. <kl...@fl...> - 2009-05-22 18:31:46
|
On Fri, 22 May 2009, Jean-Noel Pederzani wrote: > Dear Klaus, > > what version of paramesh are you using? I had a similar issue using 4.0 but > the problem was solved by switching to 4.1 My remarks were based on Paramesh SourceForge CVS code from some months ago, so basically, 4.1. However, I think that all I wrote should also apply to 4.0. Klaus |
|
From: Jean-Noel P. <jp...@vi...> - 2009-05-22 17:48:00
|
Dear Klaus, what version of paramesh are you using? I had a similar issue using 4.0 but the problem was solved by switching to 4.1 Jean-Noel Pederzani On Fri, May 22, 2009 at 12:40 PM, Klaus Weide <kl...@fl...>wrote: > > > On Wed, 13 May 2009, Tingxing Dong wrote [to paramesh-developers]: > ============ begin quote ============ > I am a postgraduate student from China and now studying on paramesh. > Because of guidances from PARAMESH developers, here I share my problems > with all members of the mailing list. > After gmake successfully, in running an error jumps as following: > > Running on 4 processors > Paramesh error : pe 2 needed full blk 19951 2 > but could not find it or only found part of it in the message buffer. > Contact PARAMESH developers for help. > p2_18462: p4_error: : 0 > (the maxblocks I set in amr_runtime_parameter is 5000) > > this warning occurs in mpi_amr_get_remote_block.F90 which I explicitly > called in my own constructed subroutine of amr_1blk_fc_prol_user to ensure > that face values of coarse blocks copy from neighboring old refined blocks > rather than interpolating. > > Any help would be highly appreciated. > Best wishes! > sincerely > > tingxing dong > ============ end quote ============ > > Later, Tingxing Dong wrote [to me]: > ============ begin quote ============ > Dear Klaus: > > Unfortunately, the Paramesh-Developers mailing list has been idle away for > a > long time. Even somebody posted advertisements on the list. > > Therefore, I have to fix it myself. Luckily after a few weeks of > examination, I made it. And my solution is to comment out those lines > which > prompted errors in mpi_amr_get_remote_block.F90. > > The reason is that in my codes, coarse meshes only copy certain range of > Index from their adjacent fine meshes, not the full block request.Thus, the > limitation of vtype=14 or 14+27 is invalid for cases of requesting certain > parts of remote blocks . > > And I think that still leaving those prompting lines in paramesh is not > sensible, so here I make a bold suggestion that you deleted them or > replaced with new comments which cover exceptions to avoid incurring other > users' bewilderment. > Best wishes. > ============ end quote ============ > > My remarks on this: > > 1) I am not sure why you need to call mpi_amr_get_remote_block in > order to "ensure that face values of coarse blocks copy from > neighboring old refined blocks rather than interpolating." I am not > not even sure why you need your own amr_1blk_fc_prol_user. In > general, the guard cells of a coarse (leaf) block, in a direction > where the block's neighbor is more refined, are not filled by > interpolation at all, but by direct copying from the neighbor at the > same level (i.e., a parent block) in that direction. To ensure that > there is valid data in the (interior) cells of parent block, the guard > cell fill operation should be preceded by a call to amr_restrict (if > you are not using advance_all_levels). > > If you are considering prolongation that may happened in the course of > guard cell filling: > If you use amr_guardcell (implying that you don't have > no_permanent_guardcells set), the call to amr_restrict is already > included in mpi_amr_guardcell.F90. 
> > If you are considering prolongation via amr_prolong to fill newly > created children after amr_refine_derefine: > A newly created child N can only have a more refined neighbor if the > more refined leaf blocks in that direction are also newly created AND > their parent, the same-level neighbor of new child N, existed as a > leaf block before the amr_refine_derefine and therefore presumably has > valid data in its interior cells. > > The above should be true for face variables as well as cell-centered. > (The only special consideration regarding face variables that I am > aware of is related to the force_consistency flag: If you have this > turned on, then face variables in some cells may change slightly. I > don't know if this is relevant here at all.) > > 2) In normal usage (as when called by paramesh itself), > mpi_amr_get_remote_block is only invoked where all interior cells are > needed. The '(dtype.ne.14.And.dtype.ne.14+27)' test is therefore > appropriate in normal usage. > > 3) The way to access cells of (possibly) remote blocks in situations > where only a subset of the block's cells is needed is by calling > amr_1blk_fc_cp_remote (and/or amr_1blk_cc_cp_remote, etc., depending > on the type of data). The region of cells needed (and specified as > actual arguments) should match what has actually been but in the > temprecv_buf by the previous communications step (invocation of > mpi_amr_comm_setup). > > Normally this should already have been called, via > amr_1blk_guardcell_srl, just before an amr_1blk_fc_prol_<something> > routine is invoked. > > 4) Your fix, removing the 'vtype=14 or 14+27' check, may happen to > work, as long as the cell regions of the remote block that you need > happen to be included in those actually received in the communication > step. But you now have no check to detect if there is a mismatch > between what was sent and what was expected (unless you add > additional tests). > > > Hoping this is useful, > > Klaus > > > ------------------------------------------------------------------------------ > Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT > is a gathering of tech-side developers & brand creativity professionals. > Meet > the minds behind Google Creative Lab, Visual Complexity, Processing, & > iPhoneDevCamp asthey present alongside digital heavyweights like Barbarian > Group, R/GA, & Big Spaceship. http://www.creativitycat.com > _______________________________________________ > Paramesh-users mailing list > Par...@li... > https://lists.sourceforge.net/lists/listinfo/paramesh-users > |
|
From: Klaus W. <kl...@fl...> - 2009-05-22 16:40:43
|
On Wed, 13 May 2009, Tingxing Dong wrote [to paramesh-developers]:
============ begin quote ============
I am a postgraduate student from China and now studying on paramesh.
Because of guidances from PARAMESH developers, here I share my problems
with all members of the mailing list.
After gmake successfully, in running an error jumps as following:
Running on 4 processors
Paramesh error : pe 2 needed full blk 19951 2
but could not find it or only found part of it in the message buffer.
Contact PARAMESH developers for help.
p2_18462: p4_error: : 0
(the maxblocks I set in amr_runtime_parameter is 5000)
this warning occurs in mpi_amr_get_remote_block.F90 which I explicitly
called in my own constructed subroutine of amr_1blk_fc_prol_user to ensure
that face values of coarse blocks copy from neighboring old refined blocks
rather than interpolating.
Any help would be highly appreciated.
Best wishes!
sincerely
tingxing dong
============ end quote ============
Later, Tingxing Dong wrote [to me]:
============ begin quote ============
Dear Klaus:
Unfortunately, the Paramesh-Developers mailing list has been idle away for a
long time. Even somebody posted advertisements on the list.
Therefore, I have to fix it myself. Luckily after a few weeks of
examination, I made it. And my solution is to comment out those lines which
prompted errors in mpi_amr_get_remote_block.F90.
The reason is that in my codes, coarse meshes only copy certain range of
Index from their adjacent fine meshes, not the full block request.Thus, the
limitation of vtype=14 or 14+27 is invalid for cases of requesting certain
parts of remote blocks .
And I think that still leaving those prompting lines in paramesh is not
sensible, so here I make a bold suggestion that you deleted them or
replaced with new comments which cover exceptions to avoid incurring other
users' bewilderment.
Best wishes.
============ end quote ============
My remarks on this:
1) I am not sure why you need to call mpi_amr_get_remote_block in
order to "ensure that face values of coarse blocks copy from
neighboring old refined blocks rather than interpolating." I am
not even sure why you need your own amr_1blk_fc_prol_user. In
general, the guard cells of a coarse (leaf) block, in a direction
where the block's neighbor is more refined, are not filled by
interpolation at all, but by direct copying from the neighbor at the
same level (i.e., a parent block) in that direction. To ensure that
there is valid data in the (interior) cells of parent block, the guard
cell fill operation should be preceded by a call to amr_restrict (if
you are not using advance_all_levels).
If you are considering prolongation that may happen in the course of
guard cell filling:
If you use amr_guardcell (implying that you don't have
no_permanent_guardcells set), the call to amr_restrict is already
included in mpi_amr_guardcell.F90.
If you are considering prolongation via amr_prolong to fill newly
created children after amr_refine_derefine:
A newly created child N can only have a more refined neighbor if the
more refined leaf blocks in that direction are also newly created AND
their parent, the same-level neighbor of new child N, existed as a
leaf block before the amr_refine_derefine and therefore presumably has
valid data in its interior cells.
The above should be true for face variables as well as cell-centered.
(The only special consideration regarding face variables that I am
aware of is related to the force_consistency flag: If you have this
turned on, then face variables in some cells may change slightly. I
don't know if this is relevant here at all.)
2) In normal usage (as when called by paramesh itself),
mpi_amr_get_remote_block is only invoked where all interior cells are
needed. The '(dtype.ne.14.And.dtype.ne.14+27)' test is therefore
appropriate in normal usage.
3) The way to access cells of (possibly) remote blocks in situations
where only a subset of the block's cells is needed is by calling
amr_1blk_fc_cp_remote (and/or amr_1blk_cc_cp_remote, etc., depending
on the type of data). The region of cells needed (and specified as
actual arguments) should match what has actually been put in the
temprecv_buf by the previous communications step (invocation of
mpi_amr_comm_setup).
Normally this should already have been called, via
amr_1blk_guardcell_srl, just before an amr_1blk_fc_prol_<something>
routine is invoked.
4) Your fix, removing the 'vtype=14 or 14+27' check, may happen to
work, as long as the cell regions of the remote block that you need
happen to be included in those actually received in the communication
step. But you now have no check to detect if there is a mismatch
between what was sent and what was expected (unless you add
additional tests).
Hoping this is useful,
Klaus
|
|
From: Klaus W. <kl...@fl...> - 2009-02-10 20:39:19
|
Tingxing Dong,
I believe that comments in PARAMESH like the following are
confusing because they only describe some, but not all situations.
> idest selects the storage space in data_1blk.fh which is to be used
> in this call. If the leaf node is having its guardcells filled then
> set this to 1, if its parent is being filled set it to 2.
This comment really only applies to guard cell filling, during processing
of amr_1blk_guardcell. In the routine amr_prolong, the two slots in
unk1 (and facevar{x,y,z}1, etc.) are used differently: slot 1 holds
the parent block of a newly created leaf block; the new leaf block
is assembled in slot 2.
I hope this helps,
Klaus
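Restated as comments (a paraphrase of the explanation above, not a quote from the PARAMESH documentation), using the array names from the question further down:
!  During amr_1blk_guardcell (guard cell filling):
!    unk1(:,:,:,:,1)  holds the leaf block whose guard cells are being filled
!    unk1(:,:,:,:,2)  holds that block's parent, when the parent is filled
!
!  During amr_prolong (after refinement):
!    unk1(:,:,:,:,1)  holds the parent ("working") block, i.e. the data source
!    unk1(:,:,:,:,2)  is where the newly created child block is assembled
!
!  So idest = 2 in the mpi_amr_prolong.F90 fragment quoted below means
!  "write the prolonged data into the new child", while unk1(:,:,:,:,1)
!  supplies the parent data: parents prolong to children, not the reverse.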
|
|
From: dongtingxing <don...@16...> - 2009-02-10 05:43:25
|
Dear professors:
I am a postgraduate student from the Super Computing Center of the Chinese Academy of Sciences. I have recently read your group's research on Paramesh, and I am interested in trying your approach in applying for MHD in yeast.
A piece of code relevant to "idest" in mpi_amr_prolong.F90 in the directory mpi_source confused me.
The code goes as follows,
! Prolong data from working block to the new child block
      idest = 2
      ..................
! cell-centered data
      if(lcc) then
         if(iopt.eq.1) then
            call amr_1blk_cc_prol_gen_unk_fun( &
                 & unk1(:,:,:,:,1), &
                 & ia,ib,ja,jb,ka,kb,idest,ioff,joff,koff,mype, &
                 & lb,parent_pe,parent_blk)
         elseif(iopt.ge.2) then
            ..................................
         endif
      endif
if we use Single Injection in subroutine amr_1blk_cc_prol_gen_unk_fun.F90, we will switch to amr_1blk_cc_prol_inject(recv,ia,ib,ja,jb,ka,kb,idest,ioff,joff,koff,mype,ivar).
in that subroutine,idest is inherited from amr_prolong.F90,that is idest is given 2.
and recv is inherited from unk1(:,:,:,:,1).
according to comment lines which appear in many *.F90 files, for example amr_1blk_cc_cp_remote.F90:
idest selects the storage space in data_1blk.fh which is to be used in this call. If the leaf node is having its guardcells filled then set this to 1, if its parent is being filled set it to 2.
I then assume that unk1(:,:,:,:,1) stores leaf node information and unk1(:,:,:,:,2) stores the parents'.
However, in subroutine amr_1blk_cc_prol_inject.F90
the array recv, which was originally extracted from the solution array unk1(:,:,:,:,1),
is used to perform a prolongation operation on unk1(...,idest).
How could children prolong to parents?
I am not clear about this question.
Any help from anyone will be greatly appreciated.
sincerely
Tingxing Dong
|
|
From: Kevin O. <Kev...@dr...> - 2008-03-03 19:29:34
|
Dear PARAMESH users, A new version of PARAMESH is available for download from the PARAMESH WEB site. The main change that will be visible to users is that PARAMESH MUST be run in LIBRARY mode (see the documentation). The other main changes in this version are internal to PARAMESH and should have an impact on performance only, not on the way PARAMESH is used. PLEASE NOTE: Due to a loss of funding, PARAMESH is no longer supported. It is likely that this will be the last released version of PARAMESH for quite some time. This means that I (Kevin Olson) will not be able to make bug fixes or to respond to future feature requests. If you are having problems using PARAMESH, please try to use the paramesh users e-mail lis...@li.... It is very likely that someone else who uses PARAMESH may have already solved a similar problem and can help you. Best, Kevin Olson |
|
From: Sergey P. <se...@fr...> - 2007-04-02 19:26:34
|
Dear Kevin, I have tested the new version of PARAMESH and here are my comments: 1. File amr_block_geometry.F90 is broken. This is not what we have done and I correct it again. I kindly ask you to be more careful with that. 2. To use advance_all_levels and curvilinear geometry, line 243 of file mpi_morton_bnd_restrict.F90 must be as follows (as it is written in mpi_morton_bnd_prolong1.F90): >>> if(nodetype(lb) == 2 .or. advance_all_levels) then <<< 3. From your message "... If you are using curvilinear coordinates and you also set 'curvilinear_conserve' to true, the code will automatically switch to using direct injection during prolongation of data from coarse to fine blocks ..." I have not found that. Where is it? In any case, I kindly ask you to set it up during initialization only. For example, I use my own 2nd order conservative prolongation for cylindrical geometry and I prefer to have a simple interface where I just set interp_mask_unk to 20. 4. The hardest point is the computational speed. I have compared versions 4.0 and 3.4 under the same conditions (the same Intel Xeon computer with identical options for the Intel Fortran compiler). My test case (about 1-2 hours) consists mainly of Poisson and transport solvers, and the latest version 4.0 is always SLOWER by a factor of 1.3-1.5 for both single-node and parallel computations! Do you have any ideas on the matter? Best wishes, Sergey Pancheshnyi -- Dr Sergey Pancheshnyi, Associate Researcher, CNRS, Laboratoire Plasma et Conversion d'Energie, Tel. +33 (0)5 61 55 60 54, Fax. +33 (0)5 61 55 63 32, Web: http://www.laplace.univ-tlse.fr ******* PLEASE NOTE CHANGE OF E-MAIL ADDRESS ******* new address: ser...@la... |