From: Derek Gaston <friedmud@gm...>  20070702 22:35:49

Hey guys, I'm trying to run some redistribution + adaptivity... and I'm running into some odd refinement problems. I've attached 4 pictures: "111_domain.jpg" is a complete shot of step 111 so you can get some perspective; 111, 112, and 121.jpg are all zooms of the very bottom left corner at different stages of my adaptive cycle.

Here's what I'm doing: First, I do 3 uniform refinements. Next, I use redistribution to generate meshes 100 through 111. Then at step 112 (these numbers are picked by tolerances, BTW) the adaptivity kicks in. As soon as it does, it does some coarsening and refinement, and it appears that the coarsening is not quite working properly; in particular, it is not aligning the hanging node correctly.

Further problems ensue later on, as you can see in 121.jpg: the "one level rule" is definitely _not_ being enforced. Ultimately the solution goes to crap after adding around 900 elements (121 is the last mesh). Before then, the error is going down at a pretty good rate (in fact, I'm pretty happy with the results).

Take a look at the images, and let me know if you have questions.

Derek
From: John Peterson <peterson@cf...>  20070702 23:04:57

I think coarsening assumes that the prior mesh level is nested. Basically that means we assume we don't need to move any nodes to coarsen; we just need to throw away some elements.

Unfortunately, smoothing breaks this nestedness assumption. In order to coarsen properly, you'd have to relocate that hanging node by constraining it to lie midway along the coarse element's edge, basically where GMV has drawn that edge.

I don't know how to do this, but it might just be a matter of generalizing the existing coarsening code?

John
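[Editor's note: the geometric fix John describes -- constraining the hanging node to lie midway along the coarse element's edge -- is just a midpoint projection. A minimal plain-C++ sketch (the `Pt` type and function name are hypothetical stand-ins, not libMesh's actual classes):]

```cpp
#include <cassert>

// Minimal 3D point type standing in for libMesh's Node/Point
// (hypothetical name, not the library's class).
struct Pt { double x, y, z; };

// Realign a hanging node to lie midway along the coarse element's
// edge (v0, v1): after a smoothed mesh is coarsened, the old
// hanging node may sit off this edge, so snap it to the midpoint.
void realign_hanging_node(Pt &hanging, const Pt &v0, const Pt &v1)
{
  hanging.x = 0.5 * (v0.x + v1.x);
  hanging.y = 0.5 * (v0.y + v1.y);
  hanging.z = 0.5 * (v0.z + v1.z);
}
```

The real version would also have to find which coarse edge the node hangs on, which is the part John says he doesn't know how to do.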
From: Derek Gaston <friedmud@gm...>  20070703 15:08:08

Hmm... thanks, John. I'll take a look at the coarsening code and the hanging node placement code for refinement.

Any ideas on why it's not following the level-1 rule? Is it because that node is off from the edge and it's not finding neighbors?

Derek
From: Derek Gaston <friedmud@gm...>  20070703 15:41:08

Good point about a simple test case... I'll work it from that angle.

Thanks,
Derek

On 7/3/07, John Peterson <peterson@...> wrote:
> Derek Gaston writes:
> > Any ideas on why it's not following the level 1 rule? Is it because
> > that node is off from the edge and it's not finding neighbors?
>
> Right. It's probably a "tear" in the mesh.
>
> I'd try making a 2-quad side-by-side mesh, uniformly refine once,
> move the center node slightly, and then coarsen either the
> left or the right quartet of children. This would give you
> a simple case where you could look at the neighbor information
> and compare it directly to the picture you are seeing. A
> tear would be present if any interior edges have NULL neighbors;
> it's like having a boundary edge in the interior of the grid.
>
> This probably happens during the call to find_neighbors() when
> prepare_for_use() is called at the end of refinement & coarsening, and
> would likely be fixed if we figured out how to "fix" the hanging node
> position.
>
> J
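[Editor's note: John's "tear" -- an interior edge with NULL neighbors -- shows up whenever two elements stop sharing node ids across a common edge. A toy plain-C++ sketch of that diagnosis (not libMesh's actual find_neighbors()):]

```cpp
#include <map>
#include <utility>
#include <vector>
#include <cassert>

// Elements are matched as neighbors only if they share an edge with
// identical node ids, mimicking a node-id-based neighbor search.
using Edge = std::pair<int,int>;   // node ids, sorted low-high
using Quad = std::vector<int>;     // 4 node ids, counter-clockwise

Edge make_edge(int a, int b) { return a < b ? Edge{a, b} : Edge{b, a}; }

// Count elements touching each edge. An interior edge should be
// touched by exactly 2 elements; an expected-interior edge counted
// once is a "tear": a boundary-like edge inside the grid.
std::map<Edge,int> edge_counts(const std::vector<Quad> &elems)
{
  std::map<Edge,int> counts;
  for (const Quad &q : elems)
    for (int s = 0; s < 4; ++s)
      counts[make_edge(q[s], q[(s+1)%4])]++;
  return counts;
}
```

With two quads sharing nodes 1 and 4 the shared edge is counted twice (neighbors found); if one quad ends up referencing a duplicate node 7 at the same location, both half-edges are counted once each, i.e. a tear.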
From: Roy Stogner <roystgnr@ic...>  20070703 16:06:39

On Tue, 3 Jul 2007, Derek Gaston wrote:

> Good point about a simple test case... I'll work it from that angle.

Check out MeshRefinement::add_point(). What's happening to create tears is probably something like this:

You refine two adjoining elements, and move their shared mid-edge node. You coarsen one of the elements. The mesh is now geometrically screwed up (as seen in your .gmv file), and this will introduce errors in your calculations, but the mesh is probably still topologically okay.

You refine that element again. It tries to add a mid-edge node with MeshRefinement::add_point(), but the existing mid-edge node isn't in the right place and so doesn't match its search criteria, so a new mid-edge node is created. The new child elements don't share all the nodes they should with their ostensible neighbors, and so don't get identified as neighbors. You're not running with METHOD=dbg, so the sanity check in Mesh::find_neighbors() is skipped and the problem isn't noticeable (except as incorrect numeric results) until you see apparent level-one rule violations.

John's right that the fix is to reset hanging node positions after they're created by coarsening, but I'm not sure what the most elegant way to do that is. We could check for level mismatches in Elem::coarsen() and realign any hanging nodes before the child elements get deleted, but the code to do so might be complicated (particularly if there are greater-than-level-one mismatches) and is kind of unnecessary bloat for most users. We can't add a MeshRefinement::realign_hanging_nodes() function to be called by users, because it would have to be called from the middle of MeshRefinement::refine_and_coarsen_elements(), in between the coarsening and the _prepare_for_use() call.

Perhaps we could add a MeshRefinement::mesh_is_non_nested boolean for the user to set? Then the library could do the realignment in the middle of refine_and_coarsen_elements(), but only if requested by the user.
Roy
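[Editor's note: the failure mode Roy attributes to MeshRefinement::add_point() -- a mis-placed node missing a tolerance-based search so a duplicate gets created -- can be modeled in a few lines of plain C++. This is an illustrative sketch only, not the library's code:]

```cpp
#include <cmath>
#include <cstddef>
#include <vector>
#include <cassert>

struct P2 { double x, y; };

// Refinement asks for a mid-edge node at a target location and
// reuses an existing node only if one lies within a small tolerance.
// A hanging node left in the wrong place by non-nested coarsening
// misses the search, so a duplicate node is created: the tear.
std::size_t add_point(std::vector<P2> &nodes, P2 target, double tol = 1e-6)
{
  for (std::size_t i = 0; i < nodes.size(); ++i)
    if (std::hypot(nodes[i].x - target.x, nodes[i].y - target.y) < tol)
      return i;               // found: reuse the existing node
  nodes.push_back(target);    // missed: create a duplicate node
  return nodes.size() - 1;
}
```

This is why realigning the hanging node before the next refinement pass (or flagging the mesh as non-nested, as proposed above) would make the lookup succeed again.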
From: John Peterson <peterson@cf...>  20070703 16:15:26

Roy Stogner writes:
> Perhaps we could add a MeshRefinement::mesh_is_non_nested boolean for
> the user to set? Then the library could do the realignment in the
> middle of refine_and_coarsen_elements() but only if requested by the
> user.

This is also what I had in mind, just another flag among the many already used by MeshRefinement. As far as the code itself goes, can we use the same/similar notion of recursive constraints that we do for constraining solution vectors?

J
From: Tahar Amari <amari@cp...>  20070704 21:49:54

Hello,

I do not know how this happened, but I configured with petsc-2.2.3 as

  ./configure --disable-gmv --disable-shared

libMesh built OK (I therefore guessed that I did not have to specify the link with my LAPACK (Intel)), but to build the examples I had to go manually into each directory and add my path to MPICH in the link $(MPICH). Then all the examples worked except:

1) example 16, which failed to link:

[ImacIntel-de-Tahar-Amari:libmesh/examples/ex16] amari% make
Compiling C++ (in optimized mode) ex16.C...
i686-apple-darwin8-g++-4.0.1: /Library/Frameworks/Intel_MKL.framework/Headers: linker input file unused because linking not done
Linking ex16-opt...
g++ -DNDEBUG -O2 -felide-constructors -funroll-loops -fstrict-aliasing -Wdisabled-optimization ex16.i686-apple-darwin8.10.1.opt.o -o ex16-opt /Users/amari/Desktop/Bazard/libmesh/lib/i686-apple-darwin8.10.1_opt/libmesh.a -I/usr/local/petsc/include -L/usr/local/petsc/lib/macx -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -L/usr/local/mpich/lib -lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lz
/usr/bin/ld: Undefined symbols:
gzstreambase::gzstreambase(char const*, int) gzstreambase::~gzstreambase() gzstreambase::~gzstreambase() gzstreambase::~gzstreambase() typeinfo for gzstreambase virtual thunk to gzstreambase::~gzstreambase() virtual thunk to gzstreambase::~gzstreambase() _V_AddCmp _V_Constr _V_Destr _V_GetCmp _V_GetDim _V_SetAllCmp _V_SetCmp _METIS_PartGraphKway _METIS_PartGraphRecursive _AddAsgn_VV _Asgn_VV _MaxNorm_V _Mul_QV _Mul_SV _Mul_VV _l1Norm_V _l2Norm_V _ILUPrecond _Q_AddVal _Q_Destr _Q_GetDim _Q_GetEl _Q_SetEntry _BiCGIter _BiCGSTABIter _CGIter _CGNIter _CGSIter _GMRESIter _GetLastAccuracy _GetLastNoIter _JacobiIter _JacobiPrecond _LASResult _QMRIter _SSORIter _SSORPrecond _SetGMRESRestart _SetRTCAccuracy _WriteLASErrDescr _Q_Constr _Q_SetLen _dasum_ _dnrm2_ _daxpy_ _ddot_ _dscal_ _dswap_ _ISEqual _dgemv_ _dgemm_ _ParMETIS_V3_PartKway _dgetrf_ _dgetrs_ _dpotrf_ _dpotrs_
_ISCompressIndicesGeneral _ISCompressIndicesSorted _ISExpandIndicesGeneral _VecScatterCreateToZero _dgeev_ _dcopy_ _dpttrf_ _dpttrs_ _dstebz_ _dstein_ _dgesvd_ _MPID_Comm_builtin _MPID_Comm_direct _MPID_Comm_mem _MPIR_Err_create_code _MPIR_Err_preOrPostInit _MPIR_Err_return_comm _MPIR_Process _MPIR_ThreadSingle _MPIU_Handle_get_ptr_indirect _PMPI_Comm_delete_attr _PMPI_Comm_get_attr _PMPI_Comm_set_attr _PMPI_Comm_create_keyval _MPID_Datatype_builtin _MPID_Datatype_direct _MPID_Datatype_mem _MPID_Op_builtin _MPID_Op_direct _MPID_Op_mem _MPIR_Allreduce _MPIR_Allreduce_inter _MPIR_Op_check_dtype_table _MPIR_Barrier _MPIR_Barrier_inter _MPIU_Handle_obj_alloc _MPIU_Handle_obj_free _MPIR_Bcast _MPIR_Bcast_inter _MPIR_Alltoallv _MPIR_Alltoallv_inter _MPIR_Reduce _MPIR_Reduce_inter _MPIR_Gather _MPIR_Gather_inter _MPIR_Scatterv _MPIR_Gatherv _MPIR_Scan _MPIR_Allgather _MPIR_Allgather_inter _MPIR_Allgatherv _MPIR_Allgatherv_inter _MPIR_Alltoallw _MPIR_Alltoallw_inter _PMPI_Comm_group _PMPI_Comm_remote_group _PMPI_Group_compare _PMPI_Group_free _MPID_Abort _MPID_Group_builtin _MPID_Group_direct _MPID_Group_mem _MPID_VCRT_Create _MPID_VCRT_Get_ptr _MPID_VCR_Dup _MPID_VCR_Get_lpid _MPIR_Comm_create _MPIR_Free_contextid _MPIR_Get_contextid _MPIU_Internal_error_printf _MPIR_Comm_copy _MPIR_Comm_release _MPIR_Group_create _MPIU_Sort_inttable _PMPI_Allgather _MPIC_Sendrecv _MPID_GPID_Get _MPIR_Setup_intercomm_localcomm _PMPI_Allreduce _PMPI_Bcast _MPID_Type_commit _MPID_Datatype_set_contents _MPID_Type_contiguous _MPID_Datatype_free _MPID_Type_blockindexed _PMPI_Comm_create_errhandler _MPID_Errhandler_builtin _MPID_Errhandler_direct _MPID_Errhandler_mem _PMPI_Comm_set_errhandler _MPIR_Group_release _MPIR_Group_check_valid_ranks _PMPI_Comm_get_name _MPIR_Init_thread _MPID_Finalize _MPIR_Call_finalize_callbacks _MPID_Cancel_recv _MPID_Cancel_send _MPID_Request_direct _MPID_Request_mem _MPIR_Grequest_cancel _MPID_Iprobe _MPID_Irecv _MPID_Isend _MPIDI_CH3I_progress_completion_count 
_MPIDI_CH3_Progress_wait _MPIDI_CH3_Request_destroy _MPID_Recv _MPID_Recv_init _MPIDI_CH3_Progress_test _MPIR_Grequest_free _MPID_Rsend_init _MPID_Send _MPID_Send_init _MPID_Ssend_init _MPID_Startall _MPIR_Request_complete _MPID_Put _MPID_Win_direct _MPID_Win_mem _MPIR_Err_return_win _MPID_Info_direct _MPID_Info_mem _MPID_Win_create _MPID_Win_fence _MPID_Comm_spawn_multiple
collect2: ld returned 1 exit status
make: *** [ex16-opt] Error 1

and 2) example 18, which compiled but at execution gave:

[ImacIntel-de-Tahar-Amari:libmesh/examples/ex18] amari% mpirun -np 2 ex18-opt
 Mesh Information:
  mesh_dimension()=2
  spatial_dimension()=3
  n_nodes()=1681
  n_elem()=400
   n_local_elem()=200
   n_active_elem()=400
  n_subdomains()=1
  n_processors()=2
  processor_id()=0

*** Warning, This code is untested, experimental, or likely to see future API changes: src/solvers/diff_system.C, line 29, compiled Jul 4 2007 at 21:10:55 ***

 EquationSystems
  n_systems()=1
   System "NavierStokes"
    Type "Implicit"
    Variables="u" "v" "p"
    Finite Element Types="LAGRANGE" "LAGRANGE" "LAGRANGE"
    Approximation Orders="SECOND" "SECOND" "FIRST"
    n_dofs()=3803
    n_local_dofs()=1963
    n_constrained_dofs()=0
    n_vectors()=2

 Solving time step 0, time = 0
 Solving time step 1, time = 0.005
Inexact Newton step FAILED at step 2
[0] src/solvers/newton_solver.C, line 233, compiled Jul 4 2007 at 21:11:12
[1] src/solvers/newton_solver.C, line 233, compiled Jul 4 2007 at 21:11:12
[cli_0]: aborting job: application called MPI_Abort(comm=0x84000000, 1) - process 0
[cli_1]: aborting job: application called MPI_Abort(comm=0x84000000, 1) - process 1
rank 1 in job 22 ImacIntel-de-Tahar-Amari.local_62200 caused collective abort of all ranks
  exit status of rank 1: return code 1
rank 0 in job 22 ImacIntel-de-Tahar-Amari.local_62200 caused collective abort of all ranks
  exit status of rank 0: return code 1

Could somebody tell me what I should learn from this for those two examples?
I saw that example 16 seems to use SLEPc; are the link problems related to this missing library?

Many thanks for your help,

Tahar

--
T. Amari
Centre de Physique Theorique
Ecole Polytechnique
91128 Palaiseau Cedex France
tel : 33 1 69 33 47 53
fax : 33 1 69 33 30 08
email: <mailto:amari@...>
URL : http://www.cpht.polytechnique.fr/cpht/amari
From: Tahar Amari <amari@cp...>  20070704 21:56:23

Hello,

Does anyone have some experience with libMesh and finite volume schemes, i.e. something like a Godunov scheme on tetrahedra using libMesh?

Would you suggest a precise strategy for handling, with libMesh, the creation of a class of Raviart-Thomas (H(div)) finite elements (of order 1, for example) on tetrahedra? Those elements have four degrees of freedom (the normal components of a vector field, defined on the faces of the tetrahedron).

Many thanks,

Tahar

--
T. Amari
Centre de Physique Theorique
Ecole Polytechnique
91128 Palaiseau Cedex France
tel : 33 1 69 33 47 53
fax : 33 1 69 33 30 08
email: <mailto:amari@...>
URL : http://www.cpht.polytechnique.fr/cpht/amari
From: Roy Stogner <roystgnr@ic...>  20070704 22:27:46

On Wed, 4 Jul 2007, Tahar Amari wrote:

> Does anyone have some experience with libMesh and finite volume
> schemes, i.e. something like a Godunov scheme on tetrahedra using libMesh?

For finite volume schemes where the volumes are the primal triangles, tetrahedra, etc. that libMesh supports, you can use one of our discontinuous finite element spaces like MONOMIAL or XYZ, and initialize the FE objects on interiors to calculate volume integrals and on element sides to handle flux integrals.

For finite volume schemes where the volumes are centered on primal nodes and discontinuous on primal element interiors, I don't think there's an easy way to perform such calculations in libMesh.

> Would you suggest a precise strategy for handling, with libMesh,
> the creation of a class of Raviart-Thomas (H(div)) finite elements
> (of order 1, for example) on tetrahedra? Those elements have four
> degrees of freedom (the normal components of a vector field, defined
> on the faces of the tetrahedron).

There are two difficulties here.

First, libMesh doesn't support vector-valued elements; we solve vector problems by treating each vector component as a scalar variable. This works when your vector-valued function spaces are just tensor products of scalar-valued spaces, but for a space like Raviart-Thomas that isn't true. Second, libMesh has a Tet10 class, but would need a Tet14 class to handle finite element spaces with face degrees of freedom on tets.

The minimal set of changes you'd need to make to the library is to first add a 14-node tetrahedron, then add a scalar-valued FE subclass that is continuous only at face centroids. Then, in your system assembly code, you would translate quadrature point locations and scalar values into the isomorphic Raviart-Thomas values on each element. We'd appreciate your contribution, since a Tet14 class would be useful for high-polynomial-degree Hierarchic elements as well, but I warn you that it wouldn't be a simple task.

--
Roy
From: Tahar Amari <amari@cp...>  20070705 10:34:35

Hello Roy,

Excuse me, I may not be familiar with the terminology.

> For finite volume schemes where the volumes are the primal triangles,
> tetrahedra, etc. that libMesh supports, you can use one of our
> discontinuous finite element spaces like MONOMIAL or XYZ, and
> initialize the FE objects on interiors to calculate volume integrals
> and on element sides to handle flux integrals.

Do you mean a tetrahedral mesh with cell-centered unknowns?

> For finite volume schemes where the volumes are centered on primal
> nodes and discontinuous on primal element interiors, I don't think
> there's an easy way to perform such calculations in libMesh.

Do you mean the dual mesh of the tetrahedral mesh, in this case with node-centered unknowns?

> Second, libMesh has a Tet10 class, but would
> need a Tet14 class to handle finite element spaces with face degrees
> of freedom on tets.

I was thinking of Tet4, with only four degrees of freedom, on the faces.

Tahar
From: Roy Stogner <roystgnr@ic...>  20070705 13:01:19

On Thu, 5 Jul 2007, Tahar Amari wrote:

> Excuse me, I may not be familiar with the terminology.

The mistakes may be mine; I know roughly how the finite volume method works, but I've never implemented it or studied it in much detail.

>> For finite volume schemes where the volumes are the primal triangles,
>> tetrahedra, etc. that libMesh supports, you can use one of our
>> discontinuous finite element spaces like MONOMIAL or XYZ, and
>> initialize the FE objects on interiors to calculate volume integrals
>> and on element sides to handle flux integrals.
>
> Do you mean a tetrahedral mesh with cell-centered unknowns?

Yes, exactly (or a triangle mesh with cell-centered unknowns, a hex mesh with cell-centered unknowns, etc.).

>> For finite volume schemes where the volumes are centered on primal
>> nodes and discontinuous on primal element interiors, I don't think
>> there's an easy way to perform such calculations in libMesh.
>
> Do you mean the dual mesh of the tetrahedral mesh,
> in this case with node-centered unknowns?

Again, that is correct.

>> We'd appreciate your contribution, since a Tet14 class would be
>> useful for high-polynomial-degree Hierarchic elements as well, but
>> I warn you that it wouldn't be a simple task.
>
> I was thinking of Tet4, with only four degrees of freedom, on the faces.

Yes, but because of the way libMesh uses the Node class for both degree-of-freedom storage and mesh geometry, a tetrahedron with only four face nodes can't be used to create a continuous mesh; at the very least you would additionally need four vertex nodes to define the geometry, even though with your FE class these nodes would hold zero degrees of freedom each. And although your application only needs 8 nodes, if you're going to all that effort then you might as well add the extra 6 edge nodes and give us a topologically complete element. ;)

--
Roy
From: Tahar Amari <amari@cp...>  20070705 13:12:55

You are right, Roy. I actually also need the edge nodes, because, for example, a vector field in H(div) with face-centered degrees of freedom (its normal components) has its curl, in H(rot), edge-centered. And I also need several interpolation formulas from cell centers to face centers and edge midpoints, and the other ways too...

By the way, is there somewhere in libMesh a way of determining which tetrahedron a point with (x,y,z) coordinates given a priori belongs to, or do I have to do it myself? (I can figure out how to do it, but in case it exists in libMesh I guess it will be optimized; there are several possible algorithms, but they differ in efficiency!)

Tahar

--
T. Amari
Centre de Physique Theorique
Ecole Polytechnique
91128 Palaiseau Cedex France
tel : 33 1 69 33 47 53
fax : 33 1 69 33 30 08
email: <mailto:amari@...>
URL : http://www.cpht.polytechnique.fr/cpht/amari
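[Editor's note: the per-element containment check behind the point location Tahar asks about is a standard signed-volume test -- a point is inside a tetrahedron iff replacing each vertex with the point preserves the sign of the tet's volume. A self-contained plain-C++ sketch (a tree-based locator would wrap such a test in a spatial search):]

```cpp
#include <cassert>

struct V3 { double x, y, z; };

// Six times the signed volume of tetrahedron (a, b, c, d):
// the scalar triple product (b-a) . ((c-a) x (d-a)).
double signed_vol(V3 a, V3 b, V3 c, V3 d)
{
  V3 u{b.x-a.x, b.y-a.y, b.z-a.z};
  V3 v{c.x-a.x, c.y-a.y, c.z-a.z};
  V3 w{d.x-a.x, d.y-a.y, d.z-a.z};
  return u.x*(v.y*w.z - v.z*w.y)
       - u.y*(v.x*w.z - v.z*w.x)
       + u.z*(v.x*w.y - v.y*w.x);
}

// p is inside (or on the boundary of) tet (a,b,c,d) iff the four
// sub-volumes obtained by swapping p for each vertex in turn all
// have the same sign as the tet's own volume.
bool tet_contains(V3 a, V3 b, V3 c, V3 d, V3 p)
{
  double v0 = signed_vol(a, b, c, d);
  return v0 * signed_vol(p, b, c, d) >= 0
      && v0 * signed_vol(a, p, c, d) >= 0
      && v0 * signed_vol(a, b, p, d) >= 0
      && v0 * signed_vol(a, b, c, p) >= 0;
}
```

Testing every element this way is O(N) per query; the tree-based search Roy describes in his reply is what brings that down.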
From: Roy Stogner <roystgnr@ic...>  20070705 13:34:04

On Thu, 5 Jul 2007, Tahar Amari wrote: > By the way is there somewhere in libmesh a way of determining to which > tetrahedra belongs a points with '(x,y,z) coordinates given a priori, or > do I have to make it myself ( I can figure out how to do it), but in case it > exists in libmesh I guess it will be optimized (there are several possible > algorithms but those are differently efficient !). The PointLocator class currently defaults to PointLocatorTree, which builds an octree structure for finding the element containing a point. This probably costs a bit of overhead when you first create the structure (especially if you do much adaptive mesh refinement, which invalidates old PointLocators), but it should be roughly O(log N) for each point you find with it.  Roy 
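[Editorial note: the per-element containment test that a locator like PointLocatorTree must ultimately perform is a standard algorithm, independent of libMesh. A self-contained sketch of the signed-volume (barycentric) check for a single tetrahedron, with hypothetical names not taken from the libMesh source, might look like this:]

```cpp
#include <array>
#include <cmath>

// Determinant of the 3x3 matrix with rows a, b, c.
static double det3(const std::array<double,3>& a,
                   const std::array<double,3>& b,
                   const std::array<double,3>& c)
{
  return a[0]*(b[1]*c[2] - b[2]*c[1])
       - a[1]*(b[0]*c[2] - b[2]*c[0])
       + a[2]*(b[0]*c[1] - b[1]*c[0]);
}

// True if point p lies inside (or on the boundary of) the tetrahedron
// with vertices v0..v3.  Idea: p is inside iff replacing each vertex in
// turn by p yields four sub-tetrahedra whose signed volumes all share
// the orientation of the original element.
bool point_in_tet(const std::array<double,3>& p,
                  const std::array<double,3>& v0,
                  const std::array<double,3>& v1,
                  const std::array<double,3>& v2,
                  const std::array<double,3>& v3)
{
  auto sub = [](const std::array<double,3>& x, const std::array<double,3>& y) {
    return std::array<double,3>{x[0]-y[0], x[1]-y[1], x[2]-y[2]};
  };
  // Six times the signed volume of tetrahedron (a,b,c,d).
  auto vol6 = [&](const std::array<double,3>& a, const std::array<double,3>& b,
                  const std::array<double,3>& c, const std::array<double,3>& d) {
    return det3(sub(b,a), sub(c,a), sub(d,a));
  };

  const double total = vol6(v0, v1, v2, v3);
  const double tol   = 1e-12 * std::fabs(total);  // relative slack for boundary points

  const double d0 = vol6(p,  v1, v2, v3);
  const double d1 = vol6(v0, p,  v2, v3);
  const double d2 = vol6(v0, v1, p,  v3);
  const double d3 = vol6(v0, v1, v2, p );

  // Each sub-volume must have the same sign as the total (within tolerance).
  auto same_side = [&](double d) { return d * total >= -tol; };
  return same_side(d0) && same_side(d1) && same_side(d2) && same_side(d3);
}
```

[A tree-based locator wraps a test like this in an octree so that only a handful of candidate elements are checked per query, which is where the O(log N) cost Roy mentions comes from.]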
From: Tahar Amari <amari@cp...>  20070705 13:54:55
Attachments:
Message as HTML

> The PointLocator class currently defaults to PointLocatorTree, which > builds an octree structure for finding the element containing a point. > This probably costs a bit of overhead when you first create the > structure (especially if you do much adaptive mesh refinement, which > invalidates old PointLocators), but it should be roughly O(log N) for > each point you find with it. Great, Roy. Since I am new to libMesh, do you think that, as far as documentation goes, I should look in the .h files to learn about the various functionalities of the classes, what they do? Tahar  T. Amari Centre de Physique Theorique Ecole Polytechnique 91128 Palaiseau Cedex France tel : 33 1 69 33 47 53 fax: 33 1 69 33 30 08 email: <mailto:amari@...> URL : http://www.cpht.polytechnique.fr/cpht/amari 
From: Roy Stogner <roystgnr@ic...>  20070705 14:14:09

On Thu, 5 Jul 2007, Tahar Amari wrote: > Great Roy . Since I am new in libmesh , do you think that > concerning documentation I should go in .h files , to learn about > the various fonctionalities of the classes , what they do ..... I'm afraid we don't have anything like an official manual, so the example applications and the header files are the best way to learn. There are processed HTML versions here: http://libmesh.sourceforge.net/examples.php http://libmesh.sourceforge.net/doxygen/index.php Which (depending on your development environment) you may find more readable than the raw source code.  Roy 
From: Tahar Amari <amari@cp...>  20070705 20:15:16
Attachments:
Message as HTML

Roy, I read your slides about divergence-free finite elements. It is very interesting. It sounds like you did not need to use Raviart-Thomas finite elements. Are you suggesting that I could solve the MHD equations, where the vector field B is divergence-free, without having to use H(div) or H(rot) finite elements, on tetrahedra? Could you please be more precise? Suppose I have a tetrahedral mesh, what kind of element could I use? Degrees of freedom on the vertices of the tetrahedra? Do you rewrite the equations with a vector potential B = rot(A) such that div B = 0? (Your slide #3 writes u = rot(psi).) Many thanks Tahar  T. Amari Centre de Physique Theorique Ecole Polytechnique 91128 Palaiseau Cedex France tel : 33 1 69 33 47 53 fax: 33 1 69 33 30 08 email: <mailto:amari@...> URL : http://www.cpht.polytechnique.fr/cpht/amari On 5 Jul 2007, at 16:14, Roy Stogner wrote: > On Thu, 5 Jul 2007, Tahar Amari wrote: > >> Great Roy . Since I am new in libmesh , do you think that >> concerning documentation I should go in .h files , to learn about >> the various fonctionalities of the classes , what they do ..... > > I'm afraid we don't have anything like an official manual, so the > example applications and the header files are the best way to learn. > There are processed HTML versions here: > http://libmesh.sourceforge.net/examples.php > http://libmesh.sourceforge.net/doxygen/index.php > Which (depending on your development environment) you may find more > readable than the raw source code. >  > Roy 
From: Roy Stogner <roystgnr@ic...>  20070705 20:46:48

On Thu, 5 Jul 2007, Tahar Amari wrote: > I read your slides about divergence free finite element . > It is very interesting. > It sounds you did not need to use RaviartThomas finite element. That's correct. > Would you suggest me that I could be able to solve > MHD equation, now a vector field B is divergence free > without having to use H(div) or H(rot) finite element, > on Tetrahedra. I don't think I would recommend it. The trouble with using divergence-free spaces generated from the curl operator is that in three dimensions the curl operator's kernel is huge. In 2D the kernel is one-dimensional and so it's easy to constrain away to produce a positive definite system; in 3D you either need a very clever way of constraining away the kernel or you need your solvers to be able to handle highly underdetermined systems of equations. My graduate studies committee claimed the problem was "too hard or impossible" in 3D, which was an overstatement, but it was turning out to be difficult and inefficient enough that I didn't argue about changing my dissertation topic. > Suppose I have a tetrahedral mesh, what kind of element could I use > ? Degree of freedom on the vertices of the tetrahedra ? > > Do you rewrite the equaitons with a potential vector > B=rot(A) such that div B= 0 (your slide #3 writes u=rot(psi). Yes; but unlike in 2D, where psi is a scalar (with an implicit vector direction of +z), in 3D A is a full vector. Your finite element space is then three scalar elements, C^1 elements if you don't want to end up with interface integrals in your weak formulation. The only 3D C^1 elements in libMesh are the Hermite cubes, though, so if you needed to use a tetrahedral mesh you would have to implement something like the Alfeld-Awanou-Lai 4-split tetrahedral macroelement. That last sentence was probably the only part of this email relevant to libMesh as a whole, so if you've got more questions on the subject we won't subject libmesh-users to the discussion. 
If I haven't discouraged you yet, send me a private email and I can dig through my thesis bibliography for you; I think I've got a reference or two to papers using similar techniques for MHD.  Roy 
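[Editorial note: the kernel issue Roy describes can be restated compactly (this is a standard identity, not taken from the thread). The vector-potential formulation satisfies the divergence constraint identically,

```latex
\[
  \mathbf{B} = \nabla \times \mathbf{A}
  \quad\Longrightarrow\quad
  \nabla \cdot \mathbf{B} = \nabla \cdot (\nabla \times \mathbf{A}) \equiv 0,
\]
% but A is only determined up to a gradient, since
\[
  \nabla \times (\mathbf{A} + \nabla \phi) = \nabla \times \mathbf{A}
  \qquad \text{for every scalar field } \phi .
\]
```

so in 3D the kernel of the curl contains all gradients, which after discretization is roughly one spurious degree of freedom per node. In 2D, by contrast, u = rot(psi) with scalar psi, and the kernel is just the constant functions, the one-dimensional kernel Roy says is easy to constrain away.]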