From: Yujie <recrusader@gm...> - 2009-10-26 17:44:33

Dear Derek,

Thank you very much for your detailed explanations. I have understood the
basic mechanism. Thanks a lot :).

Regards,
Yujie

On Mon, Oct 26, 2009 at 9:34 AM, Derek Gaston <friedmud@...> wrote:
> Let me take a whack at this...
>
> Yujie... there is no error involved... the statements in example 3 will
> work for any type of element.
>
> The trick is that example 3 is using a penalty equation to enforce a
> Dirichlet BC. It does NOT enforce the Dirichlet BC in the traditional way
> of modifying values at nodes. Instead... we are _integrating_ a penalty
> condition over the boundary such that, when that penalty is taken into
> account, the solution values on that boundary have no choice other than
> to take on the values you are trying to enforce.
>
> It's the fact that we are _integrating_ this condition that allows us to
> not care about element type. For every element type we know how to
> integrate something over the boundary...
>
> As Roy mentioned... for most shape functions (including Lagrange),
> _phi_face corresponding to an interior degree of freedom will be zero.
> This means that those shape functions don't have any support on the
> boundary, and therefore their associated degrees of freedom aren't
> involved in the selection of values corresponding to the boundary
> condition... which means their equations aren't modified by the penalty
> integration (again, because _phi_face is _zero_ for those shape
> functions).
>
> Hope that helps,
> Derek
>
> On Oct 25, 2009, at 7:56 PM, Yujie wrote:
>
>> Dear Roy,
>>
>> "Simple: we don't guarantee that i is a boundary vertex. In fact,
>> that would be an easy source of error when integrating Neumann or
>> Robin type boundary conditions, where the interior vertices affect the
>> gradients being integrated and you can't leave them out."
>>
>> Whether is the error avoided? If it is, how to do it? Thanks a lot.
>>
>> Regards,
>> Yujie
>>
>> On Sun, Oct 25, 2009 at 7:35 PM, Roy Stogner <roystgnr@...> wrote:
>>
>>> On Sun, 25 Oct 2009, Yujie wrote:
>>>
>>>> for each element, we have
>>>> "
>>>> for (unsigned int i=0; i<phi.size(); i++)
>>>>   Fe(i) += JxW[qp]*fxy*phi[i][qp];
>>>> "
>>>> for each boundary side, we have correspondingly
>>>> "
>>>> for (unsigned int i=0; i<phi_face.size(); i++)
>>>>   Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
>>>> "
>>>> Assuming that we use tetrahedral elements and linear Lagrange shape
>>>> functions, in this case, for an element, there are 4 vertices, that
>>>> is i=0, 1, 2, 3. For a boundary side, how to guarantee the vertices
>>>> (i=0, 1, 2) are on the boundary? Thanks a lot.
>>>
>>> Simple: we don't guarantee that i is a boundary vertex. In fact,
>>> that would be an easy source of error when integrating Neumann or
>>> Robin type boundary conditions, where the interior vertices affect the
>>> gradients being integrated and you can't leave them out.
>>>
>>> But for Dirichlet conditions, the terms where i is an interior vertex
>>> don't matter, because phi_face is evaluated on boundary sides and
>>> we have phi_face[i_interior][qp] = 0.
>>> --
>>> Roy
>>
>> _______________________________________________
>> Libmesh-users mailing list
>> Libmesh-users@...
>> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Roy Stogner <roystgnr@ic...> - 2009-10-26 17:21:50

On Mon, 26 Oct 2009, Joa Ljungvall wrote:

> What I'm trying to do is:
>
> 1) Refine the mesh
> 2) Find and store hanging nodes
> 3) Move points on boundary to the "real" boundary
> then I do a do{...}while() including
> 4) Check that Jacobian > 0; if not, switch node 0 and 2 (the pointers
> in the elem)
> I'm not sure this doesn't mess up something for neighbors..

This merely hides a symptom without fixing the problem. If you have two
elements ABC and CBD, and you invert one of them by moving a node too far:

[diagram: triangles ABC and CBD share edge BC; D has been pulled across
that edge into ABC, so CBD is turned inside out]

Changing CBD into CDB doesn't actually make that second mesh valid;
instead of an inverted element you'd have two overlapping elements! And
I'm pretty sure you'll break some of our mesh topology assumptions (and
thereby break the find_neighbors routine, leading to that remote_elem
bug) in the process.
--
Roy
From: Joa Ljungvall <libmesh@jo...> - 2009-10-26 17:13:09

Hi,

I've tried to get this running, but still have some problems... What I'm
trying to do is:

1) Refine the mesh
2) Find and store hanging nodes
3) Move points on boundary to the "real" boundary
then I do a do{...}while() including
4) Check that Jacobian > 0; if not, switch node 0 and 2 (the pointers in
the elem). I'm not sure this doesn't mess up something for neighbors..
5) Move the hanging nodes
6) If all Jacobians > 0 and all hanging nodes are in between their
parents, go on...

This works for a few iterations; then, when looking for hanging nodes
(i.e. the first thing I do after coarsening and refining), there is an
element that has a neighbor with level==0 but that is not real... In the
debugger:

Refining the mesh...
[0] /usr/local/libmesh/include/geom/remote_elem.h, line 94, compiled Oct 11 2009 at 15:02:42
terminate called after throwing an instance of 'libMesh::LogicError'
  what(): Error in libMesh internal logic

Program received signal SIGABRT, Aborted.
0x00007fff80dcdff6 in __kill ()
(gdb) up
#1  0x00007fff80e6f072 in abort ()
(gdb) #2  0x00007fff811185d2 in __gnu_cxx::__verbose_terminate_handler ()
(gdb) #3  0x00007fff81116ae1 in __cxxabiv1::__terminate ()
(gdb) #4  0x00007fff81116b16 in std::terminate ()
(gdb) #5  0x00007fff81116bfc in __cxa_throw ()
(gdb) #6  0x000000010282d7b4 in RemoteElem::n_sides ()
(gdb) #7  0x000000010189880e in Elem::which_neighbor_am_i (this=0x101903b80,
    e=<value temporarily unavailable, due to optimizations>) at elem.h:1293
1293        for (unsigned int s=0; s<this->n_neighbors(); s++)
(gdb) print *neigh
No symbol "neigh" in current context.
(gdb) up
#8  0x0000000101889418 in AGATAGeFEM::libMESH::find_hanging_nodes_and_parents
    (mesh=<value temporarily unavailable, due to optimizations>,
    hanging_nodes=@0x7fff5fbfd4f0) at ../../libs/libmesh/LaplaceProblem.cc:339
warning: Source file is more recent than executable.
339         neigh->which_neighbor_am_i(ancestor);
(gdb) print *neigh
$1 = {
  <ReferenceCountedObject<Elem>> = {
    <ReferenceCounter> = {
      _vptr$ReferenceCounter = 0x102d0ad90,
      static _n_objects = { _val = 605251 },
      static _mutex = {<No data fields>}
    }, <No data fields>},
  <DofObject> = {
    <ReferenceCountedObject<DofObject>> = {
      <ReferenceCounter> = {
        _vptr$ReferenceCounter = 0x102d0af08,
        static _n_objects = { _val = 605251 },
        static _mutex = {<No data fields>}
      }, <No data fields>},
    members of DofObject:
      old_dof_object = 0x0,
      static invalid_id = 4294967295,
      static invalid_processor_id = 65535,
      _id = 4294967295,
      _processor_id = 65535,
      _n_systems = 0 '\0',
      _n_v_comp = 0x0,
      _dof_ids = 0x0
  },
  members of Elem:
    static type_to_n_nodes_map = {2, 3, 4, 3, 6, 4, 8, 9, 4, 10, 8, 20, 27,
      6, 15, 18, 5, 2, 4, 6, 8, 16, 18, 6, 16, 1, 0},
    _nodes = 0x0,
    _neighbors = 0x0,
    _parent = 0x0,
    _children = 0x0,
    _rflag = 1 '\001',
    _pflag = 1 '\001',
    _p_level = 0 '\0',
    _sbd_id = 0 '\0',
    static _bp1 = 65449,
    static _bp2 = 48661
}

So, any ideas? Bugs in my implementation, or logical issues?

cheers
Joa

On Wed, Oct 14, 2009 at 05:15:26PM -0500, Roy Stogner wrote:
>
> On Wed, 14 Oct 2009, Derek Gaston wrote:
>
>> I thought I ran into similar problems when I did mesh redistribution /
>> smoothing + adaptivity. I was able to go in and "fix" the hanging
>> nodes somehow... But I don't remember where I did it (I thought it
>> was down inside the library and I committed it back).
>
> Heh, and I thought your solution to that was to just do the adaptivity
> first and the redistribution second. ;)
>
>> I'm not near a computer right now where I can go look at what I did
>> (I'm traveling this week)... But I bet if you look back a few years
>> (2007?) you might see a checkin from me referencing it.
>
> Aha! MeshTools::find_hanging_nodes_and_parents() looks like it's
> exactly the trick we'd need.
>
> Call that function, get the hanging_nodes map.
>
> Do
>   hanging_node_moved = false
>   Loop over hanging_nodes
>     If a node isn't already in between its parents
>       move it
>       hanging_node_moved = true
> While(hanging_node_moved == true)
>
> The loop probably isn't the most efficient way to handle recursive
> constraints (maybe a depth-first descent into hanging_nodes at each
> entry?) but it would work.
> --
> Roy
From: Roy Stogner <roystgnr@ic...> - 2009-10-26 16:11:31

On Mon, 26 Oct 2009, Tim Kroeger wrote:

> 3. _n_v_comp should have identical entries for each DofObject in the
> mesh

For isoparametric LAGRANGE elements, yes. For other elements, there is
still a lot of redundancy, but there can be many different cases.
Vertices, edges, faces, and volumes will have different numbers of
components per variable, and likewise for different varieties of hanging
nodes.

> that would save nearly 40 MB per thread for my case.)

It would still be possible to save roughly this amount, by sharing
_n_v_comp structures (and _n_systems) between DofObjects. But you see now
that it's not quite a trivial change.

If we wanted to avoid even temporarily allocating that memory, the first
idea that comes to mind is to build some kind of DofMap factory object
that manages reference-counted n_v_comp structures, then switch DofObject
pointers from object to object as we assign new variables to them... not
something I'd want to futz with unless I was really bored or literally
running out of RAM.

> 4. When I configured libMesh for 64-bit mode, I did *not* do
> "--enable-ghosted". As far as I remember, this is the default now, or
> should I be a fool and remember this wrong? Anyway, libmesh_config.h
> contains "#define LIBMESH_ENABLE_GHOSTED 1", so this should be okay,
> shouldn't it?

Yes.
--
Roy
From: Tim Kroeger <tim.kroeger@ce...> - 2009-10-26 15:39:32

Dear Kai and all,

On Mon, 26 Oct 2009, Liu Kai wrote:

> I guess the reason may be from the pointer length difference between
> 32 bit and 64 bit. Many pointers are stored in the Elem data structure.
> So this amount will double after switching 32 bit to 64 bit.

Let me try to find out what this means in detail. For each Elem, we have:

member of Elem                                  32-bit  64-bit
--------------------------------------------------------------
DofObject* DofObject::old_dof_object                 4       8
unsigned int DofObject::_id                          4       4
unsigned short int DofObject::_processor_id          2       2
unsigned char DofObject::_n_systems                  1       1
(padding)                                            1       1
unsigned char** DofObject::_n_v_comp                 4       8
unsigned int** DofObject::_dof_ids                   4       8
Node** Elem::_nodes                                  4       8
Elem** Elem::_neighbors                              4       8
Elem* Elem::_parent                                  4       8
Elem** Elem::_children                               4       8
unsigned char Elem::_rflag                           1       1
unsigned char Elem::_pflag                           1       1
unsigned char Elem::_p_level                         1       1
subdomain_id_type Elem::_sbd_id                      1       1
(padding)                                            0       4
--------------------------------------------------------------
total                                               40      72

For each Node, we have:

member of Node                                  32-bit  64-bit
--------------------------------------------------------------
DofObject* DofObject::old_dof_object                 4       8
unsigned int DofObject::_id                          4       4
unsigned short int DofObject::_processor_id          2       2
unsigned char DofObject::_n_systems                  1       1
(padding)                                            1       1
unsigned char** DofObject::_n_v_comp                 4       8
unsigned int** DofObject::_dof_ids                   4       8
T Node::_coords[LIBMESH_DIM]                        24      24
--------------------------------------------------------------
total                                               44      56

This looks like quite an amount of extra memory for 64-bit. Also, the
_n_v_comp and _dof_ids members of both Elem and Node are arrays of
pointers and hence store more pointers. _n_v_comp contains as many
pointers as there are systems in the application (which I think is 8 in
my case, i.e. an additional 32 or 64 bytes of memory for 32- or 64-bit
mode, respectively). The same holds for _dof_ids.

If we look only at the difference between 32-bit and 64-bit mode, we
finally get 160 extra bytes of memory for each Elem and 140 extra bytes
for each Node. In the first time step, I have 268152 Elem instances and
316744 Node instances. This results in a total surplus of 80913600 bytes
(roughly 80 MB) in 64-bit mode.

Each cluster node has a total memory of 8 GB available, hence the
difference between 32-bit and 64-bit should be within 1% of the total
memory per thread. Hmm, this does not explain the behaviour I
experienced. In my application, each thread now requires more than 30% of
the available node memory (i.e. I cannot use more than 3 threads per
node), where it was less than 16% when I ran in 32-bit mode before my
vacation (6 threads per node). (The total number of threads is
unchanged.)

Several thoughts come to my mind:

1. Did I miss something essential in my above calculation?

2. I now understand completely why you guys are working on the
ParallelGrid so heavily. (:

3. _n_v_comp should have identical entries for each DofObject in the
mesh; why is it stored for each DofObject individually? (Not doing that
would save nearly 40 MB per thread for my case.)

4. When I configured libMesh for 64-bit mode, I did *not* do
"--enable-ghosted". As far as I remember, this is the default now, or
should I be a fool and remember this wrong? Anyway, libmesh_config.h
contains "#define LIBMESH_ENABLE_GHOSTED 1", so this should be okay,
shouldn't it?

Anyway, I'm a little bit confused. Any suggestions are welcome.

Best Regards,

Tim
--
Dr. Tim Kroeger
tim.kroeger@...      Phone +494212187710
tim.kroeger@...      Fax   +494212184236

Fraunhofer MEVIS, Institute for Medical Image Computing
Universitaetsallee 29, 28359 Bremen, Germany
From: Derek Gaston <friedmud@gm...> - 2009-10-26 14:34:49

Let me take a whack at this...

Yujie... there is no error involved... the statements in example 3 will
work for any type of element.

The trick is that example 3 is using a penalty equation to enforce a
Dirichlet BC. It does NOT enforce the Dirichlet BC in the traditional way
of modifying values at nodes. Instead... we are _integrating_ a penalty
condition over the boundary such that, when that penalty is taken into
account, the solution values on that boundary have no choice other than
to take on the values you are trying to enforce.

It's the fact that we are _integrating_ this condition that allows us to
not care about element type. For every element type we know how to
integrate something over the boundary...

As Roy mentioned... for most shape functions (including Lagrange),
_phi_face corresponding to an interior degree of freedom will be zero.
This means that those shape functions don't have any support on the
boundary, and therefore their associated degrees of freedom aren't
involved in the selection of values corresponding to the boundary
condition... which means their equations aren't modified by the penalty
integration (again, because _phi_face is _zero_ for those shape
functions).

Hope that helps,
Derek

On Oct 25, 2009, at 7:56 PM, Yujie wrote:

> Dear Roy,
>
> "Simple: we don't guarantee that i is a boundary vertex. In fact,
> that would be an easy source of error when integrating Neumann or
> Robin type boundary conditions, where the interior vertices affect the
> gradients being integrated and you can't leave them out."
>
> Whether is the error avoided? If it is, how to do it? Thanks a lot.
>
> Regards,
> Yujie
>
> On Sun, Oct 25, 2009 at 7:35 PM, Roy Stogner <roystgnr@...> wrote:
>
>> On Sun, 25 Oct 2009, Yujie wrote:
>>
>>> for each element, we have
>>> "
>>> for (unsigned int i=0; i<phi.size(); i++)
>>>   Fe(i) += JxW[qp]*fxy*phi[i][qp];
>>> "
>>> for each boundary side, we have correspondingly
>>> "
>>> for (unsigned int i=0; i<phi_face.size(); i++)
>>>   Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
>>> "
>>> Assuming that we use tetrahedral elements and linear Lagrange shape
>>> functions, in this case, for an element, there are 4 vertices, that
>>> is i=0, 1, 2, 3. For a boundary side, how to guarantee the vertices
>>> (i=0, 1, 2) are on the boundary? Thanks a lot.
>>
>> Simple: we don't guarantee that i is a boundary vertex. In fact,
>> that would be an easy source of error when integrating Neumann or
>> Robin type boundary conditions, where the interior vertices affect
>> the gradients being integrated and you can't leave them out.
>>
>> But for Dirichlet conditions, the terms where i is an interior vertex
>> don't matter, because phi_face is evaluated on boundary sides and
>> we have phi_face[i_interior][qp] = 0.
>> --
>> Roy
From: Liu Kai <liukai.sea@gm...> - 2009-10-26 12:48:36

Hi,

I guess the reason may be from the pointer length difference between
32 bit and 64 bit. Many pointers are stored in the Elem data structure.
So this amount will double after switching 32 bit to 64 bit.

This is just a guess. Any comments are appreciated.

Best regards,
Kai

Tim Kroeger wrote:
> Dear all,
>
> After switching from 32 bit to 64 bit, I notice that my application
> requires essentially more memory than before. I am not 100% sure that
> the 32->64 bit switch is the decisive change, since during my vacation
> in September there might have been more changes in other parts of our
> institute's software, but what I guess is that some data structure in
> libMesh which is used very frequently (such as Elem) is affected very
> disadvantageously by this change. Can anybody comment on that? Is
> there any easy solution? Would you recommend switching back to 32
> bit?
>
> Best Regards,
>
> Tim
From: Tim Kroeger <tim.kroeger@ce...> - 2009-10-26 12:30:22

Dear all,

After switching from 32 bit to 64 bit, I notice that my application
requires essentially more memory than before. I am not 100% sure that the
32->64 bit switch is the decisive change, since during my vacation in
September there might have been more changes in other parts of our
institute's software, but what I guess is that some data structure in
libMesh which is used very frequently (such as Elem) is affected very
disadvantageously by this change. Can anybody comment on that? Is there
any easy solution? Would you recommend switching back to 32 bit?

Best Regards,

Tim
--
Dr. Tim Kroeger
tim.kroeger@...      Phone +494212187710
tim.kroeger@...      Fax   +494212184236

Fraunhofer MEVIS, Institute for Medical Image Computing
Universitaetsallee 29, 28359 Bremen, Germany
From: Yujie <recrusader@gm...> - 2009-10-26 01:56:58

Dear Roy,

"Simple: we don't guarantee that i is a boundary vertex. In fact,
that would be an easy source of error when integrating Neumann or
Robin type boundary conditions, where the interior vertices affect the
gradients being integrated and you can't leave them out."

Whether is the error avoided? If it is, how to do it? Thanks a lot.

Regards,
Yujie

On Sun, Oct 25, 2009 at 7:35 PM, Roy Stogner <roystgnr@...> wrote:
>
> On Sun, 25 Oct 2009, Yujie wrote:
>
>> for each element, we have
>> "
>> for (unsigned int i=0; i<phi.size(); i++)
>>   Fe(i) += JxW[qp]*fxy*phi[i][qp];
>> "
>> for each boundary side, we have correspondingly
>> "
>> for (unsigned int i=0; i<phi_face.size(); i++)
>>   Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
>> "
>> Assuming that we use tetrahedral elements and linear Lagrange shape
>> functions, in this case, for an element, there are 4 vertices, that is
>> i=0, 1, 2, 3. For a boundary side, how to guarantee the vertices
>> (i=0, 1, 2) are on the boundary? Thanks a lot.
>
> Simple: we don't guarantee that i is a boundary vertex. In fact,
> that would be an easy source of error when integrating Neumann or
> Robin type boundary conditions, where the interior vertices affect the
> gradients being integrated and you can't leave them out.
>
> But for Dirichlet conditions, the terms where i is an interior vertex
> don't matter, because phi_face is evaluated on boundary sides and
> we have phi_face[i_interior][qp] = 0.
> --
> Roy
From: Yujie <recrusader@gm...> - 2009-10-26 01:55:48

According to the index of the nodes of the tetrahedral element, if one
knows which side is on the boundary, one also knows which nodes are not
on the boundary side. However, how does one know the corresponding
relationship between the nodes of the boundary side and the "i" of
Fe(i)? Thanks.

Regards,
Yujie

On Sun, Oct 25, 2009 at 8:17 PM, Roy Stogner <roystgnr@...> wrote:
>
> On Sun, 25 Oct 2009, Yujie wrote:
>
>> Thanks, Roy. It means that one needs to judge which triangle (which
>> three vertices) of an element is on the boundary for Neumann or Robin
>> type boundary conditions?
>
> You need to know which side is on the boundary (elem->side(s) == NULL
> in libMesh) for every boundary condition, but you don't need to
> explicitly know which nodes are on the side for any boundary
> conditions.
> --
> Roy
From: Roy Stogner <roystgnr@ic...> - 2009-10-26 01:18:08

On Sun, 25 Oct 2009, Yujie wrote:

> Thanks, Roy. It means that one needs to judge which triangle (which
> three vertices) of an element is on the boundary for Neumann or Robin
> type boundary conditions?

You need to know which side is on the boundary (elem->side(s) == NULL
in libMesh) for every boundary condition, but you don't need to
explicitly know which nodes are on the side for any boundary
conditions.
--
Roy
From: Yujie <recrusader@gm...> - 2009-10-26 00:48:22

Thanks, Roy. It means that one needs to judge which triangle (which three
vertices) of an element is on the boundary for Neumann or Robin type
boundary conditions?

Regards,
Yujie

On Sun, Oct 25, 2009 at 7:35 PM, Roy Stogner <roystgnr@...> wrote:
>
> On Sun, 25 Oct 2009, Yujie wrote:
>
>> for each element, we have
>> "
>> for (unsigned int i=0; i<phi.size(); i++)
>>   Fe(i) += JxW[qp]*fxy*phi[i][qp];
>> "
>> for each boundary side, we have correspondingly
>> "
>> for (unsigned int i=0; i<phi_face.size(); i++)
>>   Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
>> "
>> Assuming that we use tetrahedral elements and linear Lagrange shape
>> functions, in this case, for an element, there are 4 vertices, that is
>> i=0, 1, 2, 3. For a boundary side, how to guarantee the vertices
>> (i=0, 1, 2) are on the boundary? Thanks a lot.
>
> Simple: we don't guarantee that i is a boundary vertex. In fact,
> that would be an easy source of error when integrating Neumann or
> Robin type boundary conditions, where the interior vertices affect the
> gradients being integrated and you can't leave them out.
>
> But for Dirichlet conditions, the terms where i is an interior vertex
> don't matter, because phi_face is evaluated on boundary sides and
> we have phi_face[i_interior][qp] = 0.
> --
> Roy
From: Roy Stogner <roystgnr@ic...> - 2009-10-26 00:35:35

On Sun, 25 Oct 2009, Yujie wrote:

> for each element, we have
> "
> for (unsigned int i=0; i<phi.size(); i++)
>   Fe(i) += JxW[qp]*fxy*phi[i][qp];
> "
> for each boundary side, we have correspondingly
> "
> for (unsigned int i=0; i<phi_face.size(); i++)
>   Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
> "
> Assuming that we use tetrahedral elements and linear Lagrange shape
> functions, in this case, for an element, there are 4 vertices, that is
> i=0, 1, 2, 3. For a boundary side, how to guarantee the vertices
> (i=0, 1, 2) are on the boundary? Thanks a lot.

Simple: we don't guarantee that i is a boundary vertex. In fact,
that would be an easy source of error when integrating Neumann or
Robin type boundary conditions, where the interior vertices affect the
gradients being integrated and you can't leave them out.

But for Dirichlet conditions, the terms where i is an interior vertex
don't matter, because phi_face is evaluated on boundary sides and
we have phi_face[i_interior][qp] = 0.
--
Roy